<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle_register_or_upload(post_data, files, user, repository):
"""Process a `register` or `upload` command issued via distutils. This method is called with the authenticated user. """ |
name = post_data.get('name')
version = post_data.get('version')
if settings.LOCALSHOP_VERSIONING_TYPE:
scheme = get_versio_versioning_scheme(settings.LOCALSHOP_VERSIONING_TYPE)
try:
Version(version, scheme=scheme)
except AttributeError:
response = HttpResponseBadRequest(
reason="Invalid version supplied '{!s}' for '{!s}' scheme.".format(
version, settings.LOCALSHOP_VERSIONING_TYPE))
return response
if not name or not version:
logger.info("Missing name or version for package")
return HttpResponseBadRequest('No name or version given')
try:
condition = Q()
for search_name in get_search_names(name):
condition |= Q(name__iexact=search_name)
package = repository.packages.get(condition)
# Error out when trying to overwrite a mirrored package for now;
# not sure what the best behaviour is.
if not package.is_local:
return HttpResponseBadRequest(
'%s is a pypi package!' % package.name)
try:
release = package.releases.get(version=version)
except ObjectDoesNotExist:
release = None
except ObjectDoesNotExist:
package = None
release = None
# Validate the data
form = forms.ReleaseForm(post_data, instance=release)
if not form.is_valid():
return HttpResponseBadRequest(reason=six.next(six.itervalues(form.errors))[0])
if not package:
pkg_form = forms.PackageForm(post_data, repository=repository)
if not pkg_form.is_valid():
return HttpResponseBadRequest(
reason=six.next(six.itervalues(pkg_form.errors))[0])
package = pkg_form.save()
release = form.save(commit=False)
release.package = package
release.save()
# If this is an upload action then process the uploaded file
if files:
files = {
'distribution': files['content']
}
filename = files['distribution']._name
try:
release_file = release.files.get(filename=filename)
if settings.LOCALSHOP_RELEASE_OVERWRITE is False:
message = 'That file has already been released, please bump the version.'
return HttpResponseBadRequest(message)
except ObjectDoesNotExist:
release_file = models.ReleaseFile(
release=release, filename=filename)
form_file = forms.ReleaseFileForm(
post_data, files, instance=release_file)
if not form_file.is_valid():
return HttpResponseBadRequest('ERRORS %s' % form_file.errors)
release_file = form_file.save(commit=False)
release_file.save()
return HttpResponse() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def download(self):
"""Start a celery task to download the release file from pypi. If `settings.LOCALSHOP_ISOLATED` is True then download the file in-process. """ |
from .tasks import download_file
if not settings.LOCALSHOP_ISOLATED:
download_file.delay(pk=self.pk)
else:
download_file(pk=self.pk) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dispatch_queue(loader):
# type: (DataLoader) -> None """ Given the current state of a Loader instance, perform a batch load from its current queue. """ |
# Take the current loader queue, replacing it with an empty queue.
queue = loader._queue
loader._queue = []
# If a maxBatchSize was provided and the queue is longer, then segment the
# queue into multiple batches, otherwise treat the queue as a single batch.
max_batch_size = loader.max_batch_size
if max_batch_size and max_batch_size < len(queue):
chunks = get_chunks(queue, max_batch_size)
for chunk in chunks:
dispatch_queue_batch(loader, chunk)
else:
dispatch_queue_batch(loader, queue) |
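The `get_chunks` helper used above is not shown in this excerpt; a minimal sketch of the slicing it presumably performs (hypothetical implementation, the real helper may differ):

```python
def get_chunks(items, chunk_size=1):
    # Split a list into successive slices of at most chunk_size items,
    # preserving order; the final chunk may be shorter.
    return [items[i:i + chunk_size]
            for i in range(0, len(items), chunk_size)]
```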
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def failed_dispatch(loader, queue, error):
# type: (DataLoader, Iterable[Loader], Exception) -> None """ Do not cache individual loads if the entire batch dispatch fails, but still reject each request so they do not hang. """ |
for l in queue:
loader.clear(l.key)
l.reject(error) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load(self, key=None):
# type: (Hashable) -> Promise """ Loads a key, returning a `Promise` for the value represented by that key. """ |
if key is None:
raise TypeError(
(
"The loader.load() function must be called with a value, "
+ "but got: {}."
).format(key)
)
cache_key = self.get_cache_key(key)
# If caching and there is a cache-hit, return cached Promise.
if self.cache:
cached_promise = self._promise_cache.get(cache_key)
if cached_promise:
return cached_promise
# Otherwise, produce a new Promise for this value.
promise = Promise(partial(self.do_resolve_reject, key)) # type: ignore
# If caching, cache this promise.
if self.cache:
self._promise_cache[cache_key] = promise
return promise |
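The caching behaviour of `load()` can be illustrated without the promise machinery; this toy `MiniLoader` (a hypothetical stand-in, not the real `DataLoader`) shows that a cached key never re-runs the underlying fetch:

```python
class MiniLoader:
    """Toy sketch of the load() caching path (hypothetical, not the
    real promise-based DataLoader)."""

    def __init__(self, batch_fn, cache=True):
        self.batch_fn = batch_fn
        self.cache = cache
        self._promise_cache = {}
        self.calls = 0  # how many times the fetch actually ran

    def get_cache_key(self, key):
        return key

    def load(self, key):
        if key is None:
            raise TypeError(
                "The loader.load() function must be called with a value, "
                "but got: None.")
        cache_key = self.get_cache_key(key)
        # If caching and there is a cache hit, return the cached result.
        if self.cache and cache_key in self._promise_cache:
            return self._promise_cache[cache_key]
        # Otherwise produce a new result (stands in for the Promise).
        self.calls += 1
        result = self.batch_fn(key)
        if self.cache:
            self._promise_cache[cache_key] = result
        return result
```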
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_many(self, keys):
# type: (Iterable[Hashable]) -> Promise """ Loads multiple keys, promising an array of values. This is equivalent to the more verbose: """ |
if not isinstance(keys, Iterable):
raise TypeError(
(
"The loader.loadMany() function must be called with Array<key> "
+ "but got: {}."
).format(keys)
)
return Promise.all([self.load(key) for key in keys]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clear(self, key):
# type: (Hashable) -> DataLoader """ Clears the value at `key` from the cache, if it exists. Returns itself for method chaining. """ |
cache_key = self.get_cache_key(key)
self._promise_cache.pop(cache_key, None)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prime(self, key, value):
# type: (Hashable, Any) -> DataLoader """ Adds the provided key and value to the cache. If the key already exists, no change is made. Returns itself for method chaining. """ |
cache_key = self.get_cache_key(key)
# Only add the key if it does not already exist.
if cache_key not in self._promise_cache:
# Cache a rejected promise if the value is an Error, in order to match
# the behavior of load(key).
if isinstance(value, Exception):
promise = Promise.reject(value)
else:
promise = Promise.resolve(value)
self._promise_cache[cache_key] = promise
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_complete_version(version=None):
"""Returns a tuple of the promise version. If version argument is non-empty, then checks for correctness of the tuple provided. """ |
if version is None:
from promise import VERSION
return VERSION
else:
assert len(version) == 5
assert version[3] in ("alpha", "beta", "rc", "final")
return version |
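The tuple checks can be exercised in isolation; a small sketch mirroring the two assertions (hypothetical helper name):

```python
def check_version(version):
    # Mirrors the two assertions in get_complete_version for an
    # explicitly supplied version tuple (hypothetical helper name).
    assert len(version) == 5
    assert version[3] in ("alpha", "beta", "rc", "final")
    return version
```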
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _xcorr_interp(ccc, dt):
""" Interpolate around the maximum correlation value for sub-sample precision. :param ccc: Cross-correlation array :type ccc: numpy.ndarray :param dt: sample interval :type dt: float :return: Position of interpolated maximum in seconds from start of ccc :rtype: float """ |
if ccc.shape[0] == 1:
cc = ccc[0]
else:
cc = ccc
# Code borrowed from obspy.signal.cross_correlation.xcorr_pick_correction
cc_curvature = np.concatenate((np.zeros(1), np.diff(cc, 2), np.zeros(1)))
cc_t = np.arange(0, len(cc) * dt, dt)
peak_index = cc.argmax()
first_sample = peak_index
# XXX this could be improved..
while first_sample > 0 and cc_curvature[first_sample - 1] <= 0:
first_sample -= 1
last_sample = peak_index
while last_sample < len(cc) - 1 and cc_curvature[last_sample + 1] <= 0:
last_sample += 1
num_samples = last_sample - first_sample + 1
if num_samples < 3:
msg = "Less than 3 samples selected for fit to cross " + \
"correlation: %s" % num_samples
raise IndexError(msg)
if num_samples < 5:
msg = "Less than 5 samples selected for fit to cross " + \
"correlation: %s" % num_samples
warnings.warn(msg)
coeffs, residual = np.polyfit(
cc_t[first_sample:last_sample + 1],
cc[first_sample:last_sample + 1], deg=2, full=True)[:2]
# check results of fit
if coeffs[0] >= 0:
msg = "Fitted parabola opens upwards!"
warnings.warn(msg)
if residual > 0.1:
msg = "Residual in quadratic fit to cross correlation maximum " + \
"larger than 0.1: %s" % residual
warnings.warn(msg)
# X coordinate of vertex of parabola gives time shift to correct
# differential pick time. Y coordinate gives maximum correlation
# coefficient.
shift = -coeffs[1] / 2.0 / coeffs[0]
coeff = (4 * coeffs[0] * coeffs[2] - coeffs[1] ** 2) / (4 * coeffs[0])
return shift, coeff |
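The parabola fit is easy to verify on synthetic data: sampling a downward parabola whose true maximum falls between sample points, the fitted vertex recovers both the sub-sample shift and the peak value. A sketch using `numpy.polyfit` (the modern equivalent of the `scipy.polyfit` alias):

```python
import numpy as np

# Sample a downward parabola whose true maximum (0.123 s) falls
# between sample points, then recover it from the fitted vertex.
dt = 0.01
true_shift = 0.123
t = np.arange(0, 0.3, dt)
cc = 1.0 - (t - true_shift) ** 2

coeffs = np.polyfit(t, cc, deg=2)
shift = -coeffs[1] / (2.0 * coeffs[0])  # x of vertex: sub-sample shift
coeff = (4 * coeffs[0] * coeffs[2] - coeffs[1] ** 2) / (4 * coeffs[0])  # y of vertex
```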
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _day_loop(detection_streams, template, min_cc, detections, horizontal_chans, vertical_chans, interpolate, cores, parallel, debug=0):
""" Function to loop through multiple detections for one template. Designed to run for the same day of data for I/O simplicity, but as you are passing stream objects it could run for all the detections ever, as long as you have the RAM! :type detection_streams: list :param detection_streams: List of all the detections for this template that you want to compute the optimum pick for. Individual things in list should be of :class:`obspy.core.stream.Stream` type. :type template: obspy.core.stream.Stream :param template: The original template used to detect the detections passed :type min_cc: float :param min_cc: Minimum cross-correlation value to be allowed for a pick. :type detections: list :param detections: List of detections to associate events with an input detection. :type horizontal_chans: list :param horizontal_chans: List of channel endings for horizontal-channels, on which S-picks will be made. :type vertical_chans: list :param vertical_chans: List of channel endings for vertical-channels, on which P-picks will be made. :type interpolate: bool :param interpolate: Interpolate the correlation function to achieve sub-sample precision. :type debug: int :param debug: debug output level 0-5. :returns: Catalog object containing Event objects for each detection created by this template. :rtype: :class:`obspy.core.event.Catalog` """ |
if len(detection_streams) == 0:
return Catalog()
if not cores:
num_cores = cpu_count()
else:
num_cores = cores
if num_cores > len(detection_streams):
num_cores = len(detection_streams)
if parallel:
pool = Pool(processes=num_cores)
debug_print('Made pool of %i workers' % num_cores, 4, debug)
# Parallel generation of events for each detection:
# results will be a list of (i, event class)
results = [pool.apply_async(
_channel_loop, (detection_streams[i], ),
{'template': template, 'min_cc': min_cc,
'detection_id': detections[i].id, 'interpolate': interpolate,
'i': i, 'pre_lag_ccsum': detections[i].detect_val,
'detect_chans': detections[i].no_chans,
'horizontal_chans': horizontal_chans,
'vertical_chans': vertical_chans})
for i in range(len(detection_streams))]
pool.close()
try:
events_list = [p.get() for p in results]
except KeyboardInterrupt as e: # pragma: no cover
pool.terminate()
raise e
pool.join()
events_list.sort(key=lambda tup: tup[0]) # Sort based on index.
else:
events_list = []
for i in range(len(detection_streams)):
events_list.append(_channel_loop(
detection=detection_streams[i], template=template,
min_cc=min_cc, detection_id=detections[i].id,
interpolate=interpolate, i=i,
pre_lag_ccsum=detections[i].detect_val,
detect_chans=detections[i].no_chans,
horizontal_chans=horizontal_chans,
vertical_chans=vertical_chans, debug=debug))
temp_catalog = Catalog()
temp_catalog.events = [event_tup[1] for event_tup in events_list]
return temp_catalog |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_trigger_parameters(filename):
"""Read the trigger parameters into trigger_parameter classes. :type filename: str :param filename: Parameter file :returns: List of :class:`eqcorrscan.utils.trigger.TriggerParameters` :rtype: list .. rubric:: Example """ |
parameters = []
f = open(filename, 'r')
print('Reading parameters with the following header:')
for line in f:
if line.startswith('#'):
print(line.rstrip('\n').lstrip('\n'))
else:
parameter_dict = ast.literal_eval(line)
# convert the dictionary to the class
trig_par = TriggerParameters(parameter_dict)
parameters.append(trig_par)
f.close()
return parameters |
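The parsing step is plain `ast.literal_eval` over non-comment lines; a self-contained sketch (hypothetical function name) of the same logic operating on in-memory lines:

```python
import ast

def parse_parameter_lines(lines):
    # Parse each non-comment line as a Python dict literal, as
    # read_trigger_parameters does with each line of the file.
    parameters = []
    for line in lines:
        if line.startswith('#'):
            continue
        parameters.append(ast.literal_eval(line))
    return parameters
```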
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _channel_loop(tr, parameters, max_trigger_length=60, despike=False, debug=0):
""" Internal loop for parallel processing. :type tr: obspy.core.trace :param tr: Trace to look for triggers in. :type parameters: list :param parameters: List of TriggerParameter class for trace. :type max_trigger_length: float :type despike: bool :type debug: int :return: trigger :rtype: list """ |
for par in parameters:
if par['station'] == tr.stats.station and \
par['channel'] == tr.stats.channel:
parameter = par
break
else:
msg = 'No parameters set for station ' + str(tr.stats.station)
warnings.warn(msg)
return []
triggers = []
if debug > 0:
print(tr)
tr.detrend('simple')
if despike:
median_filter(tr)
if parameter['lowcut'] and parameter['highcut']:
tr.filter('bandpass', freqmin=parameter['lowcut'],
freqmax=parameter['highcut'])
elif parameter['lowcut']:
tr.filter('highpass', freq=parameter['lowcut'])
elif parameter['highcut']:
tr.filter('lowpass', freq=parameter['highcut'])
# find triggers for each channel using recursive_sta_lta
df = tr.stats.sampling_rate
cft = recursive_sta_lta(tr.data, int(parameter['sta_len'] * df),
int(parameter['lta_len'] * df))
trig_args = {}
if max_trigger_length:
    trig_args['max_len_delete'] = True
    trig_args['max_len'] = int(max_trigger_length * df + 0.5)
if debug > 3:
plot_trigger(tr, cft, parameter['thr_on'], parameter['thr_off'])
tmp_trigs = trigger_onset(cft, float(parameter['thr_on']),
float(parameter['thr_off']),
**trig_args)
for on, off in tmp_trigs:
cft_peak = tr.data[on:off].max()
cft_std = tr.data[on:off].std()
on = tr.stats.starttime + \
float(on) / tr.stats.sampling_rate
off = tr.stats.starttime + \
float(off) / tr.stats.sampling_rate
triggers.append((on.timestamp, off.timestamp,
tr.id, cft_peak,
cft_std))
return triggers |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write(self, filename, append=True):
"""Write the parameters to a file as a human-readable series of dicts. :type filename: str :param filename: File to write to :type append: bool :param append: Append to already existing file or over-write. """ |
header = ' '.join(['# User:', getpass.getuser(),
'\n# Creation date:', str(UTCDateTime()),
'\n# EQcorrscan version:',
str(eqcorrscan.__version__),
'\n\n\n'])
if append:
f = open(filename, 'a')
else:
f = open(filename, 'w')
f.write(header)
parameters = self.__dict__
f.write(str(parameters))
f.write('\n')
f.close()
return |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_lib_name(lib):
""" Helper function to get an architecture and Python version specific library filename. """ |
# append any extension suffix defined by Python for current platform
ext_suffix = sysconfig.get_config_var("EXT_SUFFIX")
# in principle "EXT_SUFFIX" is what we want.
# "SO" seems to be deprecated on newer python
# but: older python seems to have empty "EXT_SUFFIX", so we fall back
if not ext_suffix:
try:
ext_suffix = sysconfig.get_config_var("SO")
except Exception as e:
msg = ("Empty 'EXT_SUFFIX' encountered while building CDLL "
"filename and fallback to 'SO' variable failed "
"(%s)." % str(e))
warnings.warn(msg)
pass
if ext_suffix:
libname = lib + ext_suffix
return libname |
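The same fallback chain can be checked directly against the running interpreter; a sketch (hypothetical function name, without the warning handling):

```python
import sysconfig

def get_lib_name(lib):
    # Same fallback chain as _get_lib_name: prefer EXT_SUFFIX, fall
    # back to the deprecated SO variable, give up if both are empty.
    ext_suffix = sysconfig.get_config_var("EXT_SUFFIX")
    if not ext_suffix:
        ext_suffix = sysconfig.get_config_var("SO")
    return lib + ext_suffix if ext_suffix else None
```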
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _load_cdll(name):
""" Helper function to load a shared library built during installation with ctypes. :type name: str :param name: Name of the library to load (e.g. 'mseed'). :rtype: :class:`ctypes.CDLL` """ |
# our custom defined part of the extension file name
libname = _get_lib_name(name)
libdir = os.path.join(os.path.dirname(__file__), 'lib')
libpath = os.path.join(libdir, libname)
static_fftw = os.path.join(libdir, 'libfftw3-3.dll')
static_fftwf = os.path.join(libdir, 'libfftw3f-3.dll')
try:
fftw_lib = ctypes.CDLL(str(static_fftw)) # noqa: F841
fftwf_lib = ctypes.CDLL(str(static_fftwf)) # noqa: F841
except Exception:
    # The bundled FFTW DLLs may not exist on this platform; that is fine.
    pass
try:
cdll = ctypes.CDLL(str(libpath))
except Exception as e:
msg = 'Could not load shared library "%s".\n\n %s' % (libname, str(e))
raise ImportError(msg)
return cdll |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cross_net(stream, env=False, debug=0, master=False):
""" Generate picks using a simple envelope cross-correlation. Picks are made for each channel based on optimal moveout defined by maximum cross-correlation with master trace. Master trace will be the first trace in the stream if not set. Requires good inter-station coherence. :type stream: obspy.core.stream.Stream :param stream: Stream to pick :type env: bool :param env: To compute cross-correlations on the envelope or not. :type debug: int :param debug: Debug level from 0-5 :type master: obspy.core.trace.Trace :param master: Trace to use as master, if False, will use the first trace in stream. :returns: :class:`obspy.core.event.event.Event` .. rubric:: Example EQcorrscan .. warning:: This routine is not designed for accurate picking, rather it can be used for a first-pass at picks to obtain simple locations. Based on the waveform-envelope cross-correlation method. """ |
event = Event()
event.origins.append(Origin())
event.creation_info = CreationInfo(author='EQcorrscan',
creation_time=UTCDateTime())
event.comments.append(Comment(text='cross_net'))
samp_rate = stream[0].stats.sampling_rate
if not env:
if debug > 2:
print('Using the raw data')
st = stream.copy()
st.resample(samp_rate)
else:
st = stream.copy()
if debug > 2:
print('Computing envelope')
for tr in st:
tr.resample(samp_rate)
tr.data = envelope(tr.data)
if not master:
    master = st[0]
master.data = np.nan_to_num(master.data)
for i, tr in enumerate(st):
tr.data = np.nan_to_num(tr.data)
if debug > 2:
msg = ' '.join(['Comparing', tr.stats.station, tr.stats.channel,
'with the master'])
print(msg)
shift_len = int(0.3 * len(tr))
if debug > 2:
print('Shift length is set to ' + str(shift_len) + ' samples')
index, cc = xcorr(master, tr, shift_len)
wav_id = WaveformStreamID(station_code=tr.stats.station,
channel_code=tr.stats.channel,
network_code=tr.stats.network)
event.picks.append(Pick(time=tr.stats.starttime +
(index / tr.stats.sampling_rate),
waveform_id=wav_id,
phase_hint='S',
onset='emergent'))
if debug > 2:
print(event.picks[i])
event.origins[0].time = min([pick.time for pick in event.picks]) - 1
# event.origins[0].latitude = float('nan')
# event.origins[0].longitude = float('nan')
# Set arbitrary origin time
del st
return event |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cross_chan_coherence(st1, st2, allow_shift=False, shift_len=0.2, i=0, xcorr_func='time_domain'):
""" Calculate cross-channel coherency. Determine the cross-channel coherency between two streams of multichannel seismic data. :type st1: obspy.core.stream.Stream :param st1: Stream one :type st2: obspy.core.stream.Stream :param st2: Stream two :type allow_shift: bool :param allow_shift: Whether to allow the optimum alignment to be found for coherence, defaults to `False` for strict coherence :type shift_len: float :param shift_len: Seconds to shift, only used if `allow_shift=True` :type i: int :param i: index used for parallel async processing, returned unaltered :type xcorr_func: str, callable :param xcorr_func: The method for performing correlations. Accepts either a string or callable. See :func:`eqcorrscan.utils.correlate.register_array_xcorr` for more details :returns: cross channel coherence, float - normalized by number of channels, and i, where i is int, as input. :rtype: tuple """ |
cccoh = 0.0
kchan = 0
array_xcorr = get_array_xcorr(xcorr_func)
for tr in st1:
tr2 = st2.select(station=tr.stats.station,
channel=tr.stats.channel)
if len(tr2) > 0 and tr.stats.sampling_rate != \
tr2[0].stats.sampling_rate:
warnings.warn('Sampling rates do not match, not using: %s.%s'
% (tr.stats.station, tr.stats.channel))
if len(tr2) > 0 and allow_shift:
index, corval = xcorr(tr, tr2[0],
int(shift_len * tr.stats.sampling_rate))
cccoh += corval
kchan += 1
elif len(tr2) > 0:
min_len = min(len(tr.data), len(tr2[0].data))
cccoh += array_xcorr(
np.array([tr.data[0:min_len]]), tr2[0].data[0:min_len],
[0])[0][0][0]
kchan += 1
if kchan:
cccoh /= kchan
return np.round(cccoh, 6), i
else:
warnings.warn('No matching channels')
return 0, i |
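The no-shift branch reduces to a zero-lag normalised cross-correlation; a numpy sketch (hypothetical helper, not the library's `array_xcorr`) showing the expected ±1 bounds:

```python
import numpy as np

def norm_xcorr_zero_lag(a, b):
    # Zero-lag normalised cross-correlation of two traces, trimmed to
    # the shorter length as the elif branch above does; matching
    # signals score 1.0, polarity-reversed signals -1.0.
    min_len = min(len(a), len(b))
    a = np.array(a[:min_len], dtype=float)  # copies, so inputs stay intact
    b = np.array(b[:min_len], dtype=float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)
```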
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def distance_matrix(stream_list, allow_shift=False, shift_len=0, cores=1):
""" Compute distance matrix for waveforms based on cross-correlations. Function to compute the distance matrix for all templates - will give distance as 1-abs(cccoh), e.g. a well correlated pair of templates will have small distances, and an equally well correlated reverse image will have the same distance as a positively correlated image - this is an issue. :type stream_list: list :param stream_list: List of the :class:`obspy.core.stream.Stream` to compute the distance matrix for :type allow_shift: bool :param allow_shift: To allow templates to shift or not? :type shift_len: float :param shift_len: How many seconds for templates to shift :type cores: int :param cores: Number of cores to parallel process using, defaults to 1. :returns: distance matrix :rtype: :class:`numpy.ndarray` .. warning:: Because distance is given as :math:`1-abs(coherence)`, negatively correlated and positively correlated objects are given the same distance. """ |
# Initialize square matrix
dist_mat = np.zeros((len(stream_list), len(stream_list)))
for i, master in enumerate(stream_list):
# Start a parallel processing pool
pool = Pool(processes=cores)
# Parallel processing
results = [pool.apply_async(cross_chan_coherence,
args=(master, stream_list[j], allow_shift,
shift_len, j))
for j in range(len(stream_list))]
pool.close()
# Extract the results when they are done
dist_list = [p.get() for p in results]
# Close and join all the processes back to the master process
pool.join()
# Sort the results by the input j
dist_list.sort(key=lambda tup: tup[1])
# Sort the list into the dist_mat structure
for j in range(i, len(stream_list)):
if i == j:
dist_mat[i, j] = 0.0
else:
dist_mat[i, j] = 1 - dist_list[j][0]
# Mirror the upper triangle into the lower to make the matrix symmetric
for i in range(1, len(stream_list)):
for j in range(i):
dist_mat[i, j] = dist_mat.T[i, j]
return dist_mat |
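The `1 - abs(coherence)` mapping and the symmetrisation at the end can be sketched on a plain coherence matrix (hypothetical helper):

```python
import numpy as np

def distance_matrix_from_coherence(coh):
    # dist = 1 - |coherence|, zero diagonal, then mirror the upper
    # triangle into the lower so the matrix is symmetric (as the loop
    # at the end of distance_matrix does).
    dist = 1.0 - np.abs(np.asarray(coh, dtype=float))
    np.fill_diagonal(dist, 0.0)
    upper = np.triu(dist)
    return upper + upper.T
```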
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cluster(template_list, show=True, corr_thresh=0.3, allow_shift=False, shift_len=0, save_corrmat=False, cores='all', debug=1):
""" Cluster template waveforms based on average correlations. Function to take a set of templates and cluster them, will return groups as lists of streams. Clustering is done by computing the cross-channel correlation sum of each stream in stream_list with every other stream in the list. :mod:`scipy.cluster.hierarchy` functions are then used to compute the complete distance matrix, where distance is 1 minus the normalised cross-correlation sum such that larger distances are less similar events. Groups are then created by clustering the distance matrix at distances less than 1 - corr_thresh. Will compute the distance matrix in parallel, using all available cores :type template_list: list :param template_list: List of tuples of the template (:class:`obspy.core.stream.Stream`) and the template id to compute clustering for :type show: bool :param show: plot linkage on screen if True, defaults to True :type corr_thresh: float :param corr_thresh: Cross-channel correlation threshold for grouping :type allow_shift: bool :param allow_shift: Whether to allow the templates to shift when correlating :type shift_len: float :param shift_len: How many seconds to allow the templates to shift :type save_corrmat: bool :param save_corrmat: If True will save the distance matrix to dist_mat.npy in the local directory. :type cores: int :param cores: number of cores to use when computing the distance matrix, defaults to 'all' which will work out how many cpus are available and hog them. :type debug: int :param debug: Level of debugging from 1-5, higher is more output, currently only level 1 implemented. :returns: List of groups. Each group is a list of :class:`obspy.core.stream.Stream` making up that group. """ |
if cores == 'all':
num_cores = cpu_count()
else:
num_cores = cores
# Extract only the Streams from stream_list
stream_list = [x[0] for x in template_list]
# Compute the distance matrix
if debug >= 1:
print('Computing the distance matrix using %i cores' % num_cores)
dist_mat = distance_matrix(stream_list, allow_shift, shift_len,
cores=num_cores)
if save_corrmat:
np.save('dist_mat.npy', dist_mat)
if debug >= 1:
print('Saved the distance matrix as dist_mat.npy')
dist_vec = squareform(dist_mat)
if debug >= 1:
print('Computing linkage')
Z = linkage(dist_vec)
if show:
if debug >= 1:
print('Plotting the dendrogram')
dendrogram(Z, color_threshold=1 - corr_thresh,
distance_sort='ascending')
plt.show()
# Get the indices of the groups
if debug >= 1:
print('Clustering')
indices = fcluster(Z, t=1 - corr_thresh, criterion='distance')
# Indices start at 1...
group_ids = list(set(indices)) # Unique list of group ids
if debug >= 1:
msg = ' '.join(['Found', str(len(group_ids)), 'groups'])
print(msg)
# Convert to tuple of (group id, stream id)
indices = [(indices[i], i) for i in range(len(indices))]
# Sort by group id
indices.sort(key=lambda tup: tup[0])
groups = []
if debug >= 1:
print('Extracting and grouping')
for group_id in group_ids:
group = []
for ind in indices:
if ind[0] == group_id:
group.append(template_list[ind[1]])
elif ind[0] > group_id:
# Because we have sorted by group id, when the index is greater
# than the group_id we can break the inner loop.
# Patch applied by CJC 05/11/2015
groups.append(group)
break
# Catch the final group
groups.append(group)
return groups |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def SVD(stream_list, full=False):
""" Deprecated. Use svd. """ |
warnings.warn('Deprecated, use svd instead.')
return svd(stream_list=stream_list, full=full) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def svd(stream_list, full=False):
""" Compute the SVD of a number of templates. Returns the singular vectors and singular values of the templates. :type stream_list: List of :class: obspy.Stream :param stream_list: List of the templates to be analysed :type full: bool :param full: Whether to compute the full input vector matrix or not. :return: SValues(list) for each channel, SVectors(list of ndarray), \ UVectors(list of ndarray) for each channel, \ stachans, List of String (station.channel) .. note:: We recommend that you align the data before computing the \ SVD, e.g., the P-arrival on all templates for the same channel \ should appear at the same time in the trace. See the \ stacking.align_traces function for a way to do this. .. note:: Uses the numpy.linalg.svd function, their U, s and V are mapped \ to UVectors, SValues and SVectors respectively. Their V (and ours) \ corresponds to V.H. """ |
# Convert templates into ndarrays for each channel
# First find all unique channels:
stachans = list(set([(tr.stats.station, tr.stats.channel)
for st in stream_list for tr in st]))
stachans.sort()
# Initialize a list for the output matrices, one matrix per-channel
svalues = []
svectors = []
uvectors = []
for stachan in stachans:
lengths = []
for st in stream_list:
tr = st.select(station=stachan[0],
channel=stachan[1])
if len(tr) > 0:
tr = tr[0]
else:
warnings.warn('Stream does not contain %s'
% '.'.join(list(stachan)))
continue
lengths.append(len(tr.data))
min_length = min(lengths)
for stream in stream_list:
chan = stream.select(station=stachan[0],
channel=stachan[1])
if chan:
if len(chan[0].data) > min_length:
if abs(len(chan[0].data) - min_length) > 0.1 * \
chan[0].stats.sampling_rate:
raise IndexError('More than 0.1 s length '
'difference, align and fix')
warnings.warn('Channels are not equal length, trimming')
chan[0].data = chan[0].data[0:min_length]
if 'chan_mat' not in locals():
chan_mat = chan[0].data
else:
chan_mat = np.vstack((chan_mat, chan[0].data))
if not len(chan_mat.shape) > 1:
warnings.warn('Matrix of traces is less than 2D for %s'
% '.'.join(list(stachan)))
continue
# Be sure to transpose chan_mat as waveforms must define columns
chan_mat = np.asarray(chan_mat)
u, s, v = np.linalg.svd(chan_mat.T, full_matrices=full)
svalues.append(s)
svectors.append(v)
uvectors.append(u)
del (chan_mat)
return uvectors, svalues, svectors, stachans |
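The per-channel decomposition is a direct `numpy.linalg.svd` on the transposed channel matrix; with a rank-one pair of templates the second singular value collapses to numerical zero:

```python
import numpy as np

# Two aligned single-channel "templates"; the second is a scaled copy
# of the first, so the stacked channel matrix has rank one.
t = np.linspace(0, 4 * np.pi, 64)
chan_mat = np.vstack([np.sin(t), 0.5 * np.sin(t)])

# Transpose so waveforms define columns, exactly as svd() does.
u, s, v = np.linalg.svd(chan_mat.T, full_matrices=False)
```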
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def empirical_SVD(stream_list, linear=True):
""" Deprecated. Use empirical_svd. """ |
warnings.warn('Deprecated, use empirical_svd instead.')
return empirical_svd(stream_list=stream_list, linear=linear) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def empirical_svd(stream_list, linear=True):
""" Empirical subspace detector generation function. Takes a list of templates and computes the stack as the first order subspace detector, and the differential of this as the second order subspace detector following the empirical subspace method of `Barrett & Beroza, 2014 - SRL <http://srl.geoscienceworld.org/content/85/3/594.extract>`_. :type stream_list: list :param stream_list: list of streams to compute the subspace detectors from, where streams are :class:`obspy.core.stream.Stream` objects. :type linear: bool :param linear: Set to true by default to compute the linear stack as the \ first subspace vector, False will use the phase-weighted stack as the \ first subspace vector. :returns: list of two :class:`obspy.core.stream.Stream` s """ |
# Run a check to ensure all traces are the same length
stachans = list(set([(tr.stats.station, tr.stats.channel)
for st in stream_list for tr in st]))
for stachan in stachans:
lengths = []
for st in stream_list:
lengths.append(len(st.select(station=stachan[0],
channel=stachan[1])[0]))
min_length = min(lengths)
for st in stream_list:
tr = st.select(station=stachan[0],
channel=stachan[1])[0]
if len(tr.data) > min_length:
sr = tr.stats.sampling_rate
if abs(len(tr.data) - min_length) > (0.1 * sr):
msg = 'More than 0.1 s length difference, align and fix'
raise IndexError(msg)
msg = ' is not the same length as others, trimming the end'
warnings.warn(str(tr) + msg)
tr.data = tr.data[0:min_length]
if linear:
first_subspace = stacking.linstack(stream_list)
else:
first_subspace = stacking.PWS_stack(streams=stream_list)
second_subspace = first_subspace.copy()
for i in range(len(second_subspace)):
second_subspace[i].data = np.diff(second_subspace[i].data)
delta = second_subspace[i].stats.delta
second_subspace[i].stats.starttime += 0.5 * delta
return [first_subspace, second_subspace] |
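The construction above boils down to two steps: a sample-wise stack of the aligned traces, then the first difference of that stack. A minimal pure-Python sketch, with plain lists standing in for obspy traces and hypothetical helper names:

```python
def linear_stack(traces):
    """Sample-wise mean of equal-length traces (stand-in for linstack)."""
    n = len(traces)
    return [sum(samples) / n for samples in zip(*traces)]


def empirical_basis(traces):
    """Return (stack, diff(stack)): the two empirical subspace vectors."""
    first = linear_stack(traces)
    # Equivalent of np.diff: first-order forward difference, one sample short
    second = [b - a for a, b in zip(first[:-1], first[1:])]
    return first, second


first, second = empirical_basis([[0.0, 1.0, 3.0], [2.0, 1.0, 1.0]])
```

The real function additionally shifts the differenced vector's start time by half a sample to account for the differencing.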
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def SVD_2_stream(uvectors, stachans, k, sampling_rate):
""" Depreciated. Use svd_to_stream """ |
warnings.warn('Depreciated, use svd_to_stream instead.')
return svd_to_stream(uvectors=uvectors, stachans=stachans, k=k,
sampling_rate=sampling_rate) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def svd_to_stream(uvectors, stachans, k, sampling_rate):
""" Convert the singular vectors output by SVD to streams. One stream will be generated for each singular vector level, for all channels. Useful for plotting, and aiding seismologists thinking of waveforms! :type svectors: list :param svectors: List of :class:`numpy.ndarray` Singular vectors :type stachans: list :param stachans: List of station.channel Strings :type k: int :param k: Number of streams to return = number of SV's to include :type sampling_rate: float :param sampling_rate: Sampling rate in Hz :returns: svstreams, List of :class:`obspy.core.stream.Stream`, with svStreams[0] being composed of the highest rank singular vectors. """ |
svstreams = []
for i in range(k):
svstream = []
for j, stachan in enumerate(stachans):
if len(uvectors[j]) <= k:
                warnings.warn('Too few traces at %s for a %02d dimensional '
                              'subspace. Detector streams will not include '
                              'this channel.'
                              % ('.'.join([stachan[0], stachan[1]]), k))
else:
svstream.append(Trace(uvectors[j][i],
header={'station': stachan[0],
'channel': stachan[1],
'sampling_rate': sampling_rate}))
svstreams.append(Stream(svstream))
return svstreams |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def corr_cluster(trace_list, thresh=0.9):
""" Group traces based on correlations above threshold with the stack. Will run twice, once with a lower threshold to remove large outliers that would negatively affect the stack, then again with your threshold. :type trace_list: list :param trace_list: List of :class:`obspy.core.stream.Trace` to compute similarity between :type thresh: float :param thresh: Correlation threshold between -1-1 :returns: :class:`numpy.ndarray` of bool of whether that trace correlates well enough (above your given threshold) with the stack. .. note:: We recommend that you align the data before computing the clustering, e.g., the P-arrival on all templates for the same channel should appear at the same time in the trace. See the :func:`eqcorrscan.utils.stacking.align_traces` function for a way to do this. """ |
stack = stacking.linstack([Stream(tr) for tr in trace_list])[0]
output = np.array([False] * len(trace_list))
group1 = []
array_xcorr = get_array_xcorr()
for i, tr in enumerate(trace_list):
if array_xcorr(
np.array([tr.data]), stack.data, [0])[0][0][0] > 0.6:
output[i] = True
group1.append(tr)
if not group1:
warnings.warn('Nothing made it past the first 0.6 threshold')
return output
stack = stacking.linstack([Stream(tr) for tr in group1])[0]
group2 = []
for i, tr in enumerate(trace_list):
if array_xcorr(
np.array([tr.data]), stack.data, [0])[0][0][0] > thresh:
group2.append(tr)
output[i] = True
else:
output[i] = False
return output |
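The two-pass logic can be sketched without obspy: a zero-lag Pearson correlation stands in for `array_xcorr`, and plain lists stand in for traces (all names here are hypothetical):

```python
import math


def zero_lag_corr(a, b):
    """Pearson correlation of two equal-length sequences at zero lag."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0


def corr_cluster_sketch(traces, thresh=0.9, first_pass=0.6):
    """Two passes: cull outliers at 0.6, re-stack, then apply thresh."""
    stack = [sum(col) / len(traces) for col in zip(*traces)]
    group1 = [tr for tr in traces if zero_lag_corr(tr, stack) > first_pass]
    if not group1:
        return [False] * len(traces)
    stack = [sum(col) / len(group1) for col in zip(*group1)]
    return [zero_lag_corr(tr, stack) > thresh for tr in traces]


result = corr_cluster_sketch([[0.0, 1.0, 2.0, 3.0],
                              [0.0, 1.0, 2.0, 3.0],
                              [3.0, 2.0, 1.0, 0.0]])
```

The first pass removes the reversed trace so it cannot drag the stack towards zero before the strict threshold is applied.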
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dist_mat_km(catalog):
""" Compute the distance matrix for all a catalog using epicentral separation. Will give physical distance in kilometers. :type catalog: obspy.core.event.Catalog :param catalog: Catalog for which to compute the distance matrix :returns: distance matrix :rtype: :class:`numpy.ndarray` """ |
# Initialize square matrix
dist_mat = np.array([np.array([0.0] * len(catalog))] *
len(catalog))
# Calculate distance vector for each event
for i, master in enumerate(catalog):
mast_list = []
if master.preferred_origin():
master_ori = master.preferred_origin()
else:
master_ori = master.origins[-1]
        master_tup = (master_ori.latitude,
                      master_ori.longitude,
                      master_ori.depth / 1000)  # depth in m converted to km
for slave in catalog:
if slave.preferred_origin():
slave_ori = slave.preferred_origin()
else:
slave_ori = slave.origins[-1]
            slave_tup = (slave_ori.latitude,
                         slave_ori.longitude,
                         slave_ori.depth / 1000)  # depth in m converted to km
mast_list.append(dist_calc(master_tup, slave_tup))
# Sort the list into the dist_mat structure
for j in range(i, len(catalog)):
dist_mat[i, j] = mast_list[j]
    # Mirror the upper triangle into the lower triangle: the matrix is
    # symmetric, so the distances do not need re-computing
    for i in range(1, len(catalog)):
        for j in range(i):
            dist_mat[i, j] = dist_mat[j, i]
return dist_mat |
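The symmetric-fill pattern used above, in isolation: compute the upper triangle only, then mirror it into the lower triangle. A toy 1-D distance stands in for `dist_calc` (names hypothetical):

```python
def dist_matrix(points, dist=lambda a, b: abs(a - b)):
    """Symmetric distance matrix with a zero diagonal."""
    n = len(points)
    mat = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):        # upper triangle only
            mat[i][j] = dist(points[i], points[j])
            mat[j][i] = mat[i][j]        # mirror, no re-computation
    return mat


mat = dist_matrix([0.0, 3.0, 5.0])
```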
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def space_cluster(catalog, d_thresh, show=True):
""" Cluster a catalog by distance only. Will compute the matrix of physical distances between events and utilize the :mod:`scipy.clustering.hierarchy` module to perform the clustering. :type catalog: obspy.core.event.Catalog :param catalog: Catalog of events to clustered :type d_thresh: float :param d_thresh: Maximum inter-event distance threshold :returns: list of :class:`obspy.core.event.Catalog` objects :rtype: list """ |
# Compute the distance matrix and linkage
dist_mat = dist_mat_km(catalog)
dist_vec = squareform(dist_mat)
Z = linkage(dist_vec, method='average')
# Cluster the linkage using the given threshold as the cutoff
indices = fcluster(Z, t=d_thresh, criterion='distance')
group_ids = list(set(indices))
indices = [(indices[i], i) for i in range(len(indices))]
if show:
# Plot the dendrogram...if it's not way too huge
dendrogram(Z, color_threshold=d_thresh,
distance_sort='ascending')
plt.show()
# Sort by group id
indices.sort(key=lambda tup: tup[0])
groups = []
for group_id in group_ids:
group = Catalog()
for ind in indices:
if ind[0] == group_id:
group.append(catalog[ind[1]])
            elif ind[0] > group_id:
                # Because we have sorted by group id, when the index is greater
                # than the group_id we can break the inner loop.
                # Patch applied by CJC 05/11/2015
                break
        groups.append(group)
return groups |
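A toy sketch of clustering by a distance threshold. The function above uses scipy's average linkage; this union-find version implements single linkage, which is simpler and gives the same partition for well-separated groups (names hypothetical):

```python
def space_cluster_sketch(points, d_thresh, dist=lambda a, b: abs(a - b)):
    """Group points whose pairwise distance is within d_thresh."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if dist(points[i], points[j]) <= d_thresh:
                parent[find(i)] = find(j)  # union the two groups
    groups = {}
    for i, p in enumerate(points):
        groups.setdefault(find(i), []).append(p)
    return list(groups.values())


groups = space_cluster_sketch([0.0, 1.0, 10.0, 11.0], d_thresh=2.0)
```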
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def space_time_cluster(catalog, t_thresh, d_thresh):
""" Cluster detections in space and time. Use to separate repeaters from other events. Clusters by distance first, then removes events in those groups that are at different times. :type catalog: obspy.core.event.Catalog :param catalog: Catalog of events to clustered :type t_thresh: float :param t_thresh: Maximum inter-event time threshold in seconds :type d_thresh: float :param d_thresh: Maximum inter-event distance in km :returns: list of :class:`obspy.core.event.Catalog` objects :rtype: list """ |
initial_spatial_groups = space_cluster(catalog=catalog, d_thresh=d_thresh,
show=False)
# Need initial_spatial_groups to be lists at the moment
initial_spatial_lists = []
for group in initial_spatial_groups:
initial_spatial_lists.append(list(group))
# Check within these groups and throw them out if they are not close in
# time.
groups = []
    for group in initial_spatial_lists:
        # Iterate over copies so that events can be removed from the group
        # without upsetting the iteration
        for master in list(group):
            for event in list(group):
                if abs(event.preferred_origin().time -
                       master.preferred_origin().time) > t_thresh:
                    # If greater then just put the event in on its own
                    groups.append([event])
                    group.remove(event)
        groups.append(group)
return [Catalog(group) for group in groups] |
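The time check applied within each spatial group can be sketched on bare numbers: events further than `t_thresh` from every other member are split out on their own. This is a simplification of the master/event comparison above (names hypothetical):

```python
def split_by_time(times, t_thresh):
    """Keep events within t_thresh of any other; split the rest out."""
    core, singles = [], []
    for t in times:
        if any(abs(t - other) <= t_thresh for other in times if other != t):
            core.append(t)
        else:
            singles.append([t])
    return ([core] if core else []) + singles


groups = split_by_time([0.0, 10.0, 500.0], t_thresh=60.0)
```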
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def re_thresh_csv(path, old_thresh, new_thresh, chan_thresh):
""" Remove detections by changing the threshold. Can only be done to remove detection by increasing threshold, threshold lowering will have no effect. :type path: str :param path: Path to the .csv detection file :type old_thresh: float :param old_thresh: Old threshold MAD multiplier :type new_thresh: float :param new_thresh: New threshold MAD multiplier :type chan_thresh: int :param chan_thresh: Minimum number of channels for a detection :returns: List of detections :rtype: list .. rubric:: Example Read in 22 detections Left with 17 detections .. Note:: This is a legacy function, and will read detections from all versions. .. Warning:: Only works if thresholding was done by MAD. """ |
from eqcorrscan.core.match_filter import read_detections
warnings.warn('Legacy function, please use '
'eqcorrscan.core.match_filter.Party.rethreshold.')
old_detections = read_detections(path)
    # Be nice, ensure that the thresholds are floats
    old_thresh = float(old_thresh)
    new_thresh = float(new_thresh)
detections = []
detections_in = 0
detections_out = 0
    for detection in old_detections:
        detections_in += 1
        # Scale the old MAD threshold to the new multiplier
        required_thresh = (new_thresh / old_thresh) * detection.threshold
        con1 = detection.no_chans >= chan_thresh
        con2 = abs(detection.detect_val) >= required_thresh
        if all([con1, con2]):
detections_out += 1
detections.append(detection)
print('Read in %i detections' % detections_in)
print('Left with %i detections' % detections_out)
return detections |
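The surviving-detection rule, stripped to plain tuples of `(threshold, detect_val, no_chans)` as hypothetical stand-ins for the Detection attributes: a detection survives if it has enough channels and its detection value still exceeds the rescaled threshold.

```python
def rethreshold(detections, old_thresh, new_thresh, chan_thresh):
    """Keep detections that pass the rescaled MAD threshold."""
    kept = []
    for threshold, detect_val, no_chans in detections:
        # Scale each detection's own threshold to the new multiplier
        required = (new_thresh / old_thresh) * threshold
        if no_chans >= chan_thresh and abs(detect_val) >= required:
            kept.append((threshold, detect_val, no_chans))
    return kept


kept = rethreshold([(8.0, -10.0, 5), (8.0, 9.0, 5), (8.0, 12.0, 2)],
                   old_thresh=8.0, new_thresh=10.0, chan_thresh=3)
```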
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pool_boy(Pool, traces, **kwargs):
""" A context manager for handling the setup and cleanup of a pool object. :param Pool: any Class (not instance) that implements the multiprocessing Pool interface :param traces: The number of traces to process :type traces: int """ |
# All parallel processing happens on a per-trace basis, we shouldn't create
# more workers than there are traces
n_cores = kwargs.get('cores', cpu_count())
if n_cores is None:
n_cores = cpu_count()
if n_cores > traces:
n_cores = traces
pool = Pool(n_cores)
yield pool
pool.close()
pool.join() |
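The same setup/teardown written out with the stdlib. Note the `@contextmanager` decorator, which the `yield` above relies on (it sits on the `def` line and is easy to miss); wrapping the `yield` in `try/finally` also ensures the pool is closed even if the body raises:

```python
from contextlib import contextmanager
from multiprocessing import cpu_count
from multiprocessing.pool import ThreadPool


@contextmanager
def pool_boy_sketch(traces, **kwargs):
    """Yield a thread pool sized by traces, closing it on exit."""
    n_cores = min(kwargs.get('cores') or cpu_count(), traces)
    pool = ThreadPool(n_cores)
    try:
        yield pool
    finally:            # close even if the with-body raises
        pool.close()
        pool.join()


with pool_boy_sketch(traces=4) as pool:
    squares = pool.map(lambda x: x * x, [1, 2, 3, 4])
```

A `ThreadPool` is used here so the lambda works without pickling; the real code accepts any Pool-like class.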
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _general_multithread(func):
""" return the general multithreading function using func """ |
def multithread(templates, stream, *args, **kwargs):
with pool_boy(ThreadPool, len(stream), **kwargs) as pool:
return _pool_normxcorr(templates, stream, pool=pool, func=func)
return multithread |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_array_xcorr(name, func=None, is_default=False):
""" Decorator for registering correlation functions. Each function must have the same interface as numpy_normxcorr, which is *f(templates, stream, pads, *args, **kwargs)* any number of specific kwargs can be used. Register_normxcorr can be used as a decorator (with or without arguments) or as a callable. :param name: The name of the function for quick access, or the callable that will be wrapped when used as a decorator. :type name: str, callable :param func: The function to register :type func: callable, optional :param is_default: True if this function should be marked as default normxcorr :type is_default: bool :return: callable """ |
valid_methods = set(list(XCOR_ARRAY_METHODS) + list(XCORR_STREAM_METHODS))
cache = {}
def register(register_str):
"""
Register a function as an implementation.
:param register_str: The registration designation
:type register_str: str
"""
if register_str not in valid_methods:
msg = 'register_name must be in %s' % valid_methods
raise ValueError(msg)
def _register(func):
cache[register_str] = func
setattr(cache['func'], register_str, func)
return func
return _register
    def wrapper(func, func_name=None):
        # register the function in XCOR_FUNCS
        fname = func_name or (name.__name__ if callable(name) else str(name))
        XCOR_FUNCS[fname] = func
# attach some attrs, this is a bit of a hack to avoid pickle problems
func.register = register
cache['func'] = func
func.multithread = _general_multithread(func)
func.multiprocess = _general_multiprocess(func)
func.concurrent = _general_multithread(func)
func.stream_xcorr = _general_serial(func)
func.array_xcorr = func
func.registered = True
if is_default: # set function as default
XCOR_FUNCS['default'] = copy.deepcopy(func)
return func
# used as a decorator
if callable(name):
return wrapper(name)
# used as a normal function (called and passed a function)
if callable(func):
return wrapper(func, func_name=name)
# called, then used as a decorator
return wrapper |
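The three call styles supported above (bare decorator, named decorator, plain call) can be sketched with a minimal registry (names hypothetical):

```python
FUNCS = {}


def register(name=None, func=None):
    """Register a function under its own name or an explicit one."""
    def wrapper(f, fname=None):
        FUNCS[fname or f.__name__] = f
        return f
    if callable(name):           # @register used bare
        return wrapper(name)
    if callable(func):           # register('name', func) called directly
        return wrapper(func, fname=name)
    return lambda f: wrapper(f, fname=name)   # @register('name')


@register
def xcorr_a():
    return 'a'


@register('fast')
def xcorr_b():
    return 'b'
```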
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_registerd_func(name_or_func):
""" get a xcorr function from a str or callable. """ |
# get the function or register callable
if callable(name_or_func):
func = register_array_xcorr(name_or_func)
else:
func = XCOR_FUNCS[name_or_func or 'default']
assert callable(func), 'func is not callable'
# ensure func has the added methods
if not hasattr(func, 'registered'):
func = register_array_xcorr(func)
return func |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def numpy_normxcorr(templates, stream, pads, *args, **kwargs):
""" Compute the normalized cross-correlation using numpy and bottleneck. :param templates: 2D Array of templates :type templates: np.ndarray :param stream: 1D array of continuous data :type stream: np.ndarray :param pads: List of ints of pad lengths in the same order as templates :type pads: list :return: np.ndarray of cross-correlations :return: np.ndarray channels used """ |
    import bottleneck
    try:
        # SciPy >= 1.8 moved signaltools to a private module
        from scipy.signal._signaltools import _centered
    except ImportError:
        from scipy.signal.signaltools import _centered
# Generate a template mask
used_chans = ~np.isnan(templates).any(axis=1)
# Currently have to use float64 as bottleneck runs into issues with other
# types: https://github.com/kwgoodman/bottleneck/issues/164
stream = stream.astype(np.float64)
templates = templates.astype(np.float64)
template_length = templates.shape[1]
stream_length = len(stream)
fftshape = next_fast_len(template_length + stream_length - 1)
# Set up normalizers
stream_mean_array = bottleneck.move_mean(
stream, template_length)[template_length - 1:]
stream_std_array = bottleneck.move_std(
stream, template_length)[template_length - 1:]
# because stream_std_array is in denominator or res, nan all 0s
stream_std_array[stream_std_array == 0] = np.nan
# Normalize and flip the templates
norm = ((templates - templates.mean(axis=-1, keepdims=True)) / (
templates.std(axis=-1, keepdims=True) * template_length))
norm_sum = norm.sum(axis=-1, keepdims=True)
stream_fft = np.fft.rfft(stream, fftshape)
template_fft = np.fft.rfft(np.flip(norm, axis=-1), fftshape, axis=-1)
res = np.fft.irfft(template_fft * stream_fft,
fftshape)[:, 0:template_length + stream_length - 1]
res = ((_centered(res, stream_length - template_length + 1)) -
norm_sum * stream_mean_array) / stream_std_array
res[np.isnan(res)] = 0.0
# res[np.isinf(res)] = 0.0
    for i, pad in enumerate(pads):
        res[i] = np.append(res[i], np.zeros(pad))[pad:]
return res.astype(np.float32), used_chans |
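What the FFT machinery above computes, written as the O(n*m) definition: the Pearson correlation of the template against every sliding window of the stream. Pure Python, for clarity rather than speed:

```python
import math


def normxcorr_naive(template, stream):
    """Normalized cross-correlation by direct summation."""
    m = len(template)
    t_mean = sum(template) / m
    t_dev = [t - t_mean for t in template]
    t_norm = math.sqrt(sum(d * d for d in t_dev))
    out = []
    for k in range(len(stream) - m + 1):
        window = stream[k:k + m]
        w_mean = sum(window) / m
        w_dev = [w - w_mean for w in window]
        w_norm = math.sqrt(sum(d * d for d in w_dev))
        num = sum(a * b for a, b in zip(t_dev, w_dev))
        den = t_norm * w_norm
        out.append(num / den if den else 0.0)
    return out


cc = normxcorr_naive([0.0, 1.0, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0])
```

The FFT version replaces the inner sum with frequency-domain multiplication and the window statistics with bottleneck's moving mean and moving standard deviation.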
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def time_multi_normxcorr(templates, stream, pads, threaded=False, *args, **kwargs):
""" Compute cross-correlations in the time-domain using C routine. :param templates: 2D Array of templates :type templates: np.ndarray :param stream: 1D array of continuous data :type stream: np.ndarray :param pads: List of ints of pad lengths in the same order as templates :type pads: list :param threaded: Whether to use the threaded routine or not :type threaded: bool :return: np.ndarray of cross-correlations :return: np.ndarray channels used """ |
used_chans = ~np.isnan(templates).any(axis=1)
utilslib = _load_cdll('libutils')
argtypes = [
np.ctypeslib.ndpointer(dtype=np.float32, ndim=1,
flags=native_str('C_CONTIGUOUS')),
ctypes.c_int, ctypes.c_int,
np.ctypeslib.ndpointer(dtype=np.float32, ndim=1,
flags=native_str('C_CONTIGUOUS')),
ctypes.c_int,
np.ctypeslib.ndpointer(dtype=np.float32, ndim=1,
flags=native_str('C_CONTIGUOUS'))]
restype = ctypes.c_int
if threaded:
func = utilslib.multi_normxcorr_time_threaded
argtypes.append(ctypes.c_int)
else:
func = utilslib.multi_normxcorr_time
func.argtypes = argtypes
func.restype = restype
# Need to de-mean everything
templates_means = templates.mean(axis=1).astype(np.float32)[:, np.newaxis]
stream_mean = stream.mean().astype(np.float32)
templates = templates.astype(np.float32) - templates_means
stream = stream.astype(np.float32) - stream_mean
template_len = templates.shape[1]
n_templates = templates.shape[0]
image_len = stream.shape[0]
ccc = np.ascontiguousarray(
np.empty((image_len - template_len + 1) * n_templates), np.float32)
t_array = np.ascontiguousarray(templates.flatten(), np.float32)
time_args = [t_array, template_len, n_templates,
np.ascontiguousarray(stream, np.float32), image_len, ccc]
if threaded:
time_args.append(kwargs.get('cores', cpu_count()))
func(*time_args)
ccc[np.isnan(ccc)] = 0.0
ccc = ccc.reshape((n_templates, image_len - template_len + 1))
for i in range(len(pads)):
ccc[i] = np.append(ccc[i], np.zeros(pads[i]))[pads[i]:]
templates += templates_means
stream += stream_mean
return ccc, used_chans |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _time_threaded_normxcorr(templates, stream, *args, **kwargs):
""" Use the threaded time-domain routine for concurrency :type templates: list :param templates: A list of templates, where each one should be an obspy.Stream object containing multiple traces of seismic data and the relevant header information. :type stream: obspy.core.stream.Stream :param stream: A single Stream object to be correlated with the templates. :returns: New list of :class:`numpy.ndarray` objects. These will contain the correlation sums for each template for this day of data. :rtype: list :returns: list of ints as number of channels used for each cross-correlation. :rtype: list :returns: list of list of tuples of station, channel for all cross-correlations. :rtype: list """ |
no_chans = np.zeros(len(templates))
chans = [[] for _ in range(len(templates))]
array_dict_tuple = _get_array_dicts(templates, stream)
stream_dict, template_dict, pad_dict, seed_ids = array_dict_tuple
cccsums = np.zeros([len(templates),
len(stream[0]) - len(templates[0][0]) + 1])
for seed_id in seed_ids:
tr_cc, tr_chans = time_multi_normxcorr(
template_dict[seed_id], stream_dict[seed_id], pad_dict[seed_id],
True)
cccsums = np.sum([cccsums, tr_cc], axis=0)
        no_chans += tr_chans.astype(int)
for chan, state in zip(chans, tr_chans):
if state:
chan.append((seed_id.split('.')[1],
seed_id.split('.')[-1].split('_')[0]))
return cccsums, no_chans, chans |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _fftw_stream_xcorr(templates, stream, *args, **kwargs):
""" Apply fftw normxcorr routine concurrently. :type templates: list :param templates: A list of templates, where each one should be an obspy.Stream object containing multiple traces of seismic data and the relevant header information. :type stream: obspy.core.stream.Stream :param stream: A single Stream object to be correlated with the templates. :returns: New list of :class:`numpy.ndarray` objects. These will contain the correlation sums for each template for this day of data. :rtype: list :returns: list of ints as number of channels used for each cross-correlation. :rtype: list :returns: list of list of tuples of station, channel for all cross-correlations. :rtype: list """ |
# number of threads:
# default to using inner threads
# if `cores` or `cores_outer` passed in then use that
# else if OMP_NUM_THREADS set use that
# otherwise use all available
num_cores_inner = kwargs.get('cores')
num_cores_outer = kwargs.get('cores_outer')
if num_cores_inner is None and num_cores_outer is None:
num_cores_inner = int(os.getenv("OMP_NUM_THREADS", cpu_count()))
num_cores_outer = 1
elif num_cores_inner is not None and num_cores_outer is None:
num_cores_outer = 1
elif num_cores_outer is not None and num_cores_inner is None:
num_cores_inner = 1
chans = [[] for _i in range(len(templates))]
array_dict_tuple = _get_array_dicts(templates, stream)
stream_dict, template_dict, pad_dict, seed_ids = array_dict_tuple
    assert set(seed_ids), 'No common channels between templates and stream'
cccsums, tr_chans = fftw_multi_normxcorr(
template_array=template_dict, stream_array=stream_dict,
pad_array=pad_dict, seed_ids=seed_ids, cores_inner=num_cores_inner,
cores_outer=num_cores_outer)
    no_chans = np.sum(np.array(tr_chans).astype(int), axis=0)
for seed_id, tr_chan in zip(seed_ids, tr_chans):
for chan, state in zip(chans, tr_chan):
if state:
chan.append((seed_id.split('.')[1],
seed_id.split('.')[-1].split('_')[0]))
return cccsums, no_chans, chans |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_stream_xcorr(name_or_func=None, concurrency=None):
""" Return a function for performing normalized cross correlation on lists of streams. :param name_or_func: Either a name of a registered function or a callable that implements the standard array_normxcorr signature. :param concurrency: Optional concurrency strategy, options are below. :return: A callable with the interface of stream_normxcorr :Concurrency options: - multithread - use a threadpool for concurrency; - multiprocess - use a process pool for concurrency; - concurrent - use a customized concurrency strategy for the function, if not defined threading will be used. """ |
func = _get_registerd_func(name_or_func)
concur = concurrency or 'stream_xcorr'
if not hasattr(func, concur):
msg = '%s does not support concurrency %s' % (func.__name__, concur)
raise ValueError(msg)
return getattr(func, concur) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_array_dicts(templates, stream, copy_streams=True):
""" prepare templates and stream, return dicts """ |
# Do some reshaping
# init empty structures for data storage
template_dict = {}
stream_dict = {}
pad_dict = {}
t_starts = []
stream.sort(['network', 'station', 'location', 'channel'])
for template in templates:
template.sort(['network', 'station', 'location', 'channel'])
t_starts.append(min([tr.stats.starttime for tr in template]))
# get seed ids, make sure these are collected on sorted streams
seed_ids = [tr.id + '_' + str(i) for i, tr in enumerate(templates[0])]
# pull common channels out of streams and templates and put in dicts
for i, seed_id in enumerate(seed_ids):
temps_with_seed = [template[i].data for template in templates]
t_ar = np.array(temps_with_seed).astype(np.float32)
template_dict.update({seed_id: t_ar})
stream_dict.update(
{seed_id: stream.select(
id=seed_id.split('_')[0])[0].data.astype(np.float32)})
pad_list = [
int(round(template[i].stats.sampling_rate *
(template[i].stats.starttime - t_starts[j])))
for j, template in zip(range(len(templates)), templates)]
pad_dict.update({seed_id: pad_list})
return stream_dict, template_dict, pad_dict, seed_ids |
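How the pad list is derived: each channel's pad is its offset from the earliest start time in the template, converted to samples (toy numbers, hypothetical helper name):

```python
def pads_for_template(channel_starttimes, sampling_rate):
    """Offset of each channel from the earliest channel, in samples."""
    t0 = min(channel_starttimes)
    return [int(round(sampling_rate * (t - t0)))
            for t in channel_starttimes]


pads = pads_for_template([10.0, 10.5, 10.02], sampling_rate=100.0)
```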
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def median_filter(tr, multiplier=10, windowlength=0.5, interp_len=0.05, debug=0):
""" Filter out spikes in data above a multiple of MAD of the data. Currently only has the ability to replaces spikes with linear interpolation. In the future we would aim to fill the gap with something more appropriate. Works in-place on data. :type tr: obspy.core.trace.Trace :param tr: trace to despike :type multiplier: float :param multiplier: median absolute deviation multiplier to find spikes above. :type windowlength: float :param windowlength: Length of window to look for spikes in in seconds. :type interp_len: float :param interp_len: Length in seconds to interpolate around spikes. :type debug: int :param debug: Debug output level between 0 and 5, higher is more output. :returns: :class:`obspy.core.trace.Trace` .. warning:: Not particularly effective, and may remove earthquake signals, use with caution. """ |
num_cores = cpu_count()
if debug >= 1:
data_in = tr.copy()
# Note - might be worth finding spikes in filtered data
filt = tr.copy()
filt.detrend('linear')
try:
filt.filter('bandpass', freqmin=10.0,
freqmax=(tr.stats.sampling_rate / 2) - 1)
except Exception as e:
print("Could not filter due to error: {0}".format(e))
data = filt.data
del filt
# Loop through windows
_windowlength = int(windowlength * tr.stats.sampling_rate)
_interp_len = int(interp_len * tr.stats.sampling_rate)
peaks = []
with Timer() as t:
pool = Pool(processes=num_cores)
        results = [pool.apply_async(
            _median_window,
            args=(data[chunk * _windowlength: (chunk + 1) * _windowlength],
                  chunk * _windowlength, multiplier,
                  tr.stats.starttime + (chunk * windowlength),
                  tr.stats.sampling_rate, debug))
            for chunk in range(int(len(data) / _windowlength))]
pool.close()
for p in results:
peaks += p.get()
pool.join()
for peak in peaks:
tr.data = _interp_gap(tr.data, peak[1], _interp_len)
print("Despiking took: %s s" % t.secs)
if debug >= 1:
plt.plot(data_in.data, 'r', label='raw')
plt.plot(tr.data, 'k', label='despiked')
plt.legend()
plt.show()
return tr |
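The despiking criterion in isolation: flag samples whose absolute value exceeds `multiplier` times the window's median absolute value. Note the code above uses `np.median(np.abs(window))`, the median absolute amplitude, rather than a true MAD about the mean; this sketch does the same:

```python
def mad_spikes(window, multiplier=10):
    """Return indices of samples above multiplier * median(|window|)."""
    srt = sorted(abs(x) for x in window)
    n = len(srt)
    mad = srt[n // 2] if n % 2 else 0.5 * (srt[n // 2 - 1] + srt[n // 2])
    thresh = multiplier * mad
    return [i for i, x in enumerate(window) if abs(x) > thresh]


spikes = mad_spikes([1.0, -1.0, 1.0, 50.0, -1.0], multiplier=10)
```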
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _median_window(window, window_start, multiplier, starttime, sampling_rate, debug=0):
""" Internal function to aid parallel processing :type window: numpy.ndarry :param window: Data to look for peaks in. :type window_start: int :param window_start: Index of window start point in larger array, used \ for peak indexing. :type multiplier: float :param multiplier: Multiple of MAD to use as threshold :type starttime: obspy.core.utcdatetime.UTCDateTime :param starttime: Starttime of window, used in debug plotting. :type sampling_rate: float :param sampling_rate in Hz, used for debug plotting :type debug: int :param debug: debug level, if want plots, >= 4. :returns: peaks :rtype: list """ |
MAD = np.median(np.abs(window))
thresh = multiplier * MAD
if debug >= 2:
print('Threshold for window is: ' + str(thresh) +
'\nMedian is: ' + str(MAD) +
'\nMax is: ' + str(np.max(window)))
peaks = find_peaks2_short(arr=window,
thresh=thresh, trig_int=5, debug=0)
if debug >= 4 and peaks:
peaks_plot(window, starttime, sampling_rate,
save=False, peaks=peaks)
if peaks:
peaks = [(peak[0], peak[1] + window_start) for peak in peaks]
else:
peaks = []
return peaks |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _interp_gap(data, peak_loc, interp_len):
""" Internal function for filling gap with linear interpolation :type data: numpy.ndarray :param data: data to remove peak in :type peak_loc: int :param peak_loc: peak location position :type interp_len: int :param interp_len: window to interpolate :returns: Trace works in-place :rtype: :class:`obspy.core.trace.Trace` """ |
start_loc = peak_loc - int(0.5 * interp_len)
end_loc = peak_loc + int(0.5 * interp_len)
if start_loc < 0:
start_loc = 0
if end_loc > len(data) - 1:
end_loc = len(data) - 1
fill = np.linspace(data[start_loc], data[end_loc], end_loc - start_loc)
data[start_loc:end_loc] = fill
return data |
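The gap-fill in pure Python: replace `interp_len` samples centred on the peak with a straight line between the edge samples (endpoint handling simplified relative to the `np.linspace` version above):

```python
def interp_gap(data, peak_loc, interp_len):
    """Linearly interpolate over a spike, in-place on a list."""
    start = max(peak_loc - interp_len // 2, 0)
    end = min(peak_loc + interp_len // 2, len(data) - 1)
    step = (data[end] - data[start]) / (end - start)
    for k in range(start, end):
        data[k] = data[start] + step * (k - start)
    return data


filled = interp_gap([0.0, 1.0, 99.0, 3.0, 4.0], peak_loc=2, interp_len=2)
```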
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def template_remove(tr, template, cc_thresh, windowlength, interp_len, debug=0):
""" Looks for instances of template in the trace and removes the matches. :type tr: obspy.core.trace.Trace :param tr: Trace to remove spikes from. :type template: osbpy.core.trace.Trace :param template: Spike template to look for in data. :type cc_thresh: float :param cc_thresh: Cross-correlation threshold (-1 - 1). :type windowlength: float :param windowlength: Length of window to look for spikes in in seconds. :type interp_len: float :param interp_len: Window length to remove and fill in seconds. :type debug: int :param debug: Debug level. :returns: tr, works in place. :rtype: :class:`obspy.core.trace.Trace` """ |
data_in = tr.copy()
_interp_len = int(tr.stats.sampling_rate * interp_len)
if _interp_len < len(template.data):
        warnings.warn('interp_len is less than the length of the template, '
                      'will use the length of the template!')
        _interp_len = len(template.data)
if isinstance(template, Trace):
template = template.data
with Timer() as t:
cc = normxcorr2(image=tr.data.astype(np.float32),
template=template.astype(np.float32))
if debug > 3:
plt.plot(cc.flatten(), 'k', label='cross-correlation')
plt.legend()
plt.show()
        peaks = find_peaks2_short(
            arr=cc.flatten(), thresh=cc_thresh,
            trig_int=windowlength * tr.stats.sampling_rate)
for peak in peaks:
tr.data = _interp_gap(data=tr.data,
peak_loc=peak[1] + int(0.5 * _interp_len),
interp_len=_interp_len)
print("Despiking took: %s s" % t.secs)
if debug > 2:
plt.plot(data_in.data, 'r', label='raw')
plt.plot(tr.data, 'k', label='despiked')
plt.legend()
plt.show()
return tr |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_data(archive, arc_type, day, stachans, length=86400):
""" Function to read the appropriate data from an archive for a day. :type archive: str :param archive: The archive source - if arc_type is seishub, this should be a url, if the arc_type is FDSN then this can be either a url or a known obspy client. If arc_type is day_vols, then this is the path to the top directory. :type arc_type: str :param arc_type: The type of archive, can be: seishub, FDSN, day_volumes :type day: datetime.date :param day: Date to retrieve data for :type stachans: list :param stachans: List of tuples of Stations and channels to try and get, will not fail if stations are not available, but will warn. :type length: float :param length: Data length to extract in seconds, defaults to 1 day. :returns: Stream of data :rtype: obspy.core.stream.Stream .. note:: A note on arc_types, if arc_type is day_vols, then this will \ look for directories labelled in the IRIS DMC conventions of \ Data within these files directories should be stored as day-long, \ single-channel files. This is not implemented in the fasted way \ possible to allow for a more general situation. If you require more \ speed you will need to re-write this. .. rubric:: Example 1 Trace(s) in Stream: BP.JCNB.40.SP1 | 2012-03-26T00:00:00.000000Z - 2012-03-26T23:59:59.\ 950000Z | 20.0 Hz, 1728000 samples .. rubric:: Example, missing data 1 Trace(s) in Stream: BP.JCNB.40.SP1 | 2012-03-26T00:00:00.000000Z - 2012-03-26T23:59:59.\ 950000Z | 20.0 Hz, 1728000 samples .. rubric:: Example, local day-volumes 2 Trace(s) in Stream: AF.WHYM..SHZ | 2012-03-26T00:00:00.000000Z - 2012-03-26T23:59:59.000000Z \ | 1.0 Hz, 86400 samples AF.EORO..SHZ | 2012-03-26T00:00:00.000000Z - 2012-03-26T23:59:59.000000Z \ | 1.0 Hz, 86400 samples """ |
st = []
available_stations = _check_available_data(archive, arc_type, day)
for station in stachans:
if len(station[1]) == 2:
# Cope with two char channel naming in seisan
station_map = (station[0], station[1][0] + '*' + station[1][1])
available_stations_map = [(sta[0], sta[1][0] + '*' + sta[1][-1])
for sta in available_stations]
else:
station_map = station
available_stations_map = available_stations
if station_map not in available_stations_map:
msg = ' '.join([station[0], station_map[1], 'is not available for',
day.strftime('%Y/%m/%d')])
warnings.warn(msg)
continue
if arc_type.lower() == 'seishub':
client = SeishubClient(archive)
st += client.get_waveforms(
network='*', station=station_map[0], location='*',
channel=station_map[1], starttime=UTCDateTime(day),
endtime=UTCDateTime(day) + length)
elif arc_type.upper() == "FDSN":
client = FDSNClient(archive)
try:
st += client.get_waveforms(
network='*', station=station_map[0], location='*',
channel=station_map[1], starttime=UTCDateTime(day),
endtime=UTCDateTime(day) + length)
except FDSNException:
warnings.warn('No data on server despite station being ' +
'available...')
continue
elif arc_type.lower() == 'day_vols':
wavfiles = _get_station_file(os.path.join(
archive, day.strftime('Y%Y' + os.sep + 'R%j.01')),
station_map[0], station_map[1])
for wavfile in wavfiles:
            st += read(wavfile, starttime=UTCDateTime(day),
                       endtime=UTCDateTime(day) + length)
st = Stream(st)
return st |
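A minimal, self-contained sketch of the two-character Seisan channel mapping used above (`map_two_char_channel` is a hypothetical helper, not part of EQcorrscan): a request like `('WHYM', 'SZ')` and an archive entry like `('WHYM', 'SHZ')` both collapse to the same `'S*Z'` wildcard form before comparison.

```python
def map_two_char_channel(station, channel):
    # Two-char Seisan codes keep only the first and last characters,
    # with a wildcard in between, e.g. 'SZ' -> 'S*Z'
    if len(channel) == 2:
        return (station, channel[0] + '*' + channel[1])
    return (station, channel)

requested = map_two_char_channel('WHYM', 'SZ')
# Archive channels are collapsed to first + '*' + last for comparison
available = ('WHYM', 'SHZ')
available_mapped = (available[0], available[1][0] + '*' + available[1][-1])
print(requested == available_mapped)  # True
```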
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_station_file(path_name, station, channel, debug=0):
""" Helper function to find the correct file. :type path_name: str :param path_name: Path to files to check. :type station: str :type channel: str :returns: list of filenames, str """ |
wavfiles = glob.glob(path_name + os.sep + '*')
out_files = [_check_data(wavfile, station, channel, debug=debug)
for wavfile in wavfiles]
    # _check_data returns None for files with no matching trace - drop those
    out_files = [f for f in set(out_files) if f is not None]
return out_files |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_data(wavfile, station, channel, debug=0):
""" Inner loop for parallel checks. :type wavfile: str :param wavfile: Wavefile path name to look in. :type station: str :param station: Channel name to check for :type channel: str :param channel: Channel name to check for :type debug: int :param debug: Debug level, if > 1, will output what it it working on. """ |
if debug > 1:
print('Checking ' + wavfile)
st = read(wavfile, headonly=True)
for tr in st:
if tr.stats.station == station and tr.stats.channel == channel:
return wavfile |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_available_data(archive, arc_type, day):
""" Function to check what stations are available in the archive for a given \ day. :type archive: str :param archive: The archive source :type arc_type: str :param arc_type: The type of archive, can be: :type day: datetime.date :param day: Date to retrieve data for :returns: list of tuples of (station, channel) as available. .. note:: Currently the seishub options are untested. """ |
available_stations = []
if arc_type.lower() == 'day_vols':
wavefiles = glob.glob(os.path.join(archive, day.strftime('Y%Y'),
day.strftime('R%j.01'), '*'))
for wavefile in wavefiles:
header = read(wavefile, headonly=True)
available_stations.append((header[0].stats.station,
header[0].stats.channel))
elif arc_type.lower() == 'seishub':
client = SeishubClient(archive)
st = client.get_previews(starttime=UTCDateTime(day),
endtime=UTCDateTime(day) + 86400)
for tr in st:
available_stations.append((tr.stats.station, tr.stats.channel))
elif arc_type.lower() == 'fdsn':
client = FDSNClient(archive)
inventory = client.get_stations(starttime=UTCDateTime(day),
endtime=UTCDateTime(day) + 86400,
level='channel')
for network in inventory:
for station in network:
for channel in station:
available_stations.append((station.code,
channel.code))
return available_stations |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rt_time_log(logfile, startdate):
""" Open and read reftek raw log-file. Function to open and read a log-file as written by a RefTek RT130 datalogger. The information within is then scanned for timing errors above the threshold. :type logfile: str :param logfile: The logfile to look in :type startdate: datetime.date :param startdate: The start of the file as a date - files contain timing \ and the julian day, but not the year. :returns: List of tuple of (:class:`datetime.datetime`, float) as time \ stamps and phase error. """ |
    f = io.open(logfile, 'rb')
phase_err = []
lock = []
# Extract all the phase errors
for line_binary in f:
try:
line = line_binary.decode("utf8", "ignore")
except UnicodeDecodeError:
warnings.warn('Cannot decode line, skipping')
continue
        match = re.search("INTERNAL CLOCK PHASE ERROR", line)
        if match:
            d_start = match.start() - 13
            phase_err.append(
                (dt.datetime.strptime(
                    str(startdate.year) + ':' + line[d_start:d_start + 12],
                    '%Y:%j:%H:%M:%S'),
                 float(line.rstrip().split()[-2]) * 0.000001))
            continue
        match = re.search("EXTERNAL CLOCK POWER IS TURNED OFF", line)
        if match:
            d_start = match.start() - 13
            lock.append(
                (dt.datetime.strptime(
                    str(startdate.year) + ':' + line[d_start:d_start + 12],
                    '%Y:%j:%H:%M:%S'), 999))
if len(phase_err) == 0 and len(lock) > 0:
phase_err = lock
f.close()
return phase_err |
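The slicing above assumes a line layout of `jjj:hh:mm:ss ... INTERNAL CLOCK PHASE ERROR OF nnnn USECONDS`, with the 12-character timestamp ending 13 characters before the match. A stdlib-only sketch of that parse (`parse_phase_error` is a hypothetical helper, and the sample line is made up):

```python
import datetime as dt
import re

def parse_phase_error(line, year):
    # Assumed layout: 'jjj:hh:mm:ss ... INTERNAL CLOCK PHASE ERROR OF n USECONDS'
    match = re.search("INTERNAL CLOCK PHASE ERROR", line)
    if not match:
        return None
    d_start = match.start() - 13  # 12-char timestamp plus one space
    stamp = dt.datetime.strptime(
        str(year) + ':' + line[d_start:d_start + 12], '%Y:%j:%H:%M:%S')
    # Second-to-last token is the error in microseconds
    return stamp, float(line.rstrip().split()[-2]) * 0.000001

print(parse_phase_error(
    "086:12:30:05 INTERNAL CLOCK PHASE ERROR OF 50 USECONDS", 2015))
```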
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rt_location_log(logfile):
""" Extract location information from a RefTek raw log-file. Function to read a specific RefTek RT130 log-file and find all location information. :type logfile: str :param logfile: The logfile to look in :returns: list of tuples of lat, lon, elevation in decimal degrees and km. :rtype: list """ |
    f = open(logfile, 'rb')
locations = []
for line_binary in f:
try:
line = line_binary.decode("utf8", "ignore")
except UnicodeDecodeError:
warnings.warn('Cannot decode line, skipping')
print(line_binary)
continue
match = re.search("GPS: POSITION:", line)
if match:
# Line is of form:
# jjj:hh:mm:ss GPS: POSITION: xDD:MM:SS.SS xDDD:MM:SS.SS xMMMMMMM
loc = line[match.end() + 1:].rstrip().split(' ')
lat_sign = loc[0][0]
lat = loc[0][1:].split(':')
lat = int(lat[0]) + (int(lat[1]) / 60.0) + (float(lat[2]) / 3600.0)
if lat_sign == 'S':
lat *= -1
lon_sign = loc[1][0]
lon = loc[1][1:].split(':')
lon = int(lon[0]) + (int(lon[1]) / 60.0) + (float(lon[2]) / 3600.0)
if lon_sign == 'W':
lon *= -1
elev_sign = loc[2][0]
elev_unit = loc[2][-1]
if not elev_unit == 'M':
raise NotImplementedError('Elevation is not in M: unit=' +
elev_unit)
elev = int(loc[2][1:-1])
if elev_sign == '-':
elev *= -1
# Convert to km
elev /= 1000
locations.append((lat, lon, elev))
f.close()
return locations |
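The degrees:minutes:seconds fields above convert to signed decimal degrees as sketched below (`dms_to_decimal` is a hypothetical helper mirroring the logic, not part of EQcorrscan):

```python
def dms_to_decimal(field):
    # Assumed field form 'xDD:MM:SS.SS' where x is one of N/S/E/W
    sign = -1 if field[0] in 'SW' else 1
    degrees, minutes, seconds = field[1:].split(':')
    return sign * (int(degrees) + int(minutes) / 60.0 +
                   float(seconds) / 3600.0)

print(dms_to_decimal('S43:15:36.00'))   # about -43.26
print(dms_to_decimal('E170:21:36.00'))  # about 170.36
```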
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def flag_time_err(phase_err, time_thresh=0.02):
""" Find large time errors in list. Scan through a list of tuples of time stamps and phase errors and return a list of time stamps with timing errors above a threshold. .. note:: This becomes important for networks cross-correlations, where if timing information is uncertain at one site, the relative arrival time (lag) will be incorrect, which will degrade the cross-correlation sum. :type phase_err: list :param phase_err: List of Tuple of float, datetime.datetime :type time_thresh: float :param time_thresh: Threshold to declare a timing error for :returns: List of :class:`datetime.datetime` when timing is questionable. """ |
time_err = []
for stamp in phase_err:
if abs(stamp[1]) > time_thresh:
time_err.append(stamp[0])
return time_err |
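For example, with a 0.02 s threshold only the stamps whose absolute phase error exceeds the threshold are returned (the values here are made up):

```python
import datetime as dt

stamps = [
    (dt.datetime(2015, 3, 27, 0, 0, 0), 0.005),  # within tolerance
    (dt.datetime(2015, 3, 27, 6, 0, 0), -0.05),  # flagged
    (dt.datetime(2015, 3, 27, 12, 0, 0), 0.5),   # flagged
]
# Same test as flag_time_err: keep stamps with |error| above threshold
flagged = [stamp for stamp, err in stamps if abs(err) > 0.02]
print(flagged)
```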
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_all_logs(directory, time_thresh):
""" Check all the log-files in a directory tree for timing errors. :type directory: str :param directory: Directory to search within :type time_thresh: float :param time_thresh: Time threshold in seconds :returns: List of :class:`datetime.datetime` for which error timing is above threshold, e.g. times when data are questionable. :rtype: list """ |
log_files = glob.glob(directory + '/*/0/000000000_00000000')
print('I have ' + str(len(log_files)) + ' log files to scan')
total_phase_errs = []
for i, log_file in enumerate(log_files):
startdate = dt.datetime.strptime(log_file.split('/')[-4][0:7],
'%Y%j').date()
total_phase_errs += rt_time_log(log_file, startdate)
sys.stdout.write("\r" + str(float(i) / len(log_files) * 100) +
"% \r")
sys.stdout.flush()
time_errs = flag_time_err(total_phase_errs, time_thresh)
time_errs.sort()
return time_errs, total_phase_errs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _cc_round(num, dp):
""" Convenience function to take a float and round it to dp padding with zeros to return a string :type num: float :param num: Number to round :type dp: int :param dp: Number of decimal places to round to. :returns: str 0.25 """ |
num = round(num, dp)
num = '{0:.{1}f}'.format(num, dp)
return num |
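Usage sketch (the helper is redefined here so the example is self-contained):

```python
def cc_round(num, dp):
    # Round to dp decimal places and zero-pad to fixed width
    return '{0:.{1}f}'.format(round(num, dp), dp)

print(cc_round(0.25364, 2))  # '0.25'
print(cc_round(-43.5, 4))    # '-43.5000'
```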
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def readSTATION0(path, stations):
""" Read a Seisan STATION0.HYP file on the path given. Outputs the information, and writes to station.dat file. :type path: str :param path: Path to the STATION0.HYP file :type stations: list :param stations: Stations to look for :returns: List of tuples of station, lat, long, elevation :rtype: list [('WHFS', -43.261, 170.359, 60.0), ('WHAT2', -43.2793, \ 170.36038333333335, 95.0), ('BOB', 41.408166666666666, \ -174.87116666666665, 101.0)] """ |
stalist = []
f = open(path + '/STATION0.HYP', 'r')
for line in f:
if line[1:6].strip() in stations:
station = line[1:6].strip()
lat = line[6:14] # Format is either ddmm.mmS/N or ddmm(.)mmmS/N
if lat[-1] == 'S':
NS = -1
else:
NS = 1
if lat[4] == '.':
lat = (int(lat[0:2]) + float(lat[2:-1]) / 60) * NS
else:
lat = (int(lat[0:2]) + float(lat[2:4] + '.' + lat[4:-1]) /
60) * NS
lon = line[14:23]
if lon[-1] == 'W':
EW = -1
else:
EW = 1
if lon[5] == '.':
lon = (int(lon[0:3]) + float(lon[3:-1]) / 60) * EW
else:
lon = (int(lon[0:3]) + float(lon[3:5] + '.' + lon[5:-1]) /
60) * EW
elev = float(line[23:-1].strip())
# Note, negative altitude can be indicated in 1st column
if line[0] == '-':
elev *= -1
stalist.append((station, lat, lon, elev))
f.close()
f = open('station.dat', 'w')
for sta in stalist:
line = ''.join([sta[0].ljust(5), _cc_round(sta[1], 4).ljust(10),
_cc_round(sta[2], 4).ljust(10),
_cc_round(sta[3] / 1000, 4).rjust(7), '\n'])
f.write(line)
f.close()
return stalist |
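A self-contained sketch of just the latitude-field parse above, covering both the explicit (`ddmm.mmS/N`) and implicit (`ddmmmmmS/N`) decimal-point forms (`parse_lat` is a hypothetical helper, not part of EQcorrscan):

```python
def parse_lat(field):
    # Field is either 'ddmm.mmN' or 'ddmmmmmN' (implicit decimal point)
    sign = -1 if field[-1] == 'S' else 1
    if field[4] == '.':
        minutes = float(field[2:-1])
    else:
        minutes = float(field[2:4] + '.' + field[4:-1])
    return sign * (int(field[0:2]) + minutes / 60.0)

print(parse_lat('4315.60S'))  # about -43.26
print(parse_lat('4315600N'))  # about 43.26
```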
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sfiles_to_event(sfile_list):
""" Write an event.dat file from a list of Seisan events :type sfile_list: list :param sfile_list: List of s-files to sort and put into the database :returns: List of tuples of event ID (int) and Sfile name """ |
event_list = []
sort_list = [(readheader(sfile).origins[0].time, sfile)
for sfile in sfile_list]
sort_list.sort(key=lambda tup: tup[0])
sfile_list = [sfile[1] for sfile in sort_list]
catalog = Catalog()
for i, sfile in enumerate(sfile_list):
event_list.append((i, sfile))
catalog.append(readheader(sfile))
# Hand off to sister function
write_event(catalog)
return event_list |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_event(catalog):
""" Write obspy.core.event.Catalog to a hypoDD format event.dat file. :type catalog: obspy.core.event.Catalog :param catalog: A catalog of obspy events. """ |
f = open('event.dat', 'w')
for i, event in enumerate(catalog):
try:
evinfo = event.origins[0]
except IndexError:
raise IOError('No origin')
try:
Mag_1 = event.magnitudes[0].mag
except IndexError:
Mag_1 = 0.0
try:
t_RMS = event.origins[0].quality['standard_error']
except AttributeError:
print('No time residual in header')
t_RMS = 0.0
f.write(str(evinfo.time.year) + str(evinfo.time.month).zfill(2) +
str(evinfo.time.day).zfill(2) + ' ' +
str(evinfo.time.hour).rjust(2) +
str(evinfo.time.minute).zfill(2) +
str(evinfo.time.second).zfill(2) +
str(evinfo.time.microsecond)[0:2].zfill(2) + ' ' +
str(evinfo.latitude).ljust(8, str('0')) + ' ' +
str(evinfo.longitude).ljust(8, str('0')) + ' ' +
str(evinfo.depth / 1000).rjust(7).ljust(9, str('0')) + ' ' +
str(Mag_1) + ' 0.00 0.00 ' +
str(t_RMS).ljust(4, str('0')) +
str(i).rjust(11) + '\n')
f.close()
return |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_phase(ph_file):
""" Read hypoDD phase files into Obspy catalog class. :type ph_file: str :param ph_file: Phase file to read event info from. :returns: Catalog of events from file. :rtype: :class:`obspy.core.event.Catalog` True """ |
    ph_catalog = Catalog()
    f = open(ph_file, 'r')
    # Topline of each event is marked by # in position 0
    event_text = None
    for line in f:
        if line[0] == '#':
            # Flush the previous event, if there was one, then start anew
            if event_text is not None:
                ph_catalog.append(_phase_to_event(event_text))
            event_text = {'header': line.rstrip(),
                          'picks': []}
        else:
            event_text['picks'].append(line.rstrip())
    # Do not forget the final event, and cope with empty files
    if event_text is not None:
        ph_catalog.append(_phase_to_event(event_text))
    f.close()
    return ph_catalog |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _phase_to_event(event_text):
""" Function to convert the text for one event in hypoDD phase format to \ event object. :type event_text: dict :param event_text: dict of two elements, header and picks, header is a \ str, picks is a list of str. :returns: obspy.core.event.Event """ |
ph_event = Event()
# Extract info from header line
# YR, MO, DY, HR, MN, SC, LAT, LON, DEP, MAG, EH, EZ, RMS, ID
header = event_text['header'].split()
ph_event.origins.append(Origin())
ph_event.origins[0].time =\
UTCDateTime(year=int(header[1]), month=int(header[2]),
day=int(header[3]), hour=int(header[4]),
minute=int(header[5]), second=int(header[6].split('.')[0]),
microsecond=int(float(('0.' + header[6].split('.')[1])) *
1000000))
ph_event.origins[0].latitude = float(header[7])
ph_event.origins[0].longitude = float(header[8])
ph_event.origins[0].depth = float(header[9]) * 1000
ph_event.origins[0].quality = OriginQuality(
standard_error=float(header[13]))
ph_event.magnitudes.append(Magnitude())
ph_event.magnitudes[0].mag = float(header[10])
ph_event.magnitudes[0].magnitude_type = 'M'
# Extract arrival info from picks!
for i, pick_line in enumerate(event_text['picks']):
pick = pick_line.split()
_waveform_id = WaveformStreamID(station_code=pick[0])
pick_time = ph_event.origins[0].time + float(pick[1])
ph_event.picks.append(Pick(waveform_id=_waveform_id,
phase_hint=pick[3],
time=pick_time))
        ph_event.origins[0].arrivals.append(Arrival(
            phase=ph_event.picks[i].phase_hint,
            pick_id=ph_event.picks[i].resource_id))
ph_event.origins[0].arrivals[i].time_weight = float(pick[2])
return ph_event |
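The header split above follows the hypoDD convention `# YR MO DY HR MN SC LAT LON DEP MAG EH EZ RMS ID`, with fractional seconds rebuilt as microseconds and depth converted from km to m. A stdlib-only sketch with made-up values:

```python
from datetime import datetime

# Hypothetical hypoDD header line: '# YR MO DY HR MN SC LAT LON DEP MAG EH EZ RMS ID'
header = '# 2015 3 27 12 30 5.45 -43.26 170.36 5.0 2.5 0.0 0.0 0.01 1'.split()
sec, frac = header[6].split('.')
origin_time = datetime(
    year=int(header[1]), month=int(header[2]), day=int(header[3]),
    hour=int(header[4]), minute=int(header[5]), second=int(sec),
    microsecond=int(float('0.' + frac) * 1000000))
depth_m = float(header[9]) * 1000  # hypoDD depth is in km
print(origin_time, depth_m)
```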
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def extract_from_stack(stack, template, length, pre_pick, pre_pad, Z_include=False, pre_processed=True, samp_rate=None, lowcut=None, highcut=None, filt_order=3):
""" Extract a multiplexed template from a stack of detections. Function to extract a new template from a stack of previous detections. Requires the stack, the template used to make the detections for the \ stack, and we need to know if the stack has been pre-processed. :type stack: obspy.core.stream.Stream :param stack: Waveform stack from detections. Can be of any length and \ can have delays already included, or not. :type template: obspy.core.stream.Stream :param template: Template used to make the detections in the stack. Will \ use the delays of this for the new template. :type length: float :param length: Length of new template in seconds :type pre_pick: float :param pre_pick: Extract additional data before the detection, seconds :type pre_pad: float :param pre_pad: Pad used in seconds when extracting the data, e.g. the \ time before the detection extracted. If using \ clustering.extract_detections this half the length of the extracted \ waveform. :type Z_include: bool :param Z_include: If True will include any Z-channels even if there is \ no template for this channel, as long as there is a template for this \ station at a different channel. If this is False and Z channels are \ included in the template Z channels will be included in the \ new_template anyway. :type pre_processed: bool :param pre_processed: Have the data been pre-processed, if True (default) \ then we will only cut the data here. :type samp_rate: float :param samp_rate: If pre_processed=False then this is required, desired \ sampling rate in Hz, defaults to False. :type lowcut: float :param lowcut: If pre_processed=False then this is required, lowcut in \ Hz, defaults to False. :type highcut: float :param highcut: If pre_processed=False then this is required, highcut in \ Hz, defaults to False :type filt_order: int :param filt_order: If pre_processed=False then this is required, filter \ order, defaults to False :returns: Newly cut template. :rtype: :class:`obspy.core.stream.Stream` """ |
new_template = stack.copy()
# Copy the data before we trim it to keep the stack safe
# Get the earliest time in the template as this is when the detection is
# taken.
mintime = min([tr.stats.starttime for tr in template])
# Generate a list of tuples of (station, channel, delay) with delay in
# seconds
delays = [(tr.stats.station, tr.stats.channel[-1],
tr.stats.starttime - mintime) for tr in template]
# Process the data if necessary
if not pre_processed:
new_template = pre_processing.shortproc(
st=new_template, lowcut=lowcut, highcut=highcut,
filt_order=filt_order, samp_rate=samp_rate, debug=0)
# Loop through the stack and trim!
out = Stream()
for tr in new_template:
# Find the matching delay
delay = [d[2] for d in delays if d[0] == tr.stats.station and
d[1] == tr.stats.channel[-1]]
if Z_include and len(delay) == 0:
delay = [d[2] for d in delays if d[0] == tr.stats.station]
if len(delay) == 0:
debug_print("No matching template channel found for stack channel"
" {0}.{1}".format(tr.stats.station, tr.stats.channel),
2, 3)
new_template.remove(tr)
else:
for d in delay:
out += tr.copy().trim(
starttime=tr.stats.starttime + d + pre_pad - pre_pick,
endtime=tr.stats.starttime + d + pre_pad + length -
pre_pick)
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _group_events(catalog, process_len, template_length, data_pad):
""" Internal function to group events into sub-catalogs based on process_len. :param catalog: Catalog to groups into sub-catalogs :type catalog: obspy.core.event.Catalog :param process_len: Length in seconds that data will be processed in :type process_len: int :return: List of catalogs :rtype: list """ |
# case for catalog only containing one event
if len(catalog) == 1:
return [catalog]
sub_catalogs = []
# Sort catalog by date
catalog.events = sorted(
catalog.events,
key=lambda e: (e.preferred_origin() or e.origins[0]).time)
sub_catalog = Catalog([catalog[0]])
for event in catalog[1:]:
origin_time = (event.preferred_origin() or event.origins[0]).time
last_pick = sorted(event.picks, key=lambda p: p.time)[-1]
max_diff = (
process_len - (last_pick.time - origin_time) - template_length)
max_diff -= 2 * data_pad
if origin_time - sub_catalog[0].origins[0].time < max_diff:
sub_catalog.append(event)
else:
sub_catalogs.append(sub_catalog)
sub_catalog = Catalog([event])
sub_catalogs.append(sub_catalog)
return sub_catalogs |
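The grouping above reduces to a greedy pass over sorted times: start a new sub-catalog whenever the next event would fall outside the window opened by the first event of the current group. Sketched with plain numbers (seconds) in place of events:

```python
def group_times(times, max_diff):
    # times must be sorted; max_diff mirrors the adjusted process_len
    groups = [[times[0]]]
    for t in times[1:]:
        if t - groups[-1][0] < max_diff:
            groups[-1].append(t)
        else:
            groups.append([t])
    return groups

print(group_times([0, 100, 500, 90000, 90100], 86400))
# [[0, 100, 500], [90000, 90100]]
```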
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def multi_template_gen(catalog, st, length, swin='all', prepick=0.05, all_horiz=False, delayed=True, plot=False, debug=0, return_event=False, min_snr=None):
""" Generate multiple templates from one stream of data. Thin wrapper around _template_gen to generate multiple templates from one stream of continuous data. Takes processed (filtered and resampled) seismic data! :type catalog: obspy.core.event.Catalog :param catalog: Events to extract templates for :type st: obspy.core.stream.Stream :param st: Processed stream to extract from, e.g. filtered and re-sampled to what you want using pre_processing.dayproc. :type length: float :param length: Length of template in seconds :type swin: string :param swin: P, S, P_all, S_all or all, defaults to all: see note in :func:`eqcorrscan.core.template_gen.template_gen` :type prepick: float :param prepick: Length in seconds to extract before the pick time default is 0.05 seconds. :type all_horiz: bool :param all_horiz: To use both horizontal channels even if there is only a pick on one of them. Defaults to False. :type delayed: bool :param delayed: If True, each channel will begin relative to it's own pick-time, if set to False, each channel will begin at the same time. :type plot: bool :param plot: To plot the template or not, default is True :type debug: int :param debug: Debug output level from 0-5. :type return_event: bool :param return_event: Whether to return the event and process length or not. :type min_snr: float :param min_snr: Minimum signal-to-noise ratio for a channel to be included in the template, where signal-to-noise ratio is calculated as the ratio of the maximum amplitude in the template window to the rms amplitude in the whole window given. :returns: List of :class:`obspy.core.stream.Stream` templates. :rtype: list .. warning:: Data must be processed before using this function - highcut, lowcut and filt_order are only used to generate the meta-data for the templates. .. 
note:: By convention templates are generated with P-phases on the \ vertical channel and S-phases on the horizontal channels, normal \ seismograph naming conventions are assumed, where Z denotes vertical \ and N, E, R, T, 1 and 2 denote horizontal channels, either oriented \ or not. To this end we will **only** use Z channels if they have a \ P-pick, and will use one or other horizontal channels **only** if \ there is an S-pick on it. .. warning:: If there is no phase_hint included in picks, and swin=all, \ all channels with picks will be used. """ |
    warnings.warn(
        "Function is deprecated and will be removed soon. Use "
        "template_gen.template_gen instead.", EQcorrscanDeprecationWarning)
temp_list = template_gen(
method="from_meta_file", process=False, meta_file=catalog, st=st,
lowcut=None, highcut=None, samp_rate=st[0].stats.sampling_rate,
filt_order=None, length=length, prepick=prepick,
swin=swin, all_horiz=all_horiz, delayed=delayed, plot=plot,
debug=debug, return_event=return_event, min_snr=min_snr,
parallel=False)
return temp_list |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_client(catalog, client_id, lowcut, highcut, samp_rate, filt_order, length, prepick, swin, process_len=86400, data_pad=90, all_horiz=False, delayed=True, plot=False, debug=0, return_event=False, min_snr=None):
""" Generate multiplexed template from FDSN client. Function to generate templates from an FDSN client. Must be given \ an obspy.Catalog class and the client_id as input. The function returns \ a list of obspy.Stream classes containing steams for each desired \ template. :type catalog: obspy.core.event.Catalog :param catalog: Catalog class containing desired template events :type client_id: str :param client_id: Name of the client, either url, or Obspy \ mappable (see the :mod:`obspy.clients.fdsn` documentation). :type lowcut: float :param lowcut: Low cut (Hz), if set to None will not apply a lowcut. :type highcut: float :param highcut: High cut (Hz), if set to None will not apply a highcut. :type samp_rate: float :param samp_rate: New sampling rate in Hz. :type filt_order: int :param filt_order: Filter level (number of corners). :type length: float :param length: Extract length in seconds. :type prepick: float :param prepick: Pre-pick time in seconds :type swin: str :param swin: P, S, P_all, S_all or all, defaults to all: see note in :func:`eqcorrscan.core.template_gen.template_gen` :type process_len: int :param process_len: Length of data in seconds to download and process. :param data_pad: Length of data (in seconds) required before and after \ any event for processing, use to reduce edge-effects of filtering on \ the templates. :type data_pad: int :type all_horiz: bool :param all_horiz: To use both horizontal channels even if there is only \ a pick on one of them. Defaults to False. :type delayed: bool :param delayed: If True, each channel will begin relative to it's own \ pick-time, if set to False, each channel will begin at the same time. :type plot: bool :param plot: Plot templates or not. :type debug: int :param debug: Level of debugging output, higher=more :type return_event: bool :param return_event: Whether to return the event and process length or not. 
:type min_snr: float :param min_snr: Minimum signal-to-noise ratio for a channel to be included in the template, where signal-to-noise ratio is calculated as the ratio of the maximum amplitude in the template window to the rms amplitude in the whole window given. :returns: List of :class:`obspy.core.stream.Stream` Templates :rtype: list .. warning:: This function is deprecated and will be removed in a forthcoming release. Please use `template_gen` instead. .. note:: process_len should be set to the same length as used when computing detections using match_filter.match_filter, e.g. if you read in day-long data for match_filter, process_len should be 86400. .. rubric:: Example .. figure:: ../../plots/template_gen.from_client.png """ |
    warnings.warn(
        "Function is deprecated and will be removed soon. Use "
        "template_gen.template_gen instead.", EQcorrscanDeprecationWarning)
temp_list = template_gen(
method="from_client", catalog=catalog, client_id=client_id,
lowcut=lowcut, highcut=highcut, samp_rate=samp_rate,
filt_order=filt_order, length=length, prepick=prepick,
swin=swin, process_len=process_len, data_pad=data_pad,
all_horiz=all_horiz, delayed=delayed, plot=plot, debug=debug,
return_event=return_event, min_snr=min_snr)
return temp_list |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_sac(sac_files, lowcut, highcut, samp_rate, filt_order, length, swin, prepick, all_horiz=False, delayed=True, plot=False, debug=0, return_event=False, min_snr=None):
""" Generate a multiplexed template from a list of SAC files. Function to read picks and waveforms from SAC data, and generate a \ template from these. Usually sac_files is a list of all single-channel \ SAC files for a given event, a single, multi-channel template will be \ created from these traces. **All files listed in sac_files should be associated with a single event.** :type sac_files: list :param sac_files: osbpy.core.stream.Stream of sac waveforms, or list of paths to sac waveforms. :type lowcut: float :param lowcut: Low cut (Hz), if set to None will not apply a lowcut. :type highcut: float :param highcut: High cut (Hz), if set to None will not apply a highcut. :type samp_rate: float :param samp_rate: New sampling rate in Hz. :type filt_order: int :param filt_order: Filter level. :type length: float :param length: Extract length in seconds. :type swin: str :param swin: P, S, P_all, S_all or all, defaults to all: see note in :func:`eqcorrscan.core.template_gen.template_gen` :type prepick: float :param prepick: Length to extract prior to the pick in seconds. :type all_horiz: bool :param all_horiz: To use both horizontal channels even if there is only \ a pick on one of them. Defaults to False. :type delayed: bool :param delayed: If True, each channel will begin relative to it's own \ pick-time, if set to False, each channel will begin at the same time. :type plot: bool :param plot: Turns template plotting on or off. :type debug: int :param debug: Debug level, higher number=more output. :type return_event: bool :param return_event: Whether to return the event and process length or not. :type min_snr: float :param min_snr: Minimum signal-to-noise ratio for a channel to be included in the template, where signal-to-noise ratio is calculated as the ratio of the maximum amplitude in the template window to the rms amplitude in the whole window given. :returns: Newly cut template. :rtype: :class:`obspy.core.stream.Stream` .. 
note:: This functionality is not supported for obspy versions below \ 1.0.0 as reference times are not read in by SACIO, which are needed \ for defining pick times. .. rubric:: Example """ |
    warnings.warn(
        "Function is deprecated and will be removed soon. Use "
        "template_gen.template_gen instead.", EQcorrscanDeprecationWarning)
temp_list = template_gen(
method="from_sac", sac_files=sac_files,
lowcut=lowcut, highcut=highcut, samp_rate=samp_rate,
filt_order=filt_order, length=length, prepick=prepick,
swin=swin, all_horiz=all_horiz, delayed=delayed, plot=plot,
debug=debug, return_event=return_event, min_snr=min_snr,
parallel=False)
return temp_list |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def time_func(func, name, *args, **kwargs):
""" call a func with args and kwargs, print name of func and how long it took. """ |
tic = time.time()
out = func(*args, **kwargs)
toc = time.time()
print('%s took %0.2f seconds' % (name, toc - tic))
return out |
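Usage sketch (the wrapper is redefined here so the example is self-contained; the printed duration will vary):

```python
import time

def time_func(func, name, *args, **kwargs):
    # Time a single call and report how long it took
    tic = time.time()
    out = func(*args, **kwargs)
    toc = time.time()
    print('%s took %0.2f seconds' % (name, toc - tic))
    return out

result = time_func(sorted, 'sorted', [3, 1, 2])
print(result)  # [1, 2, 3]
```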
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def seis_sim(sp, amp_ratio=1.5, flength=False, phaseout='all'):
""" Generate a simulated seismogram from a given S-P time. Will generate spikes separated by a given S-P time, which are then convolved with a decaying sine function. The P-phase is simulated by a positive spike of value 1, the S-arrival is simulated by a decaying boxcar of maximum amplitude 1.5. These amplitude ratios can be altered by changing the amp_ratio, which is the ratio S amplitude:P amplitude. .. note:: In testing this can achieve 0.3 or greater cross-correlations with data. :type sp: int :param sp: S-P time in samples :type amp_ratio: float :param amp_ratio: S:P amplitude ratio :type flength: int :param flength: Fixed length in samples, defaults to False :type phaseout: str :param phaseout: Either 'P', 'S' or 'all', controls which phases to cut around, defaults to 'all'. Can only be used with 'P' or 'S' options if flength is set. :returns: Simulated data. :rtype: :class:`numpy.ndarray` """ |
if flength and 2.5 * sp < flength and 100 < flength:
additional_length = flength
elif 2.5 * sp < 100.0:
additional_length = 100
else:
additional_length = 2.5 * sp
synth = np.zeros(int(sp + 10 + additional_length))
# Make the array begin 10 samples before the P
# and at least 2.5 times the S-P samples after the S arrival
synth[10] = 1.0 # P-spike fixed at 10 samples from start of window
    # The length of the decaying S-phase should depend on the S-P time:
    # basic estimates suggest it should be at least 10 samples, with a
    # coda of about a third of the S-P time (the sp // 3 term below)
S_length = 10 + int(sp // 3)
S_spikes = np.arange(amp_ratio, 0, -(amp_ratio / S_length))
    # A series of individual spikes of alternating polarity appears to work
    # better than a plain boxcar, so zero and flip alternate S spikes
for i in range(len(S_spikes)):
if i in np.arange(1, len(S_spikes), 2):
S_spikes[i] = 0
if i in np.arange(2, len(S_spikes), 4):
S_spikes[i] *= -1
# Put these spikes into the synthetic
synth[10 + sp:10 + sp + len(S_spikes)] = S_spikes
# Generate a rough damped sine wave to convolve with the model spikes
sine_x = np.arange(0, 10.0, 0.5)
damped_sine = np.exp(-sine_x) * np.sin(2 * np.pi * sine_x)
# Convolve the spike model with the damped sine!
synth = np.convolve(synth, damped_sine)
    # Normalize synth
synth = synth / np.max(np.abs(synth))
if not flength:
return synth
else:
if phaseout in ['all', 'P']:
synth = synth[0:flength]
elif phaseout == 'S':
synth = synth[sp:]
if len(synth) < flength:
# If this is too short, pad
synth = np.append(synth, np.zeros(flength - len(synth)))
else:
synth = synth[0:flength]
return synth |
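The spike-plus-damped-sine construction can be sketched in isolation. This strips the S coda down to a single spike at the assumed amplitude ratio; all values are hypothetical:

```python
import numpy as np

sp = 20                      # hypothetical S-P time in samples
synth = np.zeros(sp + 10 + 100)
synth[10] = 1.0              # P spike, 10 samples into the window
synth[10 + sp] = 1.5         # single S spike (amp_ratio = 1.5), no coda here
# Damped sine wavelet to convolve with the spike model
sine_x = np.arange(0, 10.0, 0.5)
damped_sine = np.exp(-sine_x) * np.sin(2 * np.pi * sine_x)
synth = np.convolve(synth, damped_sine)
synth /= np.max(np.abs(synth))   # normalize to unit peak amplitude
```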
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def SVD_sim(sp, lowcut, highcut, samp_rate, amp_range=np.arange(-10, 10, 0.01)):
""" Generate basis vectors of a set of simulated seismograms. Inputs should have a range of S-P amplitude ratios, in theory to simulate \ a range of focal mechanisms. :type sp: int :param sp: S-P time in seconds - will be converted to samples according \ to samp_rate. :type lowcut: float :param lowcut: Low-cut for bandpass filter in Hz :type highcut: float :param highcut: High-cut for bandpass filter in Hz :type samp_rate: float :param samp_rate: Sampling rate in Hz :type amp_range: numpy.ndarray :param amp_range: Amplitude ratio range to generate synthetics for. :returns: set of output basis vectors :rtype: :class:`numpy.ndarray` """ |
# Convert SP to samples
sp = int(sp * samp_rate)
# Scan through a range of amplitude ratios
synthetics = [Stream(Trace(seis_sim(sp, a))) for a in amp_range]
for st in synthetics:
for tr in st:
tr.stats.station = 'SYNTH'
tr.stats.channel = 'SH1'
tr.stats.sampling_rate = samp_rate
tr.filter('bandpass', freqmin=lowcut, freqmax=highcut)
# We have a list of obspy Trace objects, we can pass this to EQcorrscan's
# SVD functions
U, s, V, stachans = clustering.svd(synthetics)
return U, s, V, stachans |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def template_grid(stations, nodes, travel_times, phase, PS_ratio=1.68, samp_rate=100, flength=False, phaseout='all'):
""" Generate a group of synthetic seismograms for a grid of sources. Used to simulate phase arrivals from a grid of known sources in a three-dimensional model. Lags must be known and supplied, these can be generated from the bright_lights function: read_tt, and resampled to fit the desired grid dimensions and spacing using other functions therein. These synthetic seismograms are very simple models of seismograms using the seis_sim function herein. These approximate body-wave P and S first arrivals as spikes convolved with damped sine waves. :type stations: list :param stations: List of the station names :type nodes: list :param nodes: List of node locations in (lon,lat,depth) :type travel_times: numpy.ndarray :param travel_times: Array of travel times where travel_times[i][:] \ refers to the travel times for station=stations[i], and \ travel_times[i][j] refers to stations[i] for nodes[j] :type phase: str :param phase: Can be either 'P' or 'S' :type PS_ratio: float :param PS_ratio: P/S velocity ratio, defaults to 1.68 :type samp_rate: float :param samp_rate: Desired sample rate in Hz, defaults to 100.0 :type flength: int :param flength: Length of template in samples, defaults to False :type phaseout: str :param phaseout: Either 'S', 'P', 'all' or 'both', determines which \ phases to clip around. 'all' Encompasses both phases in one channel, \ but will return nothing if the flength is not long enough, 'both' \ will return two channels for each stations, one SYN_Z with the \ synthetic P-phase, and one SYN_H with the synthetic S-phase. :returns: List of :class:`obspy.core.stream.Stream` """ |
if phase not in ['S', 'P']:
raise IOError('Phase is neither P nor S')
# Initialize empty list for templates
templates = []
# Loop through the nodes, for every node generate a template!
for i, node in enumerate(nodes):
st = [] # Empty list to be filled with synthetics
# Loop through stations
for j, station in enumerate(stations):
tr = Trace()
tr.stats.sampling_rate = samp_rate
tr.stats.station = station
tr.stats.channel = 'SYN'
tt = travel_times[j][i]
if phase == 'P':
# If the input travel-time is the P-wave travel-time
SP_time = (tt * PS_ratio) - tt
if phaseout == 'S':
tr.stats.starttime += tt + SP_time
else:
tr.stats.starttime += tt
elif phase == 'S':
# If the input travel-time is the S-wave travel-time
SP_time = tt - (tt / PS_ratio)
if phaseout == 'S':
tr.stats.starttime += tt
else:
tr.stats.starttime += tt - SP_time
# Set start-time of trace to be travel-time for P-wave
# Check that the template length is long enough to include the SP
if flength and SP_time * samp_rate < flength - 11 \
and phaseout == 'all':
tr.data = seis_sim(sp=int(SP_time * samp_rate), amp_ratio=1.5,
flength=flength, phaseout=phaseout)
st.append(tr)
elif flength and phaseout == 'all':
warnings.warn('Cannot make a bulk synthetic with this fixed ' +
'length for station ' + station)
elif phaseout == 'all':
tr.data = seis_sim(sp=int(SP_time * samp_rate), amp_ratio=1.5,
flength=flength, phaseout=phaseout)
st.append(tr)
elif phaseout in ['P', 'S']:
tr.data = seis_sim(sp=int(SP_time * samp_rate), amp_ratio=1.5,
flength=flength, phaseout=phaseout)
st.append(tr)
elif phaseout == 'both':
for _phaseout in ['P', 'S']:
_tr = tr.copy()
_tr.data = seis_sim(sp=int(SP_time * samp_rate),
amp_ratio=1.5, flength=flength,
phaseout=_phaseout)
if _phaseout == 'P':
_tr.stats.channel = 'SYN_Z'
# starttime defaults to S-time
_tr.stats.starttime = _tr.stats.starttime - SP_time
elif _phaseout == 'S':
_tr.stats.channel = 'SYN_H'
st.append(_tr)
templates.append(Stream(st))
# Stream(st).plot(size=(800,600))
return templates |
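The two S-P conversions in the loop above are consistent with each other; a quick check with a hypothetical P travel time and the default velocity ratio:

```python
PS_ratio = 1.68                              # assumed Vp/Vs ratio
tt_p = 10.0                                  # hypothetical P travel time (s)
sp_from_p = (tt_p * PS_ratio) - tt_p         # S-P time given the P time
tt_s = tt_p * PS_ratio                       # corresponding S travel time
sp_from_s = tt_s - (tt_s / PS_ratio)         # same S-P time given the S time
```

Both expressions recover the same 6.8 s S-P interval, so either phase's travel time fixes the spike separation.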
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_synth_data(nsta, ntemplates, nseeds, samp_rate, t_length, max_amp, max_lag, debug=0):
""" Generate a synthetic dataset to be used for testing. This will generate both templates and data to scan through. Templates will be generated using the utils.synth_seis functions. The day of data will be random noise, with random signal-to-noise ratio copies of the templates randomly seeded throughout the day. It also returns the seed times and signal-to-noise ratios used. :type nsta: int :param nsta: Number of stations to generate data for < 15. :type ntemplates: int :param ntemplates: Number of templates to generate, will be generated \ with random arrival times. :type nseeds: int :param nseeds: Number of copies of the template to seed within the \ day of noisy data for each template. :type samp_rate: float :param samp_rate: Sampling rate to use in Hz :type t_length: float :param t_length: Length of templates in seconds. :type max_amp: float :param max_amp: Maximum signal-to-noise ratio of seeds. :param max_lag: Maximum lag time in seconds (randomised). :type max_lag: float :type debug: int :param debug: Debug level, bigger the number, the more plotting/output. :returns: Templates: List of :class:`obspy.core.stream.Stream` :rtype: list :returns: Data: :class:`obspy.core.stream.Stream` of seeded noisy data :rtype: :class:`obspy.core.stream.Stream` :returns: Seeds: dictionary of seed SNR and time with time in samples. :rtype: dict """ |
# Generate random arrival times
t_times = np.abs(np.random.random([nsta, ntemplates])) * max_lag
# Generate random node locations - these do not matter as they are only
# used for naming
lats = np.random.random(ntemplates) * 90.0
lons = np.random.random(ntemplates) * 90.0
depths = np.abs(np.random.random(ntemplates) * 40.0)
nodes = zip(lats, lons, depths)
    # Pool of station names to draw from (up to 15 stations)
stations = ['ALPH', 'BETA', 'GAMM', 'KAPP', 'ZETA', 'BOB', 'MAGG',
'ALF', 'WALR', 'ALBA', 'PENG', 'BANA', 'WIGG', 'SAUS',
'MALC']
if debug > 1:
print(nodes)
print(t_times)
print(stations[0:nsta])
templates = template_grid(stations=stations[0:nsta], nodes=nodes,
travel_times=t_times, phase='S',
samp_rate=samp_rate,
flength=int(t_length * samp_rate))
if debug > 2:
for template in templates:
print(template)
# Now we want to create a day of synthetic data
seeds = []
data = templates[0].copy() # Copy a template to get the correct length
# and stats for data, we will overwrite the data on this copy
for tr in data:
tr.data = np.zeros(86400 * int(samp_rate))
# Set all the traces to have a day of zeros
tr.stats.starttime = UTCDateTime(0)
for i, template in enumerate(templates):
impulses = np.zeros(86400 * int(samp_rate))
# Generate a series of impulses for seeding
        # Need a separate impulse trace for each template; all are
        # convolved into the same day of data.
impulse_times = np.random.randint(86400 * int(samp_rate),
size=nseeds)
impulse_amplitudes = np.random.randn(nseeds) * max_amp
# Generate amplitudes up to maximum amplitude in a normal distribution
seeds.append({'SNR': impulse_amplitudes,
'time': impulse_times})
for j in range(nseeds):
impulses[impulse_times[j]] = impulse_amplitudes[j]
# We now have one vector of impulses, we need nsta numbers of them,
# shifted with the appropriate lags
mintime = min([template_tr.stats.starttime
for template_tr in template])
for j, template_tr in enumerate(template):
offset = int((template_tr.stats.starttime - mintime) * samp_rate)
pad = np.zeros(offset)
tr_impulses = np.append(pad, impulses)[0:len(impulses)]
# Convolve this with the template trace to give the daylong seeds
data[j].data += np.convolve(tr_impulses,
template_tr.data)[0:len(impulses)]
# Add the noise
for tr in data:
noise = np.random.randn(86400 * int(samp_rate))
tr.data += noise / max(noise)
return templates, data, seeds |
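The seeding step above works by convolving an impulse train with the template data. A toy version with a 3-sample template and hypothetical seed times/amplitudes:

```python
import numpy as np

template = np.array([0.0, 1.0, -0.5])   # toy 3-sample template
impulses = np.zeros(20)
impulses[5] = 2.0                       # seed at sample 5, amplitude 2.0
impulses[12] = 0.5                      # seed at sample 12, amplitude 0.5
# Convolution places a scaled copy of the template at each impulse,
# truncated back to the original data length
data = np.convolve(impulses, template)[0:len(impulses)]
```

Each seed reproduces the template shape scaled by the impulse amplitude, starting at the seed sample.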
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def linstack(streams, normalize=True):
""" Compute the linear stack of a series of seismic streams of \ multiplexed data. :type streams: list :param streams: List of streams to stack :type normalize: bool :param normalize: Normalize traces before stacking, normalizes by the RMS \ amplitude. :returns: stacked data :rtype: :class:`obspy.core.stream.Stream` """ |
stack = streams[np.argmax([len(stream) for stream in streams])].copy()
if normalize:
for tr in stack:
tr.data = tr.data / np.sqrt(np.mean(np.square(tr.data)))
tr.data = np.nan_to_num(tr.data)
for i in range(1, len(streams)):
for tr in stack:
matchtr = streams[i].select(station=tr.stats.station,
channel=tr.stats.channel)
if matchtr:
# Normalize the data before stacking
if normalize:
norm = matchtr[0].data /\
np.sqrt(np.mean(np.square(matchtr[0].data)))
norm = np.nan_to_num(norm)
else:
norm = matchtr[0].data
tr.data = np.sum((norm, tr.data), axis=0)
return stack |
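The RMS normalization in linstack means traces contribute equally regardless of absolute amplitude; a minimal sketch with two scaled copies of the same waveform:

```python
import numpy as np

traces = [np.array([1.0, -1.0, 2.0]),
          np.array([2.0, -2.0, 4.0])]   # same waveform, double the gain
stack = np.zeros(3)
for data in traces:
    rms = np.sqrt(np.mean(np.square(data)))   # RMS amplitude
    stack += data / rms                        # unit-RMS contribution
```

After normalization both traces are identical, so the stack is exactly twice the unit-RMS waveform.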
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def PWS_stack(streams, weight=2, normalize=True):
""" Compute the phase weighted stack of a series of streams. .. note:: It is recommended to align the traces before stacking. :type streams: list :param streams: List of :class:`obspy.core.stream.Stream` to stack. :type weight: float :param weight: Exponent to the phase stack used for weighting. :type normalize: bool :param normalize: Normalize traces before stacking. :return: Stacked stream. :rtype: :class:`obspy.core.stream.Stream` """ |
# First get the linear stack which we will weight by the phase stack
Linstack = linstack(streams)
# Compute the instantaneous phase
instaphases = []
print("Computing instantaneous phase")
for stream in streams:
instaphase = stream.copy()
for tr in instaphase:
analytic = hilbert(tr.data)
envelope = np.sqrt(np.sum((np.square(analytic),
np.square(tr.data)), axis=0))
tr.data = analytic / envelope
instaphases.append(instaphase)
# Compute the phase stack
print("Computing the phase stack")
Phasestack = linstack(instaphases, normalize=normalize)
# Compute the phase-weighted stack
for tr in Phasestack:
tr.data = Linstack.select(station=tr.stats.station)[0].data *\
np.abs(tr.data ** weight)
return Phasestack |

<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def align_traces(trace_list, shift_len, master=False, positive=False, plot=False):
""" Align traces relative to each other based on their cross-correlation value. Uses the :func:`eqcorrscan.core.match_filter.normxcorr2` function to find the optimum shift to align traces relative to a master event. Either uses a given master to align traces, or uses the trace with the highest MAD amplitude. :type trace_list: list :param trace_list: List of traces to align :type shift_len: int :param shift_len: Length to allow shifting within in samples :type master: obspy.core.trace.Trace :param master: Master trace to align to, if set to False will align to \ the largest amplitude trace (default) :type positive: bool :param positive: Return the maximum positive cross-correlation, or the \ absolute maximum, defaults to False (absolute maximum). :type plot: bool :param plot: If true, will plot each trace aligned with the master. :returns: list of shifts and correlations for best alignment in seconds. :rtype: list """ |
from eqcorrscan.core.match_filter import normxcorr2
from eqcorrscan.utils.plotting import xcorr_plot
traces = deepcopy(trace_list)
if not master:
# Use trace with largest MAD amplitude as master
master = traces[0]
MAD_master = np.median(np.abs(master.data))
for i in range(1, len(traces)):
if np.median(np.abs(traces[i].data)) > MAD_master:
master = traces[i]
MAD_master = np.median(np.abs(master.data))
else:
print('Using master given by user')
shifts = []
ccs = []
for i in range(len(traces)):
if not master.stats.sampling_rate == traces[i].stats.sampling_rate:
raise ValueError('Sampling rates not the same')
cc_vec = normxcorr2(template=traces[i].data.
astype(np.float32)[shift_len:-shift_len],
image=master.data.astype(np.float32))
cc_vec = cc_vec[0]
shift = np.abs(cc_vec).argmax()
cc = cc_vec[shift]
if plot:
xcorr_plot(template=traces[i].data.
astype(np.float32)[shift_len:-shift_len],
image=master.data.astype(np.float32), shift=shift,
cc=cc)
shift -= shift_len
if cc < 0 and positive:
cc = cc_vec.max()
shift = cc_vec.argmax() - shift_len
shifts.append(shift / master.stats.sampling_rate)
ccs.append(cc)
return shifts, ccs |
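The shift-finding logic can be sketched with plain `numpy.correlate` in place of `normxcorr2`, using two spike traces offset by a known lag (a toy illustration, not the library's normalized implementation):

```python
import numpy as np

master = np.zeros(50)
master[20] = 1.0
trace = np.zeros(50)
trace[23] = 1.0                    # same spike, three samples later
cc = np.correlate(master, trace, mode='full')
# In 'full' mode the zero-lag index is len(trace) - 1
shift = int(cc.argmax()) - (len(trace) - 1)   # negative: trace lags master
```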
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def temporary_directory():
 """ Make a temporary directory, yield its name, and clean up on exit. """ |
""" make a temporary directory, yeild its name, cleanup on exit """ |
dir_name = tempfile.mkdtemp()
yield dir_name
if os.path.exists(dir_name):
shutil.rmtree(dir_name) |
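The generator above is presumably wrapped with `contextlib.contextmanager` (the decorator is not shown); a self-contained equivalent, with the cleanup moved into `finally` so it also runs if the body raises:

```python
import contextlib
import os
import shutil
import tempfile

@contextlib.contextmanager
def temporary_directory():
    """Make a temporary directory, yield its name, clean up on exit."""
    dir_name = tempfile.mkdtemp()
    try:
        yield dir_name
    finally:
        if os.path.exists(dir_name):
            shutil.rmtree(dir_name)

with temporary_directory() as tmp:
    with open(os.path.join(tmp, 'demo.txt'), 'w') as f:
        f.write('hello')
    existed_inside = os.path.exists(os.path.join(tmp, 'demo.txt'))
# tmp is removed once the with-block exits
```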
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _total_microsec(t1, t2):
""" Calculate difference between two datetime stamps in microseconds. :type t1: :class: `datetime.datetime` :type t2: :class: `datetime.datetime` :return: int .. rubric:: Example -31536000000000 """ |
td = t1 - t2
return (td.seconds + td.days * 24 * 3600) * 10 ** 6 + td.microseconds |
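The day/second/microsecond decomposition above is equivalent to `int(td.total_seconds() * 1e6)` for exact timedeltas; a quick check, including the -1-year example from the docstring:

```python
import datetime

def total_microsec(t1, t2):
    """Difference t1 - t2 in whole microseconds."""
    td = t1 - t2
    return (td.seconds + td.days * 24 * 3600) * 10 ** 6 + td.microseconds

t2 = datetime.datetime(2020, 1, 1)
t1 = datetime.datetime(2020, 1, 1, 0, 0, 1, 500)
diff = total_microsec(t1, t2)   # 1 second + 500 microseconds = 1000500
```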
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _templates_match(t, family_file):
""" Return True if a tribe matches a family file path. :type t: Tribe :type family_file: str :return: bool """ |
return t.name == family_file.split(os.sep)[-1].split('_detections.csv')[0] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _group_process(template_group, parallel, debug, cores, stream, daylong, ignore_length, overlap):
""" Process data into chunks based on template processing length. Templates in template_group must all have the same processing parameters. :type template_group: list :param template_group: List of Templates. :type parallel: bool :param parallel: Whether to use parallel processing or not :type debug: int :param debug: Debug level from 0-5 :type cores: int :param cores: Number of cores to use, can be False to use all available. :type stream: :class:`obspy.core.stream.Stream` :param stream: Stream to process, will be left intact. :type daylong: bool :param daylong: Whether to enforce day-length files or not. :type ignore_length: bool :param ignore_length: If using daylong=True, then dayproc will try check that the data are there for at least 80% of the day, if you don't want this check (which will raise an error if too much data are missing) then set ignore_length=True. This is not recommended! :type overlap: float :param overlap: Number of seconds to overlap chunks by. :return: list of processed streams. """ |
master = template_group[0]
processed_streams = []
kwargs = {
'filt_order': master.filt_order,
'highcut': master.highcut, 'lowcut': master.lowcut,
'samp_rate': master.samp_rate, 'debug': debug,
'parallel': parallel, 'num_cores': cores}
# Processing always needs to be run to account for gaps - pre-process will
# check whether filtering and resampling needs to be done.
if daylong:
if not master.process_length == 86400:
warnings.warn(
'Processing day-long data, but template was cut from %i s long'
' data, will reduce correlations' % master.process_length)
func = dayproc
kwargs.update({'ignore_length': ignore_length})
# Check that data all start on the same day, otherwise strange
# things will happen...
starttimes = [tr.stats.starttime.date for tr in stream]
if not len(list(set(starttimes))) == 1:
warnings.warn('Data start on different days, setting to last day')
starttime = UTCDateTime(
stream.sort(['starttime'])[-1].stats.starttime.date)
else:
starttime = stream.sort(['starttime'])[0].stats.starttime
else:
# We want to use shortproc to allow overlaps
func = shortproc
starttime = stream.sort(['starttime'])[0].stats.starttime
endtime = stream.sort(['endtime'])[-1].stats.endtime
data_len_samps = round((endtime - starttime) * master.samp_rate) + 1
chunk_len_samps = (master.process_length - overlap) * master.samp_rate
n_chunks = int(data_len_samps / chunk_len_samps)
if n_chunks == 0:
print('Data must be process_length or longer, not computing')
for i in range(n_chunks):
kwargs.update(
{'starttime': starttime + (i * (master.process_length - overlap))})
if not daylong:
kwargs.update(
{'endtime': kwargs['starttime'] + master.process_length})
chunk_stream = stream.slice(starttime=kwargs['starttime'],
endtime=kwargs['endtime']).copy()
else:
chunk_stream = stream.copy()
for tr in chunk_stream:
tr.data = tr.data[0:int(
master.process_length * tr.stats.sampling_rate)]
processed_streams.append(func(st=chunk_stream, **kwargs))
return processed_streams |
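The chunk-count arithmetic near the top of the loop can be checked with hypothetical numbers (a day of 100 Hz data, hour-long chunks with 10 minutes of overlap):

```python
samp_rate = 100.0            # Hz (hypothetical)
process_length = 3600.0      # seconds of data per chunk
overlap = 600.0              # seconds shared between neighbouring chunks
data_duration = 86400.0      # one day of data

data_len_samps = round(data_duration * samp_rate) + 1
chunk_len_samps = (process_length - overlap) * samp_rate
n_chunks = int(data_len_samps / chunk_len_samps)   # floor division of samples
```

Each chunk advances by `process_length - overlap` seconds, so a day yields 28 full chunks here; a trailing partial chunk is dropped.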
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _par_read(dirname, compressed=True):
""" Internal write function to read a formatted parameter file. :type dirname: str :param dirname: Directory to read the parameter file from. :type compressed: bool :param compressed: Whether the directory is compressed or not. """ |
templates = []
if compressed:
arc = tarfile.open(dirname, "r:*")
members = arc.getmembers()
_parfile = [member for member in members
if member.name.split(os.sep)[-1] ==
'template_parameters.csv']
if len(_parfile) == 0:
arc.close()
raise MatchFilterError(
'No template parameter file in archive')
parfile = arc.extractfile(_parfile[0])
else:
parfile = open(dirname + '/' + 'template_parameters.csv', 'r')
for line in parfile:
t_in = Template()
for key_pair in line.rstrip().split(','):
if key_pair.split(':')[0].strip() == 'name':
t_in.__dict__[key_pair.split(':')[0].strip()] = \
key_pair.split(':')[-1].strip()
elif key_pair.split(':')[0].strip() == 'filt_order':
try:
t_in.__dict__[key_pair.split(':')[0].strip()] = \
int(key_pair.split(':')[-1])
except ValueError:
pass
else:
try:
t_in.__dict__[key_pair.split(':')[0].strip()] = \
float(key_pair.split(':')[-1])
except ValueError:
pass
templates.append(t_in)
parfile.close()
if compressed:
arc.close()
return templates |
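The per-line parsing in `_par_read` splits on commas, then on colons. A minimal stand-alone version of that split, using a hypothetical parameter row:

```python
line = 'name: test_template, filt_order: 4, lowcut: 2.0'   # hypothetical row
parsed = {}
for key_pair in line.rstrip().split(','):
    key = key_pair.split(':')[0].strip()
    value = key_pair.split(':')[-1].strip()   # last field, as in the source
    parsed[key] = value
```

Values arrive as strings; the real function then casts `filt_order` to int and the remaining numeric fields to float.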
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _badpath(path, base):
""" joinpath will ignore base if path is absolute. """ |
return not _resolved(os.path.join(base, path)).startswith(base) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _badlink(info, base):
""" Links are interpreted relative to the directory containing the link """ |
tip = _resolved(os.path.join(base, os.path.dirname(info.name)))
return _badpath(info.linkname, base=tip) |
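The `_badpath` check above guards against path traversal: a member path is bad if, once joined onto the extraction base and resolved, it no longer lives under the base. A self-contained sketch (`resolved` mirrors the `_resolved` helper, which is defined elsewhere in the source):

```python
import os

def resolved(path):
    return os.path.realpath(os.path.abspath(path))

def badpath(path, base):
    # True when joining path onto base escapes base
    return not resolved(os.path.join(base, path)).startswith(base)

base = resolved(os.path.join(os.sep, 'tmp', 'extract'))
inside = badpath(os.path.join('subdir', 'file.txt'), base)          # stays put
outside = badpath(os.path.join('..', '..', 'etc', 'passwd'), base)  # escapes
```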
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _safemembers(members):
"""Check members of a tar archive for safety. Ensure that they do not contain paths or links outside of where we need them - this would only happen if the archive wasn't made by eqcorrscan. :type members: :class:`tarfile.TarFile` :param members: an open tarfile. """ |
base = _resolved(".")
for finfo in members:
if _badpath(finfo.name, base):
print(finfo.name, "is blocked (illegal path)")
elif finfo.issym() and _badlink(finfo, base):
print(finfo.name, "is blocked: Hard link to", finfo.linkname)
elif finfo.islnk() and _badlink(finfo, base):
print(finfo.name, "is blocked: Symlink to", finfo.linkname)
else:
yield finfo |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _write_family(family, filename):
""" Write a family to a csv file. :type family: :class:`eqcorrscan.core.match_filter.Family` :param family: Family to write to file :type filename: str :param filename: File to write to. """ |
with open(filename, 'w') as f:
for detection in family.detections:
det_str = ''
for key in detection.__dict__.keys():
if key == 'event' and detection.__dict__[key] is not None:
value = str(detection.event.resource_id)
elif key in ['threshold', 'detect_val', 'threshold_input']:
value = format(detection.__dict__[key], '.32f').rstrip('0')
else:
value = str(detection.__dict__[key])
det_str += key + ': ' + value + '; '
f.write(det_str + '\n')
return |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _read_family(fname, all_cat, template):
""" Internal function to read csv family files. :type fname: str :param fname: Filename :return: list of Detection """ |
detections = []
with open(fname, 'r') as f:
for line in f:
det_dict = {}
gen_event = False
for key_pair in line.rstrip().split(';'):
key = key_pair.split(': ')[0].strip()
value = key_pair.split(': ')[-1].strip()
if key == 'event':
if len(all_cat) == 0:
gen_event = True
continue
el = [e for e in all_cat
if str(e.resource_id).split('/')[-1] == value][0]
det_dict.update({'event': el})
elif key == 'detect_time':
det_dict.update(
{'detect_time': UTCDateTime(value)})
elif key == 'chans':
det_dict.update({'chans': ast.literal_eval(value)})
elif key in ['template_name', 'typeofdet', 'id',
'threshold_type']:
det_dict.update({key: value})
elif key == 'no_chans':
det_dict.update({key: int(float(value))})
elif len(key) == 0:
continue
else:
det_dict.update({key: float(value)})
detection = Detection(**det_dict)
if gen_event:
detection._calculate_event(template=template)
detections.append(detection)
return detections |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_party(fname=None, read_detection_catalog=True):
""" Read detections and metadata from a tar archive. :type fname: str :param fname: Filename to read from, if this contains a single Family, then will return a party of length = 1 :type read_detection_catalog: bool :param read_detection_catalog: Whether to read the detection catalog or not, if False, catalog will be regenerated - for large catalogs this can be faster. :return: :class:`eqcorrscan.core.match_filter.Party` """ |
party = Party()
party.read(filename=fname, read_detection_catalog=read_detection_catalog)
return party |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_detections(fname):
""" Read detections from a file to a list of Detection objects. :type fname: str :param fname: File to read from, must be a file written to by \ Detection.write. :returns: list of :class:`eqcorrscan.core.match_filter.Detection` :rtype: list .. note:: :class:`eqcorrscan.core.match_filter.Detection`'s returned do not contain Detection.event """ |
f = open(fname, 'r')
detections = []
for index, line in enumerate(f):
if index == 0:
continue # Skip header
if line.rstrip().split('; ')[0] == 'Template name':
continue # Skip any repeated headers
detection = line.rstrip().split('; ')
detection[1] = UTCDateTime(detection[1])
detection[2] = int(float(detection[2]))
detection[3] = ast.literal_eval(detection[3])
detection[4] = float(detection[4])
detection[5] = float(detection[5])
if len(detection) < 9:
detection.extend(['Unset', float('NaN')])
else:
detection[7] = float(detection[7])
detections.append(Detection(
template_name=detection[0], detect_time=detection[1],
no_chans=detection[2], detect_val=detection[4],
threshold=detection[5], threshold_type=detection[6],
threshold_input=detection[7], typeofdet=detection[8],
chans=detection[3]))
f.close()
return detections |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_catalog(detections, fname, format="QUAKEML"):
"""Write events contained within detections to a catalog file. :type detections: list :param detections: list of eqcorrscan.core.match_filter.Detection :type fname: str :param fname: Name of the file to write to :type format: str :param format: File format to use, see obspy.core.event.Catalog.write \ for supported formats. """ |
catalog = get_catalog(detections)
catalog.write(filename=fname, format=format) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def extract_from_stream(stream, detections, pad=5.0, length=30.0):
""" Extract waveforms for a list of detections from a stream. :type stream: obspy.core.stream.Stream :param stream: Stream containing the detections. :type detections: list :param detections: list of eqcorrscan.core.match_filter.detection :type pad: float :param pad: Pre-detection extract time in seconds. :type length: float :param length: Total extracted length in seconds. :returns: list of :class:`obspy.core.stream.Stream`, one for each detection. :type: list """ |
streams = []
for detection in detections:
cut_stream = Stream()
for pick in detection.event.picks:
tr = stream.select(station=pick.waveform_id.station_code,
channel=pick.waveform_id.channel_code)
if len(tr) == 0:
print('No data in stream for pick:')
print(pick)
continue
cut_stream += tr.slice(
starttime=pick.time - pad,
endtime=pick.time - pad + length).copy()
streams.append(cut_stream)
return streams |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def normxcorr2(template, image):
""" Thin wrapper to eqcorrscan.utils.correlate functions. :type template: numpy.ndarray :param template: Template array :type image: numpy.ndarray :param image: Image to scan the template through. The order of these matters, if you put the template after the image you will get a reversed correlation matrix :return: New :class:`numpy.ndarray` of the correlation values for the correlation of the image with the template. :rtype: numpy.ndarray .. note:: If your data contain gaps these must be padded with zeros before using this function. The `eqcorrscan.utils.pre_processing` functions will provide gap-filled data in the appropriate format. Note that if you pad your data with zeros before filtering or resampling the gaps will not be all zeros after filtering. This will result in the calculation of spurious correlations in the gaps. """ |
array_xcorr = get_array_xcorr()
# Check that we have been passed numpy arrays
if type(template) != np.ndarray or type(image) != np.ndarray:
print('You have not provided numpy arrays, I will not convert them')
return 'NaN'
if len(template) > len(image):
ccc = array_xcorr(
templates=np.array([image]).astype(np.float32),
stream=template.astype(np.float32), pads=[0],
threaded=False)[0][0]
else:
ccc = array_xcorr(
templates=np.array([template]).astype(np.float32),
stream=image.astype(np.float32), pads=[0], threaded=False)[0][0]
ccc = ccc.reshape((1, len(ccc)))
return ccc |
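For reference, a minimal pure-NumPy sketch of what normalized cross-correlation computes (a slow loop for clarity, not the library's optimized `array_xcorr` backends):

```python
import numpy as np

def normxcorr(template, image):
    """Normalized cross-correlation of template slid across image."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    out = np.zeros(len(image) - n + 1)
    for i in range(len(out)):
        window = image[i:i + n]
        denom = window.std()
        if denom > 0:   # flat windows correlate as zero
            out[i] = np.sum(t * (window - window.mean())) / denom
    return out

template = np.array([0.0, 1.0, 0.0, -1.0])
image = np.concatenate([np.zeros(3), template, np.zeros(3)])
cc = normxcorr(template, image)   # peaks at 1.0 where template matches
```

An exact (zero-mean, unit-variance) match gives a coefficient of 1.0 at the alignment sample, here index 3.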
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def select(self, template_name):
""" Select a specific family from the party. :type template_name: str :param template_name: Template name of Family to select from a party. :returns: Family """ |
return [fam for fam in self.families
if fam.template.name == template_name][0] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sort(self):
""" Sort the families by template name. .. rubric:: Example Family of 0 detections from template b Family of 0 detections from template a """ |
self.families.sort(key=lambda x: x.template.name)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def filter(self, dates=None, min_dets=1):
""" Return a new Party filtered according to conditions. Return a new Party with only detections within a date range and only families with a minimum number of detections. :type dates: list of obspy.core.UTCDateTime objects :param dates: A start and end date for the new Party :type min_dets: int :param min_dets: Minimum number of detections per family .. rubric:: Example """ |
if dates is None:
raise MatchFilterError('Need a list defining a date range')
new_party = Party()
for fam in self.families:
new_fam = Family(
template=fam.template,
detections=[det for det in fam if
dates[0] < det.detect_time < dates[1]])
if len(new_fam) >= min_dets:
new_party.families.append(new_fam)
return new_party |
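The same keep-if-in-range, keep-if-enough-detections logic can be sketched with plain datetimes and tuples standing in for `Family` objects (an illustration of the filter, not the real API):

```python
from datetime import datetime

def filter_detections(families, start, end, min_dets=1):
    """Keep detections strictly inside (start, end); drop small families.

    families is a list of (template_name, [detection_time, ...]) tuples,
    a simplified stand-in for Party/Family objects.
    """
    out = []
    for name, times in families:
        # Same open-interval comparison as dates[0] < detect_time < dates[1]
        kept = [t for t in times if start < t < end]
        if len(kept) >= min_dets:
            out.append((name, kept))
    return out

fams = [("a", [datetime(2012, 6, 5), datetime(2012, 8, 1)]),
        ("b", [datetime(2012, 1, 1)])]
```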
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot(self, plot_grouped=False, dates=None, min_dets=1, rate=False, **kwargs):
""" Plot the cumulative detections in time. :type plot_grouped: bool :param plot_grouped: Whether to plot all families together (plot_grouped=True), or each as a separate line. :type dates: list :param dates: list of obspy.core.UTCDateTime objects bounding the plot. The first should be the start date, the last the end date. :type min_dets: int :param min_dets: Plot only families with this number of detections or more. :type rate: bool :param rate: Whether or not to plot the daily rate of detection as opposed to cumulative number. Only works with plot_grouped=True. :param \**kwargs: Any other arguments accepted by :func:`eqcorrscan.utils.plotting.cumulative_detections` .. rubric:: Examples Plot cumulative detections for all templates individually: Plot cumulative detections for all templates grouped together: Plot the rate of detection for all templates grouped together: Plot cumulative detections for all templates with more than five detections between June 1st, 2012 and July 31st, 2012: """ |
all_dets = []
if dates:
new_party = self.filter(dates=dates, min_dets=min_dets)
for fam in new_party.families:
all_dets.extend(fam.detections)
else:
for fam in self.families:
all_dets.extend(fam.detections)
fig = cumulative_detections(detections=all_dets,
plot_grouped=plot_grouped,
rate=rate, **kwargs)
return fig |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rethreshold(self, new_threshold, new_threshold_type='MAD'):
""" Remove detections from the Party that are below a new threshold. .. Note:: threshold can only be set higher. .. Warning:: Works in place on Party. :type new_threshold: float :param new_threshold: New threshold level :type new_threshold_type: str :param new_threshold_type: Either 'MAD', 'absolute' or 'av_chan_corr' .. rubric:: Examples Using the MAD threshold on detections made using the MAD threshold: 4 4 Using the absolute thresholding method on the same Party: 1 Using the av_chan_corr method on the same Party: 4 """ |
for family in self.families:
rethresh_detections = []
for d in family.detections:
if new_threshold_type == 'MAD' and d.threshold_type == 'MAD':
new_thresh = (d.threshold /
d.threshold_input) * new_threshold
elif new_threshold_type == 'MAD' and d.threshold_type != 'MAD':
raise MatchFilterError(
'Cannot recalculate MAD level, '
'use another threshold type')
elif new_threshold_type == 'absolute':
new_thresh = new_threshold
elif new_threshold_type == 'av_chan_corr':
new_thresh = new_threshold * d.no_chans
else:
raise MatchFilterError(
'new_threshold_type %s is not recognised' %
str(new_threshold_type))
if d.detect_val >= new_thresh:
d.threshold = new_thresh
d.threshold_input = new_threshold
d.threshold_type = new_threshold_type
rethresh_detections.append(d)
family.detections = rethresh_detections
return self |
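The MAD branch above rescales the stored absolute threshold: dividing the old threshold by the old input multiplier recovers the MAD estimate of the correlation sum, which is then scaled by the new multiplier. In isolation:

```python
def rescale_mad_threshold(old_threshold, old_input, new_input):
    """New absolute threshold when only the MAD multiplier changes.

    old_threshold / old_input recovers the MAD of the correlation sum;
    multiplying by the new input gives the new absolute level.
    """
    return (old_threshold / old_input) * new_input
```

With the docstring-style values (threshold 1.2 at an input of 8 MAD), raising the input to 10 MAD gives an absolute threshold of 1.5.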
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decluster(self, trig_int, timing='detect', metric='avg_cor'):
""" De-cluster a Party of detections by enforcing a detection separation. De-clustering occurs between events detected by different (or the same) templates. If multiple detections occur within trig_int then the preferred detection will be determined by the metric argument. This can be either the average single-station correlation coefficient which is calculated as Detection.detect_val / Detection.no_chans, or the raw cross channel correlation sum which is simply Detection.detect_val. :type trig_int: float :param trig_int: Minimum detection separation in seconds. :type metric: str :param metric: What metric to sort peaks by. Either 'avg_cor' which takes the single station average correlation or 'cor_sum' which takes the total correlation sum across all channels. :type timing: str :param timing: Either 'detect' or 'origin' to decluster based on either the detection time or the origin time. .. Warning:: Works in place on object, if you need to keep the original safe then run this on a copy of the object! .. rubric:: Example 4 3 """ |
all_detections = []
for fam in self.families:
all_detections.extend(fam.detections)
if timing == 'detect':
if metric == 'avg_cor':
detect_info = [(d.detect_time, d.detect_val / d.no_chans)
for d in all_detections]
elif metric == 'cor_sum':
detect_info = [(d.detect_time, d.detect_val)
for d in all_detections]
else:
raise MatchFilterError('metric is not cor_sum or avg_cor')
elif timing == 'origin':
if metric == 'avg_cor':
detect_info = [(_get_origin(d.event).time,
d.detect_val / d.no_chans)
for d in all_detections]
elif metric == 'cor_sum':
detect_info = [(_get_origin(d.event).time, d.detect_val)
for d in all_detections]
else:
raise MatchFilterError('metric is not cor_sum or avg_cor')
else:
raise MatchFilterError('timing is not detect or origin')
    min_det = min(d[0] for d in detect_info)
detect_vals = np.array([d[1] for d in detect_info])
detect_times = np.array([
_total_microsec(d[0].datetime, min_det.datetime)
for d in detect_info])
# Trig_int must be converted from seconds to micro-seconds
peaks_out = decluster(
peaks=detect_vals, index=detect_times, trig_int=trig_int * 10 ** 6)
# Need to match both the time and the detection value
declustered_detections = []
    for ind in peaks_out:
        matching_time_indices = np.where(detect_times == ind[-1])[0]
        matches = matching_time_indices[
            np.where(detect_vals[matching_time_indices] == ind[0])[0][0]]
        declustered_detections.append(all_detections[matches])
# Convert this list into families
template_names = list(set([d.template_name
for d in declustered_detections]))
new_families = []
for template_name in template_names:
template = [fam.template for fam in self.families
if fam.template.name == template_name][0]
new_families.append(Family(
template=template,
detections=[d for d in declustered_detections
if d.template_name == template_name]))
self.families = new_families
return self |
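`decluster` here is the compiled routine from `eqcorrscan.utils.findpeaks`; the underlying idea can be sketched as a greedy pass that keeps the largest peaks first and rejects any peak closer than `trig_int` to one already kept (a simplification, not the library's implementation):

```python
def greedy_decluster(peaks, trig_int):
    """Keep high peaks while enforcing a minimum separation.

    peaks is a list of (value, time) pairs; a sketch of the idea behind
    eqcorrscan.utils.findpeaks.decluster, not the real routine.
    """
    kept = []
    # Visit peaks from strongest to weakest, as decluster prefers
    for value, time in sorted(peaks, key=lambda p: abs(p[0]), reverse=True):
        if all(abs(time - t) >= trig_int for _, t in kept):
            kept.append((value, time))
    # Return in time order for readability
    return sorted(kept, key=lambda p: p[1])

peaks = [(4.5, 0.0), (4.2, 2.0), (3.9, 100.0)]
```

Note that in the method above times are converted to microseconds before the call, which is why `trig_int` is multiplied by 10 ** 6.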
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read(self, filename=None, read_detection_catalog=True):
""" Read a Party from a file. :type filename: str :param filename: File to read from - can be a list of files, and can contain wildcards. :type read_detection_catalog: bool :param read_detection_catalog: Whether to read the detection catalog or not, if False, catalog will be regenerated - for large catalogs this can be faster. .. rubric:: Example Party of 4 Families. """ |
tribe = Tribe()
families = []
if filename is None:
# If there is no filename given, then read the example.
filename = os.path.join(os.path.dirname(__file__),
'..', 'tests', 'test_data',
'test_party.tgz')
if isinstance(filename, list):
filenames = []
for _filename in filename:
# Expand wildcards
filenames.extend(glob.glob(_filename))
else:
# Expand wildcards
filenames = glob.glob(filename)
for _filename in filenames:
with tarfile.open(_filename, "r:*") as arc:
temp_dir = tempfile.mkdtemp()
arc.extractall(path=temp_dir, members=_safemembers(arc))
# Read in the detections first, this way, if we read from multiple
# files then we can just read in extra templates as needed.
# Read in families here!
party_dir = glob.glob(temp_dir + os.sep + '*')[0]
tribe._read_from_folder(dirname=party_dir)
det_cat_file = glob.glob(os.path.join(party_dir, "catalog.*"))
if len(det_cat_file) != 0 and read_detection_catalog:
try:
all_cat = read_events(det_cat_file[0])
        except TypeError as e:
            # Fall back to an empty catalog, otherwise all_cat is
            # undefined when _read_family is called below.
            print(e)
            all_cat = Catalog()
else:
all_cat = Catalog()
for family_file in glob.glob(join(party_dir, '*_detections.csv')):
            template = [
                t for t in tribe if _templates_match(t, family_file)]
            if not template:
                # template[0] or Template() raises IndexError when no
                # template matches; fall back to an empty Template.
                template = [Template()]
            family = Family(template=template[0])
            new_family = True
            if family.template.name in [f.template.name for f in families]:
                family = [
                    f for f in families if
                    f.template.name == family.template.name][0]
                new_family = False
            family.detections = _read_family(
                fname=family_file, all_cat=all_cat, template=template[0])
if new_family:
families.append(family)
shutil.rmtree(temp_dir)
self.families = families
return self |
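`_safemembers` guards `extractall` against archive members whose paths would escape the extraction directory. A sketch of that idea (a hypothetical helper, not the module's actual `_safemembers`), demonstrated on an in-memory archive:

```python
import io
import tarfile

def safe_members(archive):
    """Yield only members that extract inside the target directory.

    Drops absolute paths and any path containing a '..' component;
    a sketch of the intent behind _safemembers.
    """
    for member in archive.getmembers():
        name = member.name
        if name.startswith(("/", "\\")) or ".." in name.split("/"):
            continue
        yield member

# Build a small archive in memory to demonstrate the filtering
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as arc:
    for name in ("party/ok.csv", "../evil.csv"):
        data = b"x"
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        arc.addfile(info, io.BytesIO(data))
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as arc:
    kept = [m.name for m in safe_members(arc)]
```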
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_catalog(self):
""" Get an obspy catalog object from the party. :returns: :class:`obspy.core.event.Catalog` .. rubric:: Example 4 """ |
catalog = Catalog()
for fam in self.families:
if len(fam.catalog) != 0:
catalog.events.extend(fam.catalog.events)
return catalog |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def min_chans(self, min_chans):
""" Remove detections with fewer channels used than min_chans :type min_chans: int :param min_chans: Minimum number of channels to allow a detection. :return: Party .. Note:: Works in place on Party. .. rubric:: Example 4 1 """ |
declustered = Party()
for family in self.families:
fam = Family(family.template)
for d in family.detections:
            # >= so that detections with exactly min_chans channels are
            # kept, matching "remove detections with fewer channels"
            if d.no_chans >= min_chans:
fam.detections.append(d)
declustered.families.append(fam)
self.families = declustered.families
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _uniq(self):
""" Get list of unique detections. Works in place. .. rubric:: Example 3 2 """ |
    _detections = []
    for d in self.detections:
        # Membership test uses __eq__ (Detection may not be hashable),
        # preserving first-occurrence order.
        if d not in _detections:
            _detections.append(d)
self.detections = _detections
return self |
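The equality-only, order-preserving de-duplication used here generalises to any objects that implement `__eq__` but may not be hashable:

```python
def uniq_preserve_order(items):
    """Order-preserving de-duplication using only equality.

    O(n**2) like the method above, because unhashable items rule out
    a set-based seen check.
    """
    out = []
    for item in items:
        if item not in out:
            out.append(item)
    return out
```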
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sort(self):
"""Sort by detection time. .. rubric:: Example UTCDateTime(1970, 1, 1, 0, 3, 20) UTCDateTime(1970, 1, 1, 0, 0) """ |
self.detections = sorted(self.detections, key=lambda d: d.detect_time)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot(self, plot_grouped=False):
""" Plot the cumulative number of detections in time. .. rubric:: Example .. plot:: from eqcorrscan.core.match_filter import Family, Template from eqcorrscan.core.match_filter import Detection from obspy import UTCDateTime family = Family( template=Template(name='a'), detections=[ Detection(template_name='a', detect_time=UTCDateTime(0) + 200, no_chans=8, detect_val=4.2, threshold=1.2, typeofdet='corr', threshold_type='MAD', threshold_input=8.0), Detection(template_name='a', detect_time=UTCDateTime(0), no_chans=8, detect_val=4.5, threshold=1.2, typeofdet='corr', threshold_type='MAD', threshold_input=8.0), Detection(template_name='a', detect_time=UTCDateTime(0) + 10, no_chans=8, detect_val=4.5, threshold=1.2, typeofdet='corr', threshold_type='MAD', threshold_input=8.0)]) family.plot(plot_grouped=True) """ |
cumulative_detections(
detections=self.detections, plot_grouped=plot_grouped) |
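`cumulative_detections` lives in `eqcorrscan.utils.plotting`; the core bookkeeping it plots is just a sorted running count, which can be sketched without matplotlib:

```python
def cumulative_counts(times):
    """Map detection times to (time, cumulative count) pairs.

    Suitable for a step plot; a sketch of the bookkeeping inside
    cumulative_detections, not the plotting function itself.
    """
    return [(t, i + 1) for i, t in enumerate(sorted(times))]
```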
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def same_processing(self, other):
""" Check is the templates are processed the same. .. rubric:: Example True False """ |
for key in self.__dict__.keys():
if key in ['name', 'st', 'prepick', 'event', 'template_info']:
continue
if not self.__dict__[key] == other.__dict__[key]:
return False
return True |