SeismicX-Cont — Continuous Seismic Dataset
Benchmark dataset for seismic monitoring with continuous waveform data
1. Overview
The SeismicX-Continuous Dataset is a curated benchmark dataset optimized for evaluating seismic monitoring algorithms on real-world continuous waveform recordings.
Unlike traditional event-driven datasets, this collection emphasizes continuous data streams, enabling robust comparisons across both high-activity and low-activity seismic periods.
The dataset is integrated with the SeismicX DataLoader, delivering an AI-ready pipeline for large-scale waveform analysis.
1.1 Repository Layout
The repository is organized as follows:
data/
  hdf5/      # 14 daily HDF5 waveform files
  index/     # SQLite waveform index
  label/     # annotation JSON, station table, consensus JSON
  response/  # instrument-response JSON for dataloader correction/simulation
notebooks/   # interactive quickstart notebook
pickers/     # example TorchScript/ONNX picker models
scripts/     # conversion, indexing, picker, evaluation, plotting, upload scripts
utils/       # reusable HDF5 dataloader and waveform-index APIs
The command-line entry points live in scripts/. The quickstart notebook
notebooks/seismicx_cont_quickstart.ipynb demonstrates file checks, SQLite
queries, HDF5 waveform plotting, annotation inspection, and evaluation command
templates.
Useful release-maintenance commands:
./scripts/verify_full_dataset.py
./scripts/build_waveform_index.sh
./scripts/write_manifest.sh
2. Dataset Description
2.1 Data Source
This dataset contains continuous waveform records from three regional seismic networks:
- CI — Southern California Seismic Network
- NC — Northern California Seismic Network
- BK — Berkeley Digital Seismic Network
These networks present varied observation conditions, including differences in instrument response, channel configuration, noise characteristics, and station density.
2.2 Time Windows
Two representative 7-day continuous time windows are provided:
TIME_WINDOWS = [
    {
        "id": "day_20190701",
        "starttime": "2019-07-01T00:00:00",
        "endtime": "2019-07-08T00:00:00",
    },
    {
        "id": "day_20211108",
        "starttime": "2021-11-08T00:00:00",
        "endtime": "2021-11-15T00:00:00",
    },
]
Each window covers 7 consecutive days of continuous seismic recordings.
2.3 Data Content
The dataset includes continuous three-component waveform data in HDF5 format, station metadata (latitude, longitude, elevation, validity period), multi-segment waveform records merged into continuous traces, and event annotations derived from the CEED dataset.
3. Benchmark Design
SeismicX-Continuous is designed as a dual-scenario benchmark, separating the dataset into high- and low-activity evaluation regimes.
3.1 High-Activity Window (July 2019)
Dense seismicity with frequent and overlapping events. Evaluation focuses on recall, missed detections, detection in clustered event sequences, and temporal resolution limits.
3.2 Low-Activity Window (November 2021)
Sparse seismicity with extended quiet periods. Evaluation focuses on precision, false alarms, noise robustness, and model stability over long quiet intervals.
4. Data Structure
The dataset is stored in a hierarchical HDF5 layout:
/year
  /day
    /stations
      /network.station.location
        /waveform
          /channel
            /segment
Example:
/2019-01-01T00:00:00.000000Z
  /2019-07-01T00:00:00.000000Z
    /stations
      /BK.BDM.00
        /waveform
          /BHE/0
          /BHN/0
          /BHZ/0
Each channel can contain multiple waveform segments that are automatically merged during loading.
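For orientation, the layout can be walked directly with h5py. A minimal sketch, assuming each day group holds a stations subgroup as in the example above (the loader's own traversal and merging logic lives in utils/hdf5_waveform_dataset.py):

```python
import h5py

# Walk /year/day/stations/<net.sta.loc>/waveform/<channel>/<segment>
# and print every segment dataset in one daily file.
with h5py.File("data/hdf5/continuous_waveform_usa_20190701.h5", "r") as f:
    for year_id, year_grp in f.items():
        for day_id, day_grp in year_grp.items():
            for station_id, sta_grp in day_grp["stations"].items():
                for channel, ch_grp in sta_grp["waveform"].items():
                    for seg_id, seg in ch_grp.items():
                        print(year_id, day_id, station_id, channel, seg_id, seg.shape)
```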
5. SeismicX DataLoader
File: utils/hdf5_waveform_dataset.py
A PyTorch Dataset that wraps the hierarchical HDF5 files and delivers ready-to-use waveform tensors with full station metadata. Designed for single-pass inference over large multi-file datasets with minimal memory footprint.
5.1 Key Features
Multi-file input — accepts a single file, a directory, a glob pattern (data/hdf5/*.h5), or a Python list of paths.
Channel-family grouping — waveforms are grouped by the first two characters of the channel code (the "family"), so BHE/BHN/BHZ form one sample rather than three separate ones (see the sketch after this list).
| Channels | Family |
|---|---|
| BHE, BHN, BHZ | BH |
| HHE, HHN, HHZ | HH |
| HNE, HNN, HNZ / HN1, HN2, HN3 | HN |
| EHE, EHN, EHZ | EH |
Automatic 3-component construction — E/1 → slot 0, N/2 → slot 1, Z/3 → slot 2. Z-only stations can be replicated to [Z, Z, Z].
Multi-segment merging — consecutive HDF5 segments are merged into one contiguous trace with gap-filling.
Optional resampling — linear-interpolation resampling to a target sampling rate (no scipy dependency).
Optional instrument-response handling — the loader can remove the native instrument response using data/response/instrument_responses.json and can optionally simulate a user-selected target response.
Memory-safe file handle caching — only the current HDF5 file's handle is kept open; when the loader moves to a new file the old handle (and its chunk cache) is closed immediately. Memory usage stays O(1) regardless of the number of files.
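A minimal sketch of the family-grouping and component-slot rules listed above; the helper below is hypothetical and only illustrates the convention, the actual implementation lives in utils/hdf5_waveform_dataset.py:

```python
from collections import defaultdict

# E/1 -> slot 0, N/2 -> slot 1, Z/3 -> slot 2 (as described above)
COMPONENT_SLOTS = {"E": 0, "1": 0, "N": 1, "2": 1, "Z": 2, "3": 2}

def group_by_family(channels):
    """Group channel codes (e.g. BHE, BHN, BHZ) by their two-letter family."""
    families = defaultdict(dict)
    for ch in channels:
        family, component = ch[:2], ch[-1]
        slot = COMPONENT_SLOTS.get(component)
        if slot is not None:
            families[family][slot] = ch
    return dict(families)

print(group_by_family(["BHE", "BHN", "BHZ", "EHZ"]))
# {'BH': {0: 'BHE', 1: 'BHN', 2: 'BHZ'}, 'EH': {2: 'EHZ'}}
```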
5.2 Constructor Parameters
HDF5WaveformDataset(
    h5_file,
    mode="three",
    fill_value=0.0,
    dtype=np.float32,
    default_location="--",
    allowed_families=("HH", "BH", "EH", "HN"),
    allowed_z_only_channels=("EHZ",),
    allow_z_only=True,
    replicate_z_only=True,
    target_sampling_rate=None,
    skip_sample_keys=None,
    skip_jsonl=None,
    skip_record_type="phase_pick",
    keep_h5_open=True,
    include_segments_metadata=True,
    use_overlap_mask=True,
    h5_rdcc_nbytes=8 * 1024 * 1024,
    max_duration_sec=90000.0,
    instrument_response_json=None,
    remove_instrument_response=False,
    response_output="VEL",
    response_pre_filt=None,
    response_water_level=60,
    response_error_behavior="raise",
    simulate_instrument_response=False,
    simulation_response_json=None,
    simulation_response_id=None,
    simulation_response_selector=None,
    simulation_paz=None,
    simulation_output=None,
)
| Parameter | Type | Default | Description |
|---|---|---|---|
| h5_file | str / Path / list | required | Single file, directory, glob pattern (data/hdf5/*.h5), or list |
| mode | str | "three" | Output mode: "single", "three", or "multi" |
| allowed_families | tuple[str] | ("HH","BH","EH","HN") | Channel families included in the index |
| allowed_z_only_channels | tuple[str] | ("EHZ",) | Full channel codes accepted as Z-only stations |
| allow_z_only | bool | True | Include single-Z stations (from allowed_z_only_channels) |
| replicate_z_only | bool | True | Fill all 3 components with the Z trace when Z-only |
| target_sampling_rate | float or None | None | Resample to this rate (Hz); None keeps original rate |
| fill_value | float | 0.0 | Value used to fill gaps between merged segments |
| dtype | np.dtype | np.float32 | NumPy dtype for waveform arrays |
| keep_h5_open | bool | True | Cache the current file's handle; stale handles are closed |
| include_segments_metadata | bool | True | Include raw segment dicts in the output dict (set False to save memory) |
| use_overlap_mask | bool | True | Prevent overlapping segments from overwriting each other |
| h5_rdcc_nbytes | int | 8 MB | HDF5 chunk-cache size per file handle (8 MB is enough for single-pass inference) |
| max_duration_sec | float | 90000.0 | Safety limit for one merged channel trace |
| instrument_response_json | str or None | None | Path to data/response/instrument_responses.json for response removal |
| remove_instrument_response | bool | False | Remove native instrument response with ObsPy and return physical units |
| response_output | str | "VEL" | Output unit for response removal: usually "DISP", "VEL", or "ACC" |
| response_pre_filt | tuple or None | None | Four-corner frequency pre-filter passed to ObsPy remove_response |
| response_water_level | float or None | 60 | Water level passed to ObsPy; use None when controlling stabilization by response_pre_filt |
| response_error_behavior | str | "raise" | "raise", "warn", or "skip" if a response cannot be applied |
| simulate_instrument_response | bool | False | Simulate a target response after optional response removal |
| simulation_response_json | str or None | None | Response JSON containing the target simulated response; defaults to instrument_response_json |
| simulation_response_id | str or None | None | Exact response_id to simulate |
| simulation_response_selector | dict or None | None | Select target response by network, station, location, channel, and optional time |
| simulation_paz | dict or None | None | Alternative ObsPy-style PAZ dict for simple response simulation |
| simulation_output | str or None | None | Physical unit used when evaluating the target response; defaults to response_output |
| skip_sample_keys | set or None | None | Pre-built set of sample keys to exclude from the index |
| skip_jsonl | str or None | None | Path to an existing output JSONL; finished samples are scanned and removed from the index at startup |
| skip_record_type | str | "phase_pick" | record_type value matched when scanning the JSONL for resume |
5.3 Output Modes
mode="single"
One sample per channel. Output waveform shape: [T].
mode="three" (recommended)
One sample per channel family. Components are arranged as E/N/Z (or 1/2/3 mapped to the same slots). Output waveform shape: [T, 3].
mode="multi"
One sample per channel family, all available channels stacked. Output waveform shape: [T, C].
5.4 Usage Examples
Single file:
from utils.hdf5_waveform_dataset import HDF5WaveformDataset, waveform_collate_fn
dataset = HDF5WaveformDataset(
    h5_file="data/hdf5/continuous_waveform_usa_20190701.h5",
    mode="three",
    target_sampling_rate=100.0,
)
Glob pattern (multiple files):
dataset = HDF5WaveformDataset(
    h5_file="data/hdf5/continuous_waveform_usa_*.h5",
    mode="three",
    allowed_families=("HH", "BH", "EH", "HN"),
    allow_z_only=True,
    replicate_z_only=True,
    target_sampling_rate=100.0,
)
print(f"Files: {len(dataset.h5_files)} | Samples: {len(dataset)}")
Directory:
dataset = HDF5WaveformDataset(h5_file="data/hdf5/", mode="three")
Remove native instrument response:
dataset = HDF5WaveformDataset(
    h5_file="data/hdf5/continuous_waveform_usa_20190701.h5",
    mode="three",
    instrument_response_json="data/response/instrument_responses.json",
    remove_instrument_response=True,
    response_output="VEL",
    response_pre_filt=(0.2, 0.5, 20.0, 45.0),
    response_water_level=None,
)
Remove the native response and simulate a target response:
dataset = HDF5WaveformDataset(
    h5_file="data/hdf5/continuous_waveform_usa_20190701.h5",
    mode="single",
    instrument_response_json="data/response/instrument_responses.json",
    remove_instrument_response=True,
    response_output="VEL",
    response_pre_filt=(0.2, 0.5, 20.0, 45.0),
    response_water_level=None,
    simulate_instrument_response=True,
    simulation_response_selector={
        "network": "BK",
        "station": "BDM",
        "location": "00",
        "channel": "BHZ",
        "time": "2019-07-06T04:00:00Z",
    },
    simulation_output="VEL",
)
The returned item includes an instrument_processing field with the matched
native response_id, response epoch, simulated response id, and any processing
error. Response removal and response simulation use ObsPy's response machinery;
the dataloader only provides the SeismicX-Cont JSON lookup and per-sample
matching layer.
5.5 DataLoader Configuration
Single-process (safest — required for MPS):
from torch.utils.data import DataLoader
loader = DataLoader(
    dataset,
    batch_size=1,
    shuffle=False,
    num_workers=0,
    collate_fn=waveform_collate_fn,
)
Multi-process (recommended for CUDA):
h5py is not fork-safe. On Linux the default DataLoader start method is fork, which can corrupt HDF5 reads. Use spawn explicitly:
from utils.hdf5_waveform_dataset import hdf5_worker_init_fn
loader = DataLoader(
    dataset,
    batch_size=4,
    shuffle=False,
    num_workers=4,
    collate_fn=waveform_collate_fn,
    multiprocessing_context="spawn",  # avoids h5py + fork deadlocks on Linux
    worker_init_fn=hdf5_worker_init_fn,
    prefetch_factor=2,
    persistent_workers=True,
    pin_memory=True,  # speeds up CPU → GPU transfers on CUDA
)
On macOS the default context is already spawn (Python ≥ 3.8), so no override is needed.
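Iteration is then plain PyTorch. The loop below assumes waveform_collate_fn yields a list of per-sample dicts per batch (check the collate function if your batch type differs):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

for batch in loader:
    # assumption: the collate function returns a list of per-sample dicts
    for sample in batch:
        wave = sample["waveform"].to(device)  # [T, 3] float32
        print(sample["station_id"], tuple(wave.shape), sample["sampling_rate"])
```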
5.6 Resume: Skip Already-Processed Samples
Pass skip_jsonl to remove already-processed samples from the index before any waveform data is read from HDF5:
dataset = HDF5WaveformDataset(
    h5_file="data/hdf5/continuous_waveform_usa_*.h5",
    mode="three",
    skip_jsonl="data/picks/output.jsonl",
    skip_record_type="phase_pick",
)
print(f"Original samples : {dataset.original_index_size}")
print(f"Already done : {dataset.filtered_index_size}")
print(f"Remaining : {len(dataset)}")
At startup the JSONL is split into byte-range chunks and parsed in parallel with ThreadPoolExecutor. Each finished record's sample key is reconstructed directly from its fields (h5_file, year_id, day_id, station_id, channel_family, channels). The matching index entries are dropped before any __getitem__ call.
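The sample-key format is internal to the loader; the sample_key field shown in Section 10.5 suggests a pipe-joined layout, so a reconstruction sketch under that assumption might look like:

```python
import json

def sample_key(record):
    # Assumed key layout, inferred from the sample_key field in Section 10.5:
    # pipe-joined identity fields plus comma-joined channel codes.
    return "|".join([
        record["h5_file"],
        record["year_id"],
        record["day_id"],
        record["station_id"],
        record["channel_family"],
        ",".join(record["channels"]),
    ])

done = set()
with open("data/picks/output.jsonl") as fh:
    for line in fh:
        rec = json.loads(line)
        if rec.get("record_type") == "phase_pick":
            done.add(sample_key(rec))
```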
5.7 Output Sample Dictionary
Each item returned by __getitem__ (or delivered by the DataLoader after waveform_collate_fn) is a plain Python dict:
{
    # Identity
    "mode": "three",
    "h5_file": "data/hdf5/continuous_waveform_usa_20190701.h5",
    "year_id": "2019-01-01T00:00:00.000000Z",
    "day_id": "2019-07-01T00:00:00.000000Z",
    "station_id": "BK.BDM.00",
    # Channel layout
    "channel_family": "BH",
    "channels": ["BHE", "BHN", "BHZ"],  # one entry per component slot
    "component_order": "E/N/Z or 1/2/3",
    "is_z_only": False,
    "z_only_replicated": False,
    # Waveform
    "waveform": torch.Tensor,  # shape [T, 3], dtype float32
    "sampling_rate": 100.0,    # Hz after optional resampling
    "original_sampling_rate": 40.0,
    "target_sampling_rate": 100.0,
    "resampled": True,
    "npts": 8640001,
    # Timing
    "starttime": "2019-07-01T00:00:00.000000Z",
    "endtime": "2019-07-01T23:59:59.990000Z",
    # Station metadata
    "station_info": {
        "station_id": "BK.BDM.00",
        "network": "BK",
        "station": "BDM",
        "location": "00",
        "longitude": -122.2486,
        "latitude": 37.8762,
        "elevation": 276.0,
        "location_available": True,
        "position_match_mode": "strict_time_matched_network_station_only",
        "position_is_fallback": False,
        "position_history": [...],
        ...
    },
    # Per-channel metadata (keyed by channel code)
    "channel_info": {
        "BHE": {"channel": "BHE", "segment_count": 3, "starttime": "...", ...},
        "BHN": {...},
        "BHZ": {...},
    },
    # Raw segment metadata (empty list when include_segments_metadata=False)
    "segments": [],
}
5.8 Minimal Example Script
scripts/example_dataloader.py demonstrates the full pipeline in under 90 lines:
python scripts/example_dataloader.py \
--h5_input data/hdf5/continuous_waveform_usa_20190701.h5 \
--n_samples 3
Sample output:
── Sample 1 ──────────────────────────────────────────
station_id : BK.BDM.00
network : BK.BDM
channels : ['BHE', 'BHN', 'BHZ']
starttime : 2019-07-01T00:00:00.000000Z
sampling_rate : 100.0 Hz
waveform shape: (8640001, 3) (86400.0 s × 3 components)
waveform dtype: torch.float32
Z-only : False
location : lon=-122.2486 lat=37.8762
ch[E/1] min=-1.234e-05 max=+2.345e-05 std=3.210e-06
ch[N/2] min=-1.100e-05 max=+1.987e-05 std=2.890e-06
ch[Z/3] min=-9.870e-06 max=+1.543e-05 std=2.654e-06
6. Evaluation Metrics
Recommended metrics for benchmarking models on this dataset (a minimal computation sketch follows the list):
- Recall (Detection Rate) — fraction of labeled picks matched within the TP tolerance
- Precision — fraction of automatic picks that correspond to a labeled event
- F1-score — harmonic mean of precision and recall
- Miss Rate — 1 − Recall
- False Alarm Rate — automatic picks per unit time with no matching label
- Residual distribution — mean, std, and robust percentiles of
auto_time − label_time
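These metrics reduce to simple counts plus residual statistics; a minimal sketch (function and argument names are illustrative, not the evaluation script's API):

```python
import numpy as np

def pick_metrics(n_label, n_auto, n_matched, residuals_s, duration_s):
    """Aggregate pick-level metrics from match counts; names are illustrative."""
    recall = n_matched / n_label if n_label else 0.0
    precision = n_matched / n_auto if n_auto else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    r = np.asarray(residuals_s, dtype=float)  # auto_time - label_time, matched picks only
    return {
        "recall": recall,
        "precision": precision,
        "f1": f1,
        "miss_rate": 1.0 - recall,
        "false_alarms_per_hour": (n_auto - n_matched) / (duration_s / 3600.0),
        "residual_mean_s": float(r.mean()) if r.size else float("nan"),
        "residual_std_s": float(r.std()) if r.size else float("nan"),
        "residual_abs_p90_s": float(np.percentile(np.abs(r), 90)) if r.size else float("nan"),
    }
```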
7. Design Philosophy
SeismicX-Continuous is built as a controlled benchmark for continuous seismic monitoring, with separate evaluation pathways for:
- High-event-density conditions → missed detection analysis
- Low-event-density conditions → false alarm and stability analysis
This structure supports systematic evaluation of model behavior in realistic operational scenarios.
8. Summary
The SeismicX-Continuous Dataset provides a realistic continuous waveform benchmark, balanced evaluation scenarios, high-quality annotations, and an AI-ready data structure.
It is well suited for continuous seismic detection, real-time monitoring systems, machine learning model evaluation, and large-scale waveform analysis.
9. Waveform Index API and Data Query (with macOS External Drive Notes)
This section introduces the SQLite-based waveform indexing and querying system.
9.1 Core API
from utils.waveform_index_api import query_and_merge
merged, rows = query_and_merge(
    db_file="data/index/waveform_index.sqlite",
    network="BK",
    station="BDM",
    location="*",
    channel="BH*",
    starttime="2019-07-01T00:00:00",
    endtime="2019-07-01T01:00:00",
    fill_value=0.0,
)
merged is a dict keyed by "network.station.location.channel", each value containing data (ndarray), sampling_rate, starttime, endtime, filled_ratio, and segments.
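For example, to inspect the merged traces:

```python
# Each value exposes the fields listed above.
for key, trace in merged.items():
    print(
        key,
        f"rate={trace['sampling_rate']} Hz",
        f"samples={trace['data'].shape}",
        f"filled={trace['filled_ratio']:.2%}",
        f"{trace['starttime']} → {trace['endtime']}",
    )
```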
9.2 Wildcard Query Mechanism
* is converted to SQL LIKE (%). Supported: * (match all), BK (exact), B* (prefix), *Z (suffix), BH* (channel family).
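Conceptually the translation is a single substitution feeding a parameterized LIKE clause; a sketch, not the library's actual code:

```python
def wildcard_to_like(pattern: str) -> str:
    # "*" becomes the SQL LIKE wildcard "%"; exact strings pass through
    # unchanged and still match via LIKE with no wildcard present.
    return pattern.replace("*", "%")

# One WHERE clause per field, always parameterized:
clause = "network LIKE ? AND station LIKE ? AND channel LIKE ?"
params = (wildcard_to_like("BK"), wildcard_to_like("*"), wildcard_to_like("BH*"))
```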
9.3 Build SQLite Index
python utils/hdf5_waveform_index.py build \
--h5 "data/hdf5/continuous_waveform_usa_*.h5" \
--db data/index/waveform_index.sqlite
On external drives, prefer the wrapper below. It builds the SQLite database on
local temporary storage first and then copies the finished index into
data/index/:
./scripts/build_waveform_index.sh
9.4 Example Script
python scripts/example_query_waveform.py \
--db data/index/waveform_index.sqlite \
--network BK --station BDM \
--starttime 2019-07-01T00:00:00 \
--endtime 2019-07-01T01:00:00
9.5 ⚠️ macOS External Drive Issue
SQLite requires file locking and journal/WAL files. External drives (exFAT, NTFS, network drives) often lack reliable locking support, causing sqlite3.OperationalError: attempt to write a readonly database even when HDF5 writes succeed normally.
Solution (recommended): keep the SQLite index on a local disk while HDF5 data stays on the external drive:
# HDF5 on external drive
--h5 /Volumes/Data/.../*.h5
# SQLite on local disk
--db ~/waveform_index.sqlite
Test SQLite writability explicitly (point the path at whichever drive you want to test):
python -c "import sqlite3; sqlite3.connect('/tmp/seismicx_sqlite_test.db').execute('CREATE TABLE t(a)')"
10. Phase Picking: scripts/run_picker_to_jsonl.py
Runs a TorchScript seismic phase picker over every sample in the DataLoader and writes results to a JSONL file. Supports Mac/MPS, CUDA (single or multi-GPU), and CPU with automatic device selection, periodic model reloading, and built-in resume.
10.1 Quick Start
Mac / Apple Silicon (MPS):
python scripts/run_picker_to_jsonl.py \
--h5_input 'data/hdf5/continuous_waveform_usa_*.h5' \
--output_jsonl data/picks/output.jsonl \
--picker_model pickers/pnsn.v1.jit \
--polar_model pickers/polar.onnx \
--device mps
CUDA with multi-process prefetch:
python scripts/run_picker_to_jsonl.py \
--h5_input 'data/hdf5/continuous_waveform_usa_*.h5' \
--output_jsonl data/picks/output.jsonl \
--picker_model pickers/pnsn.v1.jit \
--polar_model pickers/polar.onnx \
--device cuda \
--batch_size 4 \
--num_workers 4 \
--prefetch_factor 2
Resume an interrupted run (default behavior):
Simply re-run the same command. The script scans the existing JSONL at startup, rebuilds the set of already-processed sample keys, and removes them from the DataLoader index before any waveform data is read.
10.2 CLI Arguments
| Argument | Default | Description |
|---|---|---|
| --h5_input | data/hdf5/continuous_waveform_usa_*.h5 | HDF5 file, directory, or glob pattern |
| --output_jsonl | data/picks/output.jsonl | Output JSONL file (appended in resume mode) |
| --picker_model | pickers/pnsn.v1.jit | TorchScript phase picker (.jit) |
| --polar_model | pickers/polar.onnx | Optional ONNX polarity model; pass "" to disable |
| --device | auto | Inference device: auto, cpu, cuda, or mps |
| --batch_size | 1 | DataLoader batch size; samples are still processed one by one |
| --num_workers | 0 | DataLoader worker processes; 0 = single-process (required for MPS) |
| --prefetch_factor | 2 | Batches pre-loaded per worker (ignored when num_workers=0) |
| --multiprocessing_context | auto | auto selects spawn on Linux; override with spawn, fork, or forkserver |
| --allowed_families | HH,BH,EH,HN | Comma-separated channel families |
| --allowed_z_only_channels | EHZ | Comma-separated Z-only channel codes |
| --allow_z_only / --no_z_only | True | Include / exclude Z-only stations |
| --replicate_z_only / --no_replicate_z_only | True | Replicate Z to [Z,Z,Z] |
| --target_sampling_rate | 100.0 | Resample to Hz; use -1 to disable |
| --min_confidence | 0.0 | Minimum pick probability to write |
| --snr_window_sec | 2.0 | SNR window length before/after pick (seconds) |
| --progress_interval | 100 | Print progress every N samples |
| --flush_interval | 1000 | Flush JSONL every N samples; 0 = flush only at end |
| --gc_interval | 200 | Run gc.collect + empty device cache every N samples |
| --resume / --no_resume | True | Enable / disable resume mode |
| --mps_empty_cache_interval | 1 | Call torch.mps.empty_cache() every N samples |
| --cuda_empty_cache_interval | 200 | Call torch.cuda.empty_cache() every N samples |
| --reload_model_interval | -1 | Reload TorchScript model every N samples; -1 = auto (50 on MPS, 0 on CUDA/CPU) |
| --include_segments_metadata | False | Include raw HDF5 segment dicts in loader output |
| --no_keep_h5_open | — | Disable per-file HDF5 handle caching |
| --no_overlap_mask | — | Disable overlap mask in segment merging |
| --h5_rdcc_nbytes | 8388608 (8 MB) | HDF5 chunk-cache size per file handle |
| --max_samples | 0 | Exit after N new samples (code 75); re-run with --resume to continue. 0 = no limit. See MPS restart loop below. |
10.3 Device-Specific Notes
MPS (Mac / Apple Silicon)
Use --num_workers 0 (single-process). The MPS TorchScript allocator accumulates internal Metal buffer state with each model forward call (sess(xt) in the script); torch.mps.empty_cache() does not fully reclaim it. The model is reloaded every 50 samples by default (--reload_model_interval -1) to reset allocator state. Increase the interval for very short waveforms if the reload overhead is noticeable; set --reload_model_interval 0 to disable.
For long runs (thousands of samples), even model reloads cannot fully reclaim the Metal allocator's process-level state. The recommended pattern is to let the OS reclaim memory on each process exit using --max_samples and a restart loop:
while true; do
  python scripts/run_picker_to_jsonl.py \
    --h5_input 'data/hdf5/continuous_waveform_usa_*.h5' \
    --output_jsonl data/picks/output.jsonl \
    --picker_model pickers/pnsn.v1.jit \
    --polar_model pickers/polar.onnx \
    --device mps --max_samples 200
  [ $? -ne 75 ] && break
done
The script exits with code 75 when --max_samples is reached and code 0 when all samples are finished. --resume is enabled by default, so each restart skips already-written samples at the dataset level (no waveform data is re-read from HDF5).
CUDA
Use --num_workers 4 or more to overlap HDF5 I/O with GPU inference. The spawn context is selected automatically on Linux. CUDA allocator state does not accumulate across TorchScript calls, so model reload is disabled by default.
CPU
Single-process; no special memory management needed.
10.4 Resume Mechanism
At startup the script passes skip_jsonl=output_jsonl to HDF5WaveformDataset. The JSONL is scanned in parallel using ThreadPoolExecutor. Each finished record's sample key is reconstructed from its stored fields (h5_file, year_id, day_id, station_id, channel_family, channels). Matching index entries are dropped from the DataLoader — their waveform data is never read from HDF5.
On the second and subsequent runs, startup takes O(N_finished × JSON parse cost). For very large JSONL files (>10 GB) this may take 30–120 seconds but is far cheaper than re-running inference.
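A simplified sketch of that chunked scan (the stand-in key below uses only station_id; the real implementation reconstructs the full sample key as described above):

```python
import json
import os
from concurrent.futures import ThreadPoolExecutor

def scan_chunk(path, start, end):
    """Collect sample keys from whole JSONL lines inside [start, end)."""
    keys = set()
    with open(path, "rb") as fh:
        fh.seek(start)
        if start:
            fh.readline()  # the previous chunk owns this partial line
        while fh.tell() < end:
            line = fh.readline()
            if not line:
                break
            rec = json.loads(line)
            if rec.get("record_type") == "phase_pick":
                keys.add(rec["station_id"])  # stand-in for the full sample key
    return keys

path = "data/picks/output.jsonl"
size = os.path.getsize(path)
n = 8  # number of byte-range chunks
bounds = [size * i // n for i in range(n + 1)]
with ThreadPoolExecutor(n) as ex:
    parts = ex.map(scan_chunk, [path] * n, bounds[:-1], bounds[1:])
done = set().union(*parts)
```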
10.5 Output JSONL Format
Each output line is a self-contained JSON record. Two record_type values are written:
"phase_pick" — one record per detected phase:
{
  "record_type": "phase_pick",
  "phase_id": 0,
  "phase_name": "Pg",
  "phase_prob": 0.987,
  "phase_sample": 15234.0,
  "phase_relative_time": 152.34,
  "phase_time": "2019-07-01T00:02:32.340000Z",
  "polarity": "U",
  "polarity_prob": 0.912,
  "polarity_available": true,
  "snr": 12.4,
  "amplitude": 3.14e-05,
  "pre_std": 2.10e-06,
  "post_std": 2.60e-05,
  "pre_abs_p95": 4.11e-06,
  "post_abs_p95": 5.87e-05,
  "channels": ["BHE", "BHN", "BHZ"],
  "channel_family": "BH",
  "component_order": "E/N/Z or 1/2/3",
  "is_z_only": false,
  "z_only_replicated": false,
  "waveform_shape": [8640001, 3],
  "sampling_rate": 100.0,
  "original_sampling_rate": 40.0,
  "target_sampling_rate": 100.0,
  "resampled": true,
  "h5_file": "data/hdf5/continuous_waveform_usa_20190701.h5",
  "year_id": "2019-01-01T00:00:00.000000Z",
  "day_id": "2019-07-01T00:00:00.000000Z",
  "station_info": {
    "station_id": "BK.BDM.00",
    "network": "BK",
    "station": "BDM",
    "location": "00",
    "longitude": -122.2486,
    "latitude": 37.8762,
    "elevation": 276.0,
    "location_available": true,
    "position_in_time_range": true,
    "position_is_fallback": false
  }
}
"error" — one record per sample where inference failed:
{
  "record_type": "error",
  "station_id": "BK.BDM.00",
  "h5_file": "...",
  "year_id": "...",
  "day_id": "...",
  "sample_key": "...|...|...|...|BH|BHE,BHN,BHZ",
  "error": "RuntimeError: ..."
}
Phase ID → name mapping:
| phase_id | phase_name |
|---|---|
| 0 | Pg |
| 1 | Sg |
| 2 | Pn |
| 3 | Sn |
| 4 | P |
| 5 | S |
11. Evaluating Picks: scripts/evaluate_picks.py
Compares automatically generated phase picks against human annotation labels. Uses a SQLite index to handle JSONL output files of 40 GB or more without loading them into memory.
11.1 Workflow
Step 1 — Build the SQLite index (first run only):
python scripts/evaluate_picks.py \
--auto-jsonl data/picks/output.jsonl \
--index-db ~/output.index.sqlite \
--build-index \
--drop-existing
This streams through the JSONL, batch-inserts picks into SQLite (50,000 rows per commit), and builds indices on (station_id, phase_name, sec_key) and time_epoch for fast lookup. The index file is typically 3–5× smaller than the JSONL.
Use --drop-existing when rebuilding an index after updating the source JSONL, so old picks are not retained.
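The indexing pattern is standard SQLite batching. A simplified sketch; the table schema below is illustrative, the script's real schema also stores sec_key, time_epoch, and more:

```python
import json
import sqlite3

conn = sqlite3.connect("output.index.sqlite")
conn.execute(
    "CREATE TABLE IF NOT EXISTS picks "
    "(station_id TEXT, phase_name TEXT, phase_time TEXT)"
)

batch, BATCH_SIZE = [], 50_000
with open("data/picks/output.jsonl") as fh:
    for line in fh:
        rec = json.loads(line)
        if rec.get("record_type") != "phase_pick":
            continue
        batch.append((rec["station_id"], rec["phase_name"], rec["phase_time"]))
        if len(batch) >= BATCH_SIZE:
            conn.executemany("INSERT INTO picks VALUES (?, ?, ?)", batch)
            conn.commit()  # one commit per 50,000-row batch
            batch.clear()
if batch:
    conn.executemany("INSERT INTO picks VALUES (?, ?, ?)", batch)
    conn.commit()
conn.execute("CREATE INDEX IF NOT EXISTS idx_sta_phase ON picks(station_id, phase_name)")
conn.commit()
```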
Step 2 — Evaluate recall and residuals:
python scripts/evaluate_picks.py \
--auto-jsonl data/picks/output.jsonl \
--label-json data/label/annotations_for_continuous_hdf5.json \
--index-db ~/output.index.sqlite \
--outdir eval_picks/eval_phasenet \
--tp-tol 1.5 \
--err-window 5.0
Step 3 — Generate figures (optional, requires matplotlib + scipy):
python scripts/evaluate_picks.py \
--auto-jsonl data/picks/output.jsonl \
--label-json data/label/annotations_for_continuous_hdf5.json \
--index-db ~/output.index.sqlite \
--outdir eval_picks/eval_phasenet \
--plot
Inspect index statistics:
python scripts/evaluate_picks.py \
--index-db ~/output.index.sqlite \
--db-info
11.2 CLI Arguments
| Argument | Default | Description |
|---|---|---|
| --auto-jsonl | data/picks/skynet.phase.jsonl | JSONL output from scripts/run_picker_to_jsonl.py |
| --label-json | data/label/annotations_for_continuous_hdf5.json | Annotation JSON (CEED format) |
| --index-db | ~/skynet.pick.index.sqlite | Path to SQLite index (keep on local disk — see Section 9.5) |
| --outdir | eval_picks/eval_skynet | Directory for output files |
| --build-index | flag | Stream JSONL and build / update SQLite index |
| --drop-existing | flag | Drop the existing index table before rebuilding |
| --keep-raw-json | flag | Store full raw JSON in SQLite (not recommended for large files) |
| --batch-size | 50000 | SQLite insert batch size during indexing |
| --tp-tol | 1.5 | True-positive tolerance window in seconds |
| --err-window | 5.0 | Wider search window for residual distribution in seconds |
| --min-prob | None | Minimum phase_prob to include automatic picks |
| --phase-map | None | Map label phases to automatic phases (see below) |
| --plot | flag | Generate residual histograms and recall bar charts |
| --db-info | flag | Print pick counts from index and exit |
11.3 Phase Map
The --phase-map argument controls how label phase names are matched to automatic phase names. The default mapping is:
P → Pg
S → Sg
Pg → Pg
Sg → Sg
Pn → Pg, Pn, P
Sn → Sg, Sn, S
Override with a semicolon-separated list:
# Match label P to Pg or Pn; label S to Sg or Sn
--phase-map "P:Pg,Pn;S:Sg,Sn"
# Strict: label P must match Pg only
--phase-map "P:Pg;S:Sg"
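A minimal parser for this syntax (the script's own parsing may differ in details):

```python
def parse_phase_map(spec: str) -> dict[str, set[str]]:
    """Parse "P:Pg,Pn;S:Sg,Sn" into {"P": {"Pg", "Pn"}, "S": {"Sg", "Sn"}}."""
    mapping = {}
    for entry in spec.split(";"):
        if not entry.strip():
            continue
        label, autos = entry.split(":")
        mapping[label.strip()] = {a.strip() for a in autos.split(",")}
    return mapping

print(parse_phase_map("P:Pg,Pn;S:Sg,Sn"))
# {'P': {'Pg', 'Pn'}, 'S': {'Sg', 'Sn'}}
```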
11.4 Annotation JSON Format
The label file follows a nested structure:
years → days → events → stations → picks[]
Each pick entry must have at minimum:
{
  "phase": "P",
  "time": "2019-07-01T00:02:32.340000Z",
  "status": "manual",
  "station_id": "BK.BDM.00"
}
Optional fields: network, station, location, event_id, distance_km.
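Since the exact nesting keys vary, a defensive way to flatten the labels is to collect every picks[] list recursively; a sketch:

```python
import json

def collect_picks(node, out):
    """Recursively gather every picks[] list from the nested annotation tree."""
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "picks" and isinstance(value, list):
                out.extend(value)
            else:
                collect_picks(value, out)
    elif isinstance(node, list):
        for item in node:
            collect_picks(item, out)

with open("data/label/annotations_for_continuous_hdf5.json") as fh:
    labels = json.load(fh)

picks = []
collect_picks(labels, picks)
print(len(picks), "label picks")
```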
11.5 Output Files
All outputs are written to --outdir:
| File | Description |
|---|---|
| summary.json | Per-phase and per-subset statistics: recall, residual mean/std/median/percentiles, Gaussian and Student-t fit parameters, automatic pick counts |
| summary.tsv | Same data as tab-separated text, one row per (subset, phase) |
| matches.jsonl | One JSON record per label pick: whether it matched, residual in seconds, auto phase probability, SNR |
| figures/residual_fit_{subset}_{phase}.png | Residual histogram overlaid with fitted Gaussian and Student-t PDFs |
| figures/recall_bar.png | Recall by phase and subset |
| figures/auto_pick_count_bar.png | Automatic pick counts grouped by phase name |
Subsets reported: all (every label pick), manual (manually reviewed only), automatic (automatically labeled only).
summary.json structure (excerpt):
{
  "tp_tolerance_s": 1.5,
  "residual_window_s": 5.0,
  "label_phase_count_all_status": {"P": 1240, "S": 1198},
  "subsets": {
    "all": {
      "P": {
        "n_label": 1240,
        "n_matched_within_tp_tol": 1102,
        "recall": 0.889,
        "residual_mean_s": 0.031,
        "residual_std_s": 0.214,
        "residual_median_s": 0.018,
        "residual_abs_p90_s": 0.45,
        "gaussian_fit_all_within_err_window": {"mean": 0.031, "std_mle": 0.213, "std_unbiased": 0.213},
        "student_t_fit_all_within_err_window": {"df": 4.2, "loc": 0.029, "scale": 0.18}
      }
    }
  },
  "auto_pick_count": {
    "total": 458320,
    "by_auto_phase": {"Pg": 231044, "Sg": 198210, "Pn": 18700, "Sn": 10366},
    "mapped_to_label_phase": {"P": 231044, "S": 198210}
  }
}
12. REAL Association: scripts/run_real_association.py
scripts/run_real_association.py connects the SeismicX-Cont picker JSONL
format to the REAL association algorithm. It compiles scripts/real/real.c,
normalizes picker phases to P/S, writes REAL input files, runs
association-based event detection, and exports a workflow JSONL containing both
time-window phase candidates and associated earthquake events. The output can be
used directly for event-detection evaluation or passed to a user-supplied
earthquake-location program through the associated-arrival list.
python scripts/run_real_association.py \
--picks-jsonl data/picks/pnsn_v3_diff.phase.jsonl \
--output-jsonl data/associated/real_events.jsonl \
--starttime 2019-07-01T00:00:00Z \
--endtime 2019-07-08T00:00:00Z
The output data/associated/real_events.jsonl includes one
real_association_run metadata record, one real_input_pick record for each
phase pick that entered REAL after filtering/NMS, and one real_event record
for each associated event. Event arrivals keep a source_pick.real_input_pick_id
so users can trace each associated phase back to the time-window phase record
and then to the original picker JSONL line, station metadata, probability,
channel family, and HDF5 file. The companion real_events.summary.json records
filter counts and REAL run commands.
The default settings are conservative workflow defaults rather than calibrated
catalog-production parameters. Tune --min-prob, --real-s, --real-r, and
--real-g for a particular picker, period, and velocity model. Detailed usage
notes are in scripts/README_real_association.md.
The association JSONL is intentionally simple. A downstream locator can ignore
the REAL-specific diagnostic fields and consume only real_event.picks[], where
each associated arrival gives station identity, phase name, pick time, and a
link back to the original automatic-pick record. Preliminary latitude,
longitude, and depth_km fields are included when the associator provides
them, but they are not required for downstream location workflows.
12.1 Event-Level Catalog Comparison
Use scripts/compare_associated_events.py to compare associated events against
the SeismicX-Cont catalog or any event JSONL following the documented schema:
python scripts/compare_associated_events.py \
--pred-jsonl data/associated/real_events.jsonl \
--catalog data/label/annotations_for_continuous_hdf5.json \
--outdir eval_events/real_vs_catalog
An event is counted as a true positive when it matches a reference event with
epicentral distance error < 20 km and origin-time error < 3 s. The script
reports event precision, recall, F1, false positives, false negatives, and
time/location/depth/magnitude error statistics. The generic event JSONL format
is documented in scripts/README_event_comparison.md.
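A sketch of that criterion (field names are illustrative, and the small-angle distance approximation is an assumption, not necessarily what the script uses):

```python
import math

def epicentral_km(lat1, lon1, lat2, lon2):
    """Small-angle approximation of epicentral distance in km."""
    dlat = lat2 - lat1
    dlon = (lon2 - lon1) * math.cos(math.radians(0.5 * (lat1 + lat2)))
    return 111.19 * math.hypot(dlat, dlon)

def is_true_positive(pred, ref, max_dist_km=20.0, max_dt_s=3.0):
    # True positive: epicentral error < 20 km AND origin-time error < 3 s.
    dist_ok = epicentral_km(pred["latitude"], pred["longitude"],
                            ref["latitude"], ref["longitude"]) < max_dist_km
    time_ok = abs(pred["origin_time_epoch"] - ref["origin_time_epoch"]) < max_dt_s
    return dist_ok and time_ok
```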
13. License, Citation, and Checksums
Data products and derived metadata are released under CC BY 4.0. Code is
released under the MIT License. See LICENSE for the full terms.
The waveform data remain subject to the original SCEDC and NCEDC data usage policies. When reusing SeismicX-Cont, cite SeismicX-Cont together with the original SCEDC, NCEDC, and CEED sources.
The main-release checksum manifest can be regenerated with:
./scripts/write_manifest.sh
The script writes manifest_sha256.txt and excludes local build products,
manuscript files, generated figures, mini-release files, temporary files,
picker outputs, and intermediate hourly subsets.
Contact
SeismicX is developed by:
- Ziye Yu - yuziye@cea-igp.ac.cn
- Yuqi Cai - caiyuqiming@foxmail.com
- Xin Liu - xinliu_geo@outlook.com
- Fajun Miu -
- Tengchao Dong -
- Tingting Li - 972516012@qq.com
- Xinghui Huang - huangxinhui@seis.ac.cn
- Lu Li -
- Ming Ma - 925100762@qq.com
Built with ❤️ for the seismology community