---
license: mit
---

# Quakeflow_NC

## Introduction
This dataset is part of the data from NCEDC (Northern California Earthquake Data Center) and is organised as several HDF5 files. The dataset structure is shown below. (The file [ncedc_event_dataset_000.h5.txt](./ncedc_event_dataset_000.h5.txt) shows the structure of the first shard of the dataset; you can find more information about the format at [AI4EPS](https://ai4eps.github.io/homepage/ml4earth/seismic_event_format1/).)

```
Group: / len:10000
  |- Group: /nc100012 len:5
  |  |-* begin_time = 1987-05-08T00:15:48.890
  |  |-* depth_km = 7.04
  |  |-* end_time = 1987-05-08T00:17:48.890
  |  |-* event_id = nc100012
  |  |-* event_time = 1987-05-08T00:16:14.700
  |  |-* event_time_index = 2581
  |  |-* latitude = 37.5423
  |  |-* longitude = -118.4412
  |  |-* magnitude = 1.1
  |  |-* magnitude_type = D
  |  |-* num_stations = 5
  |  |- Dataset: /nc100012/NC.MRS..EH (shape:(3, 12000))
  |  |  |- (dtype=float32)
  |  |  |  |-* azimuth = 265.0
  |  |  |  |-* component = ['Z']
  |  |  |  |-* distance_km = 39.1
  |  |  |  |-* dt_s = 0.01
  |  |  |  |-* elevation_m = 3680.0
  |  |  |  |-* emergence_angle = 93.0
  |  |  |  |-* event_id = ['nc100012' 'nc100012']
  |  |  |  |-* latitude = 37.5107
  |  |  |  |-* location = 
  |  |  |  |-* longitude = -118.8822
  |  |  |  |-* network = NC
  |  |  |  |-* phase_index = [3274 3802]
  |  |  |  |-* phase_polarity = ['U' 'N']
  |  |  |  |-* phase_remark = ['IP' 'S']
  |  |  |  |-* phase_score = [1 1]
  |  |  |  |-* phase_time = ['1987-05-08T00:16:21.630' '1987-05-08T00:16:26.920']
  |  |  |  |-* phase_type = ['P' 'S']
  |  |  |  |-* snr = [0.         0.         1.98844361]
  |  |  |  |-* station = MRS
  |  |  |  |-* unit = 1e-6m/s
  |  |- Dataset: /nc100012/NN.BEN.N1.EH (shape:(3, 12000))
  |  |  |- (dtype=float32)
  |  |  |  |-* azimuth = 329.0
  |  |  |  |-* component = ['Z']
  |  |  |  |-* distance_km = 22.5
  |  |  |  |-* dt_s = 0.01
  |  |  |  |-* elevation_m = 2476.0
  |  |  |  |-* emergence_angle = 102.0
  |  |  |  |-* event_id = ['nc100012' 'nc100012']
  |  |  |  |-* latitude = 37.7154
  |  |  |  |-* location = N1
  |  |  |  |-* longitude = -118.5741
  |  |  |  |-* network = NN
  |  |  |  |-* phase_index = [3010 3330]
  |  |  |  |-* phase_polarity = ['U' 'N']
  |  |  |  |-* phase_remark = ['IP' 'S']
  |  |  |  |-* phase_score = [0 0]
  |  |  |  |-* phase_time = ['1987-05-08T00:16:18.990' '1987-05-08T00:16:22.190']
  |  |  |  |-* phase_type = ['P' 'S']
  |  |  |  |-* snr = [0.         0.         7.31356192]
  |  |  |  |-* station = BEN
  |  |  |  |-* unit = 1e-6m/s
  ......
```
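To get a feel for this layout before downloading a full shard, you can walk the same group/dataset/attribute hierarchy with `h5py`. The sketch below builds a tiny in-memory file that mimics one event group (the attribute values are illustrative, not taken from a real shard) and then iterates over it the way you would iterate over a real shard:

```python
import h5py
import numpy as np

# Build a tiny in-memory HDF5 file mimicking one event group of the dataset.
# driver="core" with backing_store=False keeps everything in memory.
with h5py.File("demo.h5", "w", driver="core", backing_store=False) as f:
    event = f.create_group("nc100012")
    event.attrs["magnitude"] = 1.1
    event.attrs["event_time"] = "1987-05-08T00:16:14.700"
    trace = event.create_dataset(
        "NC.MRS..EH", data=np.zeros((3, 12000), dtype=np.float32)
    )
    trace.attrs["phase_type"] = ["P", "S"]

    # Walk the file: one group per event, one dataset per station trace.
    for event_id, event_grp in f.items():
        print(event_id, dict(event_grp.attrs))
        for sta_id, ds in event_grp.items():
            print(" ", sta_id, ds.shape, ds.dtype)
```

The same loop works on a downloaded shard by replacing the in-memory file with `h5py.File("ncedc_event_dataset_000.h5", "r")`.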

## How to use

### Requirements
- datasets
- h5py
- torch (for PyTorch)

### Usage
Import the necessary packages:
```python
import h5py
import numpy as np
import torch
from torch.utils.data import Dataset, IterableDataset, DataLoader
from datasets import load_dataset
```
We provide two configurations for the dataset: `NCEDC` and `NCEDC_full_size`. Both return event-based samples one by one, but `NCEDC` returns samples with 10 stations each, while `NCEDC_full_size` returns samples with the same number of stations as the original data.

The sample of `NCEDC` is a dictionary with the following keys:
- `waveform`: the waveform with shape `(3, nt, n_sta)`; the first dimension is the 3 components, the second is the number of time samples, and the third is the number of stations
- `phase_pick`: the phase-pick probability with shape `(3, nt, n_sta)`; the first dimension is noise, P and S
- `event_location`: the event location with shape `(4,)`, containing latitude, longitude, depth and time
- `station_location`: the station location with shape `(n_sta, 3)`; the second dimension is latitude, longitude and depth

Because Hugging Face datasets only support a dynamic size on the first dimension, the sample of `NCEDC_full_size` is a dictionary with the following keys:
- `waveform`: the waveform with shape `(n_sta, 3, nt)`
- `phase_pick`: the phase-pick probability with shape `(n_sta, 3, nt)`
- `event_location`: the event location with shape `(4,)`
- `station_location`: the station location with shape `(n_sta, 3)`; the second dimension is latitude, longitude and depth
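Because of this first-dimension constraint, `NCEDC_full_size` stores waveforms station-first, and you may want to move the station axis back to the end to match the `NCEDC` layout. A minimal NumPy sketch (the array here is random dummy data, not a real sample):

```python
import numpy as np

n_sta, nt = 5, 12000
# NCEDC_full_size layout: (n_sta, 3, nt)
waveform = np.random.rand(n_sta, 3, nt).astype(np.float32)

# Move the station axis to the end: (n_sta, 3, nt) -> (3, nt, n_sta)
waveform_ncedc = np.transpose(waveform, (1, 2, 0))
print(waveform_ncedc.shape)  # (3, 12000, 5)
```

This is the same axis reordering that the `reorder_keys` helper in the `NCEDC_full_size` usage example below performs with `torch.Tensor.permute`.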

The default configuration is `NCEDC`. You can specify the configuration with the `name` argument. For example:
```python
# load dataset
# ATTENTION: streaming (an IterableDataset) is difficult to support because of the HDF5 format,
# so we recommend loading the dataset directly and converting it into an iterable later.
# The dataset is very large, so the first load will take some time.

# to load "NCEDC"
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="train")
# or
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="NCEDC", split="train")

# to load "NCEDC_full_size"
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="NCEDC_full_size", split="train")
```

If you want to use the first several shards of the dataset, you can download the script `quakeflow_nc.py` and change the code as below:
```python
# change the 37 to the number of shards you want
_URLS = {
    "NCEDC": [f"{_REPO}/ncedc_event_dataset_{i:03d}.h5" for i in range(37)]
}
```
Then you can use the dataset like this (don't forget to specify the `name` argument):
```python
# don't forget to specify the script path
quakeflow_nc = load_dataset("path_to_script/quakeflow_nc.py", split="train")
quakeflow_nc
```

#### Usage for `NCEDC`
You can then convert the dataset into a PyTorch-style iterable dataset and inspect the first sample:
```python
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="NCEDC", split="train")
quakeflow_nc = quakeflow_nc.to_iterable_dataset()
# because example formatting to tensors with the "torch" format has not been
# implemented for iterable datasets yet, we add the formatting manually
quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
assert isinstance(quakeflow_nc, torch.utils.data.IterableDataset), "quakeflow_nc is not an IterableDataset"

# print the first sample of the iterable dataset
for example in quakeflow_nc:
    print("\nIterable test\n")
    print(example.keys())
    for key in example.keys():
        print(key, example[key].shape, example[key].dtype)
    break

dataloader = DataLoader(quakeflow_nc, batch_size=4)

for batch in dataloader:
    print("\nDataloader test\n")
    print(batch.keys())
    for key in batch.keys():
        print(key, batch[key].shape, batch[key].dtype)
    break
```

#### Usage for `NCEDC_full_size`

You can then convert the dataset into a PyTorch-style dataset and inspect the first sample (don't forget to reorder the dimensions):
```python
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="train", name="NCEDC_full_size")

# for PyTorch DataLoader, we need to divide the dataset into several shards
num_workers=4
quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
# because example formatting to tensors with the "torch" format has not been
# implemented for iterable datasets yet, we add the formatting manually
quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})

# move the station axis to the end: (n_sta, 3, nt) -> (3, nt, n_sta)
def reorder_keys(example):
    example["waveform"] = example["waveform"].permute(1, 2, 0).contiguous()
    example["phase_pick"] = example["phase_pick"].permute(1, 2, 0).contiguous()
    return example

quakeflow_nc = quakeflow_nc.map(reorder_keys)

assert isinstance(quakeflow_nc, torch.utils.data.IterableDataset), "quakeflow_nc is not an IterableDataset"

data_loader = DataLoader(
    quakeflow_nc,
    batch_size=1,
    num_workers=num_workers,
)

for batch in quakeflow_nc:
    print("\nIterable test\n")
    print(batch.keys())
    for key in batch.keys():
        print(key, batch[key].shape, batch[key].dtype)
    break

for batch in data_loader:
    print("\nDataloader test\n")
    print(batch.keys())
    for key in batch.keys():
        batch[key] = batch[key].squeeze(0)
        print(key, batch[key].shape, batch[key].dtype)
    break
```