Commit 5925089 (verified) · 1 parent: 9c95468
zhuwq0 committed: Update README.md
Files changed (1): README.md (+188 −180)
---
license: mit
---

## CEED: *C*alifornia *E*arthquake *E*vent *D*ataset for Machine Learning and Cloud Computing

The California Earthquake Event Dataset (CEED) is a dataset of earthquake waveforms and metadata for machine learning and cloud computing.
Detailed statistics about the dataset are available in this [arXiv paper](https://arxiv.org/abs/2502.11500).

### Acknowledgments
The seismic data used in this study were collected by (1) the Berkeley Digital Seismic Network (BDSN, doi:10.7932/BDSN) and the USGS Northern California Seismic Network (NCSN, doi:10.7914/SN/NC); and (2) the Southern California Seismic Network (SCSN, doi:10.7914/SN/CI).
The original waveform data, metadata, and data products for this study were accessed through the Northern California Earthquake Data Center (doi:10.7932/NCEDC) and the Southern California Earthquake Center (doi:10.7909/C3WD3xH1).
Please include acknowledgment and citation of the original data providers when using this dataset.

The dataset structure is shown below; you can find more information about the format at [AI4EPS](https://ai4eps.github.io/homepage/ml4earth/seismic_event_format1/).

```
Group: / len:60424
|- Group: /ci38457511 len:35
| |-* begin_time = 2019-07-06T03:19:23.668000
| |-* depth_km = 8.0
| |-* end_time = 2019-07-06T03:21:23.668000
| |-* event_id = ci38457511
| |-* event_time = 2019-07-06T03:19:53.040000
| |-* event_time_index = 2937
| |-* latitude = 35.7695
| |-* longitude = -117.5993
| |-* magnitude = 7.1
| |-* magnitude_type = w
| |-* nt = 12000
| |-* nx = 35
| |-* sampling_rate = 100
| |-* source = SC
| |- Dataset: /ci38457511/CI.CCC..HH (shape:(3, 12000))
| | |- (dtype=float32)
| | | |-* azimuth = 141.849479
| | | |-* back_azimuth = 321.986302
| | | |-* component = ENZ
| | | |-* depth_km = -0.67
| | | |-* distance_km = 34.471389
| | | |-* dt_s = 0.01
| | | |-* elevation_m = 670.0
| | | |-* event_id = ['ci38457511' 'ci38457511' 'ci37260300']
| | | |-* instrument = HH
| | | |-* latitude = 35.52495
| | | |-* local_depth_m = 0.0
| | | |-* location =
| | | |-* longitude = -117.36453
| | | |-* network = CI
| | | |-* p_phase_index = 3575
| | | |-* p_phase_polarity = U
| | | |-* p_phase_score = 0.8
| | | |-* p_phase_status = manual
| | | |-* p_phase_time = 2019-07-06T03:19:59.422000
| | | |-* phase_index = [ 3575 4184 11826]
| | | |-* phase_picking_channel = ['HHZ' 'HNN' 'HHZ']
| | | |-* phase_polarity = ['U' 'N' 'N']
| | | |-* phase_remark = ['i' 'e' 'e']
| | | |-* phase_score = [0.8 0.5 0.5]
| | | |-* phase_status = manual
| | | |-* phase_time = ['2019-07-06T03:19:59.422000' '2019-07-06T03:20:05.509000' '2019-07-06T03:21:21.928000']
| | | |-* phase_type = ['P' 'S' 'P']
| | | |-* s_phase_index = 4184
| | | |-* s_phase_polarity = N
| | | |-* s_phase_score = 0.5
| | | |-* s_phase_status = manual
| | | |-* s_phase_time = 2019-07-06T03:20:05.509000
| | | |-* snr = [ 637.9865898 286.9100766 1433.04052911]
| | | |-* station = CCC
| | | |-* unit = 1e-6m/s
| |- Dataset: /ci38457511/CI.CCC..HN (shape:(3, 12000))
| | |- (dtype=float32)
| | | |-* azimuth = 141.849479
| | | |-* back_azimuth = 321.986302
| | | |-* component = ENZ
| | | |-* depth_km = -0.67
| | | |-* distance_km = 34.471389
| | | |-* dt_s = 0.01
| | | |-* elevation_m = 670.0
| | | |-* event_id = ['ci38457511' 'ci38457511' 'ci37260300']
......
```
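
Files in this format can be browsed directly with `h5py`: events are top-level groups, station channels are datasets, and metadata lives in HDF5 attributes. The sketch below builds a miniature stand-in file mimicking the layout above (the file name `ceed_demo.h5` is an assumption for illustration, not part of the dataset) and then walks it the way you would walk a real CEED file:

```python
import h5py
import numpy as np

# Build a miniature stand-in file that mimics the CEED layout shown above.
# (Values are copied from the README tree; this is NOT real dataset content.)
with h5py.File("ceed_demo.h5", "w") as f:
    ev = f.create_group("ci38457511")
    ev.attrs["magnitude"] = 7.1
    ev.attrs["sampling_rate"] = 100
    ds = ev.create_dataset("CI.CCC..HH", data=np.zeros((3, 12000), dtype="float32"))
    ds.attrs["component"] = "ENZ"
    ds.attrs["p_phase_index"] = 3575

# Access pattern: iterate events (groups), then station channels (datasets).
with h5py.File("ceed_demo.h5", "r") as f:
    for event_id, event in f.items():
        print(event_id, dict(event.attrs))
        for channel_id, trace in event.items():
            waveform = trace[:]                    # (3, nt) float32 array
            p_index = trace.attrs["p_phase_index"]  # per-channel pick metadata
            print(channel_id, waveform.shape, p_index)
```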

## Getting Started

### Requirements
- datasets
- h5py
- fsspec
- pytorch

### Usage
Import the necessary packages:
```python
import h5py
import numpy as np
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
```
We have 6 configurations for the dataset:
- "station"
- "event"
- "station_train"
- "event_train"
- "station_test"
- "event_test"

"station" yields station-based samples one by one, while "event" yields event-based samples one by one. The configurations with no suffix cover the full dataset, while the "_train" and "_test" configurations contain only the corresponding split. The train split contains data from 1970 to 2019, and the test split contains data from 2020.

A sample of `station` is a dictionary with the following keys:
- `data`: the waveform with shape `(3, nt)`; the default time length is 8192
- `begin_time`: the begin time of the waveform data
- `end_time`: the end time of the waveform data
- `phase_time`: the phase arrival time
- `phase_index`: the time point index of the phase arrival time
- `phase_type`: the phase type
- `phase_polarity`: the phase polarity in ('U', 'D', 'N')
- `event_time`: the event time
- `event_time_index`: the time point index of the event time
- `event_location`: the event location with shape `(3,)`, including latitude, longitude, and depth
- `station_location`: the station location with shape `(3,)`, including latitude, longitude, and depth
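
For phase-picking models, the `phase_index` and `phase_type` fields can be turned into per-sample training targets. A minimal sketch, assuming a Gaussian target of width `sigma` and a two-row P/S layout (both are modeling choices, not part of the dataset format):

```python
import numpy as np

def phase_label(phase_index, phase_type, nt=8192, sigma=10.0):
    """Build a (2, nt) Gaussian target: row 0 for P picks, row 1 for S picks."""
    label = np.zeros((2, nt), dtype="float32")
    t = np.arange(nt)
    for idx, ptype in zip(phase_index, phase_type):
        row = 0 if ptype == "P" else 1
        # Place a Gaussian bump centered on the pick; keep the max where bumps overlap.
        label[row] = np.maximum(label[row], np.exp(-((t - idx) ** 2) / (2 * sigma**2)))
    return label

# Toy picks using the indices from the structure example above.
label = phase_label([3575, 4184], ["P", "S"])
print(label.shape, label[0].argmax(), label[1].argmax())  # (2, 8192) 3575 4184
```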

A sample of `event` is a dictionary with the following keys:
- `data`: the waveform with shape `(n_station, 3, nt)`; the default time length is 8192
- `begin_time`: the begin time of the waveform data
- `end_time`: the end time of the waveform data
- `phase_time`: the phase arrival time with shape `(n_station,)`
- `phase_index`: the time point index of the phase arrival time with shape `(n_station,)`
- `phase_type`: the phase type with shape `(n_station,)`
- `phase_polarity`: the phase polarity in ('U', 'D', 'N') with shape `(n_station,)`
- `event_time`: the event time
- `event_time_index`: the time point index of the event time
- `event_location`: the space-time coordinates of the event with shape `(n_station, 3)`
- `station_location`: the space coordinates of the station with shape `(n_station, 3)`, including latitude, longitude, and depth

The default configuration is `station_test`. You can specify the configuration with the `name` argument. For example:
```python
# Load the dataset.
# ATTENTION: streaming (IterableDataset) is hard to support because of how HDF5 is
# accessed, so we recommend loading the dataset directly and converting it to an
# iterable afterward. The dataset is very large, so the first load may take a while.

# load "station_test" with the test split
ceed = load_dataset("AI4EPS/CEED", split="test")
# or
ceed = load_dataset("AI4EPS/CEED", name="station_test", split="test")

# load "event" with the train split
ceed = load_dataset("AI4EPS/CEED", name="event", split="train")
```

#### Example loading the dataset
```python
ceed = load_dataset("AI4EPS/CEED", name="station_test", split="test")

# print the first sample of the dataset
for example in ceed:
    print("\nIterable test\n")
    print(example.keys())
    for key in example.keys():
        if key == "data":
            print(key, np.array(example[key]).shape)
        else:
            print(key, example[key])
    break

# wrap the dataset in a PyTorch DataLoader
ceed = ceed.with_format("torch")
dataloader = DataLoader(ceed, batch_size=8, num_workers=0, collate_fn=lambda x: x)

for batch in dataloader:
    print("\nDataloader test\n")
    print(f"Batch size: {len(batch)}")
    print(batch[0].keys())
    for key in batch[0].keys():
        if key == "data":
            print(key, np.array(batch[0][key]).shape)
        else:
            print(key, batch[0][key])
    break
```
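
With the "event" configuration, `n_station` varies between events, so an identity `collate_fn` (returning the list of samples, as above) is one option; another is padding the station axis to a common size. A minimal padding sketch, assuming the `data` key layout described earlier (the function name, zero padding, and validity mask are illustrative choices, not part of the dataset API):

```python
import numpy as np

def pad_collate(batch, pad_value=0.0):
    """Stack 'data' arrays of shape (n_station, 3, nt) by padding the station
    axis to the largest n_station in the batch; also return a validity mask."""
    max_station = max(np.asarray(sample["data"]).shape[0] for sample in batch)
    padded, masks = [], []
    for sample in batch:
        data = np.asarray(sample["data"], dtype="float32")
        n_station = data.shape[0]
        pad = np.full((max_station - n_station,) + data.shape[1:], pad_value, dtype="float32")
        padded.append(np.concatenate([data, pad], axis=0))
        masks.append([1] * n_station + [0] * (max_station - n_station))
    return np.stack(padded), np.asarray(masks)

# Toy batch: two events with 2 and 3 stations, 3 components, 16 time points.
batch = [{"data": np.ones((2, 3, 16))}, {"data": np.ones((3, 3, 16))}]
data, mask = pad_collate(batch)
print(data.shape, mask.shape)  # (2, 3, 3, 16) (2, 3)
```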

#### Extension

If you want to introduce new features into the labels, we recommend making a copy of `CEED.py` and modifying the `_generate_examples` method. Check [AI4EPS/EQNet](https://github.com/AI4EPS/EQNet/blob/master/eqnet/data/quakeflow_nc.py) for an example. To load the dataset with your modified script, specify the path to the script in the `load_dataset` function:
```python
ceed = load_dataset("path/to/your/CEED.py", name="station_test", split="test", trust_remote_code=True)
```