update EQNet event example generator and readme

#4
by kylewhy - opened
This view is limited to 50 files because it contains too many changes.
.gitattributes CHANGED
@@ -52,4 +52,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
-*.csv filter=lfs diff=lfs merge=lfs -text
+ncedc_eventid.h5 filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -7,15 +7,11 @@ license: mit
 ## Introduction
 This dataset is part of the data (1970-2020) from [NCEDC (Northern California Earthquake Data Center)](https://ncedc.org/index.html) and is organized as several HDF5 files. The dataset structure is shown below, and you can find more information about the format at [AI4EPS](https://ai4eps.github.io/homepage/ml4earth/seismic_event_format1/)

-Cite the NCEDC and PhaseNet:
-
-Zhu, W., & Beroza, G. C. (2018). PhaseNet: A Deep-Neural-Network-Based Seismic Arrival Time Picking Method. arXiv preprint arXiv:1803.03211.
-
-NCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC.
+Cite the NCEDC:
+"NCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC."

 Acknowledge the NCEDC:
-
-Waveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC.
+"Waveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC."

 ```
 Group: / len:16227
@@ -66,7 +62,7 @@ Waveform data, metadata, or data products for this study were accessed through t
 - datasets
 - h5py
 - fsspec
-- pytorch
+- torch (for PyTorch)

 ### Usage
 Import the necessary packages:
@@ -74,6 +70,7 @@ Import the necessary packages:
 import h5py
 import numpy as np
 import torch
+from torch.utils.data import Dataset, IterableDataset, DataLoader
 from datasets import load_dataset
 ```
 We have 6 configurations for the dataset:
@@ -88,28 +85,16 @@ We have 6 configurations for the dataset:

 The sample of `station` is a dictionary with the following keys:
 - `data`: the waveform with shape `(3, nt)`, the default time length is 8192
-- `begin_time`: the begin time of the waveform data
-- `end_time`: the end time of the waveform data
-- `phase_time`: the phase arrival time
-- `phase_index`: the time point index of the phase arrival time
-- `phase_type`: the phase type
-- `phase_polarity`: the phase polarity in ('U', 'D', 'N')
-- `event_time`: the event time
-- `event_time_index`: the time point index of the event time
-- `event_location`: the event location with shape `(3,)`, including latitude, longitude, depth
+- `phase_pick`: the probability of the phase pick with shape `(3, nt)`, the first dimension is noise, P and S
+- `event_location`: the event location with shape `(4,)`, including latitude, longitude, depth and time
 - `station_location`: the station location with shape `(3,)`, including latitude, longitude and depth

 The sample of `event` is a dictionary with the following keys:
 - `data`: the waveform with shape `(n_station, 3, nt)`, the default time length is 8192
-- `begin_time`: the begin time of the waveform data
-- `end_time`: the end time of the waveform data
-- `phase_time`: the phase arrival time with shape `(n_station,)`
-- `phase_index`: the time point index of the phase arrival time with shape `(n_station,)`
-- `phase_type`: the phase type with shape `(n_station,)`
-- `phase_polarity`: the phase polarity in ('U', 'D', 'N') with shape `(n_station,)`
-- `event_time`: the event time
-- `event_time_index`: the time point index of the event time
-- `event_location`: the space-time coordinates of the event with shape `(n_staion, 3)`
+- `phase_pick`: the probability of the phase pick with shape `(n_station, 3, nt)`, the first dimension is noise, P and S
+- `event_center`: the probability of the event time with shape `(n_station, feature_nt)`, the default feature time length is 512
+- `event_location`: the space-time coordinates of the event with shape `(n_station, 4, feature_nt)`
+- `event_location_mask`: the probability mask of the event time with shape `(n_station, feature_nt)`
 - `station_location`: the space coordinates of the station with shape `(n_station, 3)`, including latitude, longitude and depth

 The default configuration is `station_test`. You can specify the configuration by argument `name`. For example:
@@ -128,33 +113,70 @@ quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="t
 quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="event", split="train")
 ```

-#### Example loading the dataset
+#### Usage for `station`
+Then you can change the dataset into a PyTorch-format iterable dataset and view the first sample:
 ```python
 quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
+# for PyTorch DataLoader, we need to divide the dataset into several shards
+num_workers = 4
+quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
+# because formatting examples as tensors with the "torch" format has not been
+# implemented yet for iterable datasets, we add the formatting manually here;
+# if you want to use the dataset directly, just use
+# quakeflow_nc.with_format("torch")
+quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
+try:
+    isinstance(quakeflow_nc, torch.utils.data.IterableDataset)
+except:
+    raise Exception("quakeflow_nc is not an IterableDataset")
+
+# print the first sample of the iterable dataset
+for example in quakeflow_nc:
+    print("\nIterable test\n")
+    print(example.keys())
+    for key in example.keys():
+        print(key, example[key].shape, example[key].dtype)
+    break
+
+dataloader = DataLoader(quakeflow_nc, batch_size=4, num_workers=num_workers)
+
+for batch in dataloader:
+    print("\nDataloader test\n")
+    print(batch.keys())
+    for key in batch.keys():
+        print(key, batch[key].shape, batch[key].dtype)
+    break
+```
+
+#### Usage for `event`
+
+Then you can change the dataset into a PyTorch-format dataset and view the first sample (don't forget to reorder the keys):
+```python
+quakeflow_nc = datasets.load_dataset("AI4EPS/quakeflow_nc", split="test", name="event_test")
+
+# for PyTorch DataLoader, we need to divide the dataset into several shards
+num_workers = 4
+quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
+quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
+try:
+    isinstance(quakeflow_nc, torch.utils.data.IterableDataset)
+except:
+    raise Exception("quakeflow_nc is not an IterableDataset")

 # print the first sample of the iterable dataset
 for example in quakeflow_nc:
     print("\nIterable test\n")
     print(example.keys())
     for key in example.keys():
-        if key == "data":
-            print(key, np.array(example[key]).shape)
-        else:
-            print(key, example[key])
+        print(key, example[key].shape, example[key].dtype)
     break

-# %%
-quakeflow_nc = quakeflow_nc.with_format("torch")
-dataloader = DataLoader(quakeflow_nc, batch_size=8, num_workers=0, collate_fn=lambda x: x)
+dataloader = DataLoader(quakeflow_nc, batch_size=1, num_workers=num_workers)

 for batch in dataloader:
     print("\nDataloader test\n")
-    print(f"Batch size: {len(batch)}")
-    print(batch[0].keys())
-    for key in batch[0].keys():
-        if key == "data":
-            print(key, np.array(batch[0][key]).shape)
-        else:
-            print(key, batch[0][key])
+    print(batch.keys())
+    for key in batch.keys():
+        print(key, batch[key].shape, batch[key].dtype)
     break
 ```
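The `event` configuration yields a variable number of stations per sample, which is why the new README example uses `batch_size=1`. Below is a minimal sketch of a padding collate function that would allow larger batches; `pad_collate` is a hypothetical helper, not part of this PR, and it assumes every field keeps the station axis first, as described in the README above.

```python
import torch
from torch.utils.data import DataLoader

def pad_collate(batch):
    # Hypothetical helper (not part of this PR): every field in the `event`
    # configuration has the station axis first, so pad that axis with zeros
    # up to the largest n_station in the batch and stack the samples.
    max_sta = max(sample["data"].shape[0] for sample in batch)
    out = {}
    for key in batch[0].keys():
        padded = []
        for sample in batch:
            x = sample[key]
            pad_n = max_sta - x.shape[0]
            if pad_n > 0:
                x = torch.cat([x, torch.zeros((pad_n, *x.shape[1:]), dtype=x.dtype)], dim=0)
            padded.append(x)
        out[key] = torch.stack(padded)
    return out

# dataloader = DataLoader(quakeflow_nc, batch_size=4, num_workers=num_workers, collate_fn=pad_collate)
```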
events.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:84166f6a0be6a02caeb8d11ed3495e5256db698c795dbb3db4d45d8b863313d8
-size 46863258
events_test.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:74b5bf132e23763f851035717a1baa92ab8fb73253138b640103390dce33e154
-size 1602217
events_train.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ef579400d9354ecaf142bdc7023291c952dbfc20d6bafab4715dff1774b3f7a5
-size 45261178
example.py DELETED
@@ -1,54 +0,0 @@
-# %%
-import datasets
-import numpy as np
-from torch.utils.data import DataLoader
-
-quakeflow_nc = datasets.load_dataset(
-    "AI4EPS/quakeflow_nc",
-    name="station",
-    split="train",
-    # name="station_test",
-    # split="test",
-    # download_mode="force_redownload",
-    trust_remote_code=True,
-    num_proc=36,
-)
-# quakeflow_nc = datasets.load_dataset(
-#     "./quakeflow_nc.py",
-#     name="station",
-#     split="train",
-#     # name="statoin_test",
-#     # split="test",
-#     num_proc=36,
-# )
-
-print(quakeflow_nc)
-
-# print the first sample of the iterable dataset
-for example in quakeflow_nc:
-    print("\nIterable dataset\n")
-    print(example)
-    print(example.keys())
-    for key in example.keys():
-        if key == "waveform":
-            print(key, np.array(example[key]).shape)
-        else:
-            print(key, example[key])
-    break
-
-# %%
-quakeflow_nc = quakeflow_nc.with_format("torch")
-dataloader = DataLoader(quakeflow_nc, batch_size=8, num_workers=0, collate_fn=lambda x: x)
-
-for batch in dataloader:
-    print("\nDataloader dataset\n")
-    print(f"Batch size: {len(batch)}")
-    print(batch[0].keys())
-    for key in batch[0].keys():
-        if key == "waveform":
-            print(key, np.array(batch[0][key]).shape)
-        else:
-            print(key, batch[0][key])
-    break
-
-# %%
merge_hdf5.py DELETED
@@ -1,65 +0,0 @@
-# %%
-import os
-
-import h5py
-import matplotlib.pyplot as plt
-from tqdm import tqdm
-
-# %%
-h5_dir = "waveform_h5"
-h5_out = "waveform.h5"
-h5_train = "waveform_train.h5"
-h5_test = "waveform_test.h5"
-
-# # %%
-# h5_dir = "waveform_h5"
-# h5_out = "waveform.h5"
-# h5_train = "waveform_train.h5"
-# h5_test = "waveform_test.h5"
-
-h5_files = sorted(os.listdir(h5_dir))
-train_files = h5_files[:-1]
-test_files = h5_files[-1:]
-# train_files = h5_files
-# train_files = [x for x in train_files if (x != "2014.h5") and (x not in [])]
-# test_files = []
-print(f"train files: {train_files}")
-print(f"test files: {test_files}")
-
-# %%
-with h5py.File(h5_out, "w") as fp:
-    # external linked file
-    for h5_file in h5_files:
-        with h5py.File(os.path.join(h5_dir, h5_file), "r") as f:
-            for event in tqdm(f.keys(), desc=h5_file, total=len(f.keys())):
-                if event not in fp:
-                    fp[event] = h5py.ExternalLink(os.path.join(h5_dir, h5_file), event)
-                else:
-                    print(f"{event} already exists")
-                    continue
-
-# %%
-with h5py.File(h5_train, "w") as fp:
-    # external linked file
-    for h5_file in train_files:
-        with h5py.File(os.path.join(h5_dir, h5_file), "r") as f:
-            for event in tqdm(f.keys(), desc=h5_file, total=len(f.keys())):
-                if event not in fp:
-                    fp[event] = h5py.ExternalLink(os.path.join(h5_dir, h5_file), event)
-                else:
-                    print(f"{event} already exists")
-                    continue
-
-# %%
-with h5py.File(h5_test, "w") as fp:
-    # external linked file
-    for h5_file in test_files:
-        with h5py.File(os.path.join(h5_dir, h5_file), "r") as f:
-            for event in tqdm(f.keys(), desc=h5_file, total=len(f.keys())):
-                if event not in fp:
-                    fp[event] = h5py.ExternalLink(os.path.join(h5_dir, h5_file), event)
-                else:
-                    print(f"{event} already exists")
-                    continue
-
-# %%
models/phasenet_picks.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:b51df5987a2a05e44e0949b42d00a28692109da521911c55d2692ebfad0c54d7
-size 9355127
models/phasenet_plus_events.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f686ebf8da632b71a947e4ee884c76f30a313ae0e9d6e32d1f675828884a95f7
-size 7381331
models/phasenet_plus_picks.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:83d241a54477f722cd032efe8368a653bba170e1abebf3d9097d7756cfd54b23
-size 9987053
models/phasenet_pt_picks.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:bb7ea98484b5e6e1c4c79ea5eb1e38bce43e87b546fc6d29c72d187a6d8b1d00
-size 8715799
picks.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:52f077ae9f94481d4b80f37c9f15038ee1e3636d5da2da3b1d4aaa2991879cc3
-size 422247029
picks_test.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:bb09f0ac169bf451cfcfb4547359756cb1a53828bf4074971d9160a3aa171f38
-size 21850235
picks_train.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d22c5d5eb1c27a723525c657c1308a3b643d6f3e716eb1c43e064b7a87bb0819
-size 400397230
quakeflow_nc.py CHANGED
@@ -17,13 +17,14 @@
 """QuakeFlow_NC: A dataset of earthquake waveforms organized by earthquake events and based on the HDF5 format."""


-from typing import Dict, List, Optional, Tuple, Union
-
-import datasets
-import fsspec
 import h5py
 import numpy as np
 import torch
+from typing import Dict, List, Optional, Tuple, Union
+import fsspec
+
+import datasets
+

 # TODO: Add BibTeX citation
 # Find for instance the citation on arxiv or on the dataset repo/website
@@ -50,45 +51,24 @@ _LICENSE = ""
 # TODO: Add link to the official dataset URLs here
 # The HuggingFace Datasets library doesn't host the datasets but only points to the original files.
 # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
-_REPO = "https://huggingface.co/datasets/AI4EPS/quakeflow_nc/resolve/main/waveform_h5"
+_REPO = "https://huggingface.co/datasets/AI4EPS/quakeflow_nc/resolve/main/data"
 _FILES = [
-    "1987.h5",
-    "1988.h5",
-    "1989.h5",
-    "1990.h5",
-    "1991.h5",
-    "1992.h5",
-    "1993.h5",
-    "1994.h5",
-    "1995.h5",
-    "1996.h5",
-    "1997.h5",
-    "1998.h5",
-    "1999.h5",
-    "2000.h5",
-    "2001.h5",
-    "2002.h5",
-    "2003.h5",
-    "2004.h5",
-    "2005.h5",
-    "2006.h5",
-    "2007.h5",
-    "2008.h5",
-    "2009.h5",
-    "2010.h5",
-    "2011.h5",
-    "2012.h5",
-    "2013.h5",
-    "2014.h5",
-    "2015.h5",
-    "2016.h5",
-    "2017.h5",
-    "2018.h5",
-    "2019.h5",
-    "2020.h5",
-    "2021.h5",
-    "2022.h5",
-    "2023.h5",
+    "NC1970-1989.h5",
+    "NC1990-1994.h5",
+    "NC1995-1999.h5",
+    "NC2000-2004.h5",
+    "NC2005-2009.h5",
+    "NC2010.h5",
+    "NC2011.h5",
+    "NC2012.h5",
+    "NC2013.h5",
+    "NC2014.h5",
+    "NC2015.h5",
+    "NC2016.h5",
+    "NC2017.h5",
+    "NC2018.h5",
+    "NC2019.h5",
+    "NC2020.h5",
 ]
 _URLS = {
     "station": [f"{_REPO}/{x}" for x in _FILES],
@@ -104,10 +84,14 @@ class BatchBuilderConfig(datasets.BuilderConfig):
     """
     yield a batch of event-based sample, so the number of sample stations can vary among batches
    Batch Config for QuakeFlow_NC
+    :param batch_size: number of samples in a batch
+    :param num_stations_list: possible number of stations in a batch
     """

-    def __init__(self, **kwargs):
+    def __init__(self, batch_size: int, num_stations_list: List, **kwargs):
         super().__init__(**kwargs)
+        self.batch_size = batch_size
+        self.num_stations_list = num_stations_list


 # TODO: Name of the dataset usually matches the script name with CamelCase instead of snake_case
@@ -116,7 +100,11 @@ class QuakeFlow_NC(datasets.GeneratorBasedBuilder):

     VERSION = datasets.Version("1.1.0")

+    degree2km = 111.32
     nt = 8192
+    feature_nt = 512
+    feature_scale = int(nt / feature_nt)
+    sampling_rate = 100.0

     # This is an example of a dataset with multiple configurations.
     # If you don't want/need to define several sub-sets in your dataset,
@@ -165,44 +153,30 @@ class QuakeFlow_NC(datasets.GeneratorBasedBuilder):
             or (self.config.name == "station_train")
             or (self.config.name == "station_test")
         ):
-            features = datasets.Features(
+            features = datasets.Features(
                 {
-                    "id": datasets.Value("string"),
-                    "event_id": datasets.Value("string"),
-                    "station_id": datasets.Value("string"),
-                    "waveform": datasets.Array2D(shape=(3, self.nt), dtype="float32"),
-                    "phase_time": datasets.Sequence(datasets.Value("string")),
-                    "phase_index": datasets.Sequence(datasets.Value("int32")),
-                    "phase_type": datasets.Sequence(datasets.Value("string")),
-                    "phase_polarity": datasets.Sequence(datasets.Value("string")),
-                    "begin_time": datasets.Value("string"),
-                    "end_time": datasets.Value("string"),
-                    "event_time": datasets.Value("string"),
-                    "event_time_index": datasets.Value("int32"),
+                    "data": datasets.Array2D(shape=(3, self.nt), dtype="float32"),
+                    "phase_pick": datasets.Array2D(shape=(3, self.nt), dtype="float32"),
                     "event_location": datasets.Sequence(datasets.Value("float32")),
                     "station_location": datasets.Sequence(datasets.Value("float32")),
-                },
-            )
-        elif (self.config.name == "event") or (self.config.name == "event_train") or (self.config.name == "event_test"):
-            features = datasets.Features(
+                }
+            )
+
+        elif (
+            (self.config.name == "event")
+            or (self.config.name == "event_train")
+            or (self.config.name == "event_test")
+        ):
+            features = datasets.Features(
                 {
-                    "event_id": datasets.Value("string"),
-                    "waveform": datasets.Array3D(shape=(None, 3, self.nt), dtype="float32"),
-                    "phase_time": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
-                    "phase_index": datasets.Sequence(datasets.Sequence(datasets.Value("int32"))),
-                    "phase_type": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
-                    "phase_polarity": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
-                    "begin_time": datasets.Value("string"),
-                    "end_time": datasets.Value("string"),
-                    "event_time": datasets.Value("string"),
-                    "event_time_index": datasets.Value("int32"),
-                    "event_location": datasets.Sequence(datasets.Value("float32")),
-                    "station_location": datasets.Sequence(datasets.Sequence(datasets.Value("float32"))),
-                },
+                    "data": datasets.Array3D(shape=(None, 3, self.nt), dtype="float32"),
+                    "phase_pick": datasets.Array3D(shape=(None, 3, self.nt), dtype="float32"),
+                    "event_center": datasets.Array2D(shape=(None, self.feature_nt), dtype="float32"),
+                    "event_location": datasets.Array3D(shape=(None, 4, self.feature_nt), dtype="float32"),
+                    "event_location_mask": datasets.Array2D(shape=(None, self.feature_nt), dtype="float32"),
+                    "station_location": datasets.Array2D(shape=(None, 3), dtype="float32"),
+                }
             )
-        else:
-            raise ValueError(f"config.name = {self.config.name} is not in BUILDER_CONFIGS")
-
+
         return datasets.DatasetInfo(
             # This is the description that will appear on the datasets page.
             description=_DESCRIPTION,
@@ -228,20 +202,18 @@ class QuakeFlow_NC(datasets.GeneratorBasedBuilder):
         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
         urls = _URLS[self.config.name]
         # files = dl_manager.download(urls)
-        if "bucket" not in self.storage_options:
-            files = dl_manager.download_and_extract(urls)
-        else:
-            files = [f"{self.storage_options['bucket']}/{x}" for x in _FILES]
-            # files = [f"/nfs/quakeflow_dataset/NC/quakeflow_nc/waveform_h5/{x}" for x in _FILES][-3:]
-        print("Files:\n", "\n".join(sorted(files)))
-        print(self.storage_options)
+        files = dl_manager.download_and_extract(urls)
+        print(files)

         if self.config.name == "station" or self.config.name == "event":
             return [
                 datasets.SplitGenerator(
                     name=datasets.Split.TRAIN,
                     # These kwargs will be passed to _generate_examples
-                    gen_kwargs={"filepath": files[:-1], "split": "train"},
+                    gen_kwargs={
+                        "filepath": files[:-1],
+                        "split": "train",
+                    },
                 ),
                 datasets.SplitGenerator(
                     name=datasets.Split.TEST,
@@ -252,7 +224,10 @@ class QuakeFlow_NC(datasets.GeneratorBasedBuilder):
             return [
                 datasets.SplitGenerator(
                     name=datasets.Split.TRAIN,
-                    gen_kwargs={"filepath": files, "split": "train"},
+                    gen_kwargs={
+                        "filepath": files,
+                        "split": "train",
+                    },
                 ),
             ]
         elif self.config.name == "station_test" or self.config.name == "event_test":
@@ -271,92 +246,156 @@ class QuakeFlow_NC(datasets.GeneratorBasedBuilder):
         # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.

         for file in filepath:
-            print(f"\nReading {file}")
             with fsspec.open(file, "rb") as fs:
                 with h5py.File(fs, "r") as fp:
+                    # for event_id in sorted(list(fp.keys())):
                     event_ids = list(fp.keys())
                     for event_id in event_ids:
                         event = fp[event_id]
-                        event_attrs = event.attrs
-                        begin_time = event_attrs["begin_time"]
-                        end_time = event_attrs["end_time"]
-                        event_location = [
-                            event_attrs["longitude"],
-                            event_attrs["latitude"],
-                            event_attrs["depth_km"],
-                        ]
-                        event_time = event_attrs["event_time"]
-                        event_time_index = event_attrs["event_time_index"]
                         station_ids = list(event.keys())
-                        if len(station_ids) == 0:
-                            continue
                         if (
                             (self.config.name == "station")
                             or (self.config.name == "station_train")
                             or (self.config.name == "station_test")
                         ):
-                            waveform = np.zeros([3, self.nt], dtype="float32")
-
-                            for i, station_id in enumerate(station_ids):
-                                waveform[:, : self.nt] = event[station_id][:, : self.nt]
-                                attrs = event[station_id].attrs
-                                phase_type = attrs["phase_type"]
-                                phase_time = attrs["phase_time"]
-                                phase_index = attrs["phase_index"]
-                                phase_polarity = attrs["phase_polarity"]
+                            waveforms = np.zeros([3, self.nt], dtype="float32")
+                            phase_pick = np.zeros_like(waveforms)
+                            attrs = event.attrs
+                            event_location = [
+                                attrs["longitude"],
+                                attrs["latitude"],
+                                attrs["depth_km"],
+                                attrs["event_time_index"],
+                            ]
+
+                            for i, sta_id in enumerate(station_ids):
+                                waveforms[:, : self.nt] = event[sta_id][:, : self.nt]
+                                # waveforms[:, : self.nt] = event[sta_id][: self.nt, :].T
+                                attrs = event[sta_id].attrs
+                                p_picks = attrs["phase_index"][attrs["phase_type"] == "P"]
+                                s_picks = attrs["phase_index"][attrs["phase_type"] == "S"]
+                                # phase_pick[:, :self.nt] = generate_label([p_picks, s_picks], nt=self.nt)
                                 station_location = [attrs["longitude"], attrs["latitude"], -attrs["elevation_m"] / 1e3]

-                                yield f"{event_id}/{station_id}", {
-                                    "id": f"{event_id}/{station_id}",
-                                    "event_id": event_id,
-                                    "station_id": station_id,
-                                    "waveform": waveform,
-                                    "phase_time": phase_time,
-                                    "phase_index": phase_index,
-                                    "phase_type": phase_type,
-                                    "phase_polarity": phase_polarity,
-                                    "begin_time": begin_time,
-                                    "end_time": end_time,
-                                    "event_time": event_time,
-                                    "event_time_index": event_time_index,
-                                    "event_location": event_location,
-                                    "station_location": station_location,
+                                yield f"{event_id}/{sta_id}", {
+                                    "data": torch.from_numpy(waveforms).float(),
+                                    "phase_pick": torch.from_numpy(phase_pick).float(),
+                                    "event_location": torch.from_numpy(np.array(event_location)).float(),
+                                    "station_location": torch.from_numpy(np.array(station_location)).float(),
                                 }

+
                         elif (
                             (self.config.name == "event")
                             or (self.config.name == "event_train")
                             or (self.config.name == "event_test")
                         ):
+                            event_attrs = event.attrs
+
+                            # avoid stations with P arrival equals S arrival
+                            is_sick = False
+                            for sta_id in station_ids:
+                                attrs = event[sta_id].attrs
+                                if attrs["phase_index"][attrs["phase_type"] == "P"] == attrs["phase_index"][attrs["phase_type"] == "S"]:
+                                    is_sick = True
+                                    break
+                            if is_sick:
+                                continue
+
+                            waveforms = np.zeros([len(station_ids), 3, self.nt], dtype="float32")
+                            phase_pick = np.zeros_like(waveforms)
+                            event_center = np.zeros([len(station_ids), self.nt])
+                            event_location = np.zeros([len(station_ids), 4, self.nt])
+                            event_location_mask = np.zeros([len(station_ids), self.nt])
+                            station_location = np.zeros([len(station_ids), 3])
+
+                            for i, sta_id in enumerate(station_ids):
+                                # trace_id = event_id + "/" + sta_id
+                                waveforms[i, :, :] = event[sta_id][:, : self.nt]
+                                attrs = event[sta_id].attrs
+                                p_picks = attrs["phase_index"][attrs["phase_type"] == "P"]
+                                s_picks = attrs["phase_index"][attrs["phase_type"] == "S"]
+                                phase_pick[i, :, :] = generate_label([p_picks, s_picks], nt=self.nt)
+
+                                ## TODO: how to deal with multiple phases
+                                # center = (attrs["phase_index"][::2] + attrs["phase_index"][1::2])/2.0
+                                ## assuming only one event with both P and S picks
+                                c0 = ((p_picks) + (s_picks)) / 2.0  # phase center
+                                c0_width = ((s_picks - p_picks) * self.sampling_rate / 200.0).max() if p_picks != s_picks else 50
+                                dx = round(
+                                    (event_attrs["longitude"] - attrs["longitude"])
+                                    * np.cos(np.radians(event_attrs["latitude"]))
+                                    * self.degree2km,
+                                    2,
+                                )
+                                dy = round(
+                                    (event_attrs["latitude"] - attrs["latitude"])
+                                    * self.degree2km,
+                                    2,
+                                )
+                                dz = round(
+                                    event_attrs["depth_km"] + attrs["elevation_m"] / 1e3,
+                                    2,
+                                )

-                            waveform = np.zeros([len(station_ids), 3, self.nt], dtype="float32")
-                            phase_type = []
-                            phase_time = []
-                            phase_index = []
-                            phase_polarity = []
-                            station_location = []
-
-                            for i, station_id in enumerate(station_ids):
-                                waveform[i, :, : self.nt] = event[station_id][:, : self.nt]
-                                attrs = event[station_id].attrs
-                                phase_type.append(list(attrs["phase_type"]))
-                                phase_time.append(list(attrs["phase_time"]))
-                                phase_index.append(list(attrs["phase_index"]))
-                                phase_polarity.append(list(attrs["phase_polarity"]))
-                                station_location.append(
-                                    [attrs["longitude"], attrs["latitude"], -attrs["elevation_m"] / 1e3]
+                                event_center[i, :] = generate_label(
+                                    [
+                                        # [c0 / self.feature_scale],
+                                        c0,
+                                    ],
+                                    label_width=[
+                                        c0_width,
+                                    ],
+                                    # label_width=[
+                                    #     10,
+                                    # ],
+                                    # nt=self.feature_nt,
+                                    nt=self.nt,
+                                )[1, :]
+                                mask = event_center[i, :] >= 0.5
+                                event_location[i, 0, :] = (
+                                    np.arange(self.nt) - event_attrs["event_time_index"]
+                                ) / self.sampling_rate
+                                # event_location[0, :, i] = (np.arange(self.feature_nt) - 3000 / self.feature_scale) / self.sampling_rate
+                                # print(event_location[i, 1:, mask].shape, event_location.shape, event_location[i][1:, mask].shape)
+                                event_location[i][1:, mask] = np.array([dx, dy, dz])[:, np.newaxis]
+                                event_location_mask[i, :] = mask
+
+                                ## station location
+                                station_location[i, 0] = round(
+                                    attrs["longitude"]
+                                    * np.cos(np.radians(attrs["latitude"]))
+                                    * self.degree2km,
+                                    2,
                                 )
+                                station_location[i, 1] = round(attrs["latitude"] * self.degree2km, 2)
+                                station_location[i, 2] = round(-attrs["elevation_m"] / 1e3, 2)
+
+                            std = np.std(waveforms, axis=1, keepdims=True)
+                            std[std == 0] = 1.0
+                            waveforms = (waveforms - np.mean(waveforms, axis=1, keepdims=True)) / std
+                            waveforms = waveforms.astype(np.float32)
+
                             yield event_id, {
-                                "event_id": event_id,
-                                "waveform": waveform,
-                                "phase_time": phase_time,
-                                "phase_index": phase_index,
-                                "phase_type": phase_type,
-                                "phase_polarity": phase_polarity,
-                                "begin_time": begin_time,
-                                "end_time": end_time,
-                                "event_time": event_time,
-                                "event_time_index": event_time_index,
-                                "event_location": event_location,
-                                "station_location": station_location,
+                                "data": torch.from_numpy(waveforms).float(),
+                                "phase_pick": torch.from_numpy(phase_pick).float(),
+                                "event_center": torch.from_numpy(event_center[:, ::self.feature_scale]).float(),
+                                "event_location": torch.from_numpy(event_location[:, :, ::self.feature_scale]).float(),
+                                "event_location_mask": torch.from_numpy(event_location_mask[:, ::self.feature_scale]).float(),
+                                "station_location": torch.from_numpy(station_location).float(),
                             }
+
+
+def generate_label(phase_list, label_width=[150, 150], nt=8192):
+    target = np.zeros([len(phase_list) + 1, nt], dtype=np.float32)
+
+    for i, (picks, w) in enumerate(zip(phase_list, label_width)):
+        for phase_time in picks:
+            t = np.arange(nt) - phase_time
+            gaussian = np.exp(-(t**2) / (2 * (w / 6) ** 2))
+            gaussian[gaussian < 0.1] = 0.0
+            target[i + 1, :] += gaussian
+
+    target[0:1, :] = np.maximum(0, 1 - np.sum(target[1:, :], axis=0, keepdims=True))
+
+    return target
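The `generate_label` helper added at the end of the new script builds one Gaussian-shaped channel per phase type and fills channel 0 with the leftover "noise" probability. Below is a small standalone sketch of how its output looks; the function body is copied from the diff above, and the example pick indices (a P at sample 1000 and an S at sample 1400) are made up for illustration.

```python
import numpy as np

def generate_label(phase_list, label_width=[150, 150], nt=8192):
    # copied from the new quakeflow_nc.py above: one Gaussian per pick,
    # channel 0 is the complement ("noise") of the P/S channels
    target = np.zeros([len(phase_list) + 1, nt], dtype=np.float32)
    for i, (picks, w) in enumerate(zip(phase_list, label_width)):
        for phase_time in picks:
            t = np.arange(nt) - phase_time
            gaussian = np.exp(-(t**2) / (2 * (w / 6) ** 2))
            gaussian[gaussian < 0.1] = 0.0
            target[i + 1, :] += gaussian
    target[0:1, :] = np.maximum(0, 1 - np.sum(target[1:, :], axis=0, keepdims=True))
    return target

# assumed example picks: one P at sample 1000 and one S at sample 1400
label = generate_label([np.array([1000]), np.array([1400])], nt=8192)
print(label.shape)     # (3, 8192): noise, P, S channels
print(label[:, 1000])  # P channel ~1.0 at its pick, noise channel ~0.0 there
```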
upload.py DELETED
@@ -1,11 +0,0 @@
-from huggingface_hub import HfApi
-
-api = HfApi()
-
-# Upload all the content from the local folder to your remote Space.
-# By default, files are uploaded at the root of the repo
-api.upload_folder(
-    folder_path="./",
-    repo_id="AI4EPS/quakeflow_nc",
-    repo_type="space",
-)
waveform.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:77fb8b0bb040e1412a183a217dcbc1aa03ceb86b42db39ac62afe922a1673889
-size 20016390
waveform_h5/1987.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:8afb94aafbf79db2848ae9c2006385c782493a97e6c71c1b8abf97c5d53bfc9d
-size 7744528
waveform_h5/1988.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:c1398baca3f539e52744f83625b1dbb6f117a32b8d7e97f6af02a1f452f0dedd
-size 46126800
waveform_h5/1989.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:533cd50fe365de8c050f0ffd4a90b697dc6b90cb86c8199ec0172316eab2ddaa
-size 48255208
waveform_h5/1990.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f5a282a9a8c47cf65d144368085470940660faeb0e77cea59fff16af68020d26
-size 60092656
waveform_h5/1991.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:5ba897d96eb92e8684b52a206e94a500abfe0192930f971ce7b1319c0638d452
-size 62332336
waveform_h5/1992.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d00021f46956bf43192f8c59405e203f823f1f4202c720efa52c5029e8e880b8
-size 67360896
waveform_h5/1993.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:eec41dd0aa7b88c81fa9f9b5dbcaab80e1c7bc8f6c144bd81761941278c57b4f
-size 706087936
waveform_h5/1994.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:b1cd002f20573636eaf101a30c5bac477edda201aba3af68be358756543ed48a
-size 609524864
waveform_h5/1995.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:948f19d71520a0dd25574be300f70e62c383e319b07a7d7182fca1dcfa9d61ee
-size 1728452872
waveform_h5/1996.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:23654b6f9c3a4c5a0aa56ed13ba04e943a94b458a51ac80ec1d418e9aa132840
-size 1752242680
waveform_h5/1997.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d1c0f4c8146fc8ff27c8a47a942b967a97bd2835346203e6de74ca55dd522616
-size 2661543208
waveform_h5/1998.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:1afac9c1a33424b739d26261ac2e9a4520be9c86c57bae4c8fe1a7a422356e45
-size 2070489120
waveform_h5/1999.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:2f2595a1919a5435148cdcf2cfa1501ce5edb53878d471500b13936f0f6f558c
-size 2300297608
waveform_h5/2000.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:250fd52d9f8dd17a8bfb58a3ecfef25d62b0a1adf67f6fe6f2b446e9f72caf7a
-size 434865160
waveform_h5/2001.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d70dea6156b32057760f91742f7a05a336e4f63b1f793408b5e7aad6a15551e5
-size 919203704
waveform_h5/2002.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f88c4c5960741a8d354db4a7324d56ef8750ab93aa1d9b11fc80d0c497d8d6ae
-size 2445812792
waveform_h5/2003.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:943d649f1a8a0e3989d2458be68fbf041058a581c4c73f8de39f1d50d3e7b35c
-size 3618485352
waveform_h5/2004.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ed1ba66e10ba5c165568ac13950a1728927ba49b33903a0df42c3d9965a16807
-size 6158740712
waveform_h5/2005.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:c816d75b172148763b19e60c1469c106c1af1f906843c3d6d94e603e02c2b6cb
-size 2994468240
waveform_h5/2006.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:521e6b0ce262461f87b4b0a78ac6403cfbb597d6ace36e17f92354c456a30447
-size 2189511664
waveform_h5/2007.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ae6654c213fb4838d6a732b2c8d936bd799005b2a189d64f2d74e3767c0c503a
-size 4393926088
waveform_h5/2008.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d8163aee689448c260032df9b0ab9132a5b46f0fee88a4c1ca8f4492ec5534d6
-size 3964283536
waveform_h5/2009.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6702c2d3951ddf1034f1886a79e8c5a00dfa47c88c84048edc528f047a2337b5
-size 4162296168
waveform_h5/2010.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:2f2de7c07f088a32ea7ae71c2107dfd121780a47d3e3f23e5c98ddb482c6ce71
-size 4547184704
waveform_h5/2011.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:520d62f3a94f1b4889f583196676fe2eccb6452807461afc93432dca930d6052
-size 5633641952
waveform_h5/2012.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:98b90529df4cbff7f21cd233d482454eaeac77b81117720ca7fe6c2697819071
-size 9520058832
waveform_h5/2013.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e6f1030ff4ebe488ef9072ec984c91024a8be4ecdbe7e9af47c6e65de942c2fe
-size 8380878704
waveform_h5/2014.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:a63f5e6d7d5bca552dcc99053753603dfa3109a6a080f8402f843ef688927d4c
-size 12088815344
waveform_h5/2015.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:42be6994ad27eb8aee241f5edfb4ed0ee69aa3460397325cc858224ba9dd9721
-size 8536767520
waveform_h5/2016.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6e706aefd38170da41196974fc92e457d0dc56948a63640a37cea4a86a297843
-size 9287201016
waveform_h5/2017.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e20f8e5a3f5ec8927e5d44e722987461ef08c9ceb33ab982038528e9000d5323
-size 8627205152
waveform_h5/2018.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ad6e83734ff1e24ad91b17cb6656766861ae9fb30413948579d762acc092e66a
-size 7158598240
waveform_h5/2019.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6bda0b414a7a7726aebf89a51d3629ae350ffc4da797c548172a74dfbb723b05
-size 8614182952