blumenstiel committed commit bc2367f (verified) · parent: 1d64a51

Update README.md

Files changed (1): README.md +63 -6
README.md CHANGED
@@ -45,6 +45,30 @@ TerraMesh
45
  └── terramesh.py
46
  ```
47
 
48
  ---
49
 
50
  ## Description
@@ -71,12 +95,20 @@ More details in our [paper](https://arxiv.org/abs/2504.11172).
71
 
72
  ## Usage
73
 
74
  ### Download
75
 
76
You can download the dataset with the Hugging Face CLI tool. Please note that the dataset requires 16 TB of storage.
77
 
78
  ```shell
79
- pip install huggingface_hub
80
  huggingface-cli download ibm-esa-geospatial/TerraMesh --repo-type dataset --local-dir data/TerraMesh
81
  ```
82
 
@@ -105,16 +137,16 @@ from torch.utils.data import DataLoader
105
  dataset = build_terramesh_dataset(
106
  path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/", # Streaming or local path
107
  modalities=["S2L2A"],
108
- split='val',
109
  batch_size=8
110
  )
111
- # Batch keys: ['__key__', '__url__', 'image']
112
 
113
  # If you pass multiple modalities, the modalities are returned using the modality names as keys
114
  dataset = build_terramesh_dataset(
115
  path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/", # Streaming or local path
116
  modalities=["S2L2A", "S2L1C", "S2RGB", "S1GRD", "S1RTC", "DEM", "NDVI", "LULC"],
117
- split='val',
118
  batch_size=8
119
  )
120
 
@@ -124,7 +156,7 @@ dataloader = DataLoader(dataset, batch_size=None, num_workers=4)
124
  # Iterate over the dataloader
125
  for batch in dataloader:
126
  print("Batch keys:", list(batch.keys()))
127
- # Batch keys: ['__key__', '__url__', 'S2L2A', 'S2L1C', 'S2RGB', 'S1RTC', 'DEM', 'NDVI', 'LULC']
128
  # Because S1RTC and S1GRD are not present for all samples, each batch only includes one S1 version.
129
 
130
  print("Data shape:", batch["S2L2A"].shape)
@@ -138,6 +170,7 @@ for batch in dataloader:
138
  We provide some additional code for wrapping `albumentations` transform functions.
139
  We recommend albumentations because parameters are shared between all image modalities (e.g., same random crop).
140
  However, it requires some wrapping to bring the data into the expected shape.
 
141
  ```python
142
  import albumentations as A
143
  from albumentations.pytorch import ToTensorV2
@@ -164,12 +197,34 @@ val_transform = MultimodalTransforms(
164
  dataset = build_terramesh_dataset(
165
  path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/",
166
  modalities=modalities,
167
- split='val',
168
  transform=val_transform,
169
  batch_size=8,
170
  )
171
  ```
172
 
173
 
174
  If you have any issues with data loading, please create a discussion in the community tab and tag `@blumenstiel`.
175
 
@@ -204,4 +259,6 @@ The satellite data (S2L1C, S2L2A, S1GRD, S1RTC) is sourced from the [SSL4EO‑S1
204
 
205
  The LULC data is provided by [ESRI, Impact Observatory, and Microsoft](https://planetarycomputer.microsoft.com/dataset/io-lulc-annual-v02) (CC-BY-4.0).
206
 
 
207
The DEM data is produced using [Copernicus WorldDEM-30](https://dataspace.copernicus.eu/explore-data/data-collections/copernicus-contributing-missions/collections-description/COP-DEM) © DLR e.V. 2010-2014 and © Airbus Defence and Space GmbH 2014-2018 provided under COPERNICUS by the European Union and ESA; all rights reserved.
 
45
  └── terramesh.py
46
  ```
47
 
48
+ Each folder includes up to 889 shard files, each containing up to 10,240 samples. Samples from MajorTom-Core are stored in shards following the pattern `majortom_{split}_{id}.tar`, while shards with SSL4EO-S12 samples start with `ssl4eos12_`.
49
+ Samples are stored as Zarr ZIP files, which can be loaded with `zarr` (version <= 2.18) or `xarray.open_zarr()`. Each sample location includes seven modalities that share the same shard and sample name. Note that each sample only includes one Sentinel-1 version (S1GRD or S1RTC) because of different processing versions in the source datasets.
50
+ Each Zarr file includes aligned metadata as demonstrated by this S1GRD example from sample `ssl4eos12_val_0080385.zarr.zip`:
51
+
52
+ ```
53
+ <xarray.Dataset> Size: 283kB
54
+ Dimensions: (band: 2, time: 1, y: 264, x: 264)
55
+ Coordinates:
56
+ * band (band) <U2 16B "vv" "vh"
57
+ sample <U9 36B "0194630_1"
58
+ spatial_ref int64 8B 0
59
+ * time (time) datetime64[ns] 8B 2020-05-03T02:07:17
60
+ * x (x) float64 2kB 6.004e+05 6.004e+05 ... 6.03e+05 6.03e+05
61
+ * y (y) float64 2kB 4.275e+06 4.275e+06 ... 4.273e+06 4.273e+06
62
+ Data variables:
63
+ bands (time, band, y, x) float16 279kB -9.461 -10.77 ... -16.67
64
+ center_lat float64 8B 38.61
65
+ center_lon float64 8B -121.8
66
+ crs int64 8B 32610
67
+ file_id (time) <U67 268B "S1A_IW_GRDH_1SDV_20201105T020809_20201105T...
68
+ ```
69
+
70
+ Sentinel-2 modalities and LULC also provide a `cloud_mask` as additional metadata.
71
+
72
  ---
73
 
74
  ## Description
 
95
 
96
  ## Usage
97
 
98
+ ### Setup
99
+
100
+ Important: the dataset was created with `zarr==2.18.0` and `numcodecs==0.15.1`. Zarr 3.0 has backwards-compatibility issues, and Zarr 2.18 is incompatible with NumCodecs 0.16. We therefore recommend installing:
101
+
102
+ ```shell
103
+ pip install huggingface_hub webdataset torch numpy albumentations braceexpand zarr==2.18.0 numcodecs==0.15.1
104
+ ```
105
+
106
+
107
  ### Download
108
 
109
You can download the dataset with the Hugging Face CLI tool. Please note that the dataset requires 16 TB of storage.
110
 
111
  ```shell
 
112
  huggingface-cli download ibm-esa-geospatial/TerraMesh --repo-type dataset --local-dir data/TerraMesh
113
  ```
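Given the full dataset needs 16 TB, you may prefer to fetch only a subset. `huggingface-cli download` accepts `--include` glob patterns; the pattern below is only illustrative (it assumes per-modality folders containing `val` shards), so adjust it to the actual file listing in the repository:

```shell
# Download only validation shards of the S2L2A modality
# (pattern is an assumption; check the repo file listing for exact names).
huggingface-cli download ibm-esa-geospatial/TerraMesh --repo-type dataset \
  --include "S2L2A/*val*.tar" --local-dir data/TerraMesh
```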
114
 
 
137
  dataset = build_terramesh_dataset(
138
  path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/", # Streaming or local path
139
  modalities=["S2L2A"],
140
+ split="val",
141
  batch_size=8
142
  )
143
+ # Batch keys: ["__key__", "__url__", "image"]
144
 
145
  # If you pass multiple modalities, the modalities are returned using the modality names as keys
146
  dataset = build_terramesh_dataset(
147
  path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/", # Streaming or local path
148
  modalities=["S2L2A", "S2L1C", "S2RGB", "S1GRD", "S1RTC", "DEM", "NDVI", "LULC"],
149
+ split="val",
150
  batch_size=8
151
  )
152
 
 
156
  # Iterate over the dataloader
157
  for batch in dataloader:
158
  print("Batch keys:", list(batch.keys()))
159
+ # Batch keys: ["__key__", "__url__", "S2L2A", "S2L1C", "S2RGB", "S1RTC", "DEM", "NDVI", "LULC"]
160
  # Because S1RTC and S1GRD are not present for all samples, each batch only includes one S1 version.
161
 
162
  print("Data shape:", batch["S2L2A"].shape)
 
170
  We provide some additional code for wrapping `albumentations` transform functions.
171
  We recommend albumentations because parameters are shared between all image modalities (e.g., same random crop).
172
  However, it requires some wrapping to bring the data into the expected shape.
173
+
174
  ```python
175
  import albumentations as A
176
  from albumentations.pytorch import ToTensorV2
 
197
  dataset = build_terramesh_dataset(
198
  path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/",
199
  modalities=modalities,
200
+ split="val",
201
  transform=val_transform,
202
  batch_size=8,
203
  )
204
  ```
205
 
206
+ If you only use a single modality, you can directly pass an `A.Compose` instance to `build_terramesh_dataset` without the `MultimodalTransforms` wrapper. It still requires `Transpose([1, 2, 0])` as a first step.
207
+
208
+ ### Returning metadata
209
+
210
+ You can pass `return_metadata=True` to `build_terramesh_dataset()` to load center longitude and latitude, timestamps, and the S2 cloud mask as additional metadata.
211
+
212
+ The resulting batch keys include: `["__key__", "__url__", "S2L2A", "S1RTC", ..., "center_lon", "center_lat", "cloud_mask", "time_S2L2A", "time_S1RTC", ...]`
213
+
214
+ If you use transforms, you need to update them to cover the metadata keys:
215
+ ```python
216
+ ...
217
+ additional_targets={m: "image" for m in modalities + ["cloud_mask"]}
218
+ ),
219
+ non_image_modalities=["__key__", "__url__", "center_lon", "center_lat"] + ["time_" + m for m in modalities]
220
+ ```
221
+
222
+ Note that center points are not corrected when random crop is used.
223
+ The cloud mask provides the classes land (0), water (1), snow (2), thin cloud (3), thick cloud (4), cloud shadow (5), and no data (6).
224
+ DEM does not return a time value, while LULC uses the S2 timestamp because of the augmentation using the S2 cloud and ice mask. Time values are returned as integers but can be converted back to datetime with:
225
+ ```python
226
+ batch["time_S2L2A"].numpy().astype("datetime64[ns]")
227
+ ```
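The cloud-mask classes listed above can be kept in a small lookup for readable logging (a sketch mirroring the class list; `CLOUD_MASK_CLASSES` is a name chosen here, not part of the library):

```python
# TerraMesh cloud-mask class indices, as documented above.
CLOUD_MASK_CLASSES = {
    0: "land",
    1: "water",
    2: "snow",
    3: "thin cloud",
    4: "thick cloud",
    5: "cloud shadow",
    6: "no data",
}

print(CLOUD_MASK_CLASSES[4])  # thick cloud
```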
228
 
229
  If you have any issues with data loading, please create a discussion in the community tab and tag `@blumenstiel`.
230
 
 
259
 
260
  The LULC data is provided by [ESRI, Impact Observatory, and Microsoft](https://planetarycomputer.microsoft.com/dataset/io-lulc-annual-v02) (CC-BY-4.0).
261
 
262
+ The cloud masks used for augmenting the LULC maps and provided as metadata are produced using the [SEnSeIv2](https://github.com/aliFrancis/SEnSeIv2/tree/main?tab=readme-ov-file) model.
263
+
264
The DEM data is produced using [Copernicus WorldDEM-30](https://dataspace.copernicus.eu/explore-data/data-collections/copernicus-contributing-missions/collections-description/COP-DEM) © DLR e.V. 2010-2014 and © Airbus Defence and Space GmbH 2014-2018 provided under COPERNICUS by the European Union and ESA; all rights reserved.