Datasets: Add `library_name` and sample usage, improve top-level links (#2)
by nielsr (HF Staff), opened

README.md CHANGED
@@ -13,13 +13,16 @@ tags:
 - pretrain
 - self-supervised-learning
 - sentinel
+library_name: datasets
 ---
 
 # Dataset Card for Copernicus-Pretrain
 
+[Paper](https://arxiv.org/abs/2503.11849) | [Repository](https://github.com/zhu-xlab/Copernicus-FM)
+
 <!-- Provide a quick summary of the dataset. -->
 
-
+Copernicus-Pretrain is a large-scale EO pretraining dataset with 18.7M aligned images covering all major Sentinel missions (S1,2,3,5P).
 
 *Officially named **Copernicus-Pretrain**, also referred to as SSL4EO-S ("S" means Sentinel), as an extension of [SSL4EO-S12](https://github.com/zhu-xlab/SSL4EO-S12) to the whole Sentinel series.*
 
@@ -42,13 +45,24 @@ The images are organized into ~310K regional grids (0.25°x0.25°, consistent wi
 | Copernicus DEM | elevation | 30 m | 960×960 | 297,665 | 297,665 | 1 | 297,665 |
 | **Copernicus-Pretrain** | | | | **312,567** | **3,879,597** | | **18,713,054** |
 
+## Sample Usage
+
+You can load the dataset using the Hugging Face `datasets` library. This dataset is very large and may require specific handling such as streaming or selecting specific configurations if available.
 
-
+```python
+from datasets import load_dataset
 
-
+# Load the dataset. For large datasets, consider streaming or specific data_files if available.
+# This dataset offers raw GeoTiff and streaming WebDataset formats.
+dataset = load_dataset("wangyi111/Copernicus-Pretrain")
 
-
-
+# Print the dataset structure (e.g., available splits)
+print(dataset)
+
+# Example of accessing a sample from a split (uncomment and adjust if applicable)
+# For example, if 'train' split exists:
+# print(dataset["train"][0])
+```
 
 ## License
 
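As context for the "~310K regional grids (0.25°x0.25°, consistent with ERA5)" mentioned in the second hunk: a 0.25° grid partitions the globe into 1440×720 = 1,036,800 cells, of which the dataset covers the 312,567 listed in the summary table. A minimal sketch of that grid arithmetic, using a hypothetical `grid_index` helper that is not part of the dataset's API:

```python
# Hypothetical helper (illustration only, not part of Copernicus-Pretrain):
# map a lat/lon to the 0.25° x 0.25° ERA5-style grid cell the README describes.
def grid_index(lat: float, lon: float, res: float = 0.25) -> tuple[int, int]:
    """Return (row, col) of the grid cell containing (lat, lon)."""
    row = int((90.0 - lat) / res)   # row 0 starts at the +90° latitude edge
    col = int((lon + 180.0) / res)  # col 0 starts at the -180° longitude edge
    return row, col

n_rows = int(180 / 0.25)  # 720 latitude bands
n_cols = int(360 / 0.25)  # 1440 longitude bands
print(n_rows * n_cols)    # 1036800 global cells; the dataset keeps 312,567 of them
print(grid_index(48.14, 11.58))  # (167, 766)
```

The row/col origin convention here is an assumption for illustration; only the 0.25° resolution and the grid counts come from the README.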