Datasets:
Tasks:
Image Segmentation
Modalities:
Image
Languages:
English
Tags:
Cloud Detection
Cloud Segmentation
Remote Sensing Images
Satellite Images
HRC-WHU
CloudSEN12-High
License:
This dataset card aims to describe the datasets used in the Cloud-Adapter, a col

## Uses
```python
# Step 1: Install the datasets library
# Ensure you have the `datasets` library installed.
# You can install it with pip if it's not already installed:
# pip install datasets

from datasets import load_dataset

# Step 2: Load the Cloud-Adapter dataset
# Replace "XavierJiezou/Cloud-Adapter" with the dataset repository name on Hugging Face
dataset = load_dataset("XavierJiezou/Cloud-Adapter")

# Step 3: Explore the dataset splits
# The dataset contains three splits: "train", "val", and "test"
print("Available splits:", dataset.keys())

# Step 4: Access individual examples
# Each example contains an image and a corresponding annotation (segmentation mask)
train_data = dataset["train"]

# View the number of samples in the training set
print("Number of training samples:", len(train_data))

# Step 5: Access a single data sample
# Each data sample has two keys: "image" and "annotation"
sample = train_data[0]

# Step 6: Display the image and annotation
# Both fields are PIL.Image objects, so they can be shown directly
image = sample["image"]
annotation = sample["annotation"]

# Display the image
print("Displaying the image...")
image.show()

# Display the annotation
print("Displaying the segmentation mask...")
annotation.show()

# Step 7: Use in a machine learning pipeline
# You can integrate this dataset into your ML pipeline by iterating over the splits
for sample in train_data:
    image = sample["image"]
    annotation = sample["annotation"]
    # Process or feed `image` and `annotation` into your ML model here

# Additional info: dataset splits
# - dataset["train"]: training split
# - dataset["val"]: validation split
# - dataset["test"]: testing split
```
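
Most training pipelines expect arrays or tensors rather than PIL images. The loop in Step 7 can hand its samples to a small conversion helper; the sketch below uses NumPy (the 256×256 target size and the division by 255 are illustrative assumptions, not values from the dataset card; nearest-neighbor resampling is used for the mask so that integer class IDs are preserved):

```python
import numpy as np
from PIL import Image

def to_arrays(sample, size=(256, 256)):
    """Convert one sample's PIL images to NumPy arrays.

    `size` is an illustrative choice, not a value mandated by the dataset.
    """
    # Image: resize, then scale uint8 pixel values into [0, 1].
    image = np.asarray(sample["image"].resize(size), dtype=np.float32) / 255.0
    # Mask: nearest-neighbor resampling keeps class IDs exact integers.
    mask = np.asarray(sample["annotation"].resize(size, Image.NEAREST), dtype=np.int64)
    return image, mask

# Demo with a synthetic sample shaped like the card describes:
# an RGB image plus a single-channel segmentation mask.
demo = {
    "image": Image.new("RGB", (512, 512), color=(120, 140, 160)),
    "annotation": Image.new("L", (512, 512), color=1),
}
img, msk = to_arrays(demo)
print(img.shape, img.dtype)  # (256, 256, 3) float32
print(msk.shape, msk.dtype)  # (256, 256) int64
```

In a real run you would call `to_arrays(sample)` inside the Step 7 loop instead of on the synthetic `demo` dictionary.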

## Dataset Structure

The dataset contains the following splits: