update file structure
README.md CHANGED

@@ -24,6 +24,7 @@ size_categories:
 # SAM‑TP Traversability Dataset
 
 This repository contains pixel‑wise **traversability masks** paired with egocentric RGB images, prepared in a **flat, filename‑aligned** layout that is convenient for training SAM‑2 / SAM‑TP‑style segmentation models.
+To use the dataset, simply download the data.zip file and unzip it.
 
 > **Folder layout**
 ```
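The download-and-unzip step added in this hunk can be sketched with the standard library; the archive name `data.zip` comes from the README text, and the destination folder `data` is an assumption:

```python
import zipfile
from pathlib import Path

# `data.zip` is assumed to sit in the working directory after a manual
# download from the dataset page (or via huggingface_hub).
def unpack(archive="data.zip", dest="data"):
    """Extract the dataset archive and return the extraction directory."""
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
    return Path(dest)
```

Programmatic download is also possible with `huggingface_hub.hf_hub_download(repo_id="jamiewjm/sam-tp", filename="data.zip", repo_type="dataset")`, assuming the archive sits at the repo root.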
@@ -56,40 +57,7 @@ ride_68496_8ef98b_20240716023032_517__1.png # corresponding mask
 
 ## How to use
 
-### A)
-
-```python
-from datasets import load_dataset
-from pathlib import Path
-from PIL import Image
-
-REPO = "jamiewjm/sam-tp"  # e.g. "jamiewjm/sam-tp"
-
-ds_imgs = load_dataset(
-    "imagefolder",
-    data_dir=".",
-    data_files={"image": f"hf://datasets/{REPO}/images/**"},
-    split="train",
-)
-ds_msks = load_dataset(
-    "imagefolder",
-    data_dir=".",
-    data_files={"mask": f"hf://datasets/{REPO}/annotations/**"},
-    split="train",
-)
-
-# Build a mask index by filename
-mask_index = {Path(r["image"]["path"]).name: r["image"]["path"] for r in ds_msks}
-
-row = ds_imgs[0]
-img_path = Path(row["image"]["path"])
-msk_path = Path(mask_index[img_path.name])
-
-img = Image.open(img_path).convert("RGB")
-msk = Image.open(msk_path).convert("L")
-```
-
-### B) Minimal PyTorch dataset
+### A) Minimal PyTorch dataset
 
 ```python
 from pathlib import Path
@@ -110,7 +78,7 @@ class TraversabilityDataset(Dataset):
         return Image.open(ip).convert("RGB"), Image.open(mp).convert("L")
 ```
 
-### C) Pre‑processing notes for SAM‑2/SAM‑TP training
+### B) Pre‑processing notes for SAM‑2/SAM‑TP training
 
 - Resize/pad to your training resolution (commonly **1024×1024**) with masks aligned.
 - Normalize images per your backbone’s recipe.
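With the `datasets`-based loader removed, image–mask alignment rests entirely on matching filenames across the flat `images/` and `annotations/` folders, which is presumably also what the elided `TraversabilityDataset` body does when it builds its index. A stdlib-only sketch of that pairing step; the folder names are taken from the layout and the removed code above, and `pair_by_filename` is a hypothetical helper, not part of the repo:

```python
from pathlib import Path

def pair_by_filename(image_dir, mask_dir):
    """Pair each image with the mask that shares its exact filename."""
    masks = {p.name: p for p in Path(mask_dir).glob("*.png")}
    pairs = []
    for img in sorted(Path(image_dir).glob("*.png")):
        if img.name in masks:  # keep only images that actually have a mask
            pairs.append((img, masks[img.name]))
    return pairs
```

A `Dataset.__getitem__` can then open `pairs[i][0]` as RGB and `pairs[i][1]` as a single-channel (`"L"`) mask, exactly as the `return` line above does.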
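The two pre-processing bullets can be made concrete with the usual letterbox arithmetic (scale the longer side to the target, pad the remainder, and apply identical geometry to image and mask) plus ImageNet-style normalization. A sketch of the arithmetic only: the **1024×1024** target comes from the note above, while the ImageNet statistics are a common default, not something this README specifies:

```python
# Letterbox geometry: how a w×h frame maps into a square target canvas.
def letterbox_geometry(w, h, target=1024):
    scale = target / max(w, h)                       # longer side fills the canvas
    new_w, new_h = round(w * scale), round(h * scale)
    return scale, (new_w, new_h), (target - new_w, target - new_h)

# Common ImageNet statistics; confirm against your backbone's recipe.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb, mean=IMAGENET_MEAN, std=IMAGENET_STD):
    """Map an 8-bit RGB triple to the normalized range a pretrained backbone expects."""
    return tuple((c / 255.0 - m) / s for c, m, s in zip(rgb, mean, std))
```

Apply the same `(new_w, new_h)` resize and padding to the mask, but with nearest-neighbour interpolation so label values are never blended.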