# IGVC Segmentation Dataset

Dataset for training a semantic image segmentation model for the [Intelligent Ground Vehicle Competition](http://www.igvc.org/).

## Composition

Each instance consists of a reference image from the point of view of the robot and the corresponding obstacle (e.g. construction drums, buckets) and lane segmentation masks.

**Train**

256 frames rendered in 4 different lighting environments using Blender = 1024 images

**Test**

10 frames captured from the [SCR 2023 IGVC run](https://www.youtube.com/watch?v=7tZsk3T3STA) (manually segmented) + 13 frames rendered in 4 lighting environments = 64 images

## Usage

For use with PyTorch, it is recommended to wrap the dataset in a [`Dataset`](https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) adapter class and generate a training/validation split:

```python
import numpy as np
from datasets import load_dataset, Dataset as HFDataset
from torch.utils.data import Dataset


class Split:
    TRAIN = "train"
    VALID = "valid"
    TEST = "test"


class SegmentationDataset(Dataset):
    def __init__(self, path="Nico0302/IGVC-Segmentation", split=Split.TRAIN, transform=None, mask_name="obstacle_mask", valid_size=0.125):
        self.path = path
        self.split = split
        self.transform = transform
        self.mask_name = mask_name
        self.valid_size = valid_size

        self.data = self._read_split()

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        item = self.data[idx]

        sample = dict(image=np.array(item["image"]), mask=np.array(item[self.mask_name]))
        if self.transform is not None:
            sample = self.transform(**sample)

        return {
            "image": np.transpose(sample["image"], (2, 0, 1)),  # HWC to CHW (3, H, W)
            "mask": np.expand_dims(sample["mask"].astype(np.float32) / 255.0, 0),  # HW to CHW (1, H, W)
        }

    def _read_split(self):
        dataset = load_dataset(self.path, split="test" if self.split == Split.TEST else "train")
        assert isinstance(dataset, HFDataset), "Dataset must be a Hugging Face Dataset"

        if self.split == Split.TEST:
            return dataset

        # Deterministic train/validation split of the "train" set
        splits = dataset.train_test_split(test_size=self.valid_size, seed=42)
        if self.split == Split.VALID:
            return splits["test"]
        return splits["train"]
```
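As a quick sanity check, the HWC-to-CHW conversions performed in `__getitem__` can be exercised with synthetic NumPy arrays (a standalone sketch with hypothetical frame dimensions, not actual dataset samples):

```python
import numpy as np

# Hypothetical sample mimicking one dataset instance
image = np.zeros((480, 640, 3), dtype=np.uint8)  # HWC reference image
mask = np.full((480, 640), 255, dtype=np.uint8)  # HW mask with values 0 or 255

# Same conversions the adapter applies
chw_image = np.transpose(image, (2, 0, 1))                     # (3, 480, 640)
chw_mask = np.expand_dims(mask.astype(np.float32) / 255.0, 0)  # (1, 480, 640), values in [0, 1]

print(chw_image.shape, chw_mask.shape)
```

The mask is rescaled to [0, 1] so it can be consumed directly by a binary segmentation loss such as BCE.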

Using this adapter, the dataset can simply be passed to the [`DataLoader`](https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):

```python
from torch.utils.data import DataLoader

train_dataset = SegmentationDataset(split=Split.TRAIN)
valid_dataset = SegmentationDataset(split=Split.VALID)
test_dataset = SegmentationDataset(split=Split.TEST)

train_dataloader = DataLoader(train_dataset)
valid_dataloader = DataLoader(valid_dataset)
test_dataloader = DataLoader(test_dataset)
```
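The `train_test_split(test_size=self.valid_size, seed=42)` call in the adapter deterministically holds out 12.5 % of the 1024 rendered training images for validation. The same index arithmetic can be sketched in plain NumPy (an illustrative approximation of seeded splitting, not the Hugging Face implementation, so the actual indices will differ):

```python
import numpy as np

n_images = 1024      # training frames from the composition section above
valid_size = 0.125   # fraction held out for validation

rng = np.random.default_rng(42)      # fixed seed -> reproducible split
indices = rng.permutation(n_images)  # shuffle all image indices once

n_valid = int(n_images * valid_size)  # 128 validation images
valid_idx = indices[:n_valid]
train_idx = indices[n_valid:]         # remaining 896 training images

print(len(train_idx), len(valid_idx))  # 896 128
```

Fixing the seed ensures every training run sees the same train/validation partition, which keeps validation metrics comparable across experiments.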

## Acknowledgements

Thank you to [Sooner Competitive Robotics](https://ou.edu/scr/) for allowing me to use frames from their IGVC 2023 run video as part of the test set.

## Citation

If you use this dataset, please cite:

```bibtex
@misc{gres2025IGVC,
    author = { Nicolas Gres },
    title = { IGVC Segmentation Dataset },
    year = 2025,
    url = { https://huggingface.co/datasets/Nico0302/IGVC-Segmentation },
    publisher = { Hugging Face }
}
```