harpreetsahota committed
Commit cd0a7be · verified · 1 Parent(s): 2a3837b

Update README.md

Files changed (1):
  1. README.md (+125 −90)

README.md CHANGED
@@ -11,18 +11,19 @@ tags:
   - fiftyone
   - image
   - image-segmentation
- dataset_summary: '

- This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4420 samples.

  ## Installation

- If you haven''t already, install FiftyOne:

  ```bash
@@ -44,7 +45,7 @@ dataset_summary: '

  # Load the dataset

- # Note: other available arguments include ''max_samples'', etc

  dataset = load_from_hub("harpreetsahota/RefSegRS")
@@ -54,16 +55,12 @@ dataset_summary: '

  session = fo.launch_app(dataset)

  ```
-
- '
  ---

- # Dataset Card for refsegrs
-
- <!-- Provide a quick summary of the dataset. -->
-
-

  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4420 samples.
@@ -84,141 +81,179 @@ from fiftyone.utils.huggingface import load_from_hub

  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
- dataset = load_from_hub("harpreetsahota/RefSegRS")

  # Launch the App
  session = fo.launch_app(dataset)
  ```

  ## Dataset Details

  ### Dataset Description

- <!-- Provide a longer summary of what this dataset is. -->

- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** en
- - **License:** [More Information Needed]

- ### Dataset Sources [optional]

- <!-- Provide the basic links for the dataset. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

- ## Uses

- <!-- Address questions around how the dataset is intended to be used. -->

- ### Direct Use

- <!-- This section describes suitable use cases for the dataset. -->

- [More Information Needed]

- ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

- [More Information Needed]

- ## Dataset Structure

- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

- [More Information Needed]

  ## Dataset Creation

  ### Curation Rationale

- <!-- Motivation for the creation of this dataset. -->

- [More Information Needed]

- ### Source Data

- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

  #### Data Collection and Processing

- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

- [More Information Needed]

- #### Who are the source data producers?

- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

- [More Information Needed]

- ### Annotations [optional]

- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

  #### Annotation process

- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

- [More Information Needed]

- #### Who are the annotators?

- <!-- This section describes the people or systems who created the annotations. -->

- [More Information Needed]

- #### Personal and Sensitive Information

- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

- [More Information Needed]

- ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->

- [More Information Needed]

- ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

- ## Citation [optional]

- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**

- [More Information Needed]

  **APA:**

- [More Information Needed]

- ## Glossary [optional]

- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

- [More Information Needed]

- ## More Information [optional]

- [More Information Needed]

- ## Dataset Card Authors [optional]

- [More Information Needed]

- ## Dataset Card Contact

- [More Information Needed]
 
   - fiftyone
   - image
   - image-segmentation
+ dataset_summary: >

+ This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4420
+ samples.

  ## Installation

+ If you haven't already, install FiftyOne:

  ```bash
 
  # Load the dataset

+ # Note: other available arguments include 'max_samples', etc

  dataset = load_from_hub("harpreetsahota/RefSegRS")

  session = fo.launch_app(dataset)

  ```
+ license: cc-by-4.0

  ---
+ # Dataset Card for RefSegRS

+ ![image/png](refseg_rs.gif)

  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4420 samples.
 
  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
+ dataset = load_from_hub("Voxel51/RefSegRS")

  # Launch the App
  session = fo.launch_app(dataset)
+
+
  ```
+ #### NOTE: This dataset uses .tif media; for best results, view it in Safari or another browser that supports displaying .tif files.

  ## Dataset Details

  ### Dataset Description
+ RefSegRS is a referring remote sensing image segmentation (RRSIS) dataset that enables pixel-level segmentation of objects in remote sensing imagery based on natural language descriptions. The dataset addresses the task of localizing and segmenting desired objects in remote sensing images using referring expressions that include categories, attributes, and spatial relationships.
+
+ The dataset is built on top of the SkyScapes dataset and consists of cropped and downsampled aerial RGB images with corresponding segmentation masks and natural language referring expressions. Images are captured from a top-down view at 13 cm spatial resolution and feature urban scenes with objects such as vehicles, roads, buildings, vegetation, and infrastructure elements.

+ - **Curated by:** Zhenghang Yuan, Lichao Mou, Yuansheng Hua, and Xiao Xiang Zhu (Technical University of Munich)
+ - **Language(s) (NLP):** English
+ - **License:** CC-BY-4.0

+ ### Dataset Sources

+ - **HF Repository:** https://huggingface.co/datasets/JessicaYuan/RefSegRS
+ - **Project Repository:** https://github.com/zhu-xlab/rrsis
+ - **Paper (arXiv):** https://arxiv.org/abs/2306.08625
+ - **Related Work:** This dataset is part of research on combining remote sensing imagery with natural language processing, related to visual grounding, visual question answering (VQA), and image captioning for remote sensing data.
 
+ ## Dataset Structure

+ The RefSegRS dataset contains **4,420 image-language-label triplets** organized into three splits:

+ - **Training set:** 2,172 triplets
+ - **Validation set:** 431 triplets
+ - **Test set:** 1,817 triplets

+ ### Image Specifications

+ - **Format:** TIFF (RGB, 3 channels)
+ - **Dimensions:** 512 × 512 pixels
+ - **Original resolution:** 13 cm spatial resolution
+ - **Source:** Cropped from SkyScapes dataset tiles (originally 5616 × 3744 pixels) using 1200 × 1200 pixel sliding windows with a 600-pixel stride, then downsampled

+ ### Segmentation Masks

+ - **Format:** TIFF (binary masks)
+ - **Dimensions:** 512 × 512 pixels
+ - **Values:** Binary (0 for background, 1 for the target object)
+ - **Generation:** Automatically generated from SkyScapes pixel-wise annotations based on the referring expressions

+ ### Object Categories

+ The dataset includes 20 object categories from the SkyScapes dataset:

+ - **Vegetation:** low vegetation, tree
+ - **Roads:** paved road, non-paved road, bikeway, sidewalk, lane marking
+ - **Parking:** paved parking place, non-paved parking place
+ - **Vehicles:** car, trailer, van, truck, large truck, bus
+ - **Infrastructure:** building, entrance/exit, danger area
+ - **Other:** clutter, impervious surface

+ ### Referring Expressions

+ Natural language descriptions are generated using templates that include:

+ - **Categories:** direct object names (e.g., "vehicle", "road")
+ - **Attributes:** object properties (e.g., "light-duty vehicle", "heavy-duty vehicle", "long truck")
+ - **Spatial relationships:** positional descriptions (e.g., "vehicle in the parking area", "light-duty vehicle driving on the road", "building with a parking lot")

+ Common expressions include: "car", "road", "impervious surface", "road marking", "vehicle in the parking area", "building along the road", "sidewalk along with tree"
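The template scheme described above (category alone, category plus attribute, category plus spatial relationship) can be sketched roughly as follows. This is an illustrative assumption, not the authors' generation code; the `CATEGORIES`, `ATTRIBUTES`, and `RELATIONS` tables below are invented examples drawn from the expressions listed in this card.

```python
# Hypothetical sketch of template-based expression generation.
# Table contents are illustrative examples, not the real mapping.
CATEGORIES = ["car", "road", "building", "sidewalk"]
ATTRIBUTES = {"car": ["light-duty vehicle"],
              "truck": ["heavy-duty vehicle", "long truck"]}
RELATIONS = {"car": ["vehicle in the parking area",
                     "light-duty vehicle driving on the road"],
             "building": ["building along the road",
                          "building with a parking lot"]}

def generate_expressions(category: str) -> list[str]:
    """Combine the three template types for one category."""
    expressions = [category]                      # category alone
    expressions += ATTRIBUTES.get(category, [])   # category with attributes
    expressions += RELATIONS.get(category, [])    # spatial relationships
    return expressions

print(generate_expressions("car"))
# ['car', 'light-duty vehicle', 'vehicle in the parking area',
#  'light-duty vehicle driving on the road']
```

In the actual dataset, generated expressions were then manually filtered to remove uninformative triplets, a step this sketch omits.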
154
 
155
+ ### FiftyOne Dataset Structure
156
+
157
+ When loaded into FiftyOne, the dataset has the following structure:
158
+
159
+ **Sample fields:**
160
+ - `filepath`: Absolute path to the image file
161
+ - `tags`: List containing the split name ("train", "test", or "val")
162
+ - `metadata`: Image metadata (dimensions, size, MIME type)
163
+ - `segmentation`: FiftyOne Segmentation object with absolute path to the mask file
164
+ - `phrase`: String containing the natural language referring expression
165
+ - `created_at`: Timestamp of sample creation
166
+ - `last_modified_at`: Timestamp of last modification
  ## Dataset Creation

  ### Curation Rationale

+ The RefSegRS dataset was created to address the lack of referring image segmentation datasets for remote sensing imagery. While referring image segmentation has been extensively studied for natural images, the task had received almost no research attention in the remote sensing context.

+ The dataset enables:

+ - End users without domain expertise to obtain precise information from remote sensing imagery using natural language
+ - Targeted image analysis, where users can specify objects of interest based on their individual needs
+ - Improved efficiency and user interactivity in remote sensing image interpretation

+ The dataset specifically addresses challenges unique to remote sensing imagery:

+ - Small and scattered objects (vehicles, road markings) that occupy few pixels
+ - A wide range of object categories in top-down views
+ - Objects with large scale variations
+ - Spatial relationships between objects in urban scenes

+ ### Source Data

  #### Data Collection and Processing

+ **Image Collection:**
+
+ 1. Source images come from the SkyScapes dataset (16 RGB tiles, each 5616 × 3744 pixels, 13 cm spatial resolution)
+ 2. Tiles are cropped into 1200 × 1200 pixel images using a sliding window with a 600-pixel stride
+ 3. Crops are downsampled to 512 × 512 pixels to match typical deep neural network input sizes
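The cropping scheme above can be sketched as follows. This is an illustrative sketch, not the authors' code; whether partial windows at the tile edges are kept is an assumption, and this version keeps only fully contained windows.

```python
# Sliding-window cropping over one SkyScapes tile, as described above.
# Edge handling is an assumption: only fully contained windows are kept.
TILE_W, TILE_H = 5616, 3744   # original tile dimensions in pixels
WIN, STRIDE = 1200, 600       # window size and stride

def window_origins(size: int) -> list[int]:
    """Top-left coordinates of fully contained windows along one axis."""
    return list(range(0, size - WIN + 1, STRIDE))

xs = window_origins(TILE_W)   # origins along the width: 0, 600, ..., 4200
ys = window_origins(TILE_H)   # origins along the height: 0, 600, ..., 2400
crops = [(x, y) for y in ys for x in xs]
print(len(crops))             # crops per tile, each later downsampled to 512x512
```

Under these assumptions each tile yields 8 × 5 = 40 crops; with 16 tiles that gives the raw image pool from which the 4,420 triplets were built.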
+ **Referring Expression Generation:**
+
+ - Expressions are generated from predefined templates based on how end users typically refer to objects
+ - Templates include: the category alone, the category with attributes, and spatial relationships with other entities
+ - Manual filtering removes uninformative image-language-label triplets

+ **Mask Generation:**
+
+ 1. Pixel-wise annotations are sourced from the SkyScapes dataset (each pixel labeled with one of 20 classes)
+ 2. Binary ground-truth masks are generated automatically from the natural language expressions
+ 3. Two types of conceptual relationships are established:
+    - **Identity:** direct mapping (e.g., "road marking" ≡ "lane marking")
+    - **Inclusion:** hierarchical grouping (e.g., "light-duty vehicle" includes "car" and "van")
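The mask generation steps above can be sketched in a few lines. The class IDs and the expression-to-class mapping below are invented for illustration; the real mapping covers all 20 SkyScapes classes and many more expressions.

```python
# Minimal sketch of automatic binary mask generation from a semantic label
# map, using identity and inclusion relationships. Class IDs and the mapping
# are assumptions for illustration only.
CLASS_IDS = {"car": 1, "van": 2, "truck": 3, "lane marking": 4}

EXPRESSION_TO_CLASSES = {
    "road marking": {"lane marking"},          # identity relationship
    "light-duty vehicle": {"car", "van"},      # inclusion relationship
}

def binary_mask(label_map, expression):
    """Set pixels whose class belongs to the referred category to 1, else 0."""
    names = EXPRESSION_TO_CLASSES.get(expression, {expression})
    targets = {CLASS_IDS[name] for name in names}
    return [[1 if px in targets else 0 for px in row] for row in label_map]

labels = [[1, 2, 0],
          [3, 1, 4]]
print(binary_mask(labels, "light-duty vehicle"))
# [[1, 1, 0], [0, 1, 0]]
```

A composite expression such as "light-duty vehicle" simply unions the pixels of its member classes, which is how the card describes composite-category masks being built.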
+ #### Who are the source data producers?

+ The source imagery comes from the **SkyScapes dataset**, which provides aerial RGB imagery with pixel-wise semantic annotations of urban scenes.

+ The RefSegRS dataset was curated by researchers at:

+ - **Technical University of Munich** (Chair of Data Science in Earth Observation)
+ - **Shenzhen University** (College of Civil and Transportation Engineering)
+ - **Munich Center for Machine Learning**

+ ### Annotations

  #### Annotation process

+ The annotations in RefSegRS consist of two components:
+
+ **1. Segmentation Masks:**
+
+ - Automatically generated from existing SkyScapes pixel-wise semantic annotations
+ - Binary masks are created by setting pixels within the target category to 1 and all other pixels to 0
+ - For composite categories (e.g., "vehicle"), masks combine multiple sub-categories ("car", "van", "truck", etc.)
+
+ **2. Referring Expressions:**
+
+ - Generated using predefined templates that reflect natural user language patterns
+ - Templates incorporate:
+   - Category names (direct specification)
+   - Attributes (size, type, material properties)
+   - Spatial relationships (location, proximity to other objects)
+ - Manual filtering is applied to remove uninformative or ambiguous triplets
+ - Final dataset: 4,420 curated image-language-label triplets
+
+ **Quality Control:**
+
+ - Manual review ensures referring expressions accurately describe the corresponding masks
+ - Uninformative samples are filtered out to maintain dataset quality
+ #### Who are the annotators?

+ - **Segmentation masks:** derived from the SkyScapes dataset's existing pixel-wise annotations
+ - **Referring expressions:** generated automatically from templates, then manually filtered by the research team at the Technical University of Munich

+ ## Citation

  **BibTeX:**

+ ```bibtex
+ @article{yuan2023rrsis,
+   title={RRSIS: Referring Remote Sensing Image Segmentation},
+   author={Yuan, Zhenghang and Mou, Lichao and Hua, Yuansheng and Zhu, Xiao Xiang},
+   journal={arXiv preprint arXiv:2306.08625},
+   year={2023}
+ }
+ ```

  **APA:**

+ Yuan, Z., Mou, L., Hua, Y., & Zhu, X. X. (2024). RRSIS: Referring Remote Sensing Image Segmentation. *IEEE Transactions on Geoscience and Remote Sensing*. arXiv:2306.08625v2