parquet-converter committed commit b62c2b0 · 0 parent(s)

Duplicate from ranjaykrishna/visual_genome

Co-authored-by: Parquet-converter (BOT) <parquet-converter@users.noreply.huggingface.co>
Files changed (3):
1. .gitattributes +38 -0
2. README.md +503 -0
3. visual_genome.py +469 -0
.gitattributes ADDED
@@ -0,0 +1,38 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,503 @@
---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- image-to-text
- object-detection
- visual-question-answering
task_ids:
- image-captioning
paperswithcode_id: visual-genome
pretty_name: VisualGenome
dataset_info:
  features:
  - name: image
    dtype: image
  - name: image_id
    dtype: int32
  - name: url
    dtype: string
  - name: width
    dtype: int32
  - name: height
    dtype: int32
  - name: coco_id
    dtype: int64
  - name: flickr_id
    dtype: int64
  - name: regions
    list:
    - name: region_id
      dtype: int32
    - name: image_id
      dtype: int32
    - name: phrase
      dtype: string
    - name: x
      dtype: int32
    - name: y
      dtype: int32
    - name: width
      dtype: int32
    - name: height
      dtype: int32
  config_name: region_descriptions_v1.0.0
  splits:
  - name: train
    num_bytes: 260873884
    num_examples: 108077
  download_size: 15304605295
  dataset_size: 260873884
config_names:
- objects
- question_answers
- region_descriptions
---

# Dataset Card for Visual Genome

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Dataset Preprocessing](#dataset-preprocessing)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://homes.cs.washington.edu/~ranjay/visualgenome/
- **Repository:**
- **Paper:** https://doi.org/10.1007/s11263-016-0981-7
- **Leaderboard:**
- **Point of Contact:** ranjaykrishna [at] gmail [dot] com

### Dataset Summary

Visual Genome is a dataset, a knowledge base, and an ongoing effort to connect structured image concepts to language.

From the paper:
> Despite progress in perceptual tasks such as
image classification, computers still perform poorly on
cognitive tasks such as image description and question
answering. Cognition is core to tasks that involve not
just recognizing, but reasoning about our visual world.
However, models used to tackle the rich content in images for cognitive tasks are still being trained using the
same datasets designed for perceptual tasks. To achieve
success at cognitive tasks, models need to understand
the interactions and relationships between objects in an
image. When asked “What vehicle is the person riding?”,
computers will need to identify the objects in an image
as well as the relationships riding(man, carriage) and
pulling(horse, carriage) to answer correctly that “the
person is riding a horse-drawn carriage.”

Visual Genome has:
- 108,077 images
- 5.4 Million Region Descriptions
- 1.7 Million Visual Question Answers
- 3.8 Million Object Instances
- 2.8 Million Attributes
- 2.3 Million Relationships

From the paper:
> Our dataset contains over 108K images where each
image has an average of 35 objects, 26 attributes, and 21
pairwise relationships between objects. We canonicalize
the objects, attributes, relationships, and noun phrases
in region descriptions and question answer pairs to
WordNet synsets.

### Dataset Preprocessing

### Supported Tasks and Leaderboards

### Languages

All the annotations use English as the primary language.

## Dataset Structure

### Data Instances

When loading a specific configuration, users have to append a version-dependent suffix to the configuration name:
```python
from datasets import load_dataset

load_dataset("visual_genome", "region_descriptions_v1.2.0")
```
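
The configuration name follows the pattern `<config>_v<version>`, matching the `f"{name}_v{version}"` construction used by the loading script's `BuilderConfig`. A minimal sketch of assembling such a name (the helper is ours, not part of the dataset API):

```python
def make_config_name(config: str, version: str) -> str:
    # Hypothetical helper: mirrors the `f"{name}_v{version}"` pattern
    # used by the loading script's VisualGenomeConfig.
    return f"{config}_v{version}"

print(make_config_name("region_descriptions", "1.2.0"))  # → region_descriptions_v1.2.0
```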

#### region_descriptions

An example looks as follows.

```
{
  "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
  "image_id": 1,
  "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
  "width": 800,
  "height": 600,
  "coco_id": null,
  "flickr_id": null,
  "regions": [
    {
      "region_id": 1382,
      "image_id": 1,
      "phrase": "the clock is green in colour",
      "x": 421,
      "y": 57,
      "width": 82,
      "height": 139
    },
    ...
  ]
}
```
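
Since each region stores its box as a top-left corner plus width and height, cropping a region out of the image amounts to converting it into the (left, upper, right, lower) corner format that `PIL.Image.crop` expects. A minimal sketch (the helper name is ours, not part of the dataset API):

```python
def region_to_crop_box(region: dict) -> tuple:
    """Convert a Visual Genome region dict (x, y, width, height)
    into a (left, upper, right, lower) box for PIL's Image.crop."""
    left = region["x"]
    upper = region["y"]
    return (left, upper, left + region["width"], upper + region["height"])

# Example region from the sample above:
region = {"x": 421, "y": 57, "width": 82, "height": 139}
print(region_to_crop_box(region))  # → (421, 57, 503, 196)
```

The resulting tuple can be passed directly to `sample["image"].crop(...)` once the image column is decoded.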

#### objects

An example looks as follows.

```
{
  "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
  "image_id": 1,
  "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
  "width": 800,
  "height": 600,
  "coco_id": null,
  "flickr_id": null,
  "objects": [
    {
      "object_id": 1058498,
      "x": 421,
      "y": 91,
      "w": 79,
      "h": 339,
      "names": [
        "clock"
      ],
      "synsets": [
        "clock.n.01"
      ]
    },
    ...
  ]
}
```

#### attributes

An example looks as follows.

```
{
  "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
  "image_id": 1,
  "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
  "width": 800,
  "height": 600,
  "coco_id": null,
  "flickr_id": null,
  "attributes": [
    {
      "object_id": 1058498,
      "x": 421,
      "y": 91,
      "w": 79,
      "h": 339,
      "names": [
        "clock"
      ],
      "synsets": [
        "clock.n.01"
      ],
      "attributes": [
        "green",
        "tall"
      ]
    },
    ...
  ]
}
```

#### relationships

An example looks as follows.

```
{
  "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
  "image_id": 1,
  "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
  "width": 800,
  "height": 600,
  "coco_id": null,
  "flickr_id": null,
  "relationships": [
    {
      "relationship_id": 15927,
      "predicate": "ON",
      "synsets": "['along.r.01']",
      "subject": {
        "object_id": 5045,
        "x": 119,
        "y": 338,
        "w": 274,
        "h": 192,
        "names": [
          "shade"
        ],
        "synsets": [
          "shade.n.01"
        ]
      },
      "object": {
        "object_id": 5046,
        "x": 77,
        "y": 328,
        "w": 714,
        "h": 262,
        "names": [
          "street"
        ],
        "synsets": [
          "street.n.01"
        ]
      }
    },
    ...
  ]
}
```
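
Each relationship ties a subject and an object together through a predicate, so a sample can be flattened into (subject, predicate, object) triples for scene-graph work. A minimal sketch over plain dicts shaped like the sample above (the helper name is ours):

```python
def to_triples(sample: dict) -> list:
    """Flatten a relationships sample into (subject, predicate, object) name triples."""
    triples = []
    for rel in sample["relationships"]:
        subj = rel["subject"]["names"][0]
        obj = rel["object"]["names"][0]
        triples.append((subj, rel["predicate"], obj))
    return triples

sample = {"relationships": [{
    "relationship_id": 15927, "predicate": "ON",
    "subject": {"names": ["shade"]}, "object": {"names": ["street"]}}]}
print(to_triples(sample))  # → [('shade', 'ON', 'street')]
```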

#### question_answers

An example looks as follows.

```
{
  "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
  "image_id": 1,
  "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
  "width": 800,
  "height": 600,
  "coco_id": null,
  "flickr_id": null,
  "qas": [
    {
      "qa_id": 986768,
      "image_id": 1,
      "question": "What color is the clock?",
      "answer": "Green.",
      "a_objects": [],
      "q_objects": []
    },
    ...
  ]
}
```

### Data Fields

When loading a specific configuration, users have to append a version-dependent suffix to the configuration name:
```python
from datasets import load_dataset

load_dataset("visual_genome", "region_descriptions_v1.2.0")
```

#### region_descriptions

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `image_id`: Unique numeric ID of the image.
- `url`: URL of the source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: ID mapping to MSCOCO indexing.
- `flickr_id`: ID mapping to Flickr indexing.
- `regions`: Holds a list of `Region` dataclasses:
  - `region_id`: Unique numeric ID of the region.
  - `image_id`: Unique numeric ID of the image.
  - `phrase`: Natural-language description of the region.
  - `x`: x coordinate of the bounding box's top left corner.
  - `y`: y coordinate of the bounding box's top left corner.
  - `width`: Bounding box width.
  - `height`: Bounding box height.

#### objects

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `image_id`: Unique numeric ID of the image.
- `url`: URL of the source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: ID mapping to MSCOCO indexing.
- `flickr_id`: ID mapping to Flickr indexing.
- `objects`: Holds a list of `Object` dataclasses:
  - `object_id`: Unique numeric ID of the object.
  - `x`: x coordinate of the bounding box's top left corner.
  - `y`: y coordinate of the bounding box's top left corner.
  - `w`: Bounding box width.
  - `h`: Bounding box height.
  - `names`: List of names associated with the object. This field can hold multiple values in the sense that multiple names are considered acceptable. For example: ['monitor', 'computer'] at https://cs.stanford.edu/people/rak248/VG_100K/3.jpg
  - `synsets`: List of `WordNet synsets`.

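Object boxes use the same top-left-plus-size convention as regions, just with `w`/`h` keys. A common operation on detection-style annotations is computing the overlap (intersection over union) between two such boxes; a minimal sketch (the helper name is ours, not part of the dataset API):

```python
def box_iou(a: dict, b: dict) -> float:
    """Intersection-over-union of two Visual Genome object boxes
    given as dicts with `x`, `y`, `w`, `h` keys."""
    ix = max(0, min(a["x"] + a["w"], b["x"] + b["w"]) - max(a["x"], b["x"]))
    iy = max(0, min(a["y"] + a["h"], b["y"] + b["h"]) - max(a["y"], b["y"]))
    inter = ix * iy
    union = a["w"] * a["h"] + b["w"] * b["h"] - inter
    return inter / union if union else 0.0

a = {"x": 0, "y": 0, "w": 10, "h": 10}
b = {"x": 5, "y": 0, "w": 10, "h": 10}
print(box_iou(a, b))  # → 0.3333333333333333
```
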
#### attributes

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `image_id`: Unique numeric ID of the image.
- `url`: URL of the source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: ID mapping to MSCOCO indexing.
- `flickr_id`: ID mapping to Flickr indexing.
- `attributes`: Holds a list of `Object` dataclasses:
  - `object_id`: Unique numeric ID of the object.
  - `x`: x coordinate of the bounding box's top left corner.
  - `y`: y coordinate of the bounding box's top left corner.
  - `w`: Bounding box width.
  - `h`: Bounding box height.
  - `names`: List of names associated with the object. This field can hold multiple values in the sense that multiple names are considered acceptable. For example: ['monitor', 'computer'] at https://cs.stanford.edu/people/rak248/VG_100K/3.jpg
  - `synsets`: List of `WordNet synsets`.
  - `attributes`: List of attributes associated with the object.

#### relationships

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `image_id`: Unique numeric ID of the image.
- `url`: URL of the source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: ID mapping to MSCOCO indexing.
- `flickr_id`: ID mapping to Flickr indexing.
- `relationships`: Holds a list of `Relationship` dataclasses:
  - `relationship_id`: Unique numeric ID of the relationship.
  - `predicate`: Predicate defining the relationship between a subject and an object.
  - `synsets`: List of `WordNet synsets`.
  - `subject`: `Object` dataclass. See the subsection on `objects`.
  - `object`: `Object` dataclass. See the subsection on `objects`.

#### question_answers

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `image_id`: Unique numeric ID of the image.
- `url`: URL of the source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: ID mapping to MSCOCO indexing.
- `flickr_id`: ID mapping to Flickr indexing.
- `qas`: Holds a list of `Question-Answering` dataclasses:
  - `qa_id`: Unique numeric ID of the question-answer pair.
  - `image_id`: Unique numeric ID of the image.
  - `question`: Question.
  - `answer`: Answer.
  - `q_objects`: List of `Object` dataclasses associated with the `question` field. See the subsection on `objects`.
  - `a_objects`: List of `Object` dataclasses associated with the `answer` field. See the subsection on `objects`.

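For question answering it is often convenient to flatten the per-image `qas` lists into one list of (image_id, question, answer) records. A minimal sketch over plain dicts shaped like the samples above (the helper name is ours):

```python
def flatten_qas(samples) -> list:
    """Flatten per-image `qas` lists into (image_id, question, answer) tuples."""
    records = []
    for sample in samples:
        for qa in sample["qas"]:
            records.append((qa["image_id"], qa["question"], qa["answer"]))
    return records

samples = [{"image_id": 1, "qas": [
    {"qa_id": 986768, "image_id": 1,
     "question": "What color is the clock?", "answer": "Green."}]}]
print(flatten_qas(samples))  # → [(1, 'What color is the clock?', 'Green.')]
```
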
### Data Splits

All of the data is contained in the training split.

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

From the paper:
> We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over
33,000 unique workers contributed to the dataset. The
dataset was collected over the course of 6 months after
15 months of experimentation and iteration on the data
representation. Approximately 800,000 Human Intelligence Tasks (HITs) were launched on AMT, where
each HIT involved creating descriptions, questions and
answers, or region graphs. Each HIT was designed such
that workers manage to earn anywhere between $6-$8
per hour if they work continuously, in line with ethical
research standards on Mechanical Turk (Salehi et al.,
2015). Visual Genome HITs achieved a 94.1% retention
rate, meaning that 94.1% of workers who completed one
of our tasks went ahead to do more. [...] 93.02% of workers contributed from the United States.
The majority of our workers were
between the ages of 25 and 34 years old. Our youngest
contributor was 18 years and the oldest was 68 years
old. We also had a near-balanced split of 54.15% male
and 45.85% female workers.

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

Visual Genome by Ranjay Krishna is licensed under a Creative Commons Attribution 4.0 International License.

### Citation Information

```bibtex
@article{Krishna2016VisualGC,
  title={Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations},
  author={Ranjay Krishna and Yuke Zhu and Oliver Groth and Justin Johnson and Kenji Hata and Joshua Kravitz and Stephanie Chen and Yannis Kalantidis and Li-Jia Li and David A. Shamma and Michael S. Bernstein and Li Fei-Fei},
  journal={International Journal of Computer Vision},
  year={2017},
  volume={123},
  pages={32-73},
  url={https://doi.org/10.1007/s11263-016-0981-7},
  doi={10.1007/s11263-016-0981-7}
}
```

### Contributions

Due to limitations of the dummy-data creation, we provide a `fix_generated_dummy_data.py` script that fixes the dataset in-place.

Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset.
visual_genome.py ADDED
@@ -0,0 +1,469 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Visual Genome dataset."""

import json
import os
import re
from collections import defaultdict
from typing import Any, Callable, Dict, Optional
from urllib.parse import urlparse

import datasets


logger = datasets.logging.get_logger(__name__)

_CITATION = """\
@article{Krishna2016VisualGC,
  title={Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations},
  author={Ranjay Krishna and Yuke Zhu and Oliver Groth and Justin Johnson and Kenji Hata and Joshua Kravitz and Stephanie Chen and Yannis Kalantidis and Li-Jia Li and David A. Shamma and Michael S. Bernstein and Li Fei-Fei},
  journal={International Journal of Computer Vision},
  year={2017},
  volume={123},
  pages={32-73},
  url={https://doi.org/10.1007/s11263-016-0981-7},
  doi={10.1007/s11263-016-0981-7}
}
"""

_DESCRIPTION = """\
Visual Genome enables the modeling of objects and relationships between objects.
It collects dense annotations of objects, attributes, and relationships within each image.
Specifically, the dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects.
"""

_HOMEPAGE = "https://homes.cs.washington.edu/~ranjay/visualgenome/"

_LICENSE = "Creative Commons Attribution 4.0 International License"

_BASE_IMAGE_URLS = {
    "https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip": "VG_100K",
    "https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip": "VG_100K_2",
}

_LATEST_VERSIONS = {
    "region_descriptions": "1.2.0",
    "objects": "1.4.0",
    "attributes": "1.2.0",
    "relationships": "1.4.0",
    "question_answers": "1.2.0",
    "image_metadata": "1.2.0",
}

# ---- Features ----

_BASE_IMAGE_METADATA_FEATURES = {
    "image_id": datasets.Value("int32"),
    "url": datasets.Value("string"),
    "width": datasets.Value("int32"),
    "height": datasets.Value("int32"),
    "coco_id": datasets.Value("int64"),
    "flickr_id": datasets.Value("int64"),
}

_BASE_SYNTET_FEATURES = {
    "synset_name": datasets.Value("string"),
    "entity_name": datasets.Value("string"),
    "entity_idx_start": datasets.Value("int32"),
    "entity_idx_end": datasets.Value("int32"),
}

_BASE_OBJECT_FEATURES = {
    "object_id": datasets.Value("int32"),
    "x": datasets.Value("int32"),
    "y": datasets.Value("int32"),
    "w": datasets.Value("int32"),
    "h": datasets.Value("int32"),
    "names": [datasets.Value("string")],
    "synsets": [datasets.Value("string")],
}

_BASE_QA_OBJECT_FEATURES = {
    "object_id": datasets.Value("int32"),
    "x": datasets.Value("int32"),
    "y": datasets.Value("int32"),
    "w": datasets.Value("int32"),
    "h": datasets.Value("int32"),
    "names": [datasets.Value("string")],
    "synsets": [datasets.Value("string")],
}

_BASE_QA_OBJECT = {
    "qa_id": datasets.Value("int32"),
    "image_id": datasets.Value("int32"),
    "question": datasets.Value("string"),
    "answer": datasets.Value("string"),
    "a_objects": [_BASE_QA_OBJECT_FEATURES],
    "q_objects": [_BASE_QA_OBJECT_FEATURES],
}

_BASE_REGION_FEATURES = {
    "region_id": datasets.Value("int32"),
    "image_id": datasets.Value("int32"),
    "phrase": datasets.Value("string"),
    "x": datasets.Value("int32"),
    "y": datasets.Value("int32"),
    "width": datasets.Value("int32"),
    "height": datasets.Value("int32"),
}

_BASE_RELATIONSHIP_FEATURES = {
    "relationship_id": datasets.Value("int32"),
    "predicate": datasets.Value("string"),
    "synsets": datasets.Value("string"),
    "subject": _BASE_OBJECT_FEATURES,
    "object": _BASE_OBJECT_FEATURES,
}

_NAME_VERSION_TO_ANNOTATION_FEATURES = {
    "region_descriptions": {
        "1.2.0": {"regions": [_BASE_REGION_FEATURES]},
        "1.0.0": {"regions": [_BASE_REGION_FEATURES]},
    },
    "objects": {
        "1.4.0": {"objects": [{**_BASE_OBJECT_FEATURES, "merged_object_ids": [datasets.Value("int32")]}]},
        "1.2.0": {"objects": [_BASE_OBJECT_FEATURES]},
        "1.0.0": {"objects": [_BASE_OBJECT_FEATURES]},
    },
    "attributes": {
        "1.2.0": {"attributes": [{**_BASE_OBJECT_FEATURES, "attributes": [datasets.Value("string")]}]},
        "1.0.0": {"attributes": [{**_BASE_OBJECT_FEATURES, "attributes": [datasets.Value("string")]}]},
    },
    "relationships": {
        "1.4.0": {
            "relationships": [
                {
                    **_BASE_RELATIONSHIP_FEATURES,
                    "subject": {**_BASE_OBJECT_FEATURES, "merged_object_ids": [datasets.Value("int32")]},
                    "object": {**_BASE_OBJECT_FEATURES, "merged_object_ids": [datasets.Value("int32")]},
                }
            ]
        },
        "1.2.0": {"relationships": [_BASE_RELATIONSHIP_FEATURES]},
        "1.0.0": {"relationships": [_BASE_RELATIONSHIP_FEATURES]},
    },
    "question_answers": {"1.2.0": {"qas": [_BASE_QA_OBJECT]}, "1.0.0": {"qas": [_BASE_QA_OBJECT]}},
}

# ----- Helpers -----


def _get_decompressed_filename_from_url(url: str) -> str:
    parsed_url = urlparse(url)
    compressed_filename = os.path.basename(parsed_url.path)

    # Remove the `.zip` suffix
    assert compressed_filename.endswith(".zip")
    uncompressed_filename = compressed_filename[:-4]

    # Remove the version infix, e.g. `objects_v1_4_0.json` -> `objects.json`
    unversioned_uncompressed_filename = re.sub(r"_v[0-9]+(?:_[0-9]+)?\.json$", ".json", uncompressed_filename)

    return unversioned_uncompressed_filename


def _get_local_image_path(img_url: str, folder_local_paths: Dict[str, str]) -> str:
    """
    Obtain the local image path given an image url.

    For example:
    Given `https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg` as an image url, this method returns the local path for that image.
    """
    matches = re.fullmatch(r"^https://cs.stanford.edu/people/rak248/(VG_100K(?:_2)?)/([0-9]+\.jpg)$", img_url)
    assert matches is not None, f"Got img_url: {img_url}, matched: {matches}"
    folder, filename = matches.group(1), matches.group(2)
    return os.path.join(folder_local_paths[folder], filename)

191
+ # ----- Annotation normalizers ----
192
+
193
+ _BASE_ANNOTATION_URL = "https://homes.cs.washington.edu/~ranjay/visualgenome/data/dataset"
194
+
195
+
196
+ def _normalize_region_description_annotation_(annotation: Dict[str, Any]) -> Dict[str, Any]:
197
+ """Normalizes region descriptions annotation in-place"""
198
+ # Some attributes annotations don't have an attribute field
199
+ for region in annotation["regions"]:
200
+ # `id` should be converted to `region_id`:
201
+ if "id" in region:
202
+ region["region_id"] = region["id"]
203
+ del region["id"]
204
+
205
+ # `image` should be converted to `image_id`
206
+ if "image" in region:
207
+ region["image_id"] = region["image"]
208
+ del region["image"]
209
+
210
+ return annotation
211
+
212
+
213
+ def _normalize_object_annotation_(annotation: Dict[str, Any]) -> Dict[str, Any]:
214
+ """Normalizes object annotation in-place"""
215
+ # Some attributes annotations don't have an attribute field
216
+ for object_ in annotation["objects"]:
217
+ # `id` should be converted to `object_id`:
218
+ if "id" in object_:
219
+ object_["object_id"] = object_["id"]
220
+ del object_["id"]
221
+
222
+ # Some versions of `object` annotations don't have `synsets` field.
223
+ if "synsets" not in object_:
224
+ object_["synsets"] = None
225
+
226
+ return annotation
227
+
228
+
229
+ def _normalize_attribute_annotation_(annotation: Dict[str, Any]) -> Dict[str, Any]:
230
+ """Normalizes attributes annotation in-place"""
231
+ # Some attributes annotations don't have an attribute field
232
+ for attribute in annotation["attributes"]:
233
+ # `id` should be converted to `object_id`:
234
+ if "id" in attribute:
235
+ attribute["object_id"] = attribute["id"]
236
+ del attribute["id"]
237
+
238
+ # `objects_names` should be convered to `names:
239
+ if "object_names" in attribute:
240
+ attribute["names"] = attribute["object_names"]
241
+ del attribute["object_names"]
242
+
243
+ # Some versions of `attribute` annotations don't have `synsets` field.
244
+ if "synsets" not in attribute:
245
+ attribute["synsets"] = None
246
+
247
+ # Some versions of `attribute` annotations don't have `attributes` field.
248
+ if "attributes" not in attribute:
249
+ attribute["attributes"] = None
250
+
251
+ return annotation
252
+
253
+
254
+ def _normalize_relationship_annotation_(annotation: Dict[str, Any]) -> Dict[str, Any]:
+     """Normalizes relationship annotation in-place"""
+     # Relationship subjects/objects carry a single `name` instead of a list of `names`.
+     for relationship in annotation["relationships"]:
+         # `id` should be converted to `relationship_id`:
+         if "id" in relationship:
+             relationship["relationship_id"] = relationship["id"]
+             del relationship["id"]
+ 
+         if "synsets" not in relationship:
+             relationship["synsets"] = None
+ 
+         subject = relationship["subject"]
+         object_ = relationship["object"]
+ 
+         for obj in [subject, object_]:
+             # `id` should be converted to `object_id`:
+             if "id" in obj:
+                 obj["object_id"] = obj["id"]
+                 del obj["id"]
+ 
+             # `name` should be converted to a single-element `names` list:
+             if "name" in obj:
+                 obj["names"] = [obj["name"]]
+                 del obj["name"]
+ 
+             if "synsets" not in obj:
+                 obj["synsets"] = None
+ 
+     return annotation
+ 
+ 
+ def _normalize_image_metadata_(image_metadata: Dict[str, Any]) -> Dict[str, Any]:
+     """Normalizes image metadata in-place"""
+     if "id" in image_metadata:
+         image_metadata["image_id"] = image_metadata["id"]
+         del image_metadata["id"]
+     return image_metadata
+ 
+ 
+ # Unknown config names fall back to an identity normalizer.
+ _ANNOTATION_NORMALIZER = defaultdict(lambda: lambda x: x)
+ _ANNOTATION_NORMALIZER.update(
+     {
+         "region_descriptions": _normalize_region_description_annotation_,
+         "objects": _normalize_object_annotation_,
+         "attributes": _normalize_attribute_annotation_,
+         "relationships": _normalize_relationship_annotation_,
+     }
+ )
+ 
+ # ---- Visual Genome loading script ----
+ 
+ 
+ class VisualGenomeConfig(datasets.BuilderConfig):
+     """BuilderConfig for Visual Genome."""
+ 
+     def __init__(self, name: str, version: Optional[str] = None, with_image: bool = True, **kwargs):
+         _version = _LATEST_VERSIONS[name] if version is None else version
+         _name = f"{name}_v{_version}"
+         super(VisualGenomeConfig, self).__init__(version=datasets.Version(_version), name=_name, **kwargs)
+         self._name_without_version = name
+         self.annotations_features = _NAME_VERSION_TO_ANNOTATION_FEATURES[self._name_without_version][
+             self.version.version_str
+         ]
+         self.with_image = with_image
+ 
+     @property
+     def annotations_url(self):
+         if self.version == _LATEST_VERSIONS[self._name_without_version]:
+             return f"{_BASE_ANNOTATION_URL}/{self._name_without_version}.json.zip"
+ 
+         major, minor = self.version.major, self.version.minor
+         if minor == 0:
+             return f"{_BASE_ANNOTATION_URL}/{self._name_without_version}_v{major}.json.zip"
+         else:
+             return f"{_BASE_ANNOTATION_URL}/{self._name_without_version}_v{major}_{minor}.json.zip"
+ 
+     @property
+     def image_metadata_url(self):
+         if self.version != _LATEST_VERSIONS["image_metadata"]:
+             logger.warning(
+                 f"Latest image metadata version is {_LATEST_VERSIONS['image_metadata']}. Trying to generate a dataset of version: {self.version}. Please double-check that the image data is unchanged between the two versions."
+             )
+         return f"{_BASE_ANNOTATION_URL}/image_data.json.zip"
+ 
+     @property
+     def features(self):
+         return datasets.Features(
+             {
+                 **({"image": datasets.Image()} if self.with_image else {}),
+                 **_BASE_IMAGE_METADATA_FEATURES,
+                 **self.annotations_features,
+             }
+         )
+ 
+ 
+ class VisualGenome(datasets.GeneratorBasedBuilder):
+     """Visual Genome dataset."""
+ 
+     BUILDER_CONFIG_CLASS = VisualGenomeConfig
+     BUILDER_CONFIGS = [
+         *[VisualGenomeConfig(name="region_descriptions", version=version) for version in ["1.0.0", "1.2.0"]],
+         *[VisualGenomeConfig(name="question_answers", version=version) for version in ["1.0.0", "1.2.0"]],
+         *[
+             VisualGenomeConfig(name="objects", version=version)
+             # TODO: add support for 1.4.0
+             for version in ["1.0.0", "1.2.0"]
+         ],
+         *[VisualGenomeConfig(name="attributes", version=version) for version in ["1.0.0", "1.2.0"]],
+         *[
+             VisualGenomeConfig(name="relationships", version=version)
+             # TODO: add support for 1.4.0
+             for version in ["1.0.0", "1.2.0"]
+         ],
+     ]
+ 
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=self.config.features,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+             version=self.config.version,
+         )
+ 
+     def _split_generators(self, dl_manager):
+         # Download image metadata.
+         image_metadatas_dir = dl_manager.download_and_extract(self.config.image_metadata_url)
+         image_metadatas_file = os.path.join(
+             image_metadatas_dir, _get_decompressed_filename_from_url(self.config.image_metadata_url)
+         )
+ 
+         # Download annotations.
+         annotations_dir = dl_manager.download_and_extract(self.config.annotations_url)
+         annotations_file = os.path.join(
+             annotations_dir, _get_decompressed_filename_from_url(self.config.annotations_url)
+         )
+ 
+         # Optionally download images.
+         if self.config.with_image:
+             image_folder_keys = list(_BASE_IMAGE_URLS.keys())
+             image_dirs = dl_manager.download_and_extract(image_folder_keys)
+             image_folder_local_paths = {
+                 _BASE_IMAGE_URLS[key]: os.path.join(dir_, _BASE_IMAGE_URLS[key])
+                 for key, dir_ in zip(image_folder_keys, image_dirs)
+             }
+         else:
+             image_folder_local_paths = None
+ 
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "image_folder_local_paths": image_folder_local_paths,
+                     "image_metadatas_file": image_metadatas_file,
+                     "annotations_file": annotations_file,
+                     "annotation_normalizer_": _ANNOTATION_NORMALIZER[self.config._name_without_version],
+                 },
+             ),
+         ]
+ 
+     def _generate_examples(
+         self,
+         image_folder_local_paths: Optional[Dict[str, str]],
+         image_metadatas_file: str,
+         annotations_file: str,
+         annotation_normalizer_: Callable[[Dict[str, Any]], Dict[str, Any]],
+     ):
+         with open(annotations_file, "r", encoding="utf-8") as fi:
+             annotations = json.load(fi)
+ 
+         with open(image_metadatas_file, "r", encoding="utf-8") as fi:
+             image_metadatas = json.load(fi)
+ 
+         assert len(image_metadatas) == len(annotations)
+         for idx, (image_metadata, annotation) in enumerate(zip(image_metadatas, annotations)):
+             # In-place operation to normalize image_metadata.
+             _normalize_image_metadata_(image_metadata)
+ 
+             # Normalize image_id across all annotations.
+             if "id" in annotation:
+                 # annotation["id"] corresponds to image_metadata["image_id"]
+                 assert (
+                     image_metadata["image_id"] == annotation["id"]
+                 ), f"Annotation doesn't match image metadata. Got image_metadata['image_id']: {image_metadata['image_id']} and annotation['id']: {annotation['id']}"
+                 del annotation["id"]
+             else:
+                 assert "image_id" in annotation
+                 assert (
+                     image_metadata["image_id"] == annotation["image_id"]
+                 ), f"Annotation doesn't match image metadata. Got image_metadata['image_id']: {image_metadata['image_id']} and annotation['image_id']: {annotation['image_id']}"
+ 
+             # Normalize image_url across all annotations.
+             if "image_url" in annotation:
+                 # annotation["image_url"] corresponds to image_metadata["url"]
+                 assert (
+                     image_metadata["url"] == annotation["image_url"]
+                 ), f"Annotation doesn't match image metadata. Got image_metadata['url']: {image_metadata['url']} and annotation['image_url']: {annotation['image_url']}"
+                 del annotation["image_url"]
+             elif "url" in annotation:
+                 # annotation["url"] corresponds to image_metadata["url"]
+                 assert (
+                     image_metadata["url"] == annotation["url"]
+                 ), f"Annotation doesn't match image metadata. Got image_metadata['url']: {image_metadata['url']} and annotation['url']: {annotation['url']}"
+ 
+             # In-place operation to normalize annotations.
+             annotation_normalizer_(annotation)
+ 
+             # Optionally add image to the annotation.
+             if image_folder_local_paths is not None:
+                 filepath = _get_local_image_path(image_metadata["url"], image_folder_local_paths)
+                 image_dict = {"image": filepath}
+             else:
+                 image_dict = {}
+ 
+             yield idx, {**image_dict, **image_metadata, **annotation}
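
The `_ANNOTATION_NORMALIZER` dispatch above relies on `defaultdict(lambda: lambda x: x)`, so that config names without a registered normalizer (e.g. `question_answers`) fall back to an identity function instead of raising a `KeyError`. A minimal, self-contained sketch of that pattern, using a hypothetical `_rename_id` normalizer that mirrors the `id` → `object_id` renaming done in the script:

```python
from collections import defaultdict
from typing import Any, Dict


# Hypothetical stand-in for one of the normalizers above: renames `id` to `object_id`.
def _rename_id(annotation: Dict[str, Any]) -> Dict[str, Any]:
    if "id" in annotation:
        annotation["object_id"] = annotation.pop("id")
    return annotation


# The outer lambda is the default factory; it returns an identity function,
# so looking up an unregistered key yields a no-op normalizer.
NORMALIZER = defaultdict(lambda: lambda x: x)
NORMALIZER["objects"] = _rename_id

print(NORMALIZER["objects"]({"id": 1}))           # renamed: {'object_id': 1}
print(NORMALIZER["question_answers"]({"id": 1}))  # identity: {'id': 1}
```

This keeps the generator code branch-free: `_ANNOTATION_NORMALIZER[self.config._name_without_version]` is always callable, whatever the config name.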