HayatoHongo and yasumasaonoe committed on
Commit 88f71c6 · verified · 0 Parent(s)

Duplicate from google/docci

Co-authored-by: Yasumasa Onoe <yasumasaonoe@users.noreply.huggingface.co>

Files changed (3):
  1. .gitattributes +56 -0
  2. README.md +155 -0
  3. docci.py +153 -0
.gitattributes ADDED
@@ -0,0 +1,56 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ docci_descriptions.jsonlines filter=lfs diff=lfs merge=lfs -text
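Each line in the file above follows gitattributes syntax: a path pattern followed by whitespace-separated attributes (`key=value` pairs, bare flags, or `-flag` to unset a flag). A minimal sketch of parsing one such line (helper name hypothetical):

```python
def parse_gitattributes_line(line: str) -> tuple[str, dict]:
    """Split a .gitattributes line into (pattern, {attribute: value})."""
    pattern, *attrs = line.split()
    parsed = {}
    for attr in attrs:
        if "=" in attr:
            key, value = attr.split("=", 1)  # key=value pair, e.g. filter=lfs
            parsed[key] = value
        elif attr.startswith("-"):
            parsed[attr[1:]] = False  # unset flag, e.g. -text
        else:
            parsed[attr] = True  # bare flag, e.g. text

    return pattern, parsed

print(parse_gitattributes_line("*.parquet filter=lfs diff=lfs merge=lfs -text"))
# ('*.parquet', {'filter': 'lfs', 'diff': 'lfs', 'merge': 'lfs', 'text': False})
```

Here `-text` disables Git's text handling so LFS-tracked files are never newline-normalized.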
README.md ADDED
@@ -0,0 +1,155 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ - crowdsourced
+ language:
+ - en
+ language_creators:
+ - other
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ pretty_name: DOCCI
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ tags: []
+ task_categories:
+ - text-to-image
+ - image-to-text
+ task_ids:
+ - image-captioning
+ ---
+
+ # Dataset Card for DOCCI
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://google.github.io/docci
+ - **Paper:** [arXiv](https://arxiv.org/pdf/2404.19753)
+ - **Data Explorer:** [Check images and descriptions](https://google.github.io/docci/viz.html?c=&p=1)
+ - **Point of Contact:** docci-dataset@google.com
+ - **Report an Error:** [Google Forms](https://forms.gle/v8sUoXWHvuqrWyfe9)
+
+ ### Dataset Summary
+
+ DOCCI (Descriptions of Connected and Contrasting Images) is a collection of images paired with detailed descriptions. The descriptions explain the key elements of the images, as well as secondary information such as background, lighting, and settings. The images were taken specifically to help assess the precise visual properties of images. DOCCI also includes many related images that differ from one another in key ways. All descriptions are manually annotated to ensure they adequately distinguish each image from its counterparts.
+
+ ### Supported Tasks
+
+ Text-to-Image and Image-to-Text generation
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```
+ {
+   'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1536x2048>,
+   'example_id': 'qual_dev_00000',
+   'description': 'An indoor angled down medium close-up front view of a real sized stuffed dog with white and black colored fur wearing a blue hard hat with a light on it. A couple inches to the right of the dog is a real sized black and white penguin that is also wearing a blue hard hat with a light on it. The dog is sitting, and is facing slightly towards the right while looking to its right with its mouth slightly open, showing its pink tongue. The dog and penguin are placed on a gray and white carpet, and placed against a white drawer that has a large gray cushion on top of it. Behind the gray cushion is a transparent window showing green trees on the outside.'
+ }
+ ```
+
+ ### Data Fields
+
+ Name | Explanation
+ --- | ---
+ `image` | The image, decoded as a PIL image (`PIL.JpegImagePlugin.JpegImageFile`).
+ `example_id` | The unique ID of an example, following the format `<SPLIT_NAME>_<EXAMPLE_NUMBER>`.
+ `description` | Text description of the associated image.
+
+ ### Data Splits
+
+ Dataset | Train | Test | Qual Dev | Qual Test
+ --- | ---: | ---: | ---: | ---:
+ DOCCI | 9,647 | 5,000 | 100 | 100
+ DOCCI-AAR | 4,932 | 5,000 | -- | --
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ DOCCI is designed as an evaluation dataset for both text-to-image (T2I) and image-to-text (I2T) generation. Please see our paper for more details.
+
+ ### Source Data
+
+ #### Initial Data Collection
+
+ All images were taken by one of the authors and their family.
+
+ ### Annotations
+
+ #### Annotation process
+
+ All text descriptions were written by human annotators.
+ We do not rely on any automated process in our data annotation pipeline.
+ Please see Appendix A of [our paper](https://arxiv.org/pdf/2404.19753) for details about image curation.
+
+ ### Personal and Sensitive Information
+
+ We manually reviewed all images for personally identifiable information (PII), removing some images and blurring detected faces, phone numbers, and URLs to protect privacy.
+ For text descriptions, we instructed annotators to exclude any PII, such as people's names, phone numbers, and URLs.
+ After the annotation phase, we employed automatic tools to scan for PII, ensuring the descriptions remained free of such information.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ CC BY 4.0
+
+ ### Citation Information
+
+ ```
+ @inproceedings{OnoeDocci2024,
+   author    = {Yasumasa Onoe and Sunayana Rane and Zachary Berger and Yonatan Bitton and Jaemin Cho and Roopal Garg and
+                Alexander Ku and Zarana Parekh and Jordi Pont-Tuset and Garrett Tanzer and Su Wang and Jason Baldridge},
+   title     = {{DOCCI: Descriptions of Connected and Contrasting Images}},
+   booktitle = {ECCV},
+   year      = {2024}
+ }
+ ```
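As the Data Fields table in the card notes, `example_id` encodes the split name and a zero-padded index. A minimal sketch of taking one apart (helper name hypothetical; note the split name itself may contain underscores, e.g. `qual_dev`, so only the last underscore separates the index):

```python
def parse_example_id(example_id: str) -> tuple[str, int]:
    """Split an ID like 'qual_dev_00000' into ('qual_dev', 0)."""
    # Split on the last underscore only, since split names such as
    # 'qual_dev' contain underscores themselves.
    split_name, number = example_id.rsplit("_", 1)
    return split_name, int(number)

print(parse_example_id("qual_dev_00000"))  # ('qual_dev', 0)
print(parse_example_id("train_09646"))     # ('train', 9646)
```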
docci.py ADDED
@@ -0,0 +1,153 @@
+ # coding=utf-8
+ # Copyright 2022 the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ import datasets
+ import glob
+ import json
+ import os
+
+ from huggingface_hub import hf_hub_url
+
+
+ _DESCRIPTION = """
+ DOCCI (Descriptions of Connected and Contrasting Images) is a collection of images paired with detailed descriptions. The descriptions explain the key elements of the images, as well as secondary information such as background, lighting, and settings. The images were taken specifically to help assess the precise visual properties of images. DOCCI also includes many related images that differ from one another in key ways. All descriptions are manually annotated to ensure they adequately distinguish each image from its counterparts.
+ """
+
+ _HOMEPAGE = "https://google.github.io/docci/"
+
+ _LICENSE = "CC BY 4.0"
+
+ _URL = "https://storage.googleapis.com/docci/data/"
+
+ _URLS = {
+     "descriptions": _URL + "docci_descriptions.jsonlines",
+     "images": _URL + "docci_images.tar.gz",
+ }
+
+ _URL_AAR = {
+     "images": _URL + "docci_images_aar.tar.gz"
+ }
+
+ _FEATURES_DOCCI = datasets.Features(
+     {
+         "image": datasets.Image(),
+         "example_id": datasets.Value('string'),
+         "description": datasets.Value('string'),
+     }
+ )
+
+ _FEATURES_DOCCI_AAR = datasets.Features(
+     {
+         "image": datasets.Image(),
+         "example_id": datasets.Value('string'),
+     }
+ )
+
+
+ class DOCCI(datasets.GeneratorBasedBuilder):
+     """DOCCI"""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="docci", version=VERSION, description="DOCCI images and descriptions"),
+         datasets.BuilderConfig(name="docci_aar", version=VERSION, description="DOCCI-AAR images"),
+     ]
+
+     DEFAULT_CONFIG_NAME = "docci"
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             features=_FEATURES_DOCCI if self.config.name == 'docci' else _FEATURES_DOCCI_AAR,
+             homepage=_HOMEPAGE,
+             description=_DESCRIPTION,
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         if self.config.name == 'docci':
+             data = dl_manager.download_and_extract(_URLS)
+             return [
+                 datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={'data': data, 'split': 'train'}),
+                 datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={'data': data, 'split': 'test'}),
+                 datasets.SplitGenerator(name=datasets.Split("qual_dev"), gen_kwargs={'data': data, 'split': 'qual_dev'}),
+                 datasets.SplitGenerator(name=datasets.Split("qual_test"), gen_kwargs={'data': data, 'split': 'qual_test'}),
+             ]
+         elif self.config.name == 'docci_aar':
+             data = dl_manager.download_and_extract(_URL_AAR)
+             return [
+                 datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={'data': data, 'split': 'train'}),
+                 datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={'data': data, 'split': 'test'}),
+             ]
+
+     def _generate_examples(self, data, split):
+         if self.config.name == "docci":
+             return self._generate_examples_docci(data, split)
+         elif self.config.name == "docci_aar":
+             return self._generate_examples_docci_aar(data, split)
+
+     def _generate_examples_docci(self, data, split):
+         with open(data["descriptions"], "r") as f:
+             examples = [json.loads(l.strip()) for l in f]
+
+         for ex in examples:
+             if split == "train":
+                 if not (ex["split"] == "train" and ex['example_id'].startswith("train")):
+                     continue
+             elif split == "test":
+                 if not (ex["split"] == "test" and ex['example_id'].startswith("test")):
+                     continue
+             elif split == "qual_dev":
+                 if not (ex["split"] == "qual_dev" and ex['example_id'].startswith("qual_dev")):
+                     continue
+             elif split == "qual_test":
+                 if not (ex["split"] == "qual_test" and ex['example_id'].startswith("qual_test")):
+                     continue
+
+             image_path = os.path.join(data["images"], "images", ex["image_file"])
+
+             _ex = {
+                 "image": image_path,
+                 "example_id": ex["example_id"],
+                 "split": ex["split"],
+                 "image_file": ex["image_file"],
+                 "description": ex["description"],
+             }
+
+             yield _ex["example_id"], _ex
+
+     def _generate_examples_docci_aar(self, data, split):
+         image_files = glob.glob(os.path.join(data["images"], "images_aar", "*.jpg"))
+
+         for image_path in image_files:
+             example_id = os.path.splitext(os.path.basename(image_path))[0]
+
+             if split == "train":
+                 if not example_id.startswith("aar_train"):
+                     continue
+             elif split == "test":
+                 if not example_id.startswith("aar_test"):
+                     continue
+
+             _ex = {
+                 "image": image_path,
+                 "example_id": example_id,
+                 "split": split,
+                 "image_file": os.path.basename(image_path),
+             }
+
+             yield _ex["example_id"], _ex
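For reference, the per-split filtering repeated branch by branch in `_generate_examples_docci` above collapses to a single predicate. A self-contained sketch of the same logic (function name hypothetical; the sample records are illustrative, not taken from the dataset):

```python
def keep_example(ex: dict, split: str) -> bool:
    # Mirrors the script's filter: the record's declared split must match
    # the requested one, AND the example_id must carry that split's prefix.
    return ex["split"] == split and ex["example_id"].startswith(split)

examples = [
    {"example_id": "train_00000", "split": "train"},
    {"example_id": "test_00000", "split": "test"},
    {"example_id": "qual_dev_00000", "split": "qual_dev"},
]

print([ex["example_id"] for ex in examples if keep_example(ex, "train")])
# ['train_00000']
```

Checking both conditions means a record whose declared `split` field and `example_id` prefix disagree is silently dropped rather than assigned to either split.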