num_examples: 28473
download_size: 13295093966
dataset_size: 12966543857
license: other
task_categories:
- visual-question-answering
language:
- en
tags:
- documents
- vqa
- generative
- document understanding
size_categories:
- 100K<n<1M
---

# GenDocVQA-2024

This dataset provides a broad set of documents paired with questions about their contents. The questions are non-extractive: a model solving this task must be generative and produce the answers itself rather than copying spans from the document.

## Dataset Details

## Uses

### Direct Use

To load the dataset, use the following code:

```python
import datasets

ds = datasets.load_dataset('lenagibee/GenDocVQA2024')
```

`ds` is a dictionary with two splits: `train` and `validation`.

To open an image, use the following example:

```python
from PIL import Image

im = Image.open(ds['train'][0]['image_path'])
```
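
Each word in the `ocr` feature carries a bounding box, so individual words can be cropped out of the page image. A minimal sketch, assuming the bbox is in `[x0, y0, x1, y1]` pixel coordinates; the synthetic blank page below stands in for a real `image_path`:

```python
from PIL import Image

def crop_word(im, bbox):
    # Crop a single OCR word region; assumes bbox is [x0, y0, x1, y1].
    x0, y0, x1, y1 = bbox
    return im.crop((x0, y0, x1, y1))

# Synthetic blank page for illustration; with the real dataset this would be
# Image.open(ds['train'][i]['image_path']).
page = Image.new("RGB", (200, 100), "white")
word_img = crop_word(page, [10, 20, 60, 40])
print(word_img.size)  # (50, 20)
```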

The dataset generator script is available at:
https://huggingface.co/datasets/lenagibee/GenDocVQA2024/resolve/main/GenDocVQA2024.py?download=true

## Dataset Structure

All the necessary data is stored in the following archives:

* Images: https://huggingface.co/datasets/lenagibee/GenDocVQA2024/resolve/main/archives/gendocvqa2024_imgs.tar.gz?download=true
* OCR: https://huggingface.co/datasets/lenagibee/GenDocVQA2024/resolve/main/archives/gendocvqa2024_ocr.tar.gz?download=true
* Annotations: https://huggingface.co/datasets/lenagibee/GenDocVQA2024/resolve/main/archives/gendocvqa2024_annotations.tar.gz?download=true

Parsing of these files is already implemented in the attached dataset generator; loading and preprocessing the images is left to the user.

The train split contains 260814 questions and the dev (validation) split contains 28473.

### Dataset features

The features of the dataset are the following:

```python
features = datasets.Features(
    {
        "unique_id": datasets.Value("int64"),
        "image_path": datasets.Value("string"),
        "ocr": datasets.Sequence(
            feature={
                'text': datasets.Value("string"),
                'bbox': datasets.Sequence(datasets.Value("int64")),
                'block_id': datasets.Value("int64"),
                'text_id': datasets.Value("int64"),
                'par_id': datasets.Value("int64"),
                'line_id': datasets.Value("int64"),
                'word_id': datasets.Value("int64"),
            }
        ),
        "question": datasets.Value("string"),
        "answer": datasets.Sequence(datasets.Value("string")),
    }
)
```
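
Note that `datasets.Sequence` over a dict of features is exposed column-wise: the `ocr` field of a loaded example is a dict of equal-length lists, one entry per word. A hypothetical example (all values invented for illustration) and a sketch of turning it back into per-word records:

```python
# Hypothetical example following the schema above; values are invented.
example = {
    "unique_id": 0,
    "image_path": "imgs/train/doc0.png",
    "ocr": {
        "text": ["Total", "42"],
        "bbox": [[10, 20, 60, 34], [70, 20, 90, 34]],
        "block_id": [0, 0],
        "text_id": [0, 0],
        "par_id": [0, 0],
        "line_id": [0, 0],
        "word_id": [0, 1],
    },
    "question": "What is the total?",
    "answer": ["42"],
}

# Reassemble row-oriented word records from the column-oriented OCR field.
words = [dict(zip(example["ocr"], row)) for row in zip(*example["ocr"].values())]
print(words[1]["text"], words[1]["bbox"])  # 42 [70, 20, 90, 34]
```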

#### Feature descriptions

* `unique_id` - integer, the id of a question
* `image_path` - string, the path to the image for the question (includes the local download path)
* `ocr` - dictionary of lists, where each element describes a single word
  * `text` - string, the word itself
  * `bbox` - list of 4 integers, the bounding box of the word
  * `block_id` - integer, the index of the block containing the word
  * `text_id` - integer, the index of the set of paragraphs containing the word
  * `par_id` - integer, the index of the paragraph containing the word
  * `line_id` - integer, the index of the line containing the word
  * `word_id` - integer, the index of the word
* `question` - string containing the question
* `answer` - list of strings containing the answers to the question; can be empty (non-answerable)
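
Since `answer` may be an empty list, answerable and non-answerable questions can be separated with a simple truthiness check. A sketch over hypothetical in-memory examples (invented, not real data):

```python
# Hypothetical examples mimicking the schema; values are invented.
examples = [
    {"unique_id": 0, "question": "What city is shown?", "answer": ["Paris"]},
    {"unique_id": 1, "question": "What is the budget?", "answer": []},  # non-answerable
]

# An empty answer list marks a non-answerable question.
non_answerable_ids = [ex["unique_id"] for ex in examples if not ex["answer"]]
print(non_answerable_ids)  # [1]
```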

### Images

Inside the archive the images are divided into dev and train folders. They are regular images in PNG and JPG formats, and any image library can be used to process them.

### OCR

Like the images, the OCR files are divided into dev and train folders. They are stored as JSON files.

#### OCR JSON description

Each file is a list of elements, where each element holds the information about a single word extracted by the ABBYY FineReader OCR and contains the following fields, in order:

1. `block_id` - integer, the index of the block containing the word
2. `text_id` - integer, the index of the set of paragraphs containing the word
3. `par_id` - integer, the index of the paragraph containing the word
4. `line_id` - integer, the index of the line containing the word
5. `word_id` - integer, the index of the word
6. `bbox` - list of 4 integers, the bounding box of the word
7. `text` - string, the word itself
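
Assuming each JSON element stores these seven fields as a positional list, they can be mapped onto named keys in the order given above (the sample entry is invented for illustration):

```python
import json

# Field order of one OCR entry, as listed above.
OCR_FIELDS = ["block_id", "text_id", "par_id", "line_id", "word_id", "bbox", "text"]

def parse_ocr_entry(entry):
    # Map a positional OCR entry onto named fields.
    return dict(zip(OCR_FIELDS, entry))

# Invented sample entry, not taken from the real files.
raw = json.loads('[0, 0, 1, 2, 3, [10, 20, 55, 34], "invoice"]')
word = parse_ocr_entry(raw)
print(word["text"], word["bbox"])  # invoice [10, 20, 55, 34]
```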

### Annotations

The dev (validation) and train splits are located in the archive. The question lists are represented by CSV files with the following columns:

1. `unique_id` - the id of the question
2. `split`
3. `question`
4. `answer`
5. `image_filename` - the filename of the related image
6. `ocr_filename` - the filename of the JSON file containing the related OCR data
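
Assuming the CSV files carry a header row with the columns above (the sample row is invented), they can be read with the standard `csv` module:

```python
import csv
import io

# Invented sample in the column order listed above (not real annotation data);
# the real files would be opened from the extracted annotations archive.
sample = (
    "unique_id,split,question,answer,image_filename,ocr_filename\n"
    "0,train,What is shown in the chart?,\"['revenue']\",doc0.png,doc0.json\n"
)

rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]["question"])  # What is shown in the chart?
```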

## Dataset Creation

### Source Data

The data for this dataset was collected from the following datasets:

1. SlideVQA - Ryota Tanaka, Kyosuke Nishida, Kosuke Nishida, Taku Hasegawa, Itsumi Saito, and Kuniko Saito. "A Dataset for Document Visual Question Answering on Multiple Images". In Proc. of AAAI. 2023.
2. PDFVQA - Yihao Ding, Siwen Luo, Hyunsuk Chung, and Soyeon Caren Han. "PDFVQA: A New Dataset for Real-World VQA on PDF Documents". 2023.
3. InfographicVQA - Minesh Mathew, Viraj Bagal, Rubèn Pérez Tito, Dimosthenis Karatzas, Ernest Valveny, and C. V. Jawahar. "InfographicVQA". 2021.
4. TAT-DQA - Fengbin Zhu, Wenqiang Lei, Fuli Feng, Chao Wang, Haozhou Zhang, and Tat-Seng Chua. "Towards Complex Document Understanding by Discrete Reasoning". 2022.
5. DUDE - Jordy Van Landeghem, Rubén Tito, Łukasz Borchmann, Michał Pietruszka, Paweł Józiak, Rafał Powalski, Dawid Jurkiewicz, Mickaël Coustaty, Bertrand Ackaert, Ernest Valveny, Matthew Blaschko, Sien Moens, and Tomasz Stanisławek. "Document Understanding Dataset and Evaluation (DUDE)". 2023.

### Data Processing

The questions from each source dataset were filtered by question type, leaving only non-extractive questions related to a single page. The remaining questions were then paraphrased.

## Dataset Card Contact

Please feel free to reach out on the community page of this dataset or via the Telegram chat of the challenge:
https://t.me/gendocvqa2024