---
annotations_creators: []
language: en
size_categories:
- n<1K
task_categories:
- image-classification
- visual-question-answering
- visual-document-retrieval
task_ids: []
pretty_name: document-haystack-10pages
tags:
- fiftyone
- image
dataset_summary: >

  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 250
  samples.


  ## Installation


  If you haven't already, install FiftyOne:


  ```bash
  pip install -U fiftyone
  ```


  ## Usage


  ```python
  import fiftyone as fo
  from fiftyone.utils.huggingface import load_from_hub

  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
  dataset = load_from_hub("harpreetsahota/document-haystack-10pages")

  # Launch the App
  session = fo.launch_app(dataset)
  ```
license: cc-by-nc-4.0
---

# Dataset Card for document-haystack-10pages

![image/png](document_haystacks_fo.gif)

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 250 samples.

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

## Usage

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = load_from_hub("harpreetsahota/document-haystack-10pages")

# Launch the App
session = fo.launch_app(dataset)
```

## Dataset Details

### Dataset Description

Document Haystack is a comprehensive benchmark designed to evaluate the performance of Vision Language Models (VLMs) on long, visually complex documents. This FiftyOne dataset contains the 10-page subset, which serves as an entry point for testing retrieval capabilities on shorter documents.

The benchmark expands on the "Needle in a Haystack" concept by embedding needles (short key-value statements in pure text or as multimodal text+image snippets) within real-world documents. These needles test whether models can locate specific information hidden within complex documents with textual, visual, or mixed content.

**Key Features:**
- 25 real-world base documents (annual reports, financial filings, etc.)
- 10 pages per document variant
- 5 needles per document (strategically placed across pages)
- Two needle types: text-only and text+image
- 250 total samples (125 per needle type)
- 250 retrieval questions

- **Curated by:** Amazon AGI (Goeric Huybrechts, Srikanth Ronanki, Sai Muralidhar Jayanthi, Jack Fitzgerald, Srinivasan Veeravanallur)
- **Language:** English
- **License:** CC-BY-NC-4.0

### Dataset Sources

- **Original Repository:** https://github.com/amazon-science/document-haystack
- **Original Dataset:** https://huggingface.co/datasets/AmazonScience/document-haystack
- **Paper:** [Document Haystack: A Long Context Multimodal Image/Document Understanding Vision LLM Benchmark](https://arxiv.org/abs/2507.15882)

## Dataset Structure

### FiftyOne Schema

Each sample in this FiftyOne dataset represents a single page image with the following fields:

#### Sample-Level Fields

| Field | Type | Description |
|-------|------|-------------|
| `filepath` | string | Path to the page image (JPG, 200 DPI) |
| `document_name` | Classification | Name of the source document (e.g., "AIG", "AmericanAirlines") |
| `page_number` | int | Page number within the document (1-10) |
| `needle_type` | Classification | Type of needles: "text" or "text_image" |

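As a quick orientation, the sketch below reads these fields with the standard FiftyOne API, reusing the `dataset` loaded in the usage snippet above; treat it as illustrative rather than canonical:

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

dataset = load_from_hub("harpreetsahota/document-haystack-10pages")

# Inspect the sample-level fields on a single page image
sample = dataset.first()
print(sample.filepath)              # path to the page image
print(sample.document_name.label)   # e.g., "AIG"
print(sample.page_number)           # page index within the document
print(sample.needle_type.label)     # "text" or "text_image"

# How many pages carry each needle type?
print(dataset.count_values("needle_type.label"))
```
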
#### Needle Information Fields

These fields contain lists of information about the needles on each page:

| Field | Type | Description |
|-------|------|-------------|
| `needle_texts` | list[string] | Full needle statements (e.g., "The secret currency is a \"euro\".") |
| `needle_keys` | Classifications | Extracted keys (e.g., "currency", "sport") |
| `needle_answers` | Classifications | Extracted answers (e.g., "euro", "basketball") |
| `needle_questions` | list[string] | Questions for retrieving each needle |
| `needle_font_sizes` | list[int] | Font sizes of the rendered needles |
| `needle_text_colors` | list[string] | Text colors |
| `needle_bg_colors` | list[string] | Background colors |
| `needle_fonts` | list[string] | Font families |
| `needle_scales` | list[int] | Scale values (for text+image needles) |
| `needle_locations` | Keypoints | Spatial locations of needles with (x, y) coordinates in [0, 1] |

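These per-needle lists are assumed to be index-aligned (entry `i` of each list describes the same needle), which the schema implies but the card does not state outright. A minimal sketch under that assumption:

```python
from fiftyone import ViewField as F

# Keep only pages that actually contain at least one needle
pages_with_needles = dataset.match(F("needle_texts").length() > 0)

sample = pages_with_needles.first()

# Entry i of each list/label collection describes the same needle
for i, text in enumerate(sample.needle_texts):
    key = sample.needle_keys.classifications[i].label
    answer = sample.needle_answers.classifications[i].label
    question = sample.needle_questions[i]
    x, y = sample.needle_locations.keypoints[i].points[0]
    print(f"{question!r} -> {answer!r} (key={key}, location=({x:.2f}, {y:.2f}))")
```
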
### Needle Categories

Needles span diverse categories, including:
- Sports
- Animals
- Currencies
- Fruits
- Musical instruments
- Office supplies
- Flowers
- Landmarks
- And more

### Needle Format

Each needle follows the pattern: **"The secret KEY is VALUE."**

- **Text needles:** Both KEY and VALUE are rendered as text (e.g., "The secret sport is basketball.")
- **Text+image needles:** VALUE is shown as an image (e.g., "The secret sport is [image of a basketball]")

Questions follow the pattern: **"What is the secret KEY in the document?"**

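Since questions and answers are stored per page, a per-document evaluation set can be assembled by grouping samples. A minimal sketch, reusing the `dataset` object from the usage snippet above; the grouping key is illustrative:

```python
from collections import defaultdict

# Group (question, answer) pairs by document variant
qa_by_doc = defaultdict(list)
for sample in dataset.select_fields(
    ["document_name", "needle_type", "needle_questions", "needle_answers"]
):
    variant = (sample.document_name.label, sample.needle_type.label)
    for i, question in enumerate(sample.needle_questions or []):
        answer = sample.needle_answers.classifications[i].label
        qa_by_doc[variant].append((question, answer))

# e.g., all questions for the text-needle variant of the AIG document
print(qa_by_doc[("AIG", "text")])
```
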
## Uses

### Direct Use

This dataset is designed for:

1. **Evaluating VLM retrieval capabilities** - Test how well models can locate specific information within documents (see the sketch after this list)
2. **Benchmarking long-context understanding** - Even at 10 pages, this tests models' ability to process extended visual content
3. **Comparing text vs. multimodal retrieval** - Direct comparison between text-only and text+image needle performance
4. **Visual dataset exploration** - Use FiftyOne's visualization tools to understand needle placement patterns
5. **Model development** - Train and validate models for document understanding tasks

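The first use case requires wiring the dataset to an actual model. The sketch below shows one way to score needle retrieval end to end; `query_vlm` is a hypothetical placeholder (not part of FiftyOne or this dataset), and the substring check is a deliberately crude scoring rule:

```python
import fiftyone as fo
from fiftyone import ViewField as F
from fiftyone.utils.huggingface import load_from_hub

dataset = load_from_hub("harpreetsahota/document-haystack-10pages")

def query_vlm(image_paths, prompt):
    """Hypothetical stand-in for a real VLM call; replace with your client."""
    return ""  # always-wrong placeholder answer

correct = total = 0
for doc in dataset.distinct("document_name.label"):
    for needle_type in ("text", "text_image"):
        # All pages of this document variant, in reading order
        pages = dataset.match(
            (F("document_name.label") == doc)
            & (F("needle_type.label") == needle_type)
        ).sort_by("page_number")

        image_paths = pages.values("filepath")
        questions = [q for qs in pages.values("needle_questions") for q in (qs or [])]
        answers = [
            a
            for labels in pages.values("needle_answers.classifications.label")
            for a in (labels or [])
        ]

        # Give the model the whole document and ask each retrieval question
        for question, answer in zip(questions, answers):
            prediction = query_vlm(image_paths, question)
            correct += int(answer.lower() in prediction.lower())
            total += 1

print(f"Needle retrieval accuracy: {correct}/{total}")
```
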
### Out-of-Scope Use

- This dataset is for **research and evaluation purposes only** (CC-BY-NC-4.0 license)
- Not intended for commercial use without proper licensing
- Not suitable for training models to extract sensitive information from documents
- The 10-page subset is not representative of truly long-context scenarios (use the 50-200 page subsets for that)

## Dataset Creation

### Curation Rationale

The Document Haystack benchmark was created to address the lack of suitable benchmarks for evaluating VLMs on long, visually complex documents. While many benchmarks focus on perception tasks, processing long documents with both text and visual elements remains under-explored.

The 10-page subset serves as:
- An accessible entry point for initial model testing
- A faster iteration benchmark during development
- A baseline for comparison against performance on longer documents

### Source Data

#### Data Collection and Processing

- **Base documents:** Real-world documents including annual reports, financial filings, and corporate documents from 25 different organizations
- **Page extraction:** Documents converted to 200 DPI page images
- **Needle insertion:** Key-value pairs strategically placed across pages with controlled randomization
  - Needles placed in non-overlapping page ranges
  - Same locations used for both text and text+image variants
  - Varied visual properties (fonts, colors, sizes) to test robustness
- **Text extraction:** OCR/parsing for text-only variants

#### Who are the source data producers?

The original documents are publicly available corporate documents (annual reports, financial statements, etc.). The benchmark itself was created by researchers at Amazon AGI.

### Annotations

#### Annotation Process

Annotations include:
- **Needle placement metadata:** Precise coordinates, font properties, and colors
- **Ground-truth answers:** Extracted key-value pairs for each needle
- **Retrieval questions:** Automatically generated questions following the template "What is the secret KEY in the document?"

The placement is automated but controlled to ensure:
- Even distribution across document pages
- Non-overlapping placement
- Visibility and retrievability

#### Who are the annotators?

Annotations were generated automatically as part of the benchmark creation process by the Amazon AGI research team.

### Personal and Sensitive Information

The dataset consists of publicly available corporate documents (annual reports, financial filings). No personal or sensitive information beyond what is already public in these documents is present.

The needles themselves are synthetic key-value pairs and do not contain real sensitive information.

## Bias, Risks, and Limitations

**Limitations:**
- **Document diversity:** Limited to 25 base documents from the corporate/financial domain
- **English-only:** All documents and needles are in English
- **10-page constraint:** Not representative of truly long documents (50-200 pages)
- **Synthetic task:** Needle retrieval is a proxy for real document understanding
- **Visual styling:** Needles use specific visual properties that may not represent all real-world scenarios

**Biases:**
- The corporate-document focus may not generalize to other document types
- Needle categories reflect common Western concepts
- A single language limits cross-lingual evaluation

**Risks:**
- Models optimized for this benchmark may overfit to the needle retrieval pattern
- Performance on this task may not correlate with general document understanding

### Recommendations

Users should:
- Use this dataset alongside other document understanding benchmarks
- Test on multiple page lengths (5, 10, 25, 50+ pages) for comprehensive evaluation (see the sketch after this list)
- Consider domain shift when applying models to non-corporate documents
- Validate that good performance translates to real-world document tasks
- Be aware that this tests retrieval, not deeper comprehension or reasoning

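For the page-length sweep suggested above, sibling subsets can presumably be loaded the same way; the repo naming scheme below is an assumption based on this dataset's name, so verify it before relying on it:

```python
from fiftyone.utils.huggingface import load_from_hub

# Assumed naming scheme for the sibling subsets; adjust to the repos
# actually published alongside this one
for n_pages in (5, 10, 25, 50):
    repo = f"harpreetsahota/document-haystack-{n_pages}pages"
    subset = load_from_hub(repo)
    print(repo, len(subset), "samples")
```
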
## Citation

If you use this dataset, please cite the original Document Haystack paper:

**BibTeX:**

```bibtex
@article{huybrechts2025document,
  title={Document Haystack: A Long Context Multimodal Image/Document Understanding Vision LLM Benchmark},
  author={Huybrechts, Goeric and Ronanki, Srikanth and Jayanthi, Sai Muralidhar and Fitzgerald, Jack and Veeravanallur, Srinivasan},
  journal={arXiv preprint arXiv:2507.15882},
  year={2025}
}
```

**APA:**

Huybrechts, G., Ronanki, S., Jayanthi, S. M., Fitzgerald, J., & Veeravanallur, S. (2025). Document Haystack: A Long Context Multimodal Image/Document Understanding Vision LLM Benchmark. *arXiv preprint arXiv:2507.15882*.

## More Information

### FiftyOne Integration

This dataset leverages FiftyOne's capabilities for:
- Visual exploration of needle locations via Keypoints
- Filtering and querying by document properties
- Classification-based analysis of keys and answers
- Integration with FiftyOne Brain for embeddings and similarity (see the sketch after this list)

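A rough sketch of that workflow; the Brain calls use default models/backends and require their optional dependencies (e.g., a torch install):

```python
import fiftyone as fo
import fiftyone.brain as fob
from fiftyone import ViewField as F
from fiftyone.utils.huggingface import load_from_hub

dataset = load_from_hub("harpreetsahota/document-haystack-10pages")

# Embed the page images and index them for similarity search
fob.compute_similarity(dataset, brain_key="page_sim")

# 2D embedding visualization for the App's Embeddings panel
fob.compute_visualization(dataset, brain_key="page_viz")

# Slice the dataset by document before exploring in the App
aig_pages = dataset.match(F("document_name.label") == "AIG")
session = fo.launch_app(aig_pages)
```
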
### Related Datasets

The full Document Haystack benchmark includes variants with:
- 5, 10, 25, 50, 75, 100, 150, and 200 pages
- 400 document variants in total
- 8,250 retrieval questions