---
annotations_creators: []
language: en
size_categories:
- n<1K
task_categories:
- image-classification
- visual-question-answering
- visual-document-retrieval
task_ids: []
pretty_name: document-haystack-10pages
tags:
- fiftyone
- image
dataset_summary: >




  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 250
  samples.


  ## Installation


  If you haven't already, install FiftyOne:


  ```bash

  pip install -U fiftyone

  ```


  ## Usage


  ```python

  import fiftyone as fo

  from fiftyone.utils.huggingface import load_from_hub


  # Load the dataset

  # Note: other available arguments include 'max_samples', etc

  dataset = load_from_hub("harpreetsahota/document-haystack-10pages")


  # Launch the App

  session = fo.launch_app(dataset)

  ```
license: cc-by-nc-4.0
---

# Dataset Card for document-haystack-10pages

![image/gif](document_haystacks_fo.gif)


This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 250 samples. It's the 10-page subset of the [full dataset](https://huggingface.co/datasets/AmazonScience/document-haystack).

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

## Usage

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load the dataset
# Note: other available arguments include 'max_samples', etc.
dataset = load_from_hub("Voxel51/document-haystack-10pages")

# Launch the App
session = fo.launch_app(dataset)
```


## Dataset Details

### Dataset Description

Document Haystack is a comprehensive benchmark designed to evaluate the performance of Vision Language Models (VLMs) on long, visually complex documents. This FiftyOne dataset contains the 10-page subset, which serves as an entry point for testing retrieval capabilities on shorter documents.

The benchmark expands on the "Needle in a Haystack" concept by embedding needles (short key-value statements, rendered either as pure text or as multimodal text+image snippets) within real-world documents. These needles test whether models can locate specific information hidden within complex documents with textual, visual, or mixed content.

**Key Features:**
- 25 real-world base documents (annual reports, financial filings, etc.)
- 10 pages per document variant
- 10 needles per document (strategically placed across pages)
- Two needle types: text-only and text+image
- 250 total samples (125 per needle type)
- 250 retrieval questions

- **Curated by:** Amazon AGI (Goeric Huybrechts, Srikanth Ronanki, Sai Muralidhar Jayanthi, Jack Fitzgerald, Srinivasan Veeravanallur)
- **Language:** English
- **License:** CC-BY-NC-4.0

### Dataset Sources

- **Original Repository:** https://github.com/amazon-science/document-haystack
- **Original Dataset:** https://huggingface.co/datasets/AmazonScience/document-haystack
- **Paper:** [Document Haystack: A Long Context Multimodal Image/Document Understanding Vision LLM Benchmark](https://arxiv.org/abs/2507.15882)

## Dataset Structure

### FiftyOne Schema

Each sample in this FiftyOne dataset represents a single page image with the following fields:

#### Sample-Level Fields

| Field | Type | Description |
|-------|------|-------------|
| `filepath` | string | Path to the page image (JPG, 200 DPI) |
| `document_name` | Classification | Name of the source document (e.g., "AIG", "AmericanAirlines") |
| `page_number` | int | Page number within the document (1-10) |
| `needle_type` | Classification | Type of needles: "text" or "text_image" |

#### Needle Information Fields

These fields contain lists of information about needles on each page:

| Field | Type | Description |
|-------|------|-------------|
| `needle_texts` | list[string] | Full needle statements (e.g., "The secret currency is a \"euro\".") |
| `needle_keys` | Classifications | Extracted keys (e.g., "currency", "sport") |
| `needle_answers` | Classifications | Extracted answers (e.g., "euro", "basketball") |
| `needle_questions` | list[string] | Questions for retrieving each needle |
| `needle_font_sizes` | list[int] | Font sizes of needles |
| `needle_text_colors` | list[string] | Text colors |
| `needle_bg_colors` | list[string] | Background colors |
| `needle_fonts` | list[string] | Font families |
| `needle_scales` | list[int] | Scale values (for text+image needles) |
| `needle_locations` | Keypoints | Spatial locations of needles with (x, y) coordinates in [0, 1] |

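The `needle_locations` field stores keypoints with normalized `(x, y)` coordinates in `[0, 1]`, relative to the page image size. A minimal sketch of converting a normalized keypoint to pixel coordinates (the function name and the 1700x2200 page size, i.e. US-letter at 200 DPI, are illustrative assumptions, since actual page dimensions vary by document):

```python
def keypoint_to_pixels(x, y, width, height):
    """Convert a normalized keypoint (x, y in [0, 1]) to pixel coordinates."""
    return (round(x * width), round(y * height))

# e.g., a needle located at (0.25, 0.5) on a hypothetical 1700x2200 page image
px, py = keypoint_to_pixels(0.25, 0.5, 1700, 2200)
print(px, py)  # 425 1100
```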
### Needle Categories

Needles span diverse categories including:
- Sports
- Animals
- Currencies
- Fruits
- Musical instruments
- Office supplies
- Flowers
- Landmarks
- And more...

### Needle Format

Each needle follows the pattern: **"The secret KEY is VALUE."**

- **Text needles:** Both KEY and VALUE are text (e.g., "The secret sport is basketball.")
- **Text+Image needles:** VALUE is shown as an image (e.g., "The secret sport is [image of basketball]")

Questions follow the pattern: **"What is the secret KEY in the document?"**
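
Since both needle statements and questions are templated, they can be parsed and generated mechanically. The helpers below are a hypothetical sketch based on the examples in this card; the exact article ("a"/"an") and quoting conventions in the real data may differ:

```python
import re

# Matches "The secret KEY is VALUE." with an optional article and optional quotes
NEEDLE_RE = re.compile(r'^The secret (?P<key>.+?) is (?:a |an )?"?(?P<value>[^".]+)"?\.$')

def parse_needle(text):
    """Extract (key, value) from a needle statement, or raise if it doesn't match."""
    m = NEEDLE_RE.match(text)
    if m is None:
        raise ValueError(f"not a needle statement: {text!r}")
    return m.group("key"), m.group("value")

def question_for(key):
    """Generate the retrieval question for a needle key, following the template."""
    return f"What is the secret {key} in the document?"

print(parse_needle('The secret currency is a "euro".'))  # ('currency', 'euro')
print(question_for("sport"))  # What is the secret sport in the document?
```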

## Uses

### Direct Use

This dataset is designed for:

1. **Evaluating VLM retrieval capabilities** - Test how well models can locate specific information within documents
2. **Benchmarking long-context understanding** - Even at 10 pages, this tests models' ability to process extended visual content
3. **Comparing text vs. multimodal retrieval** - Direct comparison between text-only and text+image needle performance
4. **Visual dataset exploration** - Use FiftyOne's visualization tools to understand needle placement patterns
5. **Model development** - Train and validate models for document understanding tasks
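
For use case 1, a simple way to score model answers against the ground-truth `needle_answers` is normalized exact match. This is an illustrative sketch, not the official scoring from the Document Haystack paper, which may apply different matching rules:

```python
def exact_match_accuracy(predictions, ground_truths):
    """Fraction of questions where the predicted answer matches the needle value,
    after lowercasing and stripping whitespace and trailing periods."""
    def norm(s):
        return s.strip().lower().rstrip(".")
    correct = sum(norm(p) == norm(g) for p, g in zip(predictions, ground_truths))
    return correct / len(ground_truths)

preds = ["Euro", "basketball.", "violin"]
gold = ["euro", "basketball", "piano"]
print(exact_match_accuracy(preds, gold))  # 2 of 3 correct
```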

### Out-of-Scope Use

- This dataset is for **research and evaluation purposes only** (CC-BY-NC-4.0 license)
- Not intended for commercial use without proper licensing
- Not suitable for training models to extract sensitive information from documents
- The 10-page subset is not representative of truly long-context scenarios (use the 50-200 page subsets for that)

## Dataset Creation

### Curation Rationale

The Document Haystack benchmark was created to address the lack of suitable benchmarks for evaluating VLMs on long, visually complex documents. While many benchmarks focus on perception tasks, processing long documents with both text and visual elements remains under-explored.

The 10-page subset serves as:
- An accessible entry point for initial model testing
- A faster iteration benchmark for development
- A baseline for comparing against longer document performance

### Source Data

#### Data Collection and Processing

- **Base documents:** Real-world documents including annual reports, financial filings, and corporate documents from 25 different organizations
- **Page extraction:** Documents converted to 200 DPI page images
- **Needle insertion:** Key-value pairs strategically placed across pages with controlled randomization
  - Needles placed in non-overlapping page ranges
  - Same locations used for both text and text+image variants
  - Various visual properties (fonts, colors, sizes) to test robustness
- **Text extraction:** OCR/parsing for text-only variants

#### Who are the source data producers?

The original documents are publicly available corporate documents (annual reports, financial statements, etc.). The benchmark itself was created by researchers at Amazon AGI.

### Annotations

#### Annotation Process

Annotations include:
- **Needle placement metadata:** Precise coordinates, font properties, colors
- **Ground truth answers:** Extracted key-value pairs for each needle
- **Retrieval questions:** Automatically generated questions following the template "What is the secret KEY in the document?"

The placement is automated but controlled to ensure:
- Even distribution across document pages
- Non-overlapping placement
- Visibility and retrievability
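
The distribution constraints above can be sketched as: divide the document's pages into contiguous, non-overlapping ranges, one per needle, and pick a page within each range. This is an illustrative reconstruction under those stated constraints, not the benchmark's actual placement code:

```python
import random

def assign_needle_pages(num_pages, num_needles, seed=0):
    """Assign each needle to a page so that needles fall in non-overlapping,
    evenly distributed page ranges (illustrative, not the official procedure)."""
    rng = random.Random(seed)
    pages = []
    for i in range(num_needles):
        # Contiguous range of pages reserved for needle i
        start = i * num_pages // num_needles
        end = (i + 1) * num_pages // num_needles
        pages.append(rng.randrange(start, end) + 1)  # 1-indexed page numbers
    return pages

# With 10 pages and 10 needles, each needle gets its own page
print(assign_needle_pages(num_pages=10, num_needles=10))  # [1, 2, ..., 10]
```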

#### Who are the Annotators?

Annotations were automatically generated as part of the benchmark creation process by the Amazon AGI research team.

### Personal and Sensitive Information

The dataset consists of publicly available corporate documents (annual reports, financial filings). No personal or sensitive information beyond what is publicly available in these corporate documents is present.

Needles are synthetic key-value pairs and do not contain real sensitive information.

## Bias, Risks, and Limitations

**Limitations:**
- **Document diversity:** Limited to 25 base documents from corporate/financial domain
- **English-only:** All documents and needles are in English
- **10-page constraint:** Not representative of truly long documents (50-200 pages)
- **Synthetic task:** Needle retrieval is a proxy task for real document understanding
- **Visual styling:** Needles use specific visual properties that may not represent all real-world scenarios

**Biases:**
- Corporate document focus may not generalize to other document types
- Needle categories reflect common Western concepts
- Single language limits cross-lingual evaluation

**Risks:**
- Models optimized for this benchmark may overfit to the needle retrieval pattern
- Performance on this task may not correlate with general document understanding

### Recommendations

Users should:
- Use this dataset alongside other document understanding benchmarks
- Test on multiple page lengths (5, 10, 25, 50+ pages) for comprehensive evaluation
- Consider domain shift when applying models to non-corporate documents
- Validate that good performance translates to real-world document tasks
- Be aware this tests retrieval, not deeper comprehension or reasoning

## Citation

If you use this dataset, please cite the original Document Haystack paper:

**BibTeX:**

```bibtex
@article{huybrechts2025document,
  title={Document Haystack: A Long Context Multimodal Image/Document Understanding Vision LLM Benchmark},
  author={Huybrechts, Goeric and Ronanki, Srikanth and Jayanthi, Sai Muralidhar and Fitzgerald, Jack and Veeravanallur, Srinivasan},
  journal={arXiv preprint arXiv:2507.15882},
  year={2025}
}
```

**APA:**

Huybrechts, G., Ronanki, S., Jayanthi, S. M., Fitzgerald, J., & Veeravanallur, S. (2025). Document Haystack: A Long Context Multimodal Image/Document Understanding Vision LLM Benchmark. arXiv preprint arXiv:2507.15882.

## More Information

### FiftyOne Integration

This dataset leverages FiftyOne's capabilities for:
- Visual exploration of needle locations via Keypoints
- Filtering and querying by document properties
- Classification-based analysis of keys and answers
- Integration with FiftyOne Brain for embeddings and similarity

### Related Datasets

The full Document Haystack benchmark includes variants with:
- 5, 10, 25, 50, 75, 100, 150, and 200 pages
- 400 document variants total
- 8,250 retrieval questions