---
license: cc-by-4.0
task_categories:
- image-classification
- visual-question-answering
tags:
- document-understanding
- resolution-selection
- multi-resolution
- vision-language
- document-vqa
language:
- en
size_categories:
- 100K<n<1M
source_datasets:
- textvqa
- docvqa
- chartqa
- infographicvqa
- hme100k
---
# Hardness Data Mix - Resolution Sufficiency Dataset
A large-scale dataset of document images with labels indicating the minimum resolution required to accurately answer questions about those documents.
## Dataset Description
This dataset contains 81,924 document image-question pairs labeled with resolution sufficiency information. Each sample is annotated with a "hardness" label indicating the minimum resolution level needed to answer questions about that document accurately.
### Dataset Summary
- **Total Samples**: 81,924
- **Image Formats**: JPEG, PNG
- **Resolutions Available**: Low (384×384), Medium (512×512), High (768×768+)
- **Features**: Multi-path image storage (low, mid, high resolution versions)
- **Languages**: English
- **Domains**: Mixed document types (text, charts, infographics, documents)
### Key Statistics
```
Class Distribution:
Class 0 (Low res sufficient): 38,537 samples (47.0%)
Class 1 (Medium res needed): 19,929 samples (24.3%)
Class 2 (High res required): 23,458 samples (28.6%)
Total Size: ~4.92 MB (parquet format)
Average Sample Size: ~60 B (rows store paths and questions, not image bytes)
```
## Dataset Fields
| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique sample identifier |
| `question` | string | Question about the document |
| `low_path` | string | Path to low-resolution image (384×384) |
| `mid_path` | string | Path to medium-resolution image (512×512) |
| `high_path` | string | Path to high-resolution image (768×768+) |
| `hard` | int | Label: 0=low res enough, 1=medium needed, 2=high needed |
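Given these fields, each record can be routed to the cheapest image labeled as sufficient. The snippet below is a minimal sketch; the sample dict is illustrative, and real samples come from the dataset:

```python
# Map the `hard` label to the corresponding image-path field (see table above).
HARD_TO_PATH = {0: "low_path", 1: "mid_path", 2: "high_path"}

def select_image_path(sample: dict) -> str:
    """Return the path of the cheapest resolution labeled as sufficient."""
    return sample[HARD_TO_PATH[sample["hard"]]]

# Illustrative sample; field names follow the dataset schema.
sample = {
    "id": "demo-0001",
    "question": "What is the title of the document?",
    "low_path": "images/low/demo-0001.jpg",
    "mid_path": "images/mid/demo-0001.jpg",
    "high_path": "images/high/demo-0001.jpg",
    "hard": 1,
}
print(select_image_path(sample))  # images/mid/demo-0001.jpg
```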
## Data Sources
The dataset is a curated mix from multiple established VQA and document understanding benchmarks:
### Source Datasets
1. **TextVQA** (~25%)
   - Text-rich images from scenes and documents
   - Focus on reading and understanding text in images
2. **DocVQA** (~30%)
   - Document-focused question answering
   - Scanned document images
3. **ChartQA** (~15%)
   - Charts and figure understanding
   - Questions about data visualization
4. **InfographicVQA** (~20%)
   - Complex infographic understanding
   - Multi-element visual reasoning
5. **HME100K** (~10%)
   - Handwritten mathematical expressions
   - Document analysis
## Labeling Strategy
Each sample was labeled based on:
1. **Resolution Effectiveness Analysis**: Performance of VLMs at each resolution level
2. **Question Complexity**: Type and difficulty of the question
3. **Image Content**: Visual elements requiring high resolution
4. **Error Analysis**: Where models fail at lower resolutions
### Class Definitions
- **Class 0 (Low - 384×384)**: the VLM achieves ≥95% accuracy at low resolution
- **Class 1 (Medium - 512×512)**: the accuracy threshold is first met at medium resolution
- **Class 2 (High - 768×768+)**: the accuracy threshold is met only at high resolution
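The class definitions above amount to a threshold check over per-resolution accuracies. The sketch below is an illustrative reconstruction, not the actual labeling code: the 0.95 threshold comes from the Class 0 definition, and reusing the same threshold for the other classes is an assumption:

```python
def hardness_label(acc_low: float, acc_mid: float, acc_high: float,
                   threshold: float = 0.95) -> int:
    """Assign the smallest resolution class whose accuracy meets the threshold.

    The 0.95 threshold matches the Class 0 definition; applying it to the
    other classes is an assumption made for illustration.
    """
    if acc_low >= threshold:
        return 0  # low resolution (384x384) is sufficient
    if acc_mid >= threshold:
        return 1  # medium resolution (512x512) is needed
    return 2      # high resolution (768x768+) is required

print(hardness_label(0.97, 0.98, 0.99))  # 0
print(hardness_label(0.80, 0.96, 0.99))  # 1
print(hardness_label(0.60, 0.70, 0.92))  # 2
```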
## Usage
### Loading with Hugging Face Datasets
```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("Kimhi/hardness_data_mix")

# Access the first available split (split names may vary; print(dataset) to check)
split_name = next(iter(dataset))
data = dataset[split_name]

# Display a sample
print(data[0])
```
### Loading with Pandas
```python
import pandas as pd
# Load parquet file
df = pd.read_parquet("hardness_data_mix.parquet")
# Inspect
print(f"Shape: {df.shape}")
print(f"Columns: {df.columns.tolist()}")
print(df.head())
# Get class distribution
print(df['hard'].value_counts().sort_index())
```
### Use in Training
```python
import pandas as pd
from sklearn.model_selection import train_test_split
# Load data
df = pd.read_parquet("hardness_data_mix.parquet")
# Split
train_df, val_df = train_test_split(
    df,
    test_size=0.1,
    stratify=df['hard'],
    random_state=42
)
# Use with training scripts
train_df.to_parquet("train_data.parquet")
val_df.to_parquet("val_data.parquet")
```
## Dataset Applications
This dataset is designed for:
1. **Resolution Selection Research**
   - Training classifiers to predict required resolution
   - Understanding resolution vs. accuracy tradeoffs
2. **Efficient VLM Inference**
   - Optimizing multi-resolution inference
   - Reducing computational costs
   - Adaptive resolution selection
3. **Model Benchmarking**
   - Evaluating VLM robustness at different resolutions
   - Comparing resolution handling strategies
4. **Academic Research**
   - Understanding visual information requirements
   - Document understanding challenges
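For the adaptive-selection use case, inference can route each input through a predicted resolution level. A minimal sketch, assuming a hypothetical `predict_hardness` classifier and `run_vlm` function (neither ships with this dataset; both are stubbed here):

```python
from typing import Callable

# Order matches the `hard` label: 0 -> low, 1 -> mid, 2 -> high.
RESOLUTION_FIELDS = ["low_path", "mid_path", "high_path"]

def adaptive_answer(sample: dict,
                    predict_hardness: Callable[[str, str], int],
                    run_vlm: Callable[[str, str], str]) -> str:
    """Predict the required resolution from the cheap view, then answer there."""
    level = predict_hardness(sample["question"], sample["low_path"])
    image_path = sample[RESOLUTION_FIELDS[level]]
    return run_vlm(sample["question"], image_path)

# Toy usage with stub functions standing in for real models:
answer = adaptive_answer(
    {"question": "Total revenue?", "low_path": "low.jpg",
     "mid_path": "mid.jpg", "high_path": "high.jpg"},
    predict_hardness=lambda q, img: 2,   # stub: always predict "high"
    run_vlm=lambda q, path: f"answered from {path}",
)
print(answer)  # answered from high.jpg
```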
## Related Models
This dataset is used to train the CARES (Context-Aware Resolution Selection) models:
### SmolVLM Resolution Gate
- **Model**: [Kimhi/smolvlm-res-gate](https://huggingface.co/Kimhi/smolvlm-res-gate)
- **Approach**: Lightweight classifier on frozen features
- **Use Case**: Fast, on-device inference
### Granite-Docling Resolution Gate
- **Model**: [Kimhi/granite-docling-res-gate-lora](https://huggingface.co/Kimhi/granite-docling-res-gate-lora)
- **Approach**: Autoregressive SFT with LoRA
- **Use Case**: Production deployment
## Ethical Considerations
### Intended Use
- Academic research and development
- Industrial document understanding applications
- Model benchmarking and evaluation
- Responsible AI research
### Potential Risks
- Dataset reflects biases in source datasets
- May not generalize to specific document domains
- Quality varies based on document type
- Labels are proxy measures of resolution necessity
### Mitigation
- Stratified sampling ensures class balance
- Multi-source composition reduces single-domain bias
- Regular validation against real-world tasks
- Transparent documentation of limitations
## Limitations
1. **Domain Specificity**: Primarily document-focused
2. **Language**: Primarily English
3. **Quality Variation**: Mixed-quality source data
4. **Labeling**: Labels based on model performance, not human judgment
5. **Representation**: May not include all document types equally
## Citation
If you use this dataset, please cite:
```bibtex
@misc{kimhi2025carescontextawareresolutionselector,
  title={CARES: Context-Aware Resolution Selector for VLMs},
  author={Moshe Kimhi and Nimrod Shabtay and Raja Giryes and Chaim Baskin and Eli Schwartz},
  year={2025},
  eprint={2510.19496},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
}
```
## Acknowledgements
- Dataset sources: TextVQA, DocVQA, ChartQA, InfographicVQA, HME100K communities
- Infrastructure: Hugging Face Hub
- Hosting: Hugging Face Datasets
## License
CC BY 4.0 - See LICENSE for details
## Contact
For questions about this dataset, please open an issue on the [CARES GitHub repository](https://github.com/mkimhi/CARES).
---
**Dataset Version**: 1.0
**Last Updated**: 2024
**Recommended Citation**: see the BibTeX entry above