---
language:
- en
license: mit
size_categories:
- n<1K
pretty_name: ArcBench ML Conference Oral Paper-Presentation Benchmark
task_categories:
- other
tags:
- machine-learning
- academic-papers
- slides
- multimodal
- benchmark
- nlp
- computer-vision
- oral-presentations
---
# ArcBench: ML Conference Oral Paper-Presentation Benchmark
This benchmark is from the paper **[Narrative-Driven Paper-to-Slide Generation via ArcDeck](https://huggingface.co/papers/2604.11969)**.
A curated benchmark dataset of **100 oral presentation papers** from top-tier machine learning conferences (CVPR, ICCV, ICLR, ICML, NeurIPS), spanning 2022–2025. Each entry includes the full paper PDF, presentation slides PDF, and rich metadata.
[![arXiv](https://img.shields.io/badge/arXiv-2604.11969-b31b1b.svg)](https://arxiv.org/abs/2604.11969)
[![Project Page](https://img.shields.io/badge/Project-Page-blue)](https://arcdeck.org/)
[![Hugging Face](https://img.shields.io/badge/🤗%20Hugging%20Face-Dataset-yellow)](https://huggingface.co/datasets/ArcDeck/ArcBench)
[![GitHub](https://img.shields.io/badge/GitHub-Repository-black?logo=github)](https://github.com/RehgLab/ArcDeck)
---
## Dataset Summary
This benchmark is designed to support research on multimodal document understanding, slide generation, paper-to-slide alignment, and LLM evaluation tasks. Papers were selected from oral presentations only — the highest-quality subset of each conference — and filtered to ensure rich content (≥3 figures, ≥3 tables) and availability of both the original paper PDF and presentation slides.
---
## Dataset Structure
### Files
```
benchmark.csv # Metadata for all 100 papers
papers/ # 100 original full-length paper PDFs
└── paper{i}_{Title}_{Conference}_{Year}.pdf
slides/ # 100 presentation slide PDFs
└── slide{i}_{Title}_{Conference}_{Year}.pdf
```
### Metadata Fields (`benchmark.csv`)
| Column | Type | Description |
|--------|------|-------------|
| `Paper Title` | string | Full paper title |
| `Year` | int | Publication year (2022–2025) |
| `Conference` | string | Conference name (CVPR, ICCV, ICLR, ICML, NeurIPS) |
| `Presentation Type` | string | Always `Oral` in this benchmark |
| `Number of Figures` | int | Number of figures in the paper |
| `Number of Equations` | int | Number of equations in the paper |
| `Number of Tables` | int | Number of tables in the paper |
| `Appendix` | string | Whether paper has an appendix (`Yes`/`No`) |
| `Slide Animations` | string | Notes on slide animations, if any |
| `Character_Count` | int | Total character count of the paper (extracted via PDF) |
| `Number_of_Slides` | int | Number of pages/slides in the slide PDF |
| `Topics` | string | Semicolon-separated LLM-extracted research topics |
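The metadata can be loaded with ordinary CSV tooling. A minimal sketch using pandas, assuming `benchmark.csv` is comma-delimited with exactly the column names listed above (the inline row here is an illustrative stand-in, not a real entry):

```python
import io
import pandas as pd

# Stand-in for benchmark.csv with the documented columns.
# In practice: df = pd.read_csv("benchmark.csv")
csv_text = (
    "Paper Title,Year,Conference,Presentation Type,Number of Figures,"
    "Number of Equations,Number of Tables,Appendix,Slide Animations,"
    "Character_Count,Number_of_Slides,Topics\n"
    "Example Paper,2024,ICML,Oral,6,10,5,Yes,None,50000,25,"
    "Diffusion Models; Sampling Efficiency\n"
)
df = pd.read_csv(io.StringIO(csv_text))

# Filter to papers with an appendix.
with_appendix = df[df["Appendix"] == "Yes"]

# Split the semicolon-separated Topics field into one topic per row.
topics = df["Topics"].str.split(";").explode().str.strip()
```

The `Topics` split above reflects the "semicolon-separated" format stated in the table; adjust if your local copy differs.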
### Naming Convention
Files are named as `{type}{index}_{CleanTitle}_{Conference}_{Year}.pdf` where:
- `index` is 0-based, consistent across `papers/` and `slides/` for matched pairs
- `CleanTitle` has special characters removed and spaces replaced by underscores (max 100 chars)
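Since matched pairs share the same index across `papers/` and `slides/`, filenames can be parsed and paired with a small helper. This is a hypothetical sketch (`parse_name` is not part of the dataset tooling), assuming filenames follow the stated pattern exactly:

```python
import re

# {type}{index}_{CleanTitle}_{Conference}_{Year}.pdf
FNAME_RE = re.compile(
    r"^(?P<type>paper|slide)(?P<index>\d+)_(?P<title>.+)_"
    r"(?P<conf>CVPR|ICCV|ICLR|ICML|NeurIPS)_(?P<year>\d{4})\.pdf$"
)

def parse_name(fname: str) -> dict:
    """Split a benchmark filename into type, index, title, conference, year."""
    m = FNAME_RE.match(fname)
    if m is None:
        raise ValueError(f"unexpected filename: {fname}")
    d = m.groupdict()
    d["index"] = int(d["index"])   # 0-based, shared across papers/ and slides/
    d["year"] = int(d["year"])
    return d
```

Pairing then reduces to grouping parsed entries from both directories by `index`.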
---
## Dataset Statistics
### Distribution by Conference
| Conference | Papers |
|------------|--------|
| ICML | 51 |
| ICLR | 31 |
| NeurIPS | 12 |
| ICCV | 4 |
| CVPR | 2 |
### Distribution by Year
| Year | Papers |
|------|--------|
| 2022 | 15 |
| 2023 | 15 |
| 2024 | 26 |
| 2025 | 44 |
### Content Statistics
| Metric | Mean | Min | Max |
|--------|------|-----|-----|
| Figures per paper | 6.0 | 3 | 18 |
| Tables per paper | 5.3 | 3 | — |
| Slides per paper | 27.5 | 8 | 85 |
| Characters per paper | 50,411 | — | — |
- **92%** of papers include an appendix
- **100%** are oral presentations
### Top Research Topics
Extracted via GPT-4o-mini from paper abstracts:
> Contrastive Learning · Graph Neural Networks · Causal Inference · Multimodal Large Language Models · Federated Learning · Sampling Efficiency · Reinforcement Learning · Diffusion Models · Self-Supervised Learning · Vision-Language Models
---
## Selection Criteria
Papers were selected using the following filters applied to a broader 994-paper dataset:
- **Presentation type:** Oral only
- **Minimum figures:** ≥ 3
- **Minimum tables:** ≥ 3
- **Original paper available:** Must have the full (non-anonymized) version
- **Balanced sampling:** Proportional stratified sampling across year × conference to reach exactly 100 papers
---
## Intended Uses
This dataset is suited for:
- **Slide generation / paper-to-slide summarization**: Given `papers/`, generate slides comparable to `slides/`
- **Slide-grounded QA**: Answer questions about a paper using its slides as context
- **Cross-modal retrieval**: Match papers to their corresponding slides
- **LLM evaluation**: Benchmark LLM understanding of dense scientific documents
- **Multimodal document analysis**: Study relationships between figures, tables, equations, and slide content
---
## Source
Papers were collected from official proceedings of:
- [ICML](https://icml.cc) (2022–2025)
- [ICLR](https://iclr.cc) (2024–2025)
- [NeurIPS](https://neurips.cc) (2022–2025)
- [CVPR](https://cvpr.thecvf.com) (2024–2025)
- [ICCV](https://iccv2023.thecvf.com) (2025)
---
## Citation
If you use this dataset in your research, please cite:
```bibtex
@article{ozden2026arcdeck,
  title   = {Narrative-Driven Paper-to-Slide Generation via ArcDeck},
  author  = {Ozden, Tarik Can and VS, Sachidanand and Horoz, Furkan and Kara, Ozgur and Kim, Junho and Rehg, James M.},
  journal = {arXiv preprint arXiv:2604.11969},
  year    = {2026}
}
```
---
## License
This benchmark is released under the MIT License.
Individual paper and slide PDFs remain under their original authors' copyright. Please consult each paper's license before redistribution.