---
license: mit
tags:
- agent
---
# Paper2Video: Automatic Video Generation From Scientific Papers


[📃Arxiv](https://arxiv.org/abs/2510.05096) | [🌐 Project Page](https://showlab.github.io/Paper2Video/) | [💻Github](https://github.com/showlab/Paper2Video)


## Dataset Description
The Paper2Video Benchmark includes 101 curated paper–video pairs spanning diverse research topics. Each paper averages about 13.3K words, 44.7 figures, and 28.7 pages, providing rich multimodal long-document inputs. The presentations contain 16 slides on average and run for about 6 minutes 15 seconds, with some reaching 14 minutes. Rather than focusing only on video generation, the benchmark is designed to evaluate long-horizon agentic tasks that require integrating text, figures, slides, and spoken presentations.

## Dataset Structure
This repository contains two main components:

- **Excel file with metadata and presentation links**  
  Each entry includes:  
  - **paper**: the title of the paper  
  - **paper_link**: the URL of the paper (e.g., PDF or LaTeX source)  
  - **presentation_link**: the URL of the author-recorded presentation video (some entries also include original slides)  
  - **conference**: the conference where the paper was published  
  - **year**: the publication year of the paper  

- **Author identity file**  
  This file contains author information, including voice samples and images, which can be used for tasks such as personalized talk synthesis or avatar generation.  
  Each folder includes:  
  - **ref_img.png**: the identity image of the author  
  - **ref_audio.wav**: the identity voice sample of the author  
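As a minimal sketch of how the two components above might be consumed, the snippet below models the metadata schema with pandas and walks hypothetical per-author identity folders. The filenames `metadata.xlsx` and `author_identity/` are assumptions for illustration; substitute the actual file names from this repository, and the example row is placeholder data, not a real entry.

```python
from pathlib import Path

import pandas as pd

# In practice, load the Excel sheet shipped with the dataset, e.g.:
#   metadata = pd.read_excel("metadata.xlsx")
# Here we build one placeholder row to illustrate the documented schema.
metadata = pd.DataFrame(
    [
        {
            "paper": "Example Paper Title",
            "paper_link": "https://example.com/paper.pdf",
            "presentation_link": "https://example.com/talk.mp4",
            "conference": "CVPR",
            "year": 2024,
        }
    ]
)

# Iterate over paper-video pairs.
for row in metadata.itertuples():
    print(f"{row.paper} ({row.conference} {row.year})")

# Author identity assets are stored one folder per author
# ("author_identity" is a hypothetical directory name).
identity_root = Path("author_identity")
if identity_root.is_dir():
    for author_dir in sorted(identity_root.iterdir()):
        ref_img = author_dir / "ref_img.png"     # identity image
        ref_audio = author_dir / "ref_audio.wav" # reference voice sample
        print(author_dir.name, ref_img.exists(), ref_audio.exists())
```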


## Ethics

The author identity data (images and voice samples) provided in this repository are strictly for **research purposes only**. They must **not** be used for any commercial applications, deepfake creation, impersonation, or other misuse that could harm the rights, privacy, or reputation of the individuals. All usage should comply with ethical guidelines and respect the identity and intellectual property of the authors.
  
## Citation
**BibTeX:**
```bibtex
@misc{paper2video,
      title={Paper2Video: Automatic Video Generation from Scientific Papers}, 
      author={Zeyu Zhu and Kevin Qinghong Lin and Mike Zheng Shou},
      year={2025},
      eprint={2510.05096},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.05096}, 
}
```