Add task categories, relevant tags, paper link, and sample usage for Paper2Video dataset

#3
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +59 -2
README.md CHANGED
@@ -2,12 +2,19 @@
  license: mit
  tags:
  - agent
+ - multimodal
+ - video-generation
+ task_categories:
+ - any-to-any
+ language:
+ - en
  ---
- # Paper2Video: Automatic Video Generation From Scientifuc Papers
+
+ # Paper2Video: Automatic Video Generation From Scientific Papers

  <!-- Provide a quick summary of the dataset. -->

- [📃Arxiv]() | [🌐 Project Page](https://showlab.github.io/Paper2Video/) | [💻Github](https://github.com/showlab/Paper2Video)
+ [📃 Paper](https://huggingface.co/papers/2510.05096) | [🌐 Project Page](https://showlab.github.io/Paper2Video/) | [💻Github](https://github.com/showlab/Paper2Video)


  ## Dataset Description
@@ -30,6 +37,56 @@ This repository contains two main components:
  - **ref_img.png**: the identity image of the author
  - **ref_audio.wav**: the identity voice sample of the author

+ ## Sample Usage
+
+ The PaperTalker framework, part of Paper2Video, enables automatic academic presentation video generation. Follow these steps to get started:
+
+ ### Requirements
+ Prepare the environment by installing the necessary packages and cloning the dependent repositories:
+
+ ```bash
+ cd src
+ conda create -n p2v python=3.10
+ conda activate p2v
+ pip install -r requirements.txt
+ ```
+ Clone the dependent repositories and follow the instructions in **[Hallo2](https://github.com/fudan-generative-vision/hallo2)** to download the model weights.
+ ```bash
+ git clone https://github.com/fudan-generative-vision/hallo2.git
+ git clone https://github.com/Paper2Poster/Paper2Poster.git
+ ```
+ You need to **prepare a separate environment for talking-head generation** to avoid potential package conflicts. Please refer to [Hallo2](https://github.com/fudan-generative-vision/hallo2). After installing, use `which python` to get the path of the environment's Python interpreter.
+ ```bash
+ cd hallo2
+ conda create -n hallo python=3.10
+ conda activate hallo
+ pip install -r requirements.txt
+ ```
+
+ ### Configure LLMs
+ Export your API credentials:
+ ```bash
+ export GEMINI_API_KEY="your_gemini_key_here"
+ export OPENAI_API_KEY="your_openai_key_here"
+ ```
+
+ ### Inference
+ The `pipeline.py` script provides an automated pipeline for generating academic presentation videos. It takes **LaTeX paper sources** together with a **reference image/audio** as input to produce a complete presentation video. The minimum recommended GPU for running this pipeline is an **NVIDIA A6000** with 48 GB of memory.
+
+ Run the following command to launch a full generation:
+ ```bash
+ python pipeline.py \
+ --model_name_t gpt-4.1 \
+ --model_name_v gpt-4.1 \
+ --model_name_talking hallo2 \
+ --result_dir /path/to/output \
+ --paper_latex_root /path/to/latex_proj \
+ --ref_img /path/to/ref_img.png \
+ --ref_audio /path/to/ref_audio.wav \
+ --talking_head_env /path/to/hallo2_env \
+ --gpu_list [0,1,2,3,4,5,6,7]
+ ```
+ For a detailed list of arguments and further instructions, please refer to the [GitHub repository](https://github.com/showlab/Paper2Video).

  ## Ethics
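
A note on the `--talking_head_env` argument used in the added Inference section: per the README it expects the Python path of the separate Hallo2 environment, obtained with `which python`. A minimal sketch of capturing it into a variable (`TALKING_HEAD_ENV` is a hypothetical name, and the fallback to the current interpreter is only so the snippet runs standalone, outside an activated `hallo` env):

```shell
# Capture the talking-head environment's interpreter path, to be passed
# later as: python pipeline.py ... --talking_head_env "$TALKING_HEAD_ENV"
# Assumes "conda activate hallo" was run first; the fallback below only
# makes the sketch runnable without conda.
TALKING_HEAD_ENV="$(command -v python3 || command -v python)"
echo "talking_head_env=${TALKING_HEAD_ENV}"
```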
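
Since `pipeline.py` relies on both credentials from the "Configure LLMs" step, a quick sketch for verifying they are exported before starting a long generation run (the variable names come from the README; the check itself is an illustration, not part of the repository):

```shell
# Warn about any unset API key before launching an expensive pipeline run.
for var in GEMINI_API_KEY OPENAI_API_KEY; do
  if [ -z "$(printenv "$var")" ]; then
    echo "warning: $var is not set" >&2
  fi
done
```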