---
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
- video-text-to-text
---

# LearningPaper24 Dataset

[![website](https://img.shields.io/badge/website-76b900?style=for-the-badge&logo=safari&labelColor=555555)](https://vivianchen98.github.io/VIBE_website/)
[![Arxiv](https://img.shields.io/badge/Arxiv-b31b1b?style=for-the-badge&logo=arxiv&labelColor=555555)](https://arxiv.org/abs/2505.17423)


This dataset contains video recordings and metadata from ICLR 2024 and NeurIPS 2024 conference talks. It includes poster, spotlight, and oral presentations, along with associated metadata such as titles, abstracts, keywords, and primary areas.
The paper list is sourced from [Paperlists](https://github.com/papercopilot/paperlists).

## Dataset Structure

```
learningpaper24/
├── README.md
├── metadata/
│   └── catalog.jsonl
└── video/
    ├── {openreview_id}_{slideslive_id}.mp4
    └── ...
```

## Data Format

### Catalog (metadata/catalog.jsonl)
The catalog contains metadata for each talk in JSON Lines format (one JSON object per line) with the following fields:
- `video_file`: Filename of the video recording in the format `{openreview_id}_{slideslive_id}.mp4`
- `openreview_id`: Unique identifier from OpenReview
- `slideslive_id`: Video identifier from SlidesLive
- `venue`: Conference venue (e.g., "iclr2024")
- `title`: Paper title
- `status`: Presentation type (e.g., "Poster", "Spotlight", "Oral")
- `keywords`: Research keywords
- `tldr`: Short summary
- `abstract`: Full paper abstract
- `primary_area`: Main research area
- `site`: Link to the conference page
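With those fields, catalog records can be filtered like ordinary dictionaries. A minimal sketch (the record values below are hypothetical placeholders, not real dataset entries):

```python
# Hypothetical catalog records; every value below is an illustrative placeholder.
records = [
    {
        "video_file": "abc123XY_39012345.mp4",
        "openreview_id": "abc123XY",
        "slideslive_id": "39012345",
        "venue": "iclr2024",
        "title": "An Example Paper Title",
        "status": "Oral",
        "primary_area": "learning theory",
    },
    {
        "video_file": "def456ZW_39054321.mp4",
        "openreview_id": "def456ZW",
        "slideslive_id": "39054321",
        "venue": "neurips2024",
        "title": "Another Example Title",
        "status": "Poster",
        "primary_area": "optimization",
    },
]

# Keep only oral presentations.
orals = [r for r in records if r["status"] == "Oral"]
print([r["openreview_id"] for r in orals])  # ['abc123XY']
```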

### Videos
Videos are stored in the `video` directory with filenames following the format: `{openreview_id}_{slideslive_id}.mp4`
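Given that naming convention, both identifiers can be recovered from a filename. A small sketch, assuming OpenReview IDs contain no underscore (the SlidesLive ID is the part after the last `_`); the filename shown is hypothetical:

```python
def parse_video_filename(name: str) -> tuple[str, str]:
    """Split '{openreview_id}_{slideslive_id}.mp4' into its two identifiers."""
    stem = name.removesuffix(".mp4")
    # Split on the last underscore so the SlidesLive ID is isolated.
    openreview_id, slideslive_id = stem.rsplit("_", 1)
    return openreview_id, slideslive_id

print(parse_video_filename("abc123XY_39012345.mp4"))  # ('abc123XY', '39012345')
```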

## Usage
You can load the dataset with the Hugging Face `datasets` library. Here is a simple example that loads the catalog and prints a few fields:
```python
from datasets import load_dataset

# Load the catalog; each row describes one talk and references its video file.
dataset = load_dataset("vivianchen98/LearningPaper24", data_files="metadata/catalog.jsonl", split="train")

# Inspect the first three entries.
for i in range(3):
    print(f"\nSAMPLE {i + 1}")
    print("-" * 30)
    print(f"Video file: video/{dataset[i].get('video_file', 'N/A')}")
    print(f"TL;DR: {dataset[i].get('tldr', 'N/A')}")
    print(f"Primary area: {dataset[i].get('primary_area', 'N/A')}")
    print("=" * 50)
```

You can also use the example code provided in `example.py` for more systematic data exploration.

## Purpose

This dataset can be used for:
- Video understanding and summarization
- Natural language processing tasks
- Video-text alignment studies

## Dataset Statistics

The LearningPaper24 dataset contains a diverse collection of machine learning conference presentations:

### Overview
- 📊 **Total entries**: 2,287 conference talks

### Presentation Types
- 📝 **Poster presentations**: 1,986 (86.8%)
- 🔍 **Spotlight presentations**: 256 (11.2%)
- 🎤 **Oral presentations**: 45 (2.0%)

### Conference Distribution
- 🏢 **NeurIPS 2024**: 1,726 talks (75.5%)
- 🏢 **ICLR 2024**: 561 talks (24.5%)
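Breakdowns like the ones above can be recomputed directly from the catalog with `collections.Counter`. A sketch over hypothetical stand-in records:

```python
from collections import Counter

# Hypothetical records standing in for rows of metadata/catalog.jsonl.
records = [
    {"venue": "neurips2024", "status": "Poster"},
    {"venue": "neurips2024", "status": "Spotlight"},
    {"venue": "iclr2024", "status": "Oral"},
]

# Tally presentation types and venues across all entries.
by_status = Counter(r["status"] for r in records)
by_venue = Counter(r["venue"] for r in records)
print(by_venue)  # Counter({'neurips2024': 2, 'iclr2024': 1})
```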

### Research Areas
The dataset covers a wide range of machine learning topics, with the top 10 research areas being:

| Research Area | Count | Percentage |
|---------------|-------|------------|
| Machine Vision | 218 | 9.5% |
| Reinforcement Learning | 125 | 5.5% |
| Natural Language Processing | 123 | 5.4% |
| Optimization | 112 | 4.9% |
| Learning Theory | 111 | 4.9% |
| Diffusion-based Models | 87 | 3.8% |
| Deep Learning Architectures | 79 | 3.5% |
| Generative Models | 79 | 3.5% |
| Probabilistic Methods | 74 | 3.2% |
| Generative Models | 74 | 3.2% |


## License

This dataset is released under the [Creative Commons Attribution-NonCommercial 4.0 International License](https://creativecommons.org/licenses/by-nc/4.0/).

## Citation

If you use this dataset, please cite:
```
@inproceedings{chen2025vibevideototextinformationbottleneck,
  title={VIBE: Annotation-Free Video-to-Text Information Bottleneck Evaluation for {TL};{DR}},
  author={Shenghui Chen and Po-han Li and Sandeep Chinchali and Ufuk Topcu},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025},
  url={https://openreview.net/forum?id=C35FCYZBXp}
}
```