license: cc-by-4.0
task_categories:
- summarization
- text-generation
language:
- en
tags:
- croissant
size_categories:
- 10K<n<100K
dataset_info:
- config_name: title_10K
features:
- name: url
dtype: string
- name: published
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: gt
dtype: string
- name: primary_cat
dtype: string
- name: paper_cat
dtype: string
- name: updated
dtype: string
- name: main_content
dtype: string
- name: authors
dtype: string
- name: label
dtype: string
- name: cats
sequence: string
- config_name: abs_9K
features:
- name: url
dtype: string
- name: published
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: gt
dtype: string
- name: primary_cat
dtype: string
- name: paper_cat
dtype: string
- name: updated
dtype: string
- name: main_content
dtype: string
- name: authors
dtype: string
- name: label
dtype: string
- name: cats
sequence: string
- config_name: intro_8K
features:
- name: url
dtype: string
- name: published
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: gt
dtype: string
- name: primary_cat
dtype: string
- name: paper_cat
dtype: string
- name: updated
dtype: string
- name: main_content
dtype: string
- name: authors
dtype: string
- name: label
dtype: string
- name: cats
sequence: string
configs:
- config_name: title_10K
data_files:
- split: test
path: title_10K/test_*.json
- config_name: abs_9K
data_files:
- split: test
path: abs_9K/test_*.json
- config_name: intro_8K
data_files:
- split: test
path: intro_8K/test_*.json
pretty_name: AcademicEval
# AcademicEval Benchmark Introduction
We propose AcademicEval, a live benchmark for evaluating LLMs on long-context generation tasks. AcademicEval uses papers on arXiv to construct several academic writing tasks with long-context inputs, i.e., Title, Abstract, Introduction, and Related Work writing, which cover a wide range of abstraction levels and require no manual labeling.
Compared with existing long-context LLM benchmarks, AcademicEval offers flexible length, automatic annotation, hierarchical abstraction, few-shot demonstrations, and live updates without data leakage risks.
🌟Note🌟: Currently, for ease of downloading, we have only uploaded the test set of AcademicEval (the rest of AcademicEval, i.e., the train and validation sets, can be accessed via AcademicEval Full). The data viewer above previews title_10K, abs_9K, and intro_8K. For the complete test set, please check "Files and versions" on this page.
| Benchmark | Avg Len | Automatic Annotation | Hierarchical Abstraction | Few-shot Demonstrations | Live Update |
|---|---|---|---|---|---|
| ZeroSCROLLS (Shaham et al., 2023) | ~10K | ✓ | ✘ | ✘ | ✘ |
| L-Eval (An et al., 2023) | ~8K | ✘ | ✘ | ✘ | ✘ |
| BAMBOO (Dong et al., 2023) | ~16K | ✘ | ✘ | ✘ | ✘ |
| LongBench (Bai et al., 2023) | ~8K | ✘ | ✘ | ✓ | ✘ |
| LooGLE (Li et al., 2023) | ~20K | ✘ | ✘ | ✘ | ✘ |
| ∞Bench (Zhang et al., 2024) | ~200K | ✘ | ✘ | ✘ | ✘ |
| AcademicEval (ours) | Flexible | ✓ | ✓ | ✓ | ✓ |
## Dataset Structure

### Data Settings

- **Title Writing**: `title_10K`, `title_30K`, `title_31K_G`
- **Abstract Writing**: `abs_9K`, `abs_28K`, `abs_29K_G`
- **Introduction Writing**: `intro_8K`, `intro_28K`, `intro_28K_G`
- **Related Work Writing**: `related_34K`, `related_53K`, `related_53K_G`
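As declared in the `configs` section of the metadata above, each setting's test split is stored as JSON shards under `<config_name>/test_*.json`. The following is a minimal sketch of reading those shards directly with the standard library, assuming each shard holds a JSON array of records (a hypothetical one-record shard is created first purely for illustration):

```python
import glob
import json
import os
import tempfile

# Hypothetical sample record mirroring the data fields declared above.
sample = {
    "url": "https://arxiv.org/abs/0000.00000",
    "published": "2024-01-01T00:00:00Z",
    "title": "An Example Paper",
    "abstract": "An illustrative abstract ...",
    "gt": "An Example Paper",  # ground truth for the title-writing task
    "primary_cat": "cs.CL",
    "paper_cat": "cs.CL",
    "updated": "2024-01-02T00:00:00Z",
    "main_content": "Body of the paper without the title ...",
    "authors": "A. Author, B. Author",
    "label": "cs.CL",
    "cats": ["cs.CL", "cs.LG"],
}

# Write one shard matching the configured path pattern title_10K/test_*.json.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "title_10K"))
with open(os.path.join(root, "title_10K", "test_000.json"), "w") as f:
    json.dump([sample], f)

# Collect every test shard of a config into a single list of records.
records = []
for path in sorted(glob.glob(os.path.join(root, "title_10K", "test_*.json"))):
    with open(path) as f:
        records.extend(json.load(f))
```

In practice the same shards can also be loaded through the Hugging Face `datasets` library by passing the config name, e.g. `load_dataset(..., "title_10K", split="test")`, which resolves the `data_files` patterns above automatically.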
### Main Data Fields

- `url`: the URL of the original paper on arXiv
- `title`: the title of the paper
- `abstract`: the abstract of the paper
- `authors`: the authors of the paper
- `published`: the publication timestamp of the paper
- `primary_cat`: the primary arXiv category of the paper
- `gt`: the ground truth of the corresponding task
- `main_content`: the main body of the paper (without the content of the section to be generated)
- `additional_info`: few-shot demonstrations from randomly selected papers (each demonstration has the same data fields as above)
- `additional_graph_info`: few-shot demonstrations with the co-author subgraph structure, drawn from co-author papers (each demonstration has the same data fields as above)
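For the few-shot settings, a record's `additional_info` demonstrations can be concatenated ahead of the target paper to form a prompt. The sketch below assumes `additional_info` is a list of records carrying the same fields as the parent record; the prompt template and the sample records are illustrative, not the official evaluation format:

```python
def build_prompt(record, task_name="Title"):
    """Assemble a few-shot prompt: demonstrations first, target paper last."""
    parts = []
    for demo in record.get("additional_info", []):
        # Each demonstration pairs a paper body with its ground-truth section.
        parts.append(f"Paper:\n{demo['main_content']}\n{task_name}: {demo['gt']}")
    # Target paper: main_content excludes the section to be generated,
    # so the model must produce it after the trailing task label.
    parts.append(f"Paper:\n{record['main_content']}\n{task_name}:")
    return "\n\n".join(parts)

# Hypothetical records for illustration.
demo = {"main_content": "Body of demo paper ...", "gt": "Demo Title"}
target = {"main_content": "Body of target paper ...", "additional_info": [demo]}
prompt = build_prompt(target)
```

The same assembly applies to `additional_graph_info`; the only difference is that its demonstrations come from co-author papers rather than randomly selected ones.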