---
license: odc-by
task_categories:
- text-generation
- question-answering
tags:
- scientific-papers
- arxiv
- citation-prediction
- author-prediction
- collaboration-prediction
- research-forecasting
size_categories:
- 100K<n<1M
language:
- en
---

# PreScience: A Dataset and Benchmark for Scientific Forecasting

## Dataset Summary

Can AI systems trained on the existing scientific record forecast the advances that will follow? We introduce PreScience, a dataset and benchmark for scientific forecasting built around 98K recent AI research papers, together with companion papers covering author publication histories and citation links, yielding 502K papers in total. The resulting paper records include titles, abstracts, disambiguated author identities, influential references, topic labels, citation trajectories, and metadata snapshotted to respect temporal cutoffs. We instantiate seven exemplar tasks: five paper-anchored tasks---contribution generation, collaborator prediction, prior work selection, citation count prediction, and future combination prediction---and two aggregate topic trend forecasting variants. We develop baselines ranging from simple heuristics and embedding methods to frontier language models and agentic systems, and introduce LACER, an LLM-based metric for evaluating the similarity of generated contribution descriptions that agrees better with human judgments than existing metrics. Finally, we compose task models to generate a 12-month synthetic corpus and find the resulting papers to be systematically less diverse and less novel than human-authored research from the same period.

## Dataset Statistics

| Split | Target Papers | Total Papers | Unique Authors | Date Range |
|-------|--------------|--------------|----------------|-------------|
| Train | 44,984 | 373,707 | 106,904 | Oct 2023 - Sept 2024 |
| Test | 52,836 | 464,942 | 129,020 | Oct 2024 - Sept 2025 |
| **Total** | **97,820** | **501,866** | **182,727** | Oct 2023 - Sept 2025 |

**arXiv Categories**: cs.CL, cs.LG, cs.AI, cs.CV, cs.IR, cs.NE

**Average Statistics** (computed over target papers):
- Authors per paper: 5.15
- Words in abstract: 187.1
- Key references: 3.08 (median: 3)
- Citations at 12 months: 5.57

## Dataset Structure

### Dataset Construction

PreScience is built from research papers posted to arXiv from October 2023 to September 2025 in six AI-adjacent categories: **cs.CL, cs.LG, cs.AI, cs.CV, cs.IR, and cs.NE**. These constitute the **target papers** in our benchmark. Papers are represented by their titles and abstracts (full texts are not included).

We include a set of **companion papers** consisting of:
- Key references of target papers
- Prior publications of target authors
- Key references of those prior publications

Together, these form the historical corpus H<sub>&lt;t</sub> used to condition all tasks.
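
As a concrete illustration, the temporal cutoff amounts to a simple date filter over the corpus. The following is a minimal sketch, assuming papers are plain dicts with an ISO `date` field as in the paper schema; `historical_corpus` is a hypothetical helper for illustration, not part of the PreScience codebase:

```python
from datetime import date

def historical_corpus(papers, cutoff):
    """Return papers published strictly before the cutoff date.

    `papers` is assumed to be a list of dicts with an ISO "date"
    field (YYYY-MM-DD). Hypothetical helper, not part of the
    PreScience codebase.
    """
    return [p for p in papers if date.fromisoformat(p["date"]) < cutoff]

# Example: condition a task on everything before a target paper's date.
corpus = [
    {"corpus_id": "1", "date": "2023-11-01"},
    {"corpus_id": "2", "date": "2024-10-15"},
]
h_t = historical_corpus(corpus, date(2024, 10, 1))
```
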

### Ensuring Dataset Quality

We apply several design choices to ensure that PreScience supports reliable modeling and evaluation rather than reflecting artifacts of noisy metadata or degenerate task instances:

- **Author disambiguation**: We disambiguate author profiles using the S2AND pipeline (Subramanian et al., 2021), yielding better author clusters than the current Semantic Scholar Academic Graph release
- **Key reference filtering**: We restrict target papers to those with between 1 and 10 key references, excluding instances with zero or unusually large key-reference sets
- **Temporal alignment**: All author- and reference-level metadata (publication counts, citation counts, h-indices) are temporally aligned to each paper's publication date to prevent leakage of future information into task inputs

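For example, a temporally aligned h-index is just the standard h-index computed over the citation counts an author's papers had as of the snapshot date. A minimal sketch of that computation (illustrative only, not the actual PreScience pipeline code):

```python
def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations.

    `citation_counts` would be an author's per-paper citation counts
    as of the snapshot date (illustrative helper, not the actual
    PreScience pipeline code).
    """
    h = 0
    for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h
```
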
63
+
64
+ This dataset contains:
65
+
66
+ 1. **`train.parquet`**: Training period papers (373,707 papers from Oct 2023 - Sept 2024)
67
+ 2. **`test.parquet`**: Test period papers (464,942 papers from Oct 2024 - Sept 2025)
68
+ 3. **`author_disambiguation.jsonl`**: Mapping from S2AND-disambiguated author ID → S2AG author IDs
69
+ 4. **`author_publications.jsonl`**: Mapping from S2AND-disambiguated author ID → S2AG corpus IDs of their publications
70
+
71
+ ### Notes on author mapping files
72
+
73
+ The two author mapping files (`author_disambiguation.jsonl` and `author_publications.jsonl`) are the merged outputs of the S2AND disambiguation pass run over a broader pre-pruning mention pool than the final corpus.
74
+
75
+ - **Empty publication lists are expected.** Some sd_ids have empty publication lists either because (a) the author has publications only in the *other* split's corpus, or (b) every paper carrying that author's signature was removed during corpus filtering (non-arXiv versions, missing abstracts, max-key-references caps, etc.). The disambiguation cluster identity is preserved in `author_disambiguation.jsonl` regardless.
76
+ - **Keys are unioned across splits.** For each sd_id present in both train and test, the published `author_publications.jsonl` and `author_disambiguation.jsonl` entries are the per-key union of the two splits' values. Treat these as global per-cluster aggregations, not per-split views.
77
+
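The JSONL mapping files can be merged into a single dict per file. A minimal sketch, assuming each line is one JSON object keyed by an sd_id; that record layout is an assumption and should be checked against the published files:

```python
import json

def load_jsonl_mapping(path):
    """Merge a JSONL file of one-key records into a single dict.

    Assumes each line is a JSON object mapping one sd_id to its
    value, e.g. {"sd_123": ["12345", "67890"]}. The exact record
    layout is an assumption; adapt this if the files differ.
    """
    mapping = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                mapping.update(json.loads(line))
    return mapping
```

Note that empty lists survive the merge, so sd_ids with no surviving publications (see above) still appear as keys.
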
### Paper Schema

#### Roles

Papers in the dataset are each assigned a subset of the following roles:
- `target`: Primary evaluation papers (Oct 2023 - Sept 2024 for train, Oct 2024 - Sept 2025 for test)
- `target.key_reference`: Highly influential papers cited by targets
- `target.author.publication_history`: Prior work by target paper authors
- `target.author.publication_history.key_reference`: Key references of authors' prior work

Each paper record contains:

```python
{
    # Basic metadata (available for all papers)
    "corpus_id": str,        # S2AG corpus ID
    "arxiv_id": str,         # arXiv identifier
    "date": str,             # Publication date (YYYY-MM-DD)
    "categories": list[str], # arXiv categories
    "title": str,            # Paper title
    "abstract": str,         # Paper abstract
    "topics": list[str],     # Multi-label topic assignments (target papers only; from a fixed list of 202 topics)
    "roles": list[str],      # Paper roles in dataset

    # Citation data (available for target papers [guaranteed] and target.author.publication_history papers [best-effort])
    "key_references": list[{ # Highly influential references
        "corpus_id": str,
        "num_citations": int # Citations at target paper date
    }],

    # Author data (available for target papers [guaranteed] and target.author.publication_history papers [best-effort])
    "authors": list[{        # Author roster
        "author_id": str,    # S2AND-disambiguated ID
        "name": str,
        "publication_history": list[str], # Prior corpus_ids
        "h_index": int,      # At target paper date
        "num_papers": int,
        "num_citations": int
    }],

    # Impact data (target papers only)
    "citation_trajectory": list[int] # Monthly cumulative citation counts
}
```

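Two common accesses under this schema are selecting papers by role and reading a point on a target paper's citation trajectory. A minimal sketch, assuming papers as plain dicts; `papers_with_role` and `citations_at_month` are hypothetical helpers, not part of the PreScience codebase, and the trajectory is assumed to be 1-indexed by month:

```python
def papers_with_role(papers, role):
    """Select papers whose `roles` list contains the given role."""
    return [p for p in papers if role in p.get("roles", [])]

def citations_at_month(paper, month=12):
    """Cumulative citations `month` months after publication, read from
    `citation_trajectory` (assumed 1-indexed by month); falls back to
    the last available value for shorter trajectories."""
    traj = paper.get("citation_trajectory") or []
    if not traj:
        return 0
    return traj[min(month, len(traj)) - 1]

papers = [
    {"roles": ["target"], "citation_trajectory": [0, 1, 3]},
    {"roles": ["target.key_reference"]},
]
targets = papers_with_role(papers, "target")
```
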
## Usage

### Using with the PreScience codebase

The PreScience codebase includes a helper function that loads the data from HuggingFace:

```python
import utils

# Load from HuggingFace
all_papers, author_disambiguation, embeddings = utils.load_corpus(
    hf_repo_id="PreSciencePreScience/PreScience",
    split="test",
    embeddings_dir="./embeddings",  # Optional: for embedding-based baselines
    embedding_type="grit",          # Optional: gtr, specter2, or grit
)
```

### Ad Hoc Loading

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("PreSciencePreScience/PreScience")

# Access a paper
paper = dataset["test"][0]
print(f"Title: {paper['title']}")
print(f"Authors: {len(paper['authors'])}")
print(f"Roles: {paper['roles']}")
```

## Computing Embeddings

Embeddings are not included in this dataset, but they can be computed with the `dataset/embeddings/compute_paper_embeddings.py` script provided in the PreScience codebase.