---
license: odc-by
task_categories:
- text-generation
- question-answering
tags:
- scientific-papers
- arxiv
- citation-prediction
- author-prediction
- collaboration-prediction
- research-forecasting
size_categories:
- 100K<n<1M
---

… used to condition all tasks.

### Ensuring Dataset Quality

We apply several design choices to ensure that PreScience supports reliable modeling and evaluation rather than reflecting artifacts of noisy metadata or degenerate task instances:

- **Author disambiguation**: We disambiguate author profiles using the S2AND pipeline (Subramanian et al., 2021), yielding better author clusters than the current Semantic Scholar Academic Graph release
- **Key reference filtering**: We restrict target papers to those with between 1 and 10 key references, excluding instances with zero or unusually large key-reference sets
- **Temporal alignment**: All author- and reference-level metadata (publication counts, citation counts, h-indices) are temporally aligned to each paper's publication date to prevent leakage of future information into task inputs

### Files

This dataset contains:

1. **`train.parquet`**: Training period papers (373,707 papers from Oct 2023 - Sept 2024)
2. **`test.parquet`**: Test period papers (464,942 papers from Oct 2024 - Sept 2025)
3. **`author_disambiguation.jsonl`**: Mapping from S2AND-disambiguated author ID → S2AG author IDs
4. **`author_publications.jsonl`**: Mapping from S2AND-disambiguated author ID → S2AG corpus IDs of their publications

### Notes on author mapping files

The two author mapping files (`author_disambiguation.jsonl` and `author_publications.jsonl`) are the merged outputs of the S2AND disambiguation pass, run over a broader pre-pruning mention pool than the final corpus.
- **Empty publication lists are expected.** Some sd_ids have empty publication lists, either because (a) the author has publications only in the *other* split's corpus, or (b) every paper carrying that author's signature was removed during corpus filtering (non-arXiv versions, missing abstracts, max-key-references caps, etc.). The disambiguation cluster identity is preserved in `author_disambiguation.jsonl` regardless.
- **Keys are unioned across splits.** For each sd_id present in both train and test, the published `author_publications.jsonl` and `author_disambiguation.jsonl` entries are the per-key union of the two splits' values. Treat these as global per-cluster aggregations, not per-split views.

### Paper Schema

#### Roles

Papers in the dataset are each assigned a subset of the following roles:

- `target`: Primary evaluation papers (Oct 2023 - Sept 2024 for train, Oct 2024 - Sept 2025 for test)
- `target.key_reference`: Highly influential papers cited by targets
- `target.author.publication_history`: Prior work by target paper authors
- `target.author.publication_history.key_reference`: Key references of authors' prior work

Each paper record contains:

```python
{
    # Basic metadata (available for all papers)
    "corpus_id": str,            # S2AG corpus ID
    "arxiv_id": str,             # arXiv identifier
    "date": str,                 # Publication date (YYYY-MM-DD)
    "categories": list[str],     # arXiv categories
    "title": str,                # Paper title
    "abstract": str,             # Paper abstract
    "topics": list[str],         # Multi-label topic assignments (target papers only; from a fixed list of 202 topics)
    "roles": list[str],          # Paper roles in dataset

    # Citation data (available for target papers [guaranteed] and target.author.publication_history papers [best-effort])
    "key_references": list[{     # Highly influential references
        "corpus_id": str,
        "num_citations": int     # Citations at target paper date
    }],

    # Author data (available for target papers [guaranteed] and target.author.publication_history papers [best-effort])
    "authors": list[{            # Author roster
        "author_id": str,        # S2AND-disambiguated ID
        "name": str,
        "publication_history": list[str],  # Prior corpus_ids
        "h_index": int,          # At target paper date
        "num_papers": int,
        "num_citations": int
    }],

    # Impact data (target papers only)
    "citation_trajectory": list[int]  # Monthly cumulative citation counts
}
```

## Usage

### Using with the PreScience codebase

The PreScience codebase includes a helper function to load data from HuggingFace:

```python
import utils

# Load from HuggingFace
all_papers, author_disambiguation, embeddings = utils.load_corpus(
    hf_repo_id="PreSciencePreScience/PreScience",
    split="test",
    embeddings_dir="./embeddings",  # Optional: for embedding-based baselines
    embedding_type="grit"           # Optional: gtr, specter2, or grit
)
```

### Ad-hoc loading

```python
from datasets import load_dataset

# Load dataset
dataset = load_dataset("PreSciencePreScience/PreScience")

# Access a paper
paper = dataset["test"][0]
print(f"Title: {paper['title']}")
print(f"Authors: {len(paper['authors'])}")
print(f"Roles: {paper['roles']}")
```

## Computing Embeddings

Embeddings are not included in this dataset, but can be computed using the `dataset/embeddings/compute_paper_embeddings.py` script provided with the PreScience codebase.
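The author mapping files described above are line-delimited JSON (JSONL), so they can be read without the `datasets` library. As a minimal sketch: the code below assumes each line is an object holding the disambiguated author ID under a key like `sd_id` and the mapped values under `corpus_ids` — both field names are illustrative assumptions, so inspect the actual files for the exact keys.

```python
import json

def load_jsonl_mapping(lines):
    """Parse JSONL lines into a dict keyed by disambiguated author ID.

    NOTE: the field names "sd_id" and "corpus_ids" are assumptions for
    illustration; check the released files for the real keys.
    """
    mapping = {}
    for line in lines:
        record = json.loads(line)
        mapping[record["sd_id"]] = record["corpus_ids"]
    return mapping

# Synthetic example lines. An empty publication list is expected for some
# authors (see "Notes on author mapping files" above): the cluster identity
# is kept even when all of that author's papers were pruned from a split.
sample = [
    '{"sd_id": "a1", "corpus_ids": ["123", "456"]}',
    '{"sd_id": "a2", "corpus_ids": []}',
]
pubs = load_jsonl_mapping(sample)
print(pubs["a1"])       # ['123', '456']
print(len(pubs["a2"]))  # 0
```

In practice you would pass an open file handle (`load_jsonl_mapping(open("author_publications.jsonl"))`) instead of the in-memory list; because the published entries are unioned across splits, one pass over each file yields the global per-cluster view.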