
Dataset Card for Scientific Corpus (Cleaned)

This corpus contains ≈11 M English scientific documents cleaned via the DataTrove pipeline. It was used to continue pretraining T5-base (EN‑T5-Sci) before sliding-window materialization. Each document is provided as a row in one of 75 Parquet shards together with extensive per-document QA metadata.

Dataset Details

Uses

Direct Use

  • Continued pretraining / domain adaptation of encoder-decoder LMs on scientific text.
  • Building scientific QA, summarization, or retrieval benchmarks for English.

Dataset Structure

  • Split: single train split (≈11 M docs).
  • Fields: text (string), id (string), metadata (struct with QA metrics such as length, FastText English score, citation count, publisher, and year).
  • Files: 75 Parquet shards + stats/summary/* JSONs with descriptive statistics.
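The row structure above can be sketched in plain Python. This is an illustrative, hypothetical example: the exact metadata key names (e.g. `fasttext_score`, `citation_count`) and all values are assumptions, not the card's confirmed schema. It shows how the per-document QA metadata supports threshold-based filtering, mirroring the corpus's FastText EN filter ≥ 0.75:

```python
# Illustrative rows mimicking the card's schema (text, id, metadata struct).
# Key names and values are invented for demonstration only.
rows = [
    {"text": "Transformer models for protein structure prediction ...",
     "id": "doc-000001",
     "metadata": {"length": 5400, "fasttext_score": 0.93,
                  "citation_count": 12, "year": 2019}},
    {"text": "Kurzer nicht-englischer Abschnitt ...",
     "id": "doc-000002",
     "metadata": {"length": 800, "fasttext_score": 0.41,
                  "citation_count": 0, "year": 2021}},
]

def keep(row, min_score=0.75, min_length=1000):
    """Keep rows whose QA metadata passes simple thresholds
    (the 0.75 score mirrors the corpus's FastText EN filter)."""
    m = row["metadata"]
    return m["fasttext_score"] >= min_score and m["length"] >= min_length

kept = [r["id"] for r in rows if keep(r)]
print(kept)  # ['doc-000001'] -- only the first row passes both thresholds
```

In practice the same thresholds could be applied with a `filter` call after loading the Parquet shards, without materializing all ≈11 M rows at once.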

Dataset Creation

Curation Rationale

Provide a reproducible, high-quality English scientific corpus for EN‑T5-Sci pretraining and subsequent cross-lingual transfer.

Source Data

  • Data Collection: Unpaywall snapshot curated by the DFKI Scilons team (PDF → text via GROBID).
  • Processing: DataTrove + custom scripts (citation removal, structural filtering, FastText EN filter ≥0.75, conservative normalization). Outputs include cleaned text and per-document QA metadata.
  • Producers: Scientific publishers indexed by Unpaywall; metadata retains publisher/journal/year when available.
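The "conservative normalization" step mentioned above can be illustrated with a minimal sketch. The actual DataTrove configuration is not specified in this card; the function below is an assumption of what such a step might do (Unicode canonicalization plus whitespace cleanup, leaving wording untouched):

```python
import re
import unicodedata

def normalize(text: str) -> str:
    """Conservative normalization sketch: canonicalize Unicode forms
    and collapse whitespace runs without altering wording."""
    text = unicodedata.normalize("NFKC", text)  # unify ligatures, NBSP, width variants
    text = re.sub(r"[ \t]+", " ", text)         # collapse horizontal whitespace
    text = re.sub(r"\n{3,}", "\n\n", text)      # cap consecutive blank lines at one
    return text.strip()

# The ligature 'ffi' and the non-breaking space are normalized away:
print(normalize("E\ufb03cient  models\u00a0for\n\n\n\nscience"))
# -> 'Efficient models for\n\nscience'
```

NFKC is a deliberately lossy choice (ligatures and width variants are folded); a pipeline that must preserve exact source characters would use NFC instead.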

Bias, Risks, and Limitations

  • Domain bias toward STEM fields; humanities and social sciences are underrepresented.
  • Potential PII leakage.
  • The English FastText filter may drop multilingual documents.
  • Residual OCR artifacts may remain.
