---
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - v0/documents/*.gz
task_categories:
- text-generation
language:
- en
pretty_name: PeS2o
---
# PeS2o

## Description
This dataset is a version of the peS2o dataset restricted to openly licensed articles.
peS2o is derived from S2ORC, a corpus of openly licensed abstract and full-text papers that have been converted to a structured format using Grobid.
Starting from Grobid's XML output, peS2o filters out papers that are too short, have incorrect metadata, are in a language other than English, or contain OCR errors, using a combination of heuristic- and model-based filtering steps.
Please refer to the peS2o datasheet and code for more details on the peS2o processing pipeline.
For the openly licensed articles in this collection, per-document license information is available in the license entry of the metadata field of each example.
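As a minimal sketch of how the per-document license information can be read back from a shard, the snippet below builds a tiny in-memory gzipped JSONL file mimicking the layout described above (one document per line, with a `license` entry under `metadata`) and collects each document's license. The exact field names beyond `metadata.license`, the JSONL layout, and the license strings are assumptions for illustration, not guaranteed to match the released shards.

```python
import gzip
import io
import json

# Hypothetical two-document sample mimicking the card's described schema:
# each gzipped JSONL line holds one document whose "metadata" field
# carries a per-document "license" entry.
sample = [
    {"text": "Example abstract A.", "metadata": {"license": "CC-BY-4.0"}},
    {"text": "Example abstract B.", "metadata": {"license": "CC-BY-SA-4.0"}},
]

# Simulate one of the v0/documents/*.gz shards in memory.
buf = io.BytesIO()
with gzip.open(buf, "wt", encoding="utf-8") as f:
    for doc in sample:
        f.write(json.dumps(doc) + "\n")

# Read the shard back and collect each document's license.
buf.seek(0)
with gzip.open(buf, "rt", encoding="utf-8") as f:
    licenses = [json.loads(line)["metadata"]["license"] for line in f]

print(licenses)
```

The same loop works on a real shard by replacing the in-memory buffer with a path to one of the `v0/documents/*.gz` files.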
## Dataset Statistics
| Documents | UTF-8 GB |
|---|---|
| 6,294,020 | 188.2 |
## License Issues
While we aim to produce datasets with completely accurate licensing information, license laundering and inaccurate metadata can cause us to assign an incorrect license to some documents (for further discussion of this limitation, please see our paper). If you believe you have found an instance of incorrect licensing in this dataset, please start a discussion on this repository.
## Other Versions
This is the "raw" version of the openly licensed peS2o dataset. If you are looking for the filtered version used to train Comma v0.1, you can find it here.
## Citation
If you use this dataset, please cite:
```bibtex
@article{kandpal2025common,
  title={{The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text}},
  author={Nikhil Kandpal and Brian Lester and Colin Raffel and Sebastian Majstorovic and Stella Biderman and Baber Abbasi and Luca Soldaini and Enrico Shippole and A. Feder Cooper and Aviya Skowron and Shayne Longpre and Lintang Sutawika and Alon Albalak and Zhenlin Xu and Guilherme Penedo and Loubna Ben and Elie Bakouch and John David and Honglu Fan and Dashiell Stander and Guangyu Song and Aaron Gokaslan and John Kirchenbauer and Tom Goldstein and Brian R and Bhavya Kailkhura and Tyler Murray},
  journal={arXiv preprint},
  year={2025}
}

@techreport{peS2o,
  author = {Luca Soldaini and Kyle Lo},
  year = 2023,
  title = {{peS2o (Pretraining Efficiently on S2ORC) Dataset}},
  institution = {{Allen Institute for AI}},
  note = {ODC-By, \url{https://github.com/allenai/pes2o}}
}
```