Dataset Card for the ProofLang Corpus
Dataset Summary
The ProofLang Corpus includes 3.7M proofs (558 million words) mechanically extracted from papers that were posted on arXiv.org between 1992 and 2020.
The focus of this corpus is on proofs, rather than the explanatory text that surrounds them, and more specifically on the language used in such proofs.
Specific mathematical content is filtered out, resulting in sentences such as "Let MATH be the restriction of MATH to MATH."
This dataset reflects how people prefer to write (non-formalized) proofs, and is also amenable to statistical analyses and experiments with Natural Language Processing (NLP) techniques. We hope it can serve as an aid in the development of language-based proof assistants and proof checkers for professional and educational purposes.
Dataset Structure
There are two versions of the data: `proofs` divides up the data proof-by-proof, and `sentences` further divides up the same data sentence-by-sentence.
The data in `proofs` consists of a `fileID` that specifies the paper from which the proof was extracted, and the `proof` as a string. The data in `sentences` consists of a `fileID` that specifies the paper in which the sentence occurred, and the `sentence` as a string.
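For illustration, a record in each configuration has the following shape (the field values below are invented examples, not actual corpus entries):

```python
# Illustrative record shapes; the fileID and text values are invented
# examples, not actual entries from the corpus.
proof_record = {
    "fileID": "1203.00001",
    "proof": "Let MATH be the restriction of MATH to MATH. The claim follows from REF.",
}
sentence_record = {
    "fileID": "1203.00001",
    "sentence": "Let MATH be the restriction of MATH to MATH.",
}

print(sorted(proof_record))     # ['fileID', 'proof']
print(sorted(sentence_record))  # ['fileID', 'sentence']
```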
Dataset Size
- `proofs` is 3,197,091,800 bytes and has 3,681,901 examples.
- `sentences` is 3,736,579,062 bytes and has 38,899,130 examples.
Dataset Statistics
The average length of `sentences` is 14.06 words. The average length of `proofs` is 10.54 sentences.
Dataset Usage
Data can be downloaded as TSV files. Accessing the data programmatically from Python is also possible using the Datasets library. For example, to print the first 10 proofs:
```python
from datasets import load_dataset

dataset = load_dataset('proofcheck/prooflang', 'proofs', split='train', streaming=True)
for d in dataset.take(10):
    print(d['fileID'], d['proof'])
```
To look at individual sentences from the proofs, replace `'proofs'` with `'sentences'` and `d['proof']` with `d['sentence']`.
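The downloadable TSV files follow the same two-column layout. A minimal parsing sketch, assuming a tab-separated file with a `fileID`/`proof` header row (shown here with an in-memory sample rather than an actual downloaded file):

```python
import csv
import io

# A tiny in-memory stand-in for a downloaded TSV file; the row is an
# invented example, not an actual corpus entry.
sample_tsv = "fileID\tproof\n1203.00001\tLet MATH be the restriction of MATH to MATH.\n"

reader = csv.DictReader(io.StringIO(sample_tsv), delimiter="\t")
for row in reader:
    print(row["fileID"], row["proof"])
```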
Data Splits
There is currently no train/test split; all the data is in train.
Dataset Creation
We started with the LaTeX source of 1.6M papers that were submitted to arXiv.org between 1992 and April 2022.
The proofs were extracted using a Python script that simulates parts of LaTeX (including defining and expanding macros).
It does no actual typesetting, discards all output not between `\begin{proof}`...`\end{proof}`, and replaces mathematical content with placeholder tokens. During extraction:
- Math-mode formulas (signalled by `$`, `\begin{equation}`, etc.) become MATH
- `\ref{...}` and variants (`autoref`, `\subref`, etc.) become REF
- `\cite{...}` and variants (`\Citet`, `\shortciteNP`, etc.) become CITE
- Words that appear to be proper names become NAME
- `\item` becomes CASE:
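A rough sketch of this kind of token replacement (simplified regular expressions for illustration only; the actual extraction script interprets LaTeX rather than applying plain regexes):

```python
import re

# Simplified stand-ins for the replacements described above; the real
# extraction script handles macros, environments, and many more cases.
REPLACEMENTS = [
    (re.compile(r"\$[^$]*\$"), "MATH"),       # inline math, e.g. $f$
    (re.compile(r"\\ref\{[^}]*\}"), "REF"),   # \ref{...}
    (re.compile(r"\\cite\{[^}]*\}"), "CITE"), # \cite{...}
]

def simplify(text):
    for pattern, token in REPLACEMENTS:
        text = pattern.sub(token, text)
    return text

print(simplify(r"Let $f$ be as in \ref{thm:main}; see \cite{smith99}."))
# Let MATH be as in REF; see CITE.
```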
We then run a cleanup pass on the extracted proofs that includes:
- Cleaning up common extraction errors (e.g., due to uninterpreted macros)
- Replacing more references by REF (e.g., `Theorem 2(a)` or `Postulate (*)`)
- Replacing more citations with CITE (e.g., `Page 47 of CITE`)
- Replacing more proof-case markers with CASE: (e.g., `Case (a).`)
- Fixing a few common misspellings
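The cleanup substitutions can be sketched in the same style (the patterns below are hypothetical simplifications; the actual pass covers many more cases and the misspelling fixes):

```python
import re

# Hypothetical, simplified versions of the cleanup substitutions
# described above; not the actual cleanup script.
CLEANUP = [
    (re.compile(r"Theorem \d+(\([a-z]\))?"), "REF"),  # e.g. "Theorem 2(a)"
    (re.compile(r"Page \d+ of CITE"), "CITE"),        # e.g. "Page 47 of CITE"
    (re.compile(r"Case \([a-z]\)\."), "CASE:"),       # e.g. "Case (a)."
]

def clean(text):
    for pattern, replacement in CLEANUP:
        text = pattern.sub(replacement, text)
    return text

print(clean("Case (a). By Theorem 2(a) and Page 47 of CITE, we are done."))
# CASE: By REF and CITE, we are done.
```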
Additional Information
This dataset is released under the Creative Commons Attribution 4.0 licence.
Copyright for the actual proofs remains with the authors of the papers on arXiv.org, but these simplified snippets are fair use under US copyright law.