
Dataset Card for the ProofLang Corpus

Dataset Summary

The ProofLang Corpus includes 3.7M proofs (558 million words) mechanically extracted from papers that were posted on arXiv.org between 1992 and 2020. The focus of this corpus is proofs, rather than the explanatory text that surrounds them, and more specifically on the language used in such proofs. Specific mathematical content is filtered out, resulting in sentences such as "Let MATH be the restriction of MATH to MATH."

This dataset reflects how people prefer to write (non-formalized) proofs, and is also amenable to statistical analyses and experiments with Natural Language Processing (NLP) techniques. We hope it can serve as an aid in the development of language-based proof assistants and proof checkers for professional and educational purposes.

Dataset Structure

There are multiple TSV versions of the data. Primarily, proofs divides up the data proof-by-proof, and sentences further divides up the same data sentence-by-sentence. The raw dataset is a less-cleaned-up version of proofs. More usefully, the tags dataset gives arXiv subject tags for each paper ID found in the other data files.

  • The data in proofs (and raw) consists of a paper ID (identifying where the proof was extracted from), and the proof as a string.

  • The data in sentences consists of a paper ID, and the sentence as a string.

  • The data in tags consists of a paper ID, and the arXiv subject tags for that paper as a single comma-separated string.

Further metadata about papers can be queried from arXiv.org using the paper ID.

In particular, each paper <id> in the dataset can be accessed online at the URL https://arxiv.org/abs/<id>.
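As a minimal sketch, the URL can be built from a paper ID like so (the example ID below is illustrative, not necessarily a paper in the corpus):

```python
# Build the arXiv abstract URL for a given paper ID from the dataset.
# Real IDs look like "1706.03762" or "math/9201254".
def arxiv_url(paper_id: str) -> str:
    return f"https://arxiv.org/abs/{paper_id}"

print(arxiv_url("1706.03762"))  # https://arxiv.org/abs/1706.03762
```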

Dataset Size

  • proofs is 3,094,779,182 bytes (unzipped) and has 3,681,893 examples.
  • sentences is 3,545,309,822 bytes (unzipped) and has 38,899,132 examples.
  • tags is 7,967,839 bytes (unzipped) and has 328,642 rows.
  • raw is 3,178,997,379 bytes (unzipped) and has 3,681,903 examples.

Dataset Statistics

  • The average length of sentences is 14.1 words.

  • The average length of proofs is 10.5 sentences.
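Counts like these can be reproduced by averaging whitespace-separated word counts per sentence. The sketch below runs on two invented sample sentences (not real corpus rows), but the same arithmetic applies when streaming the sentences config:

```python
# Average words per sentence, computed on an invented two-sentence sample.
# Tokens such as MATH, REF, and CITE count as single words.
sample = [
    "Let MATH be the restriction of MATH to MATH .",
    "By REF , the claim follows from CITE .",
]
avg_words = sum(len(s.split()) for s in sample) / len(sample)
print(avg_words)  # 9.5
```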

Dataset Usage

Data can be downloaded as (zipped) TSV files.
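Once unzipped, a file can be parsed with the standard library's csv module. The header and row below are invented stand-ins for real file contents; for a downloaded file, pass an open file handle instead of a StringIO:

```python
import csv
import io

# Stand-in for the contents of an unzipped TSV file (assumed layout:
# one header line, tab-separated "paper" and "proof" columns).
tsv_text = "paper\tproof\n1234.5678\tLet MATH be arbitrary . Then REF applies .\n"
rows = list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))
print(rows[0]["paper"], rows[0]["proof"])
```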

Accessing the data programmatically from Python is also possible using the Datasets library. For example, to print the first 10 proofs:

from datasets import load_dataset
dataset = load_dataset('proofcheck/prooflang', 'proofs', split='train', streaming=True)
for d in dataset.take(10):
    print(d['paper'], d['proof'])

To look at individual sentences from the proofs,

dataset = load_dataset('proofcheck/prooflang', 'sentences', split='train', streaming=True)
for d in dataset.take(10):
    print(d['paper'], d['sentence'])

To get a comma-separated list of arXiv subject tags for each paper,

from datasets import load_dataset
dataset = load_dataset('proofcheck/prooflang', 'tags', split='train', streaming=True)
for d in dataset.take(10):
    print(d['paper'], d['tags'])
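Since the tags field is a single comma-separated string, splitting it recovers the individual subject tags. The row below is an invented example with the same field names as the dataset:

```python
# Split a comma-separated tags string into a list of arXiv subject tags.
# This row is a made-up example, not an actual corpus entry.
row = {"paper": "1234.5678", "tags": "math.CO,math.PR"}
tag_list = row["tags"].split(",")
print(tag_list)  # ['math.CO', 'math.PR']
```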

Finally, to look at a version of the proofs with less aggressive cleanup (straight from the LaTeX extraction),

dataset = load_dataset('proofcheck/prooflang', 'raw', split='train', streaming=True)
for d in dataset.take(10):
    print(d['paper'], d['proof'])

Data Splits

There is currently no train/test split; all the data is in train.

Dataset Creation

We started with the LaTeX source of 1.6M papers that were submitted to arXiv.org between 1992 and April 2022.

The proofs were extracted using a Python script simulating parts of LaTeX (including defining and expanding macros). It does no actual typesetting, throws away output not between \begin{proof}...\end{proof}, and skips math content. During extraction,

  • Math-mode formulas (signalled by $, \begin{equation}, etc.) become MATH
  • \ref{...} and variants (autoref, \subref, etc.) become REF
  • \cite{...} and variants (\Citet, \shortciteNP, etc.) become CITE
  • Words that appear to be proper names become NAME
  • \item becomes CASE:
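A heavily simplified sketch of these substitutions is below. The real extractor simulates LaTeX macro expansion and handles far more cases (display environments, \autoref, \Citet variants, name detection, and so on); the regexes here only illustrate the token replacements listed above:

```python
import re

def simplify(latex: str) -> str:
    # Inline math-mode formulas become MATH (display environments omitted here).
    latex = re.sub(r"\$[^$]*\$", "MATH", latex)
    # \ref{...} becomes REF; \cite{...} becomes CITE (variants omitted here).
    latex = re.sub(r"\\ref\{[^}]*\}", "REF", latex)
    latex = re.sub(r"\\cite\{[^}]*\}", "CITE", latex)
    # \item becomes CASE:
    latex = latex.replace(r"\item", "CASE:")
    return latex

print(simplify(r"Let $x$ be as in \ref{thm:main}."))
# Let MATH be as in REF.
```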

We then run a cleanup pass on the extracted proofs that includes

  • Cleaning up common extraction errors (e.g., due to uninterpreted macros)
  • Replacing more references with REF, e.g., Theorem 2(a) or Postulate (*)
  • Replacing more citations with CITE, e.g., Page 47 of CITE
  • Replacing more proof-case markers with CASE:, e.g., Case (a).
  • Fixing a few common misspellings

Additional Information

This dataset is released under the Creative Commons Attribution 4.0 licence.

Copyright for the actual proofs remains with the authors of the papers on arXiv.org, but these simplified snippets are fair use under US copyright law.
