---
license: mit
task_categories:
  - text-classification
  - object-detection
language:
  - en
size_categories:
  - 10K<n<100K
---

# Mathematical Documents Dataset

This dataset contains 36,661 scientific documents with OCR-extracted text and per-document mathematical-content probability scores. The documents were filtered from the CommonCrawl PDF corpus based on that probability.

## Quick Start

```python
import json

# Iterate over metadata.jsonl; each line describes one document
with open("metadata.jsonl") as f:
    for line in f:
        doc = json.loads(line)
        doc_id = doc["doc_id"]

        # Extracted text for each page lives at
        # texts/{doc_id}/page_1.md, page_2.md, ...
        with open(f"texts/{doc_id}/page_1.md") as page:
            print(page.read())
        break
```

## Dataset Structure

```
math-docs-dataset/
├── metadata.jsonl           # Document metadata with probability scores
├── metadata_updated.jsonl   # Updated metadata (if applicable)
├── token_counts.jsonl       # Token counts per document
├── token_stats.json         # Aggregate token statistics
├── texts/                   # OCR-extracted text (2.5GB)
│   ├── {doc_id}/
│   │   ├── page_1.md
│   │   ├── page_2.md
│   │   └── ...
└── samples/                 # 50 sample documents for preview
    ├── pdfs/
    │   └── {doc_id}.pdf
    ├── texts/
    │   └── {doc_id}/
    └── sample_metadata.jsonl
```
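
This layout can be traversed directly; a minimal sketch that counts the extracted pages per document by walking texts/:

```python
from pathlib import Path

# Count extracted pages per document by walking texts/
page_counts = {
    doc_dir.name: len(list(doc_dir.glob("page_*.md")))
    for doc_dir in Path("texts").iterdir()
    if doc_dir.is_dir()
}

print(f"{len(page_counts):,} documents, {sum(page_counts.values()):,} pages on disk")
```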

## Statistics

  • Total documents: 36,661
  • Total pages: 885,333
  • Average pages per document: 24.1
  • Mean probability range: [0.8007, 1.0000]
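
These figures can be reproduced from metadata.jsonl; a minimal sketch using the fields documented under Metadata Fields below:

```python
import json

# Recompute the headline statistics from metadata.jsonl
with open("metadata.jsonl") as f:
    docs = [json.loads(line) for line in f]

total_pages = sum(d["num_pages"] for d in docs)
probas = [d["mean_proba"] for d in docs]

print(f"Total documents: {len(docs):,}")
print(f"Total pages: {total_pages:,}")
print(f"Average pages per document: {total_pages / len(docs):.1f}")
print(f"Mean probability range: [{min(probas):.4f}, {max(probas):.4f}]")
```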

### Token Statistics

  • Total tokens: 756,843,504
  • Average tokens per document: 20,644
  • Average tokens per page: 854

Token counts were calculated with tiktoken using the cl100k_base encoding (the GPT-4 tokenizer).
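
The counts can be reproduced page by page with the same encoding; a minimal sketch for a single page (substitute a real doc_id from metadata.jsonl):

```python
import tiktoken

# cl100k_base is the encoding used for the published counts
enc = tiktoken.get_encoding("cl100k_base")

doc_id = "..."  # substitute a real doc_id from metadata.jsonl
with open(f"texts/{doc_id}/page_1.md") as f:
    text = f.read()

print(f"{len(enc.encode(text))} tokens on this page")
```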

## Accessing Full PDFs

Due to size constraints, full PDF files (30+ GB) are hosted on Wasabi S3 storage.

### Download All PDFs

```bash
# Install AWS CLI if needed
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install -i ~/.local/aws-cli -b ~/.local/bin

# Download PDFs (no authentication required)
aws s3 sync s3://igor-bucket/math_docs_dataset/pdfs/ ./pdfs/ \
  --endpoint-url=https://s3.eu-central-1.wasabisys.com \
  --no-sign-request
```

### Download a Specific PDF

```bash
# Download a single document
aws s3 cp s3://igor-bucket/math_docs_dataset/pdfs/{doc_id}.pdf ./pdfs/ \
  --endpoint-url=https://s3.eu-central-1.wasabisys.com \
  --no-sign-request
```
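
If you prefer Python to the AWS CLI, an unsigned boto3 client against the same endpoint works as well; a minimal sketch:

```python
import os

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unsigned client: the bucket is public, so no credentials are needed
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.eu-central-1.wasabisys.com",
    config=Config(signature_version=UNSIGNED),
)

doc_id = "..."  # substitute a real doc_id from metadata.jsonl
os.makedirs("pdfs", exist_ok=True)
s3.download_file("igor-bucket", f"math_docs_dataset/pdfs/{doc_id}.pdf", f"pdfs/{doc_id}.pdf")
```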

### Preview Samples

50 sample PDFs are included in the samples/ directory for preview without downloading the full dataset.
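
A quick way to browse them, assuming sample_metadata.jsonl carries the same fields as metadata.jsonl:

```python
import json
from pathlib import Path

# List the bundled samples without touching the S3 bucket
with open("samples/sample_metadata.jsonl") as f:
    samples = [json.loads(line) for line in f]

for doc in samples[:5]:
    pdf = Path("samples/pdfs") / f"{doc['doc_id']}.pdf"
    print(doc["doc_id"], doc.get("mean_proba"), pdf.exists())
```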

## Metadata Fields

Each entry in metadata.jsonl contains:

  • doc_id: Unique document identifier
  • pdf_path: Relative path to PDF file
  • num_pages: Number of pages in the document
  • mean_proba: Mean probability that document contains mathematical content
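
To see the shape of a record, print the first line; the values in the comment below are hypothetical, only the keys come from the list above:

```python
import json

# Inspect the first metadata record
with open("metadata.jsonl") as f:
    print(json.loads(next(f)))

# Expected shape (values hypothetical):
# {"doc_id": "abc123", "pdf_path": "pdfs/abc123.pdf",
#  "num_pages": 24, "mean_proba": 0.9731}
```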

## Data Collection

  1. Source: CommonCrawl PDF corpus
  2. Filtering: Documents classified by mathematical content probability
  3. Text Extraction: doct.ocr

## Usage Examples

### Load and Process Documents

```python
import json
from pathlib import Path

# Load metadata
docs = []
with open("metadata.jsonl") as f:
    for line in f:
        docs.append(json.loads(line))

# Filter high-quality math documents
high_quality = [d for d in docs if d['mean_proba'] > 0.95]
print(f"Found {len(high_quality)} high-quality documents")

# Read document text
def read_document(doc_id):
    text_dir = Path(f"texts/{doc_id}")
    full_text = []

    for page_file in sorted(text_dir.glob("page_*.md")):
        with open(page_file) as f:
            full_text.append(f.read())

    return "\n\n".join(full_text)

# Example usage
doc = high_quality[0]
text = read_document(doc['doc_id'])
print(f"Document {doc['doc_id']}: {len(text)} characters")
```

### Token Analysis

```python
import json

# Load token statistics
with open("token_stats.json") as f:
    stats = json.load(f)
    print(f"Total tokens: {stats['total_tokens']:,}")
    print(f"Avg tokens/doc: {stats['avg_tokens_per_doc']:.0f}")

# Load per-document token counts
with open("token_counts.jsonl") as f:
    for line in f:
        doc_tokens = json.loads(line)
        # Process individual document token counts
        break
```
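
Beyond the aggregates, the per-document counts support simple distribution analysis. The field name for the count is not documented above; the sketch below assumes a total_tokens key, so check one line of token_counts.jsonl and adjust:

```python
import json
import statistics

# Per-document token counts; "total_tokens" is an assumed field name
with open("token_counts.jsonl") as f:
    counts = sorted(json.loads(line)["total_tokens"] for line in f)

print(f"median: {statistics.median(counts):,.0f}")
print(f"p90:    {counts[int(0.9 * len(counts))]:,}")
print(f"max:    {counts[-1]:,}")
```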

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{math_docs_dataset,
  title={Mathematical Documents Dataset},
  author={Your Name},
  year={2025},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/your-username/math-docs-dataset}
}
```

## License

This dataset is released under the MIT License.

## Contact

For questions or issues, please open an issue on the dataset repository.