---
task_categories:
  - text-generation
language:
  - en
tags:
  - pretrain
size_categories:
  - 10B<n<100B
---

Top 30B-token SlimPajama subset selected by the Readability rater

This repository contains the dataset described in the paper Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models.

Code: https://github.com/opendatalab/Meta-rater

Dataset Description

This dataset contains the top 30B tokens from the SlimPajama-627B corpus, selected using the Readability dimension of the PRRC (Professionalism, Readability, Reasoning, Cleanliness) framework. Each document in this subset is scored and filtered by a ModernBERT-based rater fine-tuned to assess the clarity, coherence, and ease of understanding of the text.

  • Source: SlimPajama-627B Annotated Dataset
  • Selection: Top 30B tokens by PRRC-Readability score
  • Quality metric: Readability (0–5 scale, see below)
  • Annotation coverage: 100% of selected subset

Dataset Statistics

  • Total tokens: 30B (subset of SlimPajama-627B)
  • Selection method: Top-ranked by PRRC-Readability ModernBERT rater
  • Domains: Same as SlimPajama (CommonCrawl, C4, GitHub, Books, ArXiv, Wikipedia, StackExchange)
  • Annotation: Each document has a readability score (0–5)
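As a minimal sketch of working with a subset this size, the snippet below streams a few examples via the Hugging Face `datasets` library rather than downloading all 30B tokens. The repository id passed in is a placeholder (use the id shown on this dataset page), and the `"text"` field name is an assumption based on typical SlimPajama-derived layouts.

```python
def token_estimate(text: str) -> int:
    """Rough whitespace token count, handy for sanity-checking examples."""
    return len(text.split())


def stream_first_examples(repo_id: str, n: int = 3):
    """Stream the first `n` documents without a full download.

    `repo_id` is a placeholder for this dataset's actual Hub id; the
    "text" field name is an assumption, not confirmed by this card.
    """
    from datasets import load_dataset  # pip install datasets

    ds = load_dataset(repo_id, split="train", streaming=True)
    return [example["text"] for example in ds.take(n)]
```

Streaming keeps memory use flat regardless of corpus size, which matters for a 30B-token subset.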

Readability Quality Metric

Readability evaluates the clarity, coherence, and ease of understanding of the text. Higher scores indicate content that is clear, well-structured, and easy to follow, while lower scores reflect text that is difficult to comprehend due to poor structure, grammar, or vocabulary.

  • 0–1: Significant issues with clarity or coherence; difficult to read
  • 2–3: Generally clear but with some sections that are hard to understand
  • 4–5: Very clear, coherent, and easy to read
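The coarse bands above can be expressed as a small helper for filtering or reporting. The bucketing of fractional scores is an assumption here: the card lists the ranges 0–1, 2–3, and 4–5 without specifying boundary behavior.

```python
def readability_band(score: float) -> str:
    """Map a 0-5 readability score to the coarse bands described above.

    Boundary handling for fractional scores (e.g. 1.5) is an assumption;
    the card only describes the three integer ranges.
    """
    if not 0 <= score <= 5:
        raise ValueError("readability scores lie in [0, 5]")
    if score < 2:
        return "difficult to read"
    if score < 4:
        return "generally clear"
    return "very clear"
```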

Scores are assigned by a ModernBERT model fine-tuned on Llama-3.3-70B-Instruct annotations, as described in the Meta-rater paper.

Annotation Process

  • Initial annotation: Llama-3.3-70B-Instruct rated 500k+ SlimPajama samples for readability
  • Model training: ModernBERT fine-tuned on these annotations
  • Scoring: All SlimPajama documents scored by ModernBERT; top 30B tokens selected
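The final selection step above can be sketched as a greedy fill: rank documents by score and keep them until the token budget is reached. The tuple layout and the simple cutoff at the budget boundary are illustrative assumptions, not the actual Meta-rater pipeline code.

```python
def select_top_tokens(docs, token_budget):
    """Greedy top-score selection under a token budget.

    `docs` is an iterable of (score, n_tokens, doc_id) tuples -- an
    illustrative layout, not the real pipeline format. Documents are
    taken in descending score order until adding the next one would
    exceed the budget (exact boundary handling in the paper's pipeline
    is not specified on this card).
    """
    chosen, used = [], 0
    for score, n_tokens, doc_id in sorted(docs, key=lambda d: d[0], reverse=True):
        if used + n_tokens > token_budget:
            break
        chosen.append(doc_id)
        used += n_tokens
    return chosen
```

Applied with a 30B-token budget over readability-scored SlimPajama documents, this yields a subset like the one described here.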

Citation

If you use this dataset, please cite:

@article{zhuang2025meta,
  title={Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models},
  author={Zhuang, Xinlin and Peng, Jiahui and Ma, Ren and Wang, Yinfan and Bai, Tianyi and Wei, Xingjian and Qiu, Jiantao and Zhang, Chi and Qian, Ying and He, Conghui},
  journal={arXiv preprint arXiv:2504.14194},
  year={2025}
}

License

This dataset is released under the same license as the original SlimPajama dataset. See the original SlimPajama repository for details.

Contact


Made with ❤️ by the OpenDataLab team