---
license: apache-2.0
task_categories:
- text-generation
- fill-mask
language:
- en
size_categories:
- 1B<n<10B
---
# LM1B - One Billion Word Benchmark

## Dataset Description
The One Billion Word Benchmark is a large language modeling dataset. It contains approximately one billion words of training data derived from news articles.
## How was this dataset built?

The dataset was built by downloading the full LM1B dataset from TensorFlow Datasets (TFDS) and automatically converting it to HuggingFace format. The full script is in lm1b.py. The required environment is:
- tensorflow==2.20.0
- tensorflow-datasets==4.9.9
- huggingface_hub==1.3.3
- datasets==4.4.1
```bash
pip install tensorflow==2.20.0 tensorflow-datasets==4.9.9 huggingface_hub==1.3.3 datasets==4.4.1
python lm1b_builder.py --action all
```
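The core of such a conversion is decoding TFDS examples into HuggingFace rows. Below is a minimal, hypothetical sketch of that step, assuming TFDS yields `{'text': bytes}` examples; the function name and mock data are illustrative and not taken from lm1b.py:

```python
# Hypothetical sketch of the TFDS -> HuggingFace conversion step.
# TFDS stores text features as UTF-8 bytes, while HuggingFace
# `datasets` expects plain str values, so each example is decoded
# before being emitted as a row.

def tfds_examples_to_rows(examples):
    """Convert TFDS-style {'text': bytes} examples into HF-style {'text': str} rows."""
    for ex in examples:
        text = ex["text"]
        if isinstance(text, bytes):
            text = text.decode("utf-8")  # decode UTF-8 bytes to str
        yield {"text": text}

# Mock TFDS output, for illustration only:
mock_examples = [{"text": b"Hello world ."}, {"text": b"Second sentence ."}]
rows = list(tfds_examples_to_rows(mock_examples))
```

In a real conversion script, a generator like this could be passed to `datasets.Dataset.from_generator` to materialize the HuggingFace dataset before pushing it to the Hub.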
## Dataset Structure

### Data Fields

- `text`: a string containing the text content.

### Data Splits
| Split | Examples |
|---|---|
| train | 30,301,028 |
| test | 306,688 |
## Citation

```bibtex
@inproceedings{chelba2013one,
  title={One billion word benchmark for measuring progress in statistical language modeling},
  author={Chelba, Ciprian and Mikolov, Tomas and Schuster, Mike and Ge, Qi and Brants, Thorsten and Koehn, Phillipp and Robinson, Tony},
  booktitle={Interspeech},
  year={2014}
}
```
## License

Apache 2.0