---
license: apache-2.0
task_categories:
  - text-generation
  - fill-mask
language:
  - en
size_categories:
  - 10M<n<100M
---

# LM1B - One Billion Word Benchmark

## Dataset Description

The One Billion Word Benchmark (LM1B) is a large-scale language modeling corpus.
It contains roughly one billion words of English training data derived from the WMT 2011 News Crawl.

## How was this dataset built?

We download the full LM1B dataset from TensorFlow Datasets (TFDS) and automatically convert it to the Hugging Face `datasets` format; the full script is in `lm1b_builder.py`. The required environment is:
- tensorflow==2.20.0
- tensorflow-datasets==4.9.9
- huggingface_hub==1.3.3
- datasets==4.4.1

```bash
pip install tensorflow==2.20.0 tensorflow-datasets==4.9.9 huggingface_hub==1.3.3 datasets==4.4.1
python lm1b_builder.py --action all
```
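
The conversion itself is simple: iterate over each TFDS split and re-emit every example as a `{"text": ...}` record. A minimal sketch of that core logic, assuming the pinned versions above (the actual `lm1b_builder.py` may handle sharding and retries differently; the repo id is a placeholder):

```python
import tensorflow_datasets as tfds
from datasets import Dataset, DatasetDict

def gen_examples(split):
    # TFDS yields dicts with a single 'text' feature stored as a bytes tensor.
    for ex in tfds.load("lm1b", split=split):
        yield {"text": ex["text"].numpy().decode("utf-8")}

ds = DatasetDict({
    split: Dataset.from_generator(gen_examples, gen_kwargs={"split": split})
    for split in ("train", "test")
})

# Placeholder repo id; replace with the actual destination repository.
ds.push_to_hub("your-username/lm1b")
```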

## Dataset Structure

### Data Fields

- `text`: A string containing a single sentence of news text
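
For example, streaming the first record to inspect the schema (the repo id is a placeholder for wherever this dataset is hosted):

```python
from datasets import load_dataset

# Streaming avoids downloading all shards just to peek at one example.
ds = load_dataset("your-username/lm1b", split="train", streaming=True)
print(next(iter(ds)))  # {'text': '...a single news sentence...'}
```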

### Data Splits

| Split | Examples |
|-------|----------|
| train | 30,301,028 |
| test | 306,688 |
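
After a full (non-streaming) download, the split sizes can be checked directly; placeholder repo id again:

```python
from datasets import load_dataset

# Note: this downloads the entire dataset, which is several GB.
ds = load_dataset("your-username/lm1b")
print({name: ds[name].num_rows for name in ds})
# {'train': 30301028, 'test': 306688}
```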

## Citation

```bibtex
@inproceedings{chelba2013one,
  title={One billion word benchmark for measuring progress in statistical language modeling},
  author={Chelba, Ciprian and Mikolov, Tomas and Schuster, Mike and Ge, Qi and Brants, Thorsten and Koehn, Phillipp and Robinson, Tony},
  booktitle={Interspeech},
  year={2014}
}
```

## License

Apache 2.0