---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: dump
    dtype: string
  - name: url
    dtype: string
  - name: date
    dtype: string
  - name: file_path
    dtype: string
  - name: offset
    dtype: int64
  - name: token_count
    dtype: int64
  - name: language
    dtype: string
  - name: page_average_lid
    dtype: string
  - name: page_average_lid_score
    dtype: float64
  - name: full_doc_lid
    dtype: string
  - name: full_doc_lid_score
    dtype: float64
  - name: per_page_languages
    list: string
  - name: is_truncated
    dtype: bool
  - name: extractor
    dtype: string
  - name: page_ends
    list: int64
  splits:
  - name: train
    num_bytes: 413952746
    num_examples: 18616
  download_size: 205109157
  dataset_size: 413952746
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---


## Sampling Methodology

This dataset was created using **reservoir sampling**, a single-pass, statistically unbiased sampling algorithm that gives every record in the source dataset an equal probability of being included. This makes the 100M-token sample representative of the full dataset's characteristics.

**Source Dataset**: [HuggingFaceFW/finepdfs](https://huggingface.co/datasets/HuggingFaceFW/finepdfs)
**Sample Size**: 100M tokens
**Content**: High-quality textbook-style PDFs

Reservoir sampling enables rapid experimentation and ablation studies without processing the entire source dataset, while maintaining statistical validity of results.
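As a concrete illustration, here is a minimal sketch of the classic reservoir sampling procedure (Algorithm R). The function name and the example stream are illustrative only, not the actual pipeline used to build this dataset:

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Return k items drawn uniformly at random from an iterable of unknown length."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Replace a random slot so each item survives with probability k / (i + 1).
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

# Example: draw 10 documents from a stream of 1,000 without holding them all.
sample = reservoir_sample(range(1000), k=10, seed=0)
```

Because the algorithm needs only one pass and O(k) memory, it can draw a fixed-size sample from a dataset far too large to load at once.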

For details on how this dataset was used in optimal pre-training data composition research, see the [blog post](https://huggingface.co/blog/codelion/optimal-dataset-mixing/).

## Citation

If you use this dataset, please cite:

```bibtex
@misc{sharma2025billion,
  title={The 1 Billion Token Challenge: Finding the Perfect Pre-training Mix},
  author={Sharma, Asankhaya},
  year={2025},
  url={https://huggingface.co/blog/codelion/optimal-dataset-mixing/}
}
```