---
language:
- en
license: apache-2.0
tags:
- wikipedia
- finewiki
- sampled
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: wikiname
    dtype: string
  - name: page_id
    dtype: int64
  - name: title
    dtype: string
  - name: url
    dtype: string
  - name: date_modified
    dtype: string
  - name: in_language
    dtype: string
  - name: wikidata_id
    dtype: string
  - name: bytes_html
    dtype: int64
  - name: wikitext
    dtype: string
  - name: version
    dtype: int64
  - name: infoboxes
    dtype: string
  - name: has_math
    dtype: bool
  splits:
  - name: train
    num_examples: 52721
---

# FineWiki Sampled Dataset (1,000,000,332 tokens)

This is a sampled subset of [HuggingFaceFW/finewiki](https://huggingface.co/datasets/HuggingFaceFW/finewiki) containing **1,000,000,332 GPT-2 tokens** (just over the 1B-token target).

## Dataset Details

### Source
- **Original Dataset**: HuggingFaceFW/finewiki (English subset, train split)
- **Sampling Method**: Reservoir sampling (unbiased random sampling)
- **Target Token Count**: 1,000,000,332 tokens
- **Tokenizer**: GPT-2 (50,257-token vocabulary)

### Sampling Statistics
- **Documents Sampled**: 52,721
- **Average Tokens/Doc**: ~18,967.8 (1,000,000,332 / 52,721)
- **Random Seed**: 42
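The per-document average follows directly from the two totals above:

```python
# Derive the average from the card's stated totals.
total_tokens = 1_000_000_332
num_docs = 52_721

avg_tokens_per_doc = total_tokens / num_docs
print(round(avg_tokens_per_doc, 1))  # 18967.8
```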

### Sampling Method

This dataset was created using **reservoir sampling**, which ensures:
- ✅ Unbiased random sample from the full dataset
- ✅ Every document has equal probability of being selected
- ✅ No distribution bias (early/late documents equally represented)
- ✅ Streaming-based (no need to download full dataset)

The sampling algorithm:
1. Streams through HuggingFaceFW/finewiki without downloading
2. Uses GPT-2 tokenizer to count tokens per document
3. Maintains a reservoir of documents using standard reservoir sampling
4. Stops when target token count is reached
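Step 3 is standard Algorithm R. A minimal, self-contained sketch (illustrative only — the actual build script additionally counts GPT-2 tokens per document and applies the token-budget stop; `reservoir_sample` is a hypothetical name, not the real code):

```python
import random

def reservoir_sample(stream, k, seed=42):
    """Algorithm R: uniform random sample of k items from a stream of
    unknown length, in a single pass with O(k) memory."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir first
        else:
            j = rng.randint(0, i)       # inclusive upper bound
            if j < k:
                reservoir[j] = item     # replace with probability k/(i+1)
    return reservoir

sample = reservoir_sample(range(10_000), k=5)
print(sample)
```

Because every item ends up in the reservoir with equal probability k/n, early and late documents in the stream are equally represented, which is the "no distribution bias" property claimed above.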

## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("codelion/finewiki-1B")

# Access the training data
for example in dataset['train']:
    print(example['text'])
    print(example['title'])
    print(example['url'])
```

## Dataset Structure

Each example contains all fields from the original FineWiki dataset:

- **text** (string): The Wikipedia article text (primary content)
- **id** (string): Unique identifier
- **wikiname** (string): Wikipedia source name
- **page_id** (int64): Wikipedia page ID
- **title** (string): Article title
- **url** (string): Source Wikipedia URL
- **date_modified** (string): Last modification date
- **in_language** (string): Language code (always 'en' for this subset)
- **wikidata_id** (string): Wikidata identifier
- **bytes_html** (int64): Size of HTML content
- **wikitext** (string): Original wikitext markup
- **version** (int64): Article version number
- **infoboxes** (string): Extracted infobox data
- **has_math** (bool): Whether article contains mathematical formulas
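As an illustration, the `has_math` flag makes it easy to carve out a math-only slice. Shown here on plain dicts with made-up titles; the same predicate works with the `datasets` library's `.filter`:

```python
# Toy records mimicking the dataset schema (titles are invented).
examples = [
    {"title": "Pythagorean theorem", "has_math": True},
    {"title": "History of tea", "has_math": False},
    {"title": "Fourier series", "has_math": True},
]

math_only = [ex for ex in examples if ex["has_math"]]
print([ex["title"] for ex in math_only])  # ['Pythagorean theorem', 'Fourier series']
```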

## Use Cases

This sampled dataset is ideal for:
- 🔬 Small-scale language model pretraining experiments
- 📊 Dataset composition studies
- ⚡ Quick prototyping and testing
- 💰 Low-cost training runs

## Citation

If you use this dataset, please cite:

```bibtex
@misc{sharma2025billion,
  title={The 1 Billion Token Challenge: Finding the Perfect Pre-training Mix},
  author={Sharma, Asankhaya},
  year={2025},
  url={https://huggingface.co/blog/codelion/optimal-dataset-mixing/}
}
```

For more details, see the [blog post](https://huggingface.co/blog/codelion/optimal-dataset-mixing/).

## License

Apache 2.0 (same as original FineWiki dataset)

## Dataset Card Authors

CodeLion

## Dataset Card Contact

For questions or issues, please open an issue on the dataset repository.