
The Pretokenized Dolma Dataset

A pre-tokenized, pre-shuffled version of Dolma, the high-quality text corpus from AI2. This dataset is designed to be plug-and-play with the pico-train library.

Overview

Key Features:

  • Tokenized with allenai/OLMo-7B-0724-hf, a BPE tokenizer with a vocabulary size of 50,280
  • Sequence length: 2049 tokens (2048 + 1 for next-token prediction)
  • Sharded into 10,000 Parquet files (~78MB each)
  • 420B tokens in total (enough to train a model for 200K steps at batch size 1024 and sequence length 2048)
  • Ready for streaming via datasets.load_dataset(..., streaming=True)
  • Pre-shuffling ensures that the order in which data is shown to models is consistent across training runs
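The "2048 + 1" sequence length exists so that each stored sequence yields an aligned input/target pair for next-token prediction. A minimal sketch (the `seq` below is a stand-in for one `input_ids` entry, not real data):

```python
# One pre-tokenized sequence of length 2049 (illustrative values, not real tokens).
seq = list(range(2049))

# Shift by one position: the model sees token i and predicts token i + 1.
inputs, labels = seq[:-1], seq[1:]

assert len(inputs) == len(labels) == 2048
```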

How it was built

We first downloaded the full Dolma corpus and selected a random 30% subset for preprocessing. Using the OLMo tokenizer, we tokenized the text and chunked it into sequences of 2049 tokens, with an end-of-sequence token separating consecutive documents.
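The tokenize-and-chunk step can be sketched as follows. This is a hypothetical illustration, not the released pipeline: `toy_tokenize` and `EOS_ID` are stand-ins for the real OLMo tokenizer and its end-of-sequence id.

```python
EOS_ID = 50279   # assumption: an EOS id within the 50,280-token vocabulary
SEQ_LEN = 2049   # sequence length used by this dataset

def chunk_documents(docs, tokenize, seq_len=SEQ_LEN):
    """Tokenize docs, append EOS after each, and keep only full-length chunks."""
    stream = []
    for doc in docs:
        stream.extend(tokenize(doc))
        stream.append(EOS_ID)          # document separator
    # Discard the trailing partial chunk, as the dataset retains only
    # full-length sequences.
    return [stream[i:i + seq_len]
            for i in range(0, len(stream) - seq_len + 1, seq_len)]

# Toy tokenizer for demonstration only.
toy_tokenize = lambda text: [ord(c) % 100 for c in text]
chunks = chunk_documents(["hello world"] * 400, toy_tokenize)
assert all(len(c) == SEQ_LEN for c in chunks)
```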

After tokenization, we shuffled and evenly sampled from the token stream to create 100 uniform shards. These were then further divided into 10,000 smaller shards to support fast loading and parallel training. Only full-length sequences are retained to ensure consistency across samples.
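The shuffle-then-shard logic above can be sketched with plain Python. The real pipeline writes Parquet files; this only shows how a shuffled stream of sequences can be split evenly across shards (the strided split is an illustrative choice, not the released implementation):

```python
import random

def shard_sequences(sequences, n_shards, seed=0):
    """Shuffle sequences once, then split them evenly into n_shards lists."""
    rng = random.Random(seed)          # fixed seed keeps the order reproducible
    rng.shuffle(sequences)
    # Strided slicing distributes the shuffled stream evenly across shards.
    return [sequences[i::n_shards] for i in range(n_shards)]

shards = shard_sequences([[i] for i in range(1000)], n_shards=10)
assert len(shards) == 10 and all(len(s) == 100 for s in shards)
```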

The dataset is stored as Parquet files, each containing token sequences under the key input_ids.

We release the exact scripts used to create this dataset in our pico-lm/pico-dataset GitHub repo.

Usage

from datasets import load_dataset
dataset = load_dataset("pico-lm/pretokenized-dolma", streaming=True)
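Because the dataset streams, examples arrive one at a time and must be grouped into fixed-size batches for training. A hedged sketch of that grouping; `stream` here is a plain generator standing in for the IterableDataset returned above:

```python
from itertools import islice

def batches(stream, batch_size):
    """Group streamed examples into fixed-size batches, dropping a ragged tail."""
    it = iter(stream)
    while batch := list(islice(it, batch_size)):
        if len(batch) < batch_size:    # incomplete final batch
            return
        yield batch

# Stand-in for the streamed dataset: dicts with 2049-token sequences.
stream = ({"input_ids": [0] * 2049} for _ in range(2500))
first = next(batches(stream, batch_size=1024))
assert len(first) == 1024
```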