joelniklaus (HF Staff) committed
Commit 11b8866 · verified · 1 Parent(s): 49b09cb

Add dataset card

Files changed (1):
  1. README.md +63 -24
README.md CHANGED
@@ -1,29 +1,68 @@
 ---
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
 dataset_info:
   features:
-  - name: text
-    dtype: string
-  - name: id
-    dtype: string
-  - name: url
-    dtype: string
-  - name: language
-    dtype: string
-  - name: language_score
-    dtype: float64
-  - name: fasttext_score
-    dtype: float64
-  - name: dataset
-    dtype: string
+  - name: text
+    dtype: string
+  - name: id
+    dtype: string
+  - name: url
+    dtype: string
+  - name: language
+    dtype: string
+  - name: language_score
+    dtype: float64
+  - name: fasttext_score
+    dtype: float64
+  - name: dataset
+    dtype: string
   splits:
-  - name: train
-    num_bytes: 522435902444
-    num_examples: 89269902
-  download_size: 315974473934
-  dataset_size: 522435902444
+  - name: train
+    num_examples: 89269902
+license: odc-by
+language:
+- en
+size_categories:
+- 10M<n<100M
+tags:
+- pretraining
+- smol-data
+pretty_name: DCLM 100BT (Shuffled)
 ---
+
+# DCLM 100BT (Shuffled)
+
+A globally shuffled version of [HuggingFaceFW/dclm_100BT](https://huggingface.co/datasets/HuggingFaceFW/dclm_100BT).
+
+Part of the [Smol-Data](https://huggingface.co/collections/HuggingFaceFW/smol-data) collection of tried-and-tested mixes for strong pretraining.
+
+## Dataset Description
+
+This dataset contains the same ~100B tokens as [dclm_100BT](https://huggingface.co/datasets/HuggingFaceFW/dclm_100BT), but with all documents globally shuffled (seed=42). Use this version when you need randomized document ordering for pretraining.
+
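For scale, the split metadata pins the document count, and a quick back-of-the-envelope division (treating the "~100B tokens" figure as if it were exact, which it is not) gives the average document length:

```python
# Rough scale check using the counts from the card.
num_examples = 89_269_902   # num_examples from dataset_info.splits
approx_tokens = 100e9       # the card's approximate "~100B tokens"

avg_tokens_per_doc = approx_tokens / num_examples
print(round(avg_tokens_per_doc))  # roughly 1120 tokens per document
```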
+## How It Was Created
+
+The unshuffled dataset was loaded into memory, shuffled with `dataset.shuffle(seed=42)`, and re-uploaded in 100 shards. See the [smol_data.py](https://github.com/huggingface/datatrove/blob/main/examples/smol_data.py) script for details.
+
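The actual job ran `datasets.Dataset.shuffle` over the full corpus; the core idea (one seeded global permutation, then a split into 100 shards) can be sketched with the standard library on a toy list of document ids. Everything below is illustrative, not the actual script:

```python
import random

# Toy stand-in for the ~89M real documents.
doc_ids = list(range(1_000))

# One global, seeded permutation: every document can move anywhere in the
# order, unlike a streaming buffer shuffle. The card uses seed=42.
rng = random.Random(42)
rng.shuffle(doc_ids)

# Split the shuffled order into 100 contiguous, roughly equal shards,
# mirroring the 100-shard re-upload.
num_shards = 100
shard_size = -(-len(doc_ids) // num_shards)  # ceiling division
shards = [doc_ids[i * shard_size:(i + 1) * shard_size] for i in range(num_shards)]

print(len(shards), len(shards[0]))  # 100 shards of 10 ids each
```

Because the seed is fixed, rerunning this produces the identical order, which is what makes the shuffled upload reproducible.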
+## Usage
+
+```python
+from datasets import load_dataset
+
+ds = load_dataset("HuggingFaceFW/dclm_100BT-shuffled", split="train", streaming=True)
+for sample in ds:
+    print(sample["text"][:200])
+    break
+```
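Because the documents are already globally shuffled, plain sequential streaming as above yields randomized order for free. With an unshuffled source you would need an approximate buffer shuffle instead, which can only move a document within its buffer window. A small stdlib illustration of that limitation (the `buffer_shuffle` helper is hypothetical, not the `datasets` implementation):

```python
import random

def buffer_shuffle(stream, buffer_size, seed=42):
    # Hypothetical helper: keep a fixed-size buffer and emit a randomly
    # chosen buffered element each time a new one arrives.
    rng = random.Random(seed)
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) > buffer_size:
            yield buf.pop(rng.randrange(len(buf)))
    rng.shuffle(buf)
    yield from buf

out = list(buffer_shuffle(range(10_000), buffer_size=100))
# The i-th emitted item can only come from the first i + buffer_size + 1
# inputs, so early outputs are never drawn from late in the stream:
print(max(out[:50]))  # always < 151, far from a uniform draw over 10,000
```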
+
+## Citation
+
+```bibtex
+@misc{niklaus2025dclm100bt,
+  title={DCLM 100BT},
+  author={Joel Niklaus and Hynek Kydl{\'\i}{\v{c}}ek},
+  year={2025},
+  publisher={Hugging Face},
+  howpublished={\url{https://huggingface.co/datasets/HuggingFaceFW/dclm_100BT-shuffled}}
+}
+```