krisbailey committed
Commit 5b4a1a0 · verified · 1 Parent(s): 3169d1c

Update README.md

Files changed (1)
  1. README.md +67 -60
README.md CHANGED
---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- falcon
- refinedweb
- 1B
- parquet
- web-refined
- text-generation
- clean-web-corpus
- llm-pretrain
- domain-agnostic
- sentence-quality-filtered
- huggingface-refinedweb
size_categories:
- 1B<n<10B
---

# Falcon RefinedWeb 1B

## Dataset Description
This is a **1.01 billion token** subset of the [tiiuae/falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) dataset. It was created by streaming the source dataset through a shuffle buffer to obtain a random, representative sample of the web data.

## Motivation
RefinedWeb is a high-quality filtered web dataset, but the full version is massive. This 1B-token slice provides a convenient testbed for evaluating model-architecture changes or for curriculum-learning experiments.

## Dataset Details
- **Total Tokens:** 1,005,000,041 (~1.01B)
- **Source:** [tiiuae/falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- **Method:** Streamed with a shuffle buffer (size=5000)
- **Format:** Parquet (Snappy compression)
- **Producer:** Kris Bailey (kris@krisbailey.com)

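The shuffle-buffer sampling named under **Method** works roughly as follows; this is a minimal pure-Python sketch (the `shuffle_buffer` helper is hypothetical — the actual subset was presumably produced with the `datasets` streaming API, not this code):

```python
import random

def shuffle_buffer(stream, buffer_size, seed=0):
    """Approximately shuffle a stream with a fixed-size buffer,
    in the spirit of datasets' .shuffle(buffer_size=...)."""
    rng = random.Random(seed)
    buffer = []
    for item in stream:
        if len(buffer) < buffer_size:
            buffer.append(item)  # fill the buffer first
        else:
            # Swap the incoming item with a random buffered one
            # and emit the evicted item.
            i = rng.randrange(buffer_size)
            buffer[i], item = item, buffer[i]
            yield item
    rng.shuffle(buffer)  # flush the remainder in random order
    yield from buffer

# Every input element is emitted exactly once, in shuffled order.
sample = list(shuffle_buffer(range(20), buffer_size=5, seed=42))
print(sample)
```

With the `datasets` library, the equivalent is along the lines of `load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True).shuffle(buffer_size=5000)`.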
## Usage

```python
from datasets import load_dataset

ds = load_dataset("krisbailey/falcon-refinedweb-1B", split="train")
print(ds[0])
```

## Subsets & Slicing
Because the dataset was shuffled during creation, contiguous slices are themselves representative samples, so you can take smaller subsets (e.g., for scaling-law experiments) without re-shuffling.

```python
# ~100M-token subset (approx. 10%)
ds_100m = load_dataset("krisbailey/falcon-refinedweb-1B", split="train[:10%]")

# ~500M-token subset (approx. 50%)
ds_500m = load_dataset("krisbailey/falcon-refinedweb-1B", split="train[:50%]")
```

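Note that `train[:P%]` slices select a fraction of *examples*, not tokens, so a slice's token count is only approximately proportional to its percentage. A rough estimate, assuming tokens are spread evenly across the shuffled examples (the `approx_slice_tokens` helper is illustrative, using the total from **Dataset Details**):

```python
TOTAL_TOKENS = 1_005_000_041  # total reported in Dataset Details

def approx_slice_tokens(percent):
    """Rough token count of a train[:percent%] slice, assuming an
    even spread of tokens across the shuffled examples."""
    return TOTAL_TOKENS * percent // 100

print(approx_slice_tokens(10))  # roughly 100.5M tokens
print(approx_slice_tokens(50))  # roughly 502.5M tokens
```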
## Citation

```bibtex
@article{penedo2023refinedweb,
  title={The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only},
  author={Penedo, Guilherme and Malartic, Quentin and Hesslow, Daniel and Cojocaru, Ruxandra and Cappelli, Alessandro and Alobeidli, Hamza and Pannier, Baptiste and Almazrouei, Ebtesam and Launay, Julien},
  journal={arXiv preprint arXiv:2306.01116},
  year={2023}
}
```