danwil committed
Commit 0b22c32 · verified · Parent: ef9b96e

Update README.md

Files changed (1): README.md (+19 -3)
README.md CHANGED
@@ -1,3 +1,19 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ pretty_name: OpenWebText n-grams
+ size_categories:
+ - 100K<n<1M
+ tags:
+ - openwebtext
+ - gpt2
+ ---
+
+ This dataset contains the most common contiguous token subsequences (n-grams) of length 1-6 in an [open-source replication of the OpenWebText (OWT) dataset](https://huggingface.co/datasets/Skylion007/openwebtext). The OWT replication was compiled by Aaron Gokaslan and Vanya Cohen of Brown University.
+
+ Below, we list the number of n-grams included at each length. Alongside each count, we show the minimum number of times every included sequence occurs in the ~9B-token dataset (its frequency threshold). All individual tokens (1-grams) are included. Note that if an n-gram occurs >N times, then every contiguous subsequence of it must also occur >N times.
+
+ | dataset | total | n=1 | n=2 | n=3 | n=4 | n=5 | n=6 |
+ | --- | ----- | ------ | ------ | ------ | ------ | ------ | ------ |
+ | owt_1-6grams_246k | 245831 | 50257 (freq >= 0) | 58302 (freq >= 10000) | 44560 (freq >= 10000) | 32831 (freq > 5000) | 13566 (freq > 5000) | 12495 (freq > 2000) |
+
+ This dataset was used by the authors to show that gpt2-small sparse autoencoders memorize the most frequently presented n-grams more exactly.
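
The kind of extraction the README describes can be sketched as follows. This is an illustrative toy, not the authors' actual pipeline (which ran over ~9B GPT-2 tokens with the per-length thresholds in the table); the function names and the example token sequence are made up here.

```python
from collections import Counter

def count_ngrams(token_ids, max_n=6):
    """Count every contiguous token subsequence of length 1..max_n."""
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(token_ids) - n + 1):
            counts[tuple(token_ids[i:i + n])] += 1
    return counts

def filter_by_threshold(counts, min_freq):
    """Keep only n-grams occurring at least min_freq times."""
    return {gram: c for gram, c in counts.items() if c >= min_freq}

# Toy token sequence standing in for the tokenized corpus.
tokens = [1, 2, 3, 1, 2, 3, 1, 2]
counts = count_ngrams(tokens, max_n=3)

# The containment property noted above: a contiguous subsequence of an
# n-gram occurs at least as often as the n-gram itself.
assert counts[(1, 2)] >= counts[(1, 2, 3)]
```

Here `counts[(1, 2)]` is 3 while `counts[(1, 2, 3)]` is 2, so filtering at a threshold of 3 keeps the shorter n-gram but drops the longer one, which is why the dataset can use looser thresholds for larger n.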