danwil committed · Commit 7e2be7f · verified · 1 Parent(s): 9e0e25d

Add summary/usage/loading code

Files changed (1): README.md +39 -6
README.md CHANGED
@@ -8,12 +8,45 @@ tags:
  - gpt2
  ---
 
- This dataset contains the most common 1-6 contiguous token subsequences (n-grams) in an [open-source replication of the OpenWebText (OWT) dataset](https://huggingface.co/datasets/Skylion007/openwebtext). The OWT replication was compiled by Aaron Gokaslan and Vanya Cohen of Brown University.
 
- Below, we list the number of n-grams included. Alongside each, we show the minimum number of times each sequence occurs in the ~9B-token dataset (its frequency). We include all individual tokens (1-grams). Note that if an n-gram occurs >N times, then every contiguous subsequence must also occur >N times.
 
- | dataset | total | n=1 | n=2 | n=3 | n=4 | n=5 | n=6 |
- | --- | ----- |------ | ------ | ------ | ------ | ------ | ------ |
- | owt_1-6grams_246k | 245831 | 50257 (freq >= 0) | 58302 (freq >= 10000) | 44560 (freq >= 10000) | 32831 (freq > 5000) | 13566 (freq > 5000) | 12495 (freq > 2000) |
 
- This dataset was used by the authors to show that gpt2-small sparse autoencoders memorize the most commonly presented n-grams more exactly.
+ # Dataset Card for OpenWebText n-grams
+
+ ## Dataset Summary
+
+ This dataset contains the 246K most common token-level n-grams (n=1 to 6, GPT-2/GPT-3 tokenization) in the [OpenWebText (OWT) dataset](https://huggingface.co/datasets/Skylion007/openwebtext).
+
+ For convenient searching, each n-gram is provided both as a full token sequence/string and as per-position tokens/strings.
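+
+ As a rough illustration, a row for a 2-gram might look like the following (the field names and token IDs here are hypothetical assumptions inferred from the summary above, not the verified schema):
+
+ ```python
+ # Hypothetical row layout -- field names and token IDs are illustrative
+ # assumptions based on the summary above, not the verified schema.
+ row = {
+     'string': ' New York',  # full n-gram as one string
+     'tokens': [968, 1971],  # full n-gram as token IDs (placeholder values)
+     'string_0': ' New',     # per-position strings
+     'string_1': ' York',
+     'token_0': 968,         # per-position token IDs (placeholder values)
+     'token_1': 1971,
+ }
+ ```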
+
+ ## Usage
+
+ Generally, this dataset makes it possible to identify the most common n-grams in a text corpus.
+
+ When researching LLMs [tokenized similarly to GPT-2/GPT-3](https://platform.openai.com/tokenizer), it allows:
+ - Constructing intermediate vectors that span the most common short phrases (n-grams), e.g. for similarity sampling.
+ - Running fast searches for common phrases containing particular tokens or substrings (including at particular sequence positions); see the query sketch under "Loading the Dataset" below.
+ - Demonstrating the effects of training-set n-gram frequency.
+
+ The authors used this dataset to show that sparse autoencoders are biased toward reconstructing the most common n-grams.
+
+ ## Loading the Dataset
+
+ We recommend converting the dataset to a Pandas DataFrame for easy querying:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the dataset and convert its train split to a Pandas DataFrame.
+ ngrams = load_dataset('danwil/owt-ngrams')['train'].to_pandas()
+ ```
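+
+ For example, here is a hypothetical query for common phrases containing a given substring (it assumes a `string` column holding each n-gram's text, per the summary above; the column name is not confirmed by this commit):
+
+ ```python
+ # Hypothetical query -- the `string` column name is an assumption.
+ # Find loaded n-grams whose text contains " the".
+ matches = ngrams[ngrams['string'].str.contains(' the', regex=False)]
+ print(matches.head())
+ ```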
+
+ ## Contents
+
+ Below, we list the number of n-grams of each length, along with the minimum number of times each occurs (its count) in the original ~9B-token OWT corpus.
+ - We include all individual tokens (1-grams).
+ - Note that if an n-gram occurs >N times, then every contiguous subsequence must also occur >N times, since each occurrence of the n-gram contains an occurrence of the subsequence.
+
+ | | total | n=1 | n=2 | n=3 | n=4 | n=5 | n=6 |
+ | --- | --- | --- | --- | --- | --- | --- | --- |
+ | owt_1-6grams_246k | 245831 | 50257 | 58302 | 44560 | 32831 | 13566 | 12495 |
+ | min. count in OWT | | >= 0 | >= 10000 | >= 10000 | > 5000 | > 5000 | > 2000 |
+
+ **Point of Contact:** [Dan Wilhelm](mailto:dan@danwil.com)