---
license: mit
pretty_name: OpenWebText n-grams
size_categories:
  - 100K<n<1M
tags:
  - openwebtext
  - gpt2
---

# Dataset Card for OpenWebText n-grams

## Dataset Summary

This dataset contains the 246K most common token-based (GPT-2/GPT-3 tokenizer) n-grams (n = 1 to n = 6) in the OpenWebText (OWT) dataset.

For convenient searching, it provides full tokens/strings, as well as per-position tokens/strings.

## Usage

Generally, this dataset allows identifying the most common n-grams in a text corpus.

When researching LLMs that use a tokenizer similar to GPT-2/GPT-3's, it allows:

- Constructing intermediate vectors spanning the most common short phrases (n-grams), e.g. for similarity sampling.
- Fast searches for common phrases containing particular tokens or substrings (and in particular sequence positions).
- Showing the effects of training set n-gram frequency.

The authors (Thomas Dooms and Dan Wilhelm) used this dataset to show that sparse autoencoders are biased toward reconstructing the most common n-grams.

## Loading the Dataset

We recommend you convert the dataset to a Pandas DataFrame for easy querying:

```python
from datasets import load_dataset

ngrams = load_dataset('danwil/owt-ngrams')['train'].to_pandas()
```
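Once in a DataFrame, standard Pandas filtering applies. As a hypothetical sketch (the column names `string` and `count` below are assumptions for illustration, not confirmed by the dataset schema), searching for the most common n-grams containing a substring might look like:

```python
import pandas as pd

# Toy stand-in for the loaded DataFrame; `string` (full n-gram text)
# and `count` (OWT frequency) are assumed column names.
ngrams = pd.DataFrame({
    "string": [" the", " of the", " in the world", " machine learning"],
    "count": [500_000, 120_000, 15_000, 9_000],
})

# Most common n-grams containing a given substring, highest count first.
matches = (
    ngrams[ngrams["string"].str.contains("the")]
    .sort_values("count", ascending=False)
)
print(matches["string"].tolist())
```

The per-position token/string columns mentioned above would support the same pattern, restricting a match to a particular sequence position.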

## Contents

Below, we list the number of n-grams of each length and their minimum count in the original ~9B-token OWT corpus.

- We include all individual tokens (1-grams).
- Note that if an n-gram occurs >N times, then every contiguous subsequence must also occur >N times.
|                   | total  | n=1     | n=2      | n=3      | n=4    | n=5    | n=6    |
|-------------------|--------|---------|----------|----------|--------|--------|--------|
| owt_1-6grams_246k | 245831 | 50257   | 58302    | 44560    | 32831  | 13566  | 12495  |
| count in OWT      |        | >= 0    | >= 10000 | >= 10000 | > 5000 | > 5000 | > 2000 |
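The subsequence note above holds because every occurrence of an n-gram contains an occurrence of each of its contiguous sub-n-grams. A minimal sketch on a toy token sequence (not OWT data):

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count all contiguous n-grams of length n in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

tokens = ["a", "b", "c", "a", "b", "c", "a", "b"]
tri = ngram_counts(tokens, 3)
bi = ngram_counts(tokens, 2)

# Every contiguous sub-bigram of a trigram occurs at least as often
# as the trigram itself.
for (x, y, z), c in tri.items():
    assert bi[(x, y)] >= c and bi[(y, z)] >= c
```

This is why the count thresholds in the table can only decrease (or stay equal) as n grows: any 6-gram kept at > 2000 occurrences implies all its shorter subsequences occur > 2000 times as well.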

**Point of Contact:** Dan Wilhelm