# Dataset Card for gpt2_eli5_sae_features

This dataset aims to provide a corpus to help guide research into monosemantic features using sparse autoencoders (SAEs). It has been generated using [this raw template](https://github.com/manik-sethi/hallucination-circuits).

## Dataset Details

### Dataset Description

This dataset takes the ELI5 subset of facebook/kilt_tasks and processes it for use in SAE research. More specifically, we take the inputs and tokenize them using the gpt2-small tokenizer. The outputs are tokenized, embedded, and encoded using a trained SAE.

- **Curated by:** Manik Sethi
- **License:** MIT

### Dataset Sources

- **Repository:** [hallucination_circuits](https://github.com/manik-sethi/hallucination-circuits)
- **Paper:** On the way!
- **Demo:** On the way!

## Uses

### Direct Use

This dataset is meant to train a multi-layer perceptron (MLP) to predict the SAE feature activations of the answer to a given prompt. Note that this dataset does not actually contain the answers to the tokenized questions; instead, each example carries a ~25k-dimensional vector of activations for the different features of a trained SAE.

## Dataset Structure

Currently, we only have the train split from the ELI5 dataset, so this dataset is structured very similarly.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

The data originates from the subreddit "Explain Like I'm 5" (ELI5 for short).

#### Data Collection and Processing

The inputs are tokenized prompts. The labels are tokenized, embedded, and then encoded with the given SAE layer.

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Dataset Card Contact

mksethi@ucdavis.edu
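As a minimal sketch of the intended direct use, the snippet below shows an MLP that maps a pooled prompt embedding to a vector of SAE feature activations. All dimensions, field names, and the random initialization are illustrative assumptions, not part of this dataset's specification; a real predictor would be trained on the (tokenized prompt, activation vector) pairs the card describes.

```python
import numpy as np

# Illustrative dimensions (assumptions): gpt2-small's hidden size, and a
# ~25k-feature SAE dictionary. The actual SAE width may differ.
D_EMBED = 768
D_SAE = 24576

rng = np.random.default_rng(0)

# Randomly initialized two-layer MLP, standing in for a trained predictor.
W1 = rng.normal(0.0, 0.02, size=(D_EMBED, 1024))
b1 = np.zeros(1024)
W2 = rng.normal(0.0, 0.02, size=(1024, D_SAE))
b2 = np.zeros(D_SAE)

def predict_activations(prompt_embedding: np.ndarray) -> np.ndarray:
    """Map a pooled prompt embedding to predicted SAE feature activations."""
    h = np.maximum(prompt_embedding @ W1 + b1, 0.0)  # ReLU hidden layer
    # Final ReLU keeps predictions non-negative, matching SAE activations.
    return np.maximum(h @ W2 + b2, 0.0)

# Example: a fake pooled embedding standing in for a tokenized ELI5 prompt.
x = rng.normal(size=D_EMBED)
acts = predict_activations(x)
print(acts.shape)  # (24576,)
```

The training target for each example would be the dataset's ~25k-long activation vector rather than the answer text itself.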