Dataset preview (truncated): each row has input_ids (list, length 256), attention_mask (list, length 256), and sae_features (list, length ~24.6k). Prompts are right-padded with token id 50256 (GPT-2's end-of-text token), and attention_mask marks the real tokens.

Dataset Card for gpt2_eli5_sae_features

This dataset provides a corpus to help guide research into monosemantic features using sparse autoencoders (SAEs). It was generated using this raw template.

Dataset Details

Dataset Description

This dataset takes the eli5 subset of facebook/kilt_tasks and processes it for SAE research. More specifically, the inputs are tokenized with the gpt2-small tokenizer. The outputs are tokenized, embedded, and encoded using the SAE (see Data Collection and Processing below).
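The tokenization and padding scheme visible in the preview (right-padding with GPT-2's end-of-text token, id 50256, up to length 256, with a matching attention mask) can be sketched as follows. This is an illustrative reconstruction, not the authors' exact code:

```python
# Sketch of how each ELI5 prompt appears to be prepared, based on the preview:
# right-padded to 256 tokens with GPT-2's <|endoftext|> id (50256).
MAX_LEN = 256
PAD_ID = 50256  # GPT-2 end-of-text token, used here as the pad token

def pad_prompt(token_ids, max_len=MAX_LEN, pad_id=PAD_ID):
    """Right-pad a tokenized prompt and build its attention mask."""
    ids = token_ids[:max_len]
    mask = [1] * len(ids) + [0] * (max_len - len(ids))
    ids = ids + [pad_id] * (max_len - len(ids))
    return ids, mask

# Example with one short tokenized question from the preview
# ("How do muscles grow?" under the gpt2-small tokenizer):
ids, mask = pad_prompt([2437, 466, 12749, 1663, 30])
print(len(ids), len(mask), sum(mask))  # 256 256 5
```

The attention mask lets a downstream model distinguish real tokens from padding, matching the 1/0 patterns shown in the preview rows.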

  • Curated by: Manik Sethi
  • License: MIT


Uses

Direct Use

This dataset is meant to train a multi-layer perceptron (MLP) to predict the SAE feature activations of the answer to a given prompt. Note that the dataset does not contain the answers to the tokenized questions themselves, but rather a ~25k-long vector of activations for the features of a trained SAE.
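A minimal sketch of such a predictor, using NumPy for clarity. The hidden width and the SAE width of 24576 are assumptions (the card only says "~24.6k"); the non-negative output mirrors the fact that SAE activations are produced by a ReLU:

```python
import numpy as np

# Minimal sketch (not the authors' model): an MLP mapping a 256-dim prompt
# representation to SAE feature activations. D_SAE = 24576 is an assumption
# consistent with the "~24.6k" column length reported in the preview.
rng = np.random.default_rng(0)
D_IN, D_HID, D_SAE = 256, 512, 24576

W1 = rng.normal(0, 0.02, (D_IN, D_HID))
W2 = rng.normal(0, 0.02, (D_HID, D_SAE))

def mlp(x):
    h = np.maximum(x @ W1, 0.0)     # ReLU hidden layer
    return np.maximum(h @ W2, 0.0)  # non-negative, like SAE activations

x = rng.normal(size=(1, D_IN))      # stand-in for one encoded prompt
y = mlp(x)
print(y.shape)  # (1, 24576)
```

In practice the input would be some fixed-size encoding of the 256 input_ids (e.g. an embedding), and the model would be trained with a regression loss against the sae_features column.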

Dataset Structure

Currently, only the train split from the ELI5 dataset is included, so this dataset mirrors its structure. Each row contains input_ids (length 256), attention_mask (length 256), and sae_features (length ~24.6k).

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

The data originates from the subreddit "Explain Like I'm 5" (ELI5), as collected in the eli5 subset of facebook/kilt_tasks.

Data Collection and Processing

The inputs are tokenized prompts. The labels are produced by tokenizing the answers, embedding them, and then encoding the embeddings with the given SAE layer.
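The encoding step can be sketched with the standard SAE encoder form, feats = ReLU(x @ W_enc + b_enc). The card does not specify which trained SAE or which layer was used, so the weights and the width 24576 below are placeholders (GPT-2 small's hidden size is 768):

```python
import numpy as np

# Sketch of the SAE encoding step (standard SAE encoder; the actual trained
# weights, layer, and exact feature count are not specified in the card).
rng = np.random.default_rng(0)
D_MODEL, D_SAE = 768, 24576  # GPT-2 small hidden size; 24.6k ~ 24576 (assumed)

W_enc = rng.normal(0, 0.02, (D_MODEL, D_SAE))
b_enc = np.zeros(D_SAE)

def sae_encode(resid):
    """Map one residual-stream embedding to sparse feature activations."""
    return np.maximum(resid @ W_enc + b_enc, 0.0)

feats = sae_encode(rng.normal(size=D_MODEL))  # stand-in for one embedding
print(feats.shape)  # (24576,)
```

The ReLU makes most activations exactly zero for a well-trained SAE, which is what makes the resulting features sparse and candidates for monosemantic interpretation.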

Bias, Risks, and Limitations

[More Information Needed]

Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

Dataset Card Contact

mksethi@ucdavis.edu

Downloads last month
3