---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: metadata
    struct:
    - name: file_path
      dtype: string
  - name: input_ids
    list: int32
  - name: attention_mask
    list: int8
  splits:
  - name: train
    num_bytes: 239231368
    num_examples: 45736
  download_size: 125597135
  dataset_size: 239231368
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- language-modeling
- causal-lm
- llm
size_categories:
- 10K<n<100K
---
<!-- Provide a quick summary of the dataset. -->
This dataset is a sample of [Dolma v1.7](https://huggingface.co/datasets/allenai/dolma), drawn from the 3B version [dolma-v1_7-3B](https://huggingface.co/datasets/emozilla/dolma-v1_7-3B).
Our sample contains slightly more than 20M tokens (45,736 example texts).
As a pure sample, it retains the [ODC-BY](https://opendatacommons.org/licenses/by/1-0/) license.
## Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
The columns "id" and "metadata" are copied from the larger dataset, to facilitate tracing the source of a particular example.
The columns "input_ids" and "attention_mask" were created with the [OLMo](https://huggingface.co/allenai/OLMo-1B-hf) tokenizer
(a modified version of the GPT-NeoX-20B tokenizer with some added special tokens).
The first token is always `<|endoftext|>`.
The original "text" strings are also kept, so users can apply a different tokenizer if they prefer.
Every example is truncated to at most 1024 tokens (the end is cut off).
This affects the "input_ids" (and "attention_mask") columns, but not the "text" column.
6,791 examples are truncated in this way.
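The tokenization scheme above can be sketched as follows. This is only an illustration, not our actual preprocessing code: the token ids are placeholders rather than real OLMo vocabulary ids, and we assume the attention mask is all ones because the examples are unpadded.

```python
# Sketch of the preprocessing described above, with placeholder token ids
# (the real ids come from the OLMo tokenizer; this only illustrates the scheme).
MAX_LEN = 1024
EOT_ID = 0  # stand-in for the id of "<|endoftext|>"

def preprocess(token_ids: list[int]) -> dict:
    """Prepend the end-of-text token, truncate to MAX_LEN, build the mask."""
    input_ids = ([EOT_ID] + token_ids)[:MAX_LEN]  # cut off the end if too long
    return {
        "input_ids": input_ids,
        "attention_mask": [1] * len(input_ids),  # unpadded, so all ones
    }

example = preprocess(list(range(1, 2000)))
print(len(example["input_ids"]))  # 1024
print(example["input_ids"][0])    # 0 (the end-of-text stand-in)
```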
## Curation Rationale
<!-- Motivation for the creation of this dataset. -->
This dataset was primarily created for our project [GLUScope](https://sjgerstner.github.io/neuroscope),
which visualizes strong neuron activations on precisely this dataset.
We wanted the dataset to be as lightweight as possible while still providing meaningful information on neuron activations.
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
The primary intended use is model-analysis work like ours.
It is likely to work especially well for OLMo models, since they were trained on Dolma.
However, as with any text dataset, there are many other possible use cases:
for example, training very small language models or running controlled experiments with continued pretraining.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
## Contact
[More Information Needed]