---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: metadata
    struct:
    - name: file_path
      dtype: string
  - name: input_ids
    list: int32
  - name: attention_mask
    list: int8
  splits:
  - name: train
    num_bytes: 239231368
    num_examples: 45736
  download_size: 125597135
  dataset_size: 239231368
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- language-modeling
- causal-lm
- llm
size_categories:
- 10K<n<100K
---

<!-- Provide a quick summary of the dataset. -->

This dataset is a sample of [Dolma v1.7](https://huggingface.co/datasets/allenai/dolma), drawn via the 3B version [dolma-v1_7-3B](https://huggingface.co/datasets/emozilla/dolma-v1_7-3B).
Our sample contains slightly more than 20M tokens (45,736 example texts).

As a plain subsample, it retains the [ODC-BY](https://opendatacommons.org/licenses/by/1-0/) license of the original dataset.
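
A minimal loading sketch with the 🤗 `datasets` library (the repository id below is a placeholder; substitute this dataset's actual path on the Hub):

```python
from datasets import load_dataset

# Placeholder repo id -- replace with this dataset's actual path on the Hub.
ds = load_dataset("your-namespace/dolma-v1_7-sample", split="train")

print(ds)                  # columns: text, id, metadata, input_ids, attention_mask
print(ds[0]["metadata"])   # e.g. {"file_path": "..."}
```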

## Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

The columns "id", and "metadata" are copied from the larger dataset, in order to facilitate tracing the source of a particular example.

The columns "input_ids" and "attention_mask" were created with the [OLMo](allenai/OLMo-1B-hf) tokenizer
(a modified version of the GPT-NeoX-20B tokenizer, with some added special tokens).
The first token is always "<|endoftext|>".

The original "text" strings are also kept, so users can use another tokenizer if they prefer.

Every example is truncated to at most 1024 tokens (the end is cut off).
This truncation affects the "input_ids" and "attention_mask" columns, but not the "text" column.
6,791 examples are affected.
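
As a sketch of how the pre-tokenized columns relate to the text (the repository id is again a placeholder; the tokenizer is the one named above):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")
ds = load_dataset("your-namespace/dolma-v1_7-sample", split="train")  # placeholder repo id

ex = ds[0]
assert len(ex["input_ids"]) <= 1024  # every example is truncated to at most 1024 tokens
assert ex["input_ids"][0] == tokenizer.convert_tokens_to_ids("<|endoftext|>")

# Decoding the stored ids recovers a (possibly truncated) prefix of the original text.
print(tokenizer.decode(ex["input_ids"][1:])[:200])

# The untruncated "text" column can also be re-tokenized with a different tokenizer, e.g. GPT-2.
gpt2_tok = AutoTokenizer.from_pretrained("gpt2")
gpt2_ids = gpt2_tok(ex["text"])["input_ids"]
```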

## Curation Rationale

<!-- Motivation for the creation of this dataset. -->

This dataset was primarily created for our project [GLUScope](https://sjgerstner.github.io/neuroscope),
which visualizes strong neuron activations on precisely this dataset.
We wanted the dataset to be as lightweight as possible while still providing meaningful information on neuron activations.

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

The primary intended use is model analysis work like ours.
It is likely to work especially well for OLMo models, since they were trained on Dolma.
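
A minimal sketch of such an analysis, capturing MLP activations from an OLMo model on one pre-tokenized example (the dataset repo id is a placeholder, and the `.mlp` module naming is an assumption about the Hugging Face OLMo implementation):

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf")
model.eval()

ds = load_dataset("your-namespace/dolma-v1_7-sample", split="train")  # placeholder repo id

activations = {}

def save_output(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hook every MLP block; the ".mlp" suffix assumes the HF OLMo module layout.
for name, module in model.named_modules():
    if name.endswith(".mlp"):
        module.register_forward_hook(save_output(name))

ex = ds[0]
with torch.no_grad():
    model(
        input_ids=torch.tensor([ex["input_ids"]]),
        attention_mask=torch.tensor([ex["attention_mask"]]),
    )

# activations now maps module names (e.g. "model.layers.0.mlp")
# to tensors of shape (1, seq_len, hidden_size).
```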

However, as with any text dataset, there are many possible use cases.
For example, it could be used to train very small language models,
run controlled experiments with continued pretraining, and more.

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

## Contact

[More Information Needed]