---
license: mit
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: NanoText
    num_bytes: 6090436
    num_examples: 1203
  - name: MiniText
    num_bytes: 60622575
    num_examples: 12382
  - name: MidiText
    num_bytes: 181684879
    num_examples: 36368
  - name: CoreText
    num_bytes: 606330424
    num_examples: 121414
  - name: MegaText
    num_bytes: 1819500227
    num_examples: 364168
  download_size: 1627122618
  dataset_size: 2674228541
configs:
- config_name: default
  data_files:
  - split: NanoText
    path: data/NanoText-*
  - split: MiniText
    path: data/MiniText-*
  - split: MidiText
    path: data/MidiText-*
  - split: CoreText
    path: data/CoreText-*
  - split: MegaText
    path: data/MegaText-*
---

# OpenNeuro: A Dataset to Compute Brain Score Scaling Laws

This repository hosts the splits used to train the 20 language models discussed in the associated paper on brain score scaling laws. Each split provides a progressively larger corpus of text, allowing for systematic experimentation at different scales. Below are the key subsets and their statistics.

---

## Subset Details

### NanoText
- **num_bytes**: 6,090,436  
- **num_examples**: 1,203  
- **Total words**: 1M  
- **Average words/example**: 831.6  

### MiniText
- **num_bytes**: 60,622,575  
- **num_examples**: 12,382  
- **Total words**: 10M  
- **Average words/example**: 808.1  

### MidiText
- **num_bytes**: 181,684,879  
- **num_examples**: 36,368  
- **Total words**: 30M  
- **Average words/example**: 824.9  

### CoreText
- **num_bytes**: 606,330,424  
- **num_examples**: 121,414  
- **Total words**: 100M  
- **Average words/example**: 823.6  

### MegaText
- **num_bytes**: 1,819,500,227  
- **num_examples**: 364,168  
- **Total words**: 300M  
- **Average words/example**: 823.8  

---
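The per-subset averages above follow directly from the listed counts; a minimal sketch of the arithmetic (the "Total words" figures are rounded, e.g. "1M", so the results are approximate and may differ slightly from the exact averages):

```python
# Recompute approximate words/example from the split statistics listed above.
# "total_words" uses the rounded totals from this card, so small deviations
# from the exact averages are expected.
splits = {
    "NanoText": {"num_examples": 1_203, "total_words": 1_000_000},
    "MiniText": {"num_examples": 12_382, "total_words": 10_000_000},
    "MegaText": {"num_examples": 364_168, "total_words": 300_000_000},
}

for name, s in splits.items():
    avg = s["total_words"] / s["num_examples"]
    print(f"{name}: ~{avg:.1f} words/example")
```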

## Usage

To load any or all of these subsets in Python, install the [🤗 Datasets library](https://github.com/huggingface/datasets) and use:

```python
from datasets import load_dataset

# Load the entire DatasetDict (all splits)
dataset_dict = load_dataset("IParraMartin/OpenNeuro")
print(dataset_dict)

# Or load a specific subset
nano_text = load_dataset("IParraMartin/OpenNeuro", split="NanoText")
print(nano_text)
```