---
license: mit
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: NanoText
    num_bytes: 6090436
    num_examples: 1203
  - name: MiniText
    num_bytes: 60622575
    num_examples: 12382
  - name: MidiText
    num_bytes: 181684879
    num_examples: 36368
  - name: CoreText
    num_bytes: 606330424
    num_examples: 121414
  - name: MegaText
    num_bytes: 1819500227
    num_examples: 364168
  download_size: 1627122618
  dataset_size: 2674228541
configs:
- config_name: default
  data_files:
  - split: NanoText
    path: data/NanoText-*
  - split: MiniText
    path: data/MiniText-*
  - split: MidiText
    path: data/MidiText-*
  - split: CoreText
    path: data/CoreText-*
  - split: MegaText
    path: data/MegaText-*
---
# OpenNeuro: A Dataset to Compute Brain Score Scaling Laws
This repository hosts the text splits used to train the 20 language models discussed in the associated paper on brain score scaling laws. Each split provides a progressively larger corpus of text, allowing for systematic experimentation at different scales. The key subsets and their statistics are listed below.
## Subset Details

| Subset   | num_bytes     | num_examples | Total words | Avg. words/example |
|----------|--------------:|-------------:|------------:|-------------------:|
| NanoText | 6,090,436     | 1,203        | 1M          | 831.6 |
| MiniText | 60,622,575    | 12,382       | 10M         | 808.1 |
| MidiText | 181,684,879   | 36,368       | 30M         | 824.9 |
| CoreText | 606,330,424   | 121,414      | 100M        | 823.6 |
| MegaText | 1,819,500,227 | 364,168      | 300M        | 823.8 |
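As a quick sanity check on these statistics, every split averages roughly 5 KB per example, consistent with the roughly constant words/example figures. A minimal sketch, using only the numbers stated above (the word totals are rounded, so bytes/example is the only exact ratio recoverable here):

```python
# Per-example sizes recomputed from the stated split statistics.
splits = {
    "NanoText": (6_090_436, 1_203),
    "MiniText": (60_622_575, 12_382),
    "MidiText": (181_684_879, 36_368),
    "CoreText": (606_330_424, 121_414),
    "MegaText": (1_819_500_227, 364_168),
}

for name, (num_bytes, num_examples) in splits.items():
    print(f"{name}: {num_bytes / num_examples:,.1f} bytes/example")
```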
## Usage

To load any of these subsets in Python, install the 🤗 Datasets library (`pip install datasets`) and use:
```python
from datasets import load_dataset

# Load the entire DatasetDict (all splits)
dataset_dict = load_dataset("IParraMartin/OpenNeuro")
print(dataset_dict)

# Or load a specific split
nano_text = load_dataset("IParraMartin/OpenNeuro", split="NanoText")
print(nano_text)

# For the larger splits, streaming avoids downloading the full ~1.6 GB archive
mega_text = load_dataset("IParraMartin/OpenNeuro", split="MegaText", streaming=True)
```
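The word totals above are presumably computed by whitespace tokenization (an assumption; the paper may use a different tokenizer). A minimal, self-contained sketch of that counting, applied to a toy stand-in for loaded examples:

```python
def count_words(texts):
    """Whitespace word count over an iterable of strings (assumed metric)."""
    return sum(len(t.split()) for t in texts)

# Toy stand-in for texts loaded from the dataset,
# e.g. (ex["text"] for ex in nano_text)
sample_texts = ["the quick brown fox", "jumps over the lazy dog"]
print(count_words(sample_texts))  # → 9
```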