Update README.md
README.md
path: data/MegaText-*
---

# OpenNeuro: A Dataset to Compute Brain Score Scaling Laws

This repository hosts the splits used to train the 20 language models discussed in the associated paper on brain score scaling laws. Each split provides a progressively larger corpus of text, allowing for systematic experimentation at different scales. Below are the key subsets and their statistics.

---

## Subset Details

### NanoText
- **num_bytes**: 6,090,436
- **num_examples**: 1,203
- **Total words**: 1M
- **Average words/example**: 831.6

### MiniText
- **num_bytes**: 60,622,575
- **num_examples**: 12,382
- **Total words**: 10M
- **Average words/example**: 808.1

### MidiText
- **num_bytes**: 181,684,879
- **num_examples**: 36,368
- **Total words**: 30M
- **Average words/example**: 824.9

### CoreText
- **num_bytes**: 606,330,424
- **num_examples**: 121,414
- **Total words**: 100M
- **Average words/example**: 823.6

### MegaText
- **num_bytes**: 1,819,500,227
- **num_examples**: 364,168
- **Total words**: 300M
- **Average words/example**: 823.8

---
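As a quick sanity check, the reported figures are mutually consistent: multiplying each split's `num_examples` by its average words per example recovers the rounded total word count to within a small fraction of a percent. A minimal sketch, using only the numbers from the statistics above:

```python
# Reported (num_examples, average words/example, rounded total words) per split,
# copied from the subset details in this card.
splits = {
    "NanoText": (1_203, 831.6, 1_000_000),
    "MiniText": (12_382, 808.1, 10_000_000),
    "MidiText": (36_368, 824.9, 30_000_000),
    "CoreText": (121_414, 823.6, 100_000_000),
    "MegaText": (364_168, 823.8, 300_000_000),
}

for name, (n_examples, avg_words, total) in splits.items():
    # Estimated total = examples x average words per example.
    estimate = n_examples * avg_words
    error = abs(estimate - total) / total
    print(f"{name}: ~{estimate:,.0f} words (relative error {error:.4%})")
```

Every split's estimate lands within roughly 0.06% of its rounded total, so the averages and counts line up.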

## Usage

To load any or all of these subsets in Python, install the [🤗 Datasets library](https://github.com/huggingface/datasets) and use:

```python
from datasets import load_dataset

# Load the entire DatasetDict (all splits)
dataset_dict = load_dataset("IParraMartin/OpenNeuro")
print(dataset_dict)

# Or load a specific subset
nano_text = load_dataset("IParraMartin/OpenNeuro", split="NanoText")
print(nano_text)
```
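Since the totals above are whitespace-token counts, they can be re-derived from a loaded split. A minimal sketch of that computation, assuming each record stores its text under a `"text"` field (an assumption about the schema, not confirmed by this card), shown on small in-memory stand-in records rather than a full download:

```python
def count_words(records, text_field="text"):
    """Sum whitespace-delimited tokens across records.

    `text_field` is an assumed column name; adjust it to the dataset's
    actual schema.
    """
    return sum(len(rec[text_field].split()) for rec in records)

# Stand-in records; with 🤗 Datasets you would iterate over a loaded split.
sample = [
    {"text": "the quick brown fox"},
    {"text": "jumps over the lazy dog"},
]
print(count_words(sample))  # → 9
```

The same function applied to a full split should approximately reproduce that split's reported total word count.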