IParraMartin committed (verified)
Commit 93d0920 · Parent(s): 2831dda

Update README.md

Files changed (1): README.md (+54 −33)
@@ -37,37 +37,58 @@ configs:
   path: data/MegaText-*
 ---
 
-# OpenNeuro: A Dataset to Compute Brain Score Scalling Laws
-
-This dataset contains the splits used to train the 20 language models of the paper.
-
-**NanoText**
-num_bytes: 6090436
-num_examples: 1203
-Total words: 1M
-Average words per example: 831.6
-
-**MiniText**
-num_bytes: 60622575
-num_examples: 12382
-Total words: 10M
-Average words per example: 808.1
-
-**MidiText**
-num_bytes: 181684879
-num_examples: 36368
-Total words: 30M
-Average words per example: 824.9
-
-**CoreText**
-num_bytes: 606330424
-num_examples: 121414
-Total words: 100M
-Average words per example: 823.6
-
-**MegaText**
-num_bytes: 1819500227
-num_examples: 364168
-Total words: 300M
-Average words per example: 823.8
+# OpenNeuro: A Dataset to Compute Brain Score Scaling Laws
+
+This repository hosts the splits used to train the 20 language models discussed in the associated paper on brain score scaling laws. Each split provides a progressively larger corpus of text, allowing for systematic experimentation at different scales. Below are the key subsets and their statistics.
+
+---
+
+## Subset Details
+
+### NanoText
+- **num_bytes**: 6,090,436
+- **num_examples**: 1,203
+- **Total words**: 1M
+- **Average words/example**: 831.6
+
+### MiniText
+- **num_bytes**: 60,622,575
+- **num_examples**: 12,382
+- **Total words**: 10M
+- **Average words/example**: 808.1
+
+### MidiText
+- **num_bytes**: 181,684,879
+- **num_examples**: 36,368
+- **Total words**: 30M
+- **Average words/example**: 824.9
+
+### CoreText
+- **num_bytes**: 606,330,424
+- **num_examples**: 121,414
+- **Total words**: 100M
+- **Average words/example**: 823.6
+
+### MegaText
+- **num_bytes**: 1,819,500,227
+- **num_examples**: 364,168
+- **Total words**: 300M
+- **Average words/example**: 823.8
+
+---
+
+## Usage
+
+To load any or all of these subsets in Python, install the [🤗 Datasets library](https://github.com/huggingface/datasets) and use:
+
+```python
+from datasets import load_dataset
+
+# Load the entire DatasetDict (all splits)
+dataset_dict = load_dataset("IParraMartin/OpenNeuro")
+print(dataset_dict)
+
+# Or load a specific subset
+nano_text = load_dataset("IParraMartin/OpenNeuro", split="NanoText")
+print(nano_text)
+```
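A quick consistency check on the statistics in the updated README: the "Average words/example" figures follow from dividing total words by `num_examples`. A minimal sketch, using the rounded totals from the table (so the smaller subsets deviate slightly from the listed averages, e.g. NanoText's 1M is rounded):

```python
# Recompute "Average words/example" from the subset table.
# Totals are the rounded values from the README (1M = 1_000_000, etc.),
# so the results are approximate for the smaller subsets.
subsets = {
    "NanoText": (1_000_000, 1_203),
    "MiniText": (10_000_000, 12_382),
    "MidiText": (30_000_000, 36_368),
    "CoreText": (100_000_000, 121_414),
    "MegaText": (300_000_000, 364_168),
}

for name, (total_words, num_examples) in subsets.items():
    avg = total_words / num_examples
    print(f"{name}: {avg:.1f} words/example")
```

For the larger subsets the rounding error vanishes: 300,000,000 / 364,168 ≈ 823.8, exactly matching the table.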
94