---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: full
    num_bytes: 6903405
    num_examples: 29
  - name: retain
    num_bytes: 2806550
    num_examples: 25
  download_size: 5566028
  dataset_size: 9709955
configs:
- config_name: default
  data_files:
  - split: full
    path: data/full-*
  - split: retain
    path: data/retain-*
---

# MUSE-Books-Train

This dataset is a simple merger of the pretraining data from the original MUSE-Books dataset.

## Dataset Details

### Dataset Sources

- **Repository:** https://huggingface.co/datasets/muse-bench/MUSE-Books
- **Paper:** https://arxiv.org/pdf/2407.06460

## Dataset Creation

To create this dataset, we started from the `muse-bench/MUSE-Books` dataset and selected the `train` subset. Merging the `retain1` and `retain2` splits produces the `retain` split, and further merging that with the original `forget` split produces the `full` split used for pre-training on this MUSE task.