---
dataset_info:
- config_name: full_agreement
  features:
  - name: dataset
    dtype: string
  - name: task
    dtype: string
  - name: score_label
    dtype: string
  - name: score_chosen
    dtype: float64
  - name: score_rejected
    dtype: float64
  - name: chosen
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: rejected
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: novelty_chosen
    dtype: float64
  - name: surprise_chosen
    dtype: float64
  - name: diversity_chosen
    dtype: float64
  - name: quality_chosen
    dtype: float64
  splits:
  - name: train
    num_bytes: 39599062
    num_examples: 42058
  - name: val
    num_bytes: 4568262
    num_examples: 4880
  - name: test
    num_bytes: 4165937
    num_examples: 4533
  - name: heldout_item
    num_bytes: 2291947
    num_examples: 3521
  - name: heldout_task
    num_bytes: 34413
    num_examples: 52
  - name: val_sample1024
    num_bytes: 698068
    num_examples: 744
  - name: val_sample4096
    num_bytes: 2232149
    num_examples: 2356
  - name: test_sample1024
    num_bytes: 674423
    num_examples: 720
  - name: test_sample4096
    num_bytes: 2220465
    num_examples: 2373
  download_size: 2170014
  dataset_size: 56484726
configs:
- config_name: full_agreement
  data_files:
  - split: train
    path: full_agreement/train-*
  - split: val
    path: full_agreement/val-*
  - split: test
    path: full_agreement/test-*
  - split: heldout_item
    path: full_agreement/heldout_item-*
  - split: heldout_task
    path: full_agreement/heldout_task-*
  - split: val_sample1024
    path: full_agreement/val_sample1024-*
  - split: val_sample4096
    path: full_agreement/val_sample4096-*
  - split: test_sample1024
    path: full_agreement/test_sample1024-*
  - split: test_sample4096
    path: full_agreement/test_sample4096-*
---
# Multi-task Creativity Evaluation Dataset - Preference subset (MuCE-Pref)
This dataset is introduced in the [Creative Preference Optimization](https://arxiv.org/abs/2505.14442) paper and contains human responses and ratings for multiple creativity assessment tasks.
It is derived from the [MuCE](https://huggingface.co/datasets/CNCL-Penn-State/MuCE) dataset and follows the [TRL preference dataset format](https://huggingface.co/docs/trl/en/dataset_formats).
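The feature list above maps onto TRL's conversational preference format: `chosen` and `rejected` are each a list of `{role, content}` messages, with the rating fields attached alongside. A minimal sketch of what a single record looks like; the task name, prompt text, and score values below are illustrative placeholders, not real entries from the dataset:

```python
# One MuCE-Pref record in the TRL conversational preference layout,
# using the feature names from this card. All values are placeholders.
example = {
    "dataset": "example_source",      # originating creativity dataset (placeholder)
    "task": "alternate_uses",         # creativity task name (placeholder)
    "score_label": "creativity",
    "score_chosen": 4.5,
    "score_rejected": 1.5,
    "chosen": [                       # preferred (higher-rated) conversation
        {"role": "user", "content": "List creative uses for a brick."},
        {"role": "assistant", "content": "A bookend, a garden border, a doorstop."},
    ],
    "rejected": [                     # dispreferred (lower-rated) conversation
        {"role": "user", "content": "List creative uses for a brick."},
        {"role": "assistant", "content": "Building a wall."},
    ],
    # per-dimension ratings for the chosen response
    "novelty_chosen": 4.0,
    "surprise_chosen": 3.5,
    "diversity_chosen": 4.0,
    "quality_chosen": 4.5,
}

# Sanity checks: the chosen side carries the higher score, and each side
# is a user/assistant message list, as TRL preference trainers expect.
assert example["score_chosen"] > example["score_rejected"]
assert [m["role"] for m in example["chosen"]] == ["user", "assistant"]
```

Because the data files are Parquet, the `full_agreement` config and its splits can be loaded directly with the `datasets` library.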
### Dataset Sources
See [Creative Preference Optimization](https://arxiv.org/abs/2505.14442) for a list of sources.
## Citation
```
@misc{ismayilzada2025creativepreferenceoptimization,
  title={Creative Preference Optimization},
  author={Mete Ismayilzada and Antonio Laverghetta Jr. and Simone A. Luchini and Reet Patel and Antoine Bosselut and Lonneke van der Plas and Roger E. Beaty},
  year={2025},
  eprint={2505.14442},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.14442},
}
```