---
dataset_info:
  features:
  - name: chosen_comment_id
    dtype: string
  - name: rejected_comment_id
    dtype: string
  splits:
  - name: train
    num_bytes: 52382
    num_examples: 2381
  download_size: 43960
  dataset_size: 52382
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-ranking
language:
- en
tags:
- creative-writing
- llm-evaluation
- reward-model
- preference-alignment
---
This repository contains LitBench, a standardized benchmark and paired dataset for reliable evaluation of creative writing. It was presented in the paper *LitBench: A Benchmark and Dataset for Reliable Evaluation of Creative Writing*.
LitBench comprises a held-out test set of 2,480 debiased, human-labeled story comparisons drawn from Reddit and a 43,827-pair training corpus of human preference labels. It is designed to provide a vetted resource for reliable, automated evaluation and optimization of creative writing systems, particularly those powered by large language models (LLMs).
Project page: https://huggingface.co/collections/SAA-Lab/litbench-68267b5da3aafe58f9e43461
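
As a quick-start sketch, the train split can be loaded with the Hugging Face `datasets` library. The repository id `SAA-Lab/LitBench` below is a placeholder assumption; substitute the actual path of this repository.

```python
from datasets import load_dataset

# Placeholder repo id -- replace with this repository's actual path.
ds = load_dataset("SAA-Lab/LitBench", split="train")

# Each row holds the Reddit comment ids of the preferred ("chosen")
# and non-preferred ("rejected") story in a comparison pair.
print(ds.features)  # chosen_comment_id: string, rejected_comment_id: string
print(ds[0])
```

Note that the rows store only comment ids, so the underlying story texts must be retrieved from Reddit separately.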