---
task_categories:
- text-ranking
tags:
- creative-writing
- llm-evaluation
- preference-alignment
- reward-modeling
- benchmark
- reddit
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: chosen_story
    dtype: string
  - name: rejected_story
    dtype: string
  - name: chosen_timestamp
    dtype: timestamp[ns]
  - name: rejected_timestamp
    dtype: timestamp[ns]
  - name: chosen_upvotes
    dtype: int64
  - name: rejected_upvotes
    dtype: int64
  splits:
  - name: train
    num_bytes: 276261399
    num_examples: 43827
  download_size: 172500713
  dataset_size: 276261399
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# LitBench: A Benchmark and Dataset for Reliable Evaluation of Creative Writing
LitBench is the first standardized benchmark and paired dataset for reliable evaluation of creative writing generated by large language models (LLMs). It addresses the challenge of evaluating open-ended narratives, for which no ground-truth answers exist. The dataset comprises a held-out test set of 2,480 debiased, human-labeled story comparisons drawn from Reddit and a training corpus of 43,827 human preference pairs. LitBench supports both benchmarking zero-shot LLM judges and training reward models for creative-writing verification and optimization.
**Paper:** [LitBench: A Benchmark and Dataset for Reliable Evaluation of Creative Writing](https://huggingface.co/papers/2507.00769)

**Project Page (Hugging Face Collection):** https://huggingface.co/collections/SAA-Lab/litbench-68267b5da3aafe58f9e43461
### Sample Usage
You can load the dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("SAA-Lab/LitBench")
# Access the training split
train_dataset = dataset["train"]
# Print the first example
print(train_dataset[0])
```
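Each row pairs a preferred (`chosen_story`) and a less-preferred (`rejected_story`) response to the same prompt, so the training split maps directly onto generic preference pairs for reward-model training. A minimal sketch of that mapping, using the field names from the schema above (the helper function and the `margin` field are illustrative, not part of the dataset):

```python
def to_preference_pair(example):
    """Map one LitBench row to a generic (prompt, chosen, rejected) record.

    The upvote difference is kept as an optional signal of label strength;
    this is an illustrative choice, not something the dataset provides.
    """
    return {
        "prompt": example["prompt"],
        "chosen": example["chosen_story"],
        "rejected": example["rejected_story"],
        "margin": example["chosen_upvotes"] - example["rejected_upvotes"],
    }

# A synthetic row shaped like the dataset schema, for demonstration only:
row = {
    "prompt": "Write a story about a lighthouse keeper.",
    "chosen_story": "The lamp turned, and so did the years...",
    "rejected_story": "There was a lighthouse. The end.",
    "chosen_timestamp": None,
    "rejected_timestamp": None,
    "chosen_upvotes": 120,
    "rejected_upvotes": 4,
}

pair = to_preference_pair(row)
print(pair["margin"])  # 116
```

With the real data, `train_dataset.map(to_preference_pair)` applies the same transformation across the whole split.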
If you are the author of any comment in this dataset and would like it removed, please contact us and we will comply promptly. |