---
task_categories:
  - text-ranking
tags:
  - creative-writing
  - llm-evaluation
  - preference-alignment
  - reward-modeling
  - benchmark
  - reddit
dataset_info:
  features:
    - name: prompt
      dtype: string
    - name: chosen_story
      dtype: string
    - name: rejected_story
      dtype: string
    - name: chosen_timestamp
      dtype: timestamp[ns]
    - name: rejected_timestamp
      dtype: timestamp[ns]
    - name: chosen_upvotes
      dtype: int64
    - name: rejected_upvotes
      dtype: int64
  splits:
    - name: train
      num_bytes: 276261399
      num_examples: 43827
  download_size: 172500713
  dataset_size: 276261399
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# LitBench: A Benchmark and Dataset for Reliable Evaluation of Creative Writing

LitBench is the first standardized benchmark and paired dataset for reliable evaluation of creative writing generated by large language models (LLMs). It addresses the challenge of evaluating open-ended narratives, which have no ground truth. The dataset comprises a held-out test set of 2,480 debiased, human-labeled story comparisons drawn from Reddit and a training corpus of 43,827 pairs with human preference labels. LitBench supports benchmarking zero-shot LLM judges and training reward models for creative-writing verification and optimization.

**Paper:** LitBench: A Benchmark and Dataset for Reliable Evaluation of Creative Writing

**Project Page (Hugging Face Collection):** https://huggingface.co/collections/SAA-Lab/litbench-68267b5da3aafe58f9e43461

## Sample Usage

You can load the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("SAA-Lab/LitBench")

# Access the training split
train_dataset = dataset["train"]

# Print the first example
print(train_dataset[0])
```
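Each row pairs a higher-voted (`chosen_*`) story with a lower-voted (`rejected_*`) story for the same prompt, so rows map naturally onto the chosen/rejected format used by many preference-tuning and reward-model pipelines. The sketch below illustrates that mapping for a row with this card's schema; the `to_preference_pair` helper and the example row are hypothetical illustrations, not part of the dataset or the `datasets` library:

```python
# Sketch: map one LitBench-Train row onto a {"prompt", "chosen", "rejected"}
# preference pair. The example row is synthetic; real rows come from
# load_dataset("SAA-Lab/LitBench")["train"].

def to_preference_pair(row: dict) -> dict:
    """Convert one dataset row into a chosen/rejected preference pair."""
    return {
        "prompt": row["prompt"],
        "chosen": row["chosen_story"],
        "rejected": row["rejected_story"],
        # The upvote margin can be used to drop near-tie comparisons.
        "margin": row["chosen_upvotes"] - row["rejected_upvotes"],
    }

example_row = {
    "prompt": "[WP] Write a story about a lighthouse keeper.",
    "chosen_story": "The lamp turned, as it always had...",
    "rejected_story": "There was a lighthouse. The end.",
    "chosen_upvotes": 120,
    "rejected_upvotes": 7,
}

pair = to_preference_pair(example_row)
print(pair["margin"])  # 113
```

Applied over the full split (for example via `train_dataset.map(to_preference_pair)`), this yields pairs in the shape commonly consumed by reward-model trainers.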

If you are the author of any comment in this dataset and would like it removed, please contact us and we will comply promptly.