---
dataset_info:
  features:
  - name: index
    dtype: int64
  - name: prompt
    dtype: string
  - name: name
    dtype: string
  - name: story
    dtype: string
  splits:
  - name: train
    num_bytes: 656176355
    num_examples: 66851
  download_size: 370967759
  dataset_size: 656176355
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
This is a medium-length story dataset introduced in [LongStory: Coherent, Complete and Length Controlled Long story Generation](https://arxiv.org/abs/2311.15208).

The dataset was collected by the authors from [Reedsy Prompts](https://blog.reedsy.com/short-stories/) as of May 2023.

In the original paper, the authors use a 60k/4k/4k train/validation/test split.
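The hub repo ships only a single `train` split, so the paper's 60k/4k/4k partition has to be recreated. A minimal sketch of one way to do this by shuffled index is below; note that the released split holds 66,851 examples (slightly fewer than the paper's 68k total), and the authors' exact ordering and seed are unknown, so `seed=42` here is an arbitrary choice, not theirs.

```python
import random

# Hedged sketch (not the authors' split): carve 4k validation and 4k test
# examples out of the released 66,851-example train split by shuffled index.
n = 66851
idx = list(range(n))
random.Random(42).shuffle(idx)  # seed=42 is arbitrary
val_idx, test_idx, train_idx = idx[:4000], idx[4000:8000], idx[8000:]
print(len(train_idx), len(val_idx), len(test_idx))  # 58851 4000 4000
```

The resulting index lists can then be passed to `Dataset.select` to materialize the three splits.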
For comparison, average story length across datasets:

| Dataset | Avg. length per story |
|---|---|
| WritingPrompts (Fan et al., 2018) | ~768 tokens |
| Booksum (Kryściński et al., 2021) | ~6,065 tokens |
| BookCorpus (Project Gutenberg) | ~90,000 words (approx.) |
| ReedsyPrompts (ours) | ~2,426 tokens |
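The per-story averages above depend on the tokenizer, which the comparison does not pin down. As a rough, hedged check, token counts can be approximated by whitespace splitting; the helper name and the sample strings below are illustrative, not part of the dataset:

```python
# Hedged sketch: approximate the per-story token average by whitespace
# splitting, since the comparison does not specify a tokenizer.
def avg_token_length(stories):
    counts = [len(s.split()) for s in stories]
    return sum(counts) / len(counts)

sample = ["Once upon a time there was a story.", "Another short tale."]
print(avg_token_length(sample))  # 5.5
```

Running this over `ds["train"]["story"]` gives a whitespace-word average; a subword tokenizer would report a somewhat higher count.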
Usage:

```python
from datasets import load_dataset

ds = load_dataset("Iyan/reedsyPrompts")
print(ds["train"][0]["index"])   # the example index
print(ds["train"][0]["prompt"])  # the writing prompt of the story
print(ds["train"][0]["name"])    # the title of the story
print(ds["train"][0]["story"])   # the full story text
```