---
dataset_info:
  features:
  - name: index
    dtype: int64
  - name: prompt
    dtype: string
  - name: name
    dtype: string
  - name: story
    dtype: string
  splits:
  - name: train
    num_bytes: 656176355
    num_examples: 66851
  download_size: 370967759
  dataset_size: 656176355
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

This is a medium-length story dataset introduced in **LongStory: Coherent, Complete and Length Controlled Long Story Generation** ([https://arxiv.org/abs/2311.15208](https://arxiv.org/abs/2311.15208)).

The dataset was collected by the authors from [Reedsy Prompts](https://blog.reedsy.com/short-stories/) as of May 2023.

In the original paper, the authors use a 60k/4k/4k train/validation/test split.
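This repository ships only a single `train` split, so the paper's held-out validation and test sets have to be carved out manually. The exact split indices are not published with this dataset, so a seeded random split is only an approximation of the paper's setup; the sketch below uses synthetic indices so it runs without downloading anything:

```python
import random

# Sketch: recreating a held-out validation/test split from the single "train"
# split shipped here. The paper's exact 60k/4k/4k assignment is not published
# with this dataset, so a seeded random split is only an approximation.
# Synthetic indices are used here; swap in real example indices after loading.
indices = list(range(66851))  # num_examples from the card metadata above
rng = random.Random(42)
rng.shuffle(indices)

val_idx, test_idx = indices[:4000], indices[4000:8000]
train_idx = indices[8000:]    # the remaining ~58,851 examples for training
print(len(train_idx), len(val_idx), len(test_idx))  # 58851 4000 4000
```

Note that 60k + 4k + 4k exceeds the 66,851 examples published here, so the paper's split cannot be reproduced exactly from this release alone.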

For comparison, average story length across related datasets:
- **WritingPrompts** (Fan et al., 2018): ~768 tokens per story
- **Booksum** (Kryściński et al., 2021): ~6,065 tokens per story
- **BookCorpus** (Project Gutenberg): ~90,000 words per story (approx.)
- **ReedsyPrompts** (ours): ~2,426 tokens per story

To load and inspect the dataset:

```python
from datasets import load_dataset

ds = load_dataset("Iyan/reedsyPrompts")
print(ds["train"][0]["index"])   # row index
print(ds["train"][0]["prompt"])  # the prompt of the story
print(ds["train"][0]["name"])    # the title of the story
print(ds["train"][0]["story"])   # the full story text
```