---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
pretty_name: DramaCV
dataset_info:
- config_name: scene
  splits:
  - name: train
    num_examples: 1507
  - name: validation
    num_examples: 1557
  - name: test
    num_examples: 1319
- config_name: play
  splits:
  - name: train
    num_examples: 226
  - name: validation
    num_examples: 917
  - name: test
    num_examples: 1214
configs:
- config_name: scene
  data_files:
  - split: train
    path: scene/train.json
  - split: validation
    path: scene/validation.json
  - split: test
    path: scene/test.json
- config_name: play
  data_files:
  - split: train
    path: play/train.json
  - split: validation
    path: play/validation.json
  - split: test
    path: play/test.json
---
# Dataset Card for DramaCV

## Dataset Summary

DramaCV is an English-language dataset of utterances by fictional characters in drama plays collected from Project Gutenberg. It was created automatically by parsing 499 drama plays from the 15th to the 20th century and attributing each character line to its speaker.
## Task

This dataset was developed for Authorship Verification of literary characters. Each data instance contains lines from a single character, which we aim to distinguish from lines uttered by other characters.
## Subsets

This dataset supports two subsets:
- **Scene**: We split each play into scenes, small segment units of drama that contain actions occurring at a specific time and place with the same characters. If a play has no `<scene>` tag, we instead split it into acts using the `<act>` tag; acts are larger segment units composed of multiple scenes. For this subset, we only consider plays that have at least one of these tags. A total of 169 plays were parsed for this subset.
- **Play**: We do not segment the play and use all character lines in it. Compared to the scene segments, the number of candidate characters is higher, and discussions can cover various topics. A total of 287 plays were parsed for this subset.
## Dataset Statistics

We randomly sample each subset into 80/10/10 train, validation, and test splits.
| Subset | Split | Segments | Utterances | Queries | Targets/Query (avg) |
|---|---|---|---|---|---|
| Scene | Train | 1507 | 263270 | 5392 | 5.0 |
| Scene | Validation | 240 | 50670 | 1557 | 8.8 |
| Scene | Test | 203 | 41830 | 1319 | 8.7 |
| Play | Train | 226 | 449407 | 4109 | 90.7 |
| Play | Validation | 30 | 63934 | 917 | 55.1 |
| Play | Test | 31 | 74738 | 1214 | 108.5 |
## Usage

### Loading the dataset
```python
from datasets import load_dataset

# Loads the scene subset
scene_data = load_dataset("gasmichel/DramaCV", "scene")
print(scene_data)
# DatasetDict({
#     train: Dataset({
#         features: ['query', 'true_target', 'play_index', 'act_index'],
#         num_rows: 1507
#     })
#     validation: Dataset({
#         features: ['query', 'true_target', 'play_index', 'act_index'],
#         num_rows: 1557
#     })
#     test: Dataset({
#         features: ['query', 'true_target', 'play_index', 'act_index'],
#         num_rows: 1319
#     })
# })

# Loads the play subset
play_data = load_dataset("gasmichel/DramaCV", "play")
```
### Train vs Val/Test

The train splits contain only queries, which are collections of utterances spoken by the same character within a segmentation unit (a scene for the `scene` subset, or the full play for the `play` subset).
The validation and test splits contain both queries and targets:
- Queries contain half of a character's utterances, randomly sampled from the same segmentation unit.
- Targets contain the other half of these utterances.
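The query/target construction described above can be sketched as follows. This is an illustrative reimplementation on toy data, not the script actually used to build the dataset:

```python
import random

def make_query_target(utterances, seed=0):
    """Randomly split one character's utterances from a single segment
    into two halves: a query and its true target."""
    rng = random.Random(seed)
    shuffled = utterances[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Toy example: 6 utterances by one character in one scene
lines = [f"line {i}" for i in range(6)]
query, target = make_query_target(lines)
assert len(query) == 3 and len(target) == 3
assert sorted(query + target) == sorted(lines)
```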
### Act and Play Index

Each collection of utterances is assigned an `act_index` and a `play_index`, specifying respectively the act/scene and the play it was taken from.

DramaCV can be used to train Authorship Verification models by restricting training pairs to examples that share the same `act_index` and `play_index`. In other words, an Authorship Verification model can be trained by distinguishing utterances of characters within the same play or scene.
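One way to implement this restriction is to group examples by their segment before forming training pairs. The sketch below uses toy dictionaries that mimic the dataset's features (`query`, `true_target`, `play_index`, `act_index`); the grouping function itself is a hypothetical helper, not part of the dataset:

```python
from collections import defaultdict

def group_by_segment(examples):
    """Group examples by (play_index, act_index) so that verification
    pairs are drawn from characters appearing in the same segment."""
    groups = defaultdict(list)
    for ex in examples:
        groups[(ex["play_index"], ex["act_index"])].append(ex)
    return groups

# Toy examples mimicking the dataset's features
examples = [
    {"query": ["a"], "true_target": ["b"], "play_index": 0, "act_index": 0},
    {"query": ["c"], "true_target": ["d"], "play_index": 0, "act_index": 0},
    {"query": ["e"], "true_target": ["f"], "play_index": 1, "act_index": 0},
]
groups = group_by_segment(examples)
# Characters sharing a segment can serve as hard negatives for each other
assert len(groups[(0, 0)]) == 2
assert len(groups[(1, 0)]) == 1
```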