---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
pretty_name: DramaCV
dataset_info:
- config_name: scene
  splits:
  - name: train
    num_examples: 1507
  - name: validation
    num_examples: 1557
  - name: test
    num_examples: 1319
- config_name: play
  splits:
  - name: train
    num_examples: 226
  - name: validation
    num_examples: 917
  - name: test
    num_examples: 1214
configs:
- config_name: scene
  data_files:
  - split: train
    path: scene/train.json
  - split: validation
    path: scene/validation.json
  - split: test
    path: scene/test.json
- config_name: play
  data_files:
  - split: train
    path: play/train.json
  - split: validation
    path: play/validation.json
  - split: test
    path: play/test.json
---
# Dataset Card for DramaCV

## Dataset Summary

The DramaCV dataset is an English-language dataset of utterances spoken by fictional characters in drama plays collected from Project Gutenberg. It was created automatically by parsing 499 drama plays from the 15th to the 20th century and attributing each character line to its speaker.

## Task

This dataset was developed for Authorship Verification of literary characters. Each data instance contains lines spoken by a single character, which the task requires distinguishing from lines uttered by other characters.

## Subsets

This dataset supports two subsets:

- **Scene**: We split each play into scenes, a small segment unit of drama that is supposed to contain actions occurring at a specific time and place with the same characters. If a play has no `<scene>` tag, we instead split it into acts with the `<act>` tag; acts are larger segment units composed of multiple scenes. For this subset, we only consider plays that have at least one of these tags. A total of **169** plays were parsed for this subset.
- **Play**: We do not segment plays and instead use all character lines in a play. Compared to the scene subset, the number of candidate characters is higher, and discussions can cover a wider range of topics. A total of **287** plays were parsed for this subset.
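
The scene-level segmentation above can be sketched as follows. This is an illustrative assumption, not the authors' parsing code: the real Project Gutenberg markup may differ, and `segment_play` is a hypothetical helper.

```python
import re

def segment_play(text: str) -> list:
    """Split a play's text into segments, preferring <scene> tags and
    falling back to <act> tags (illustrative; real markup may differ)."""
    if "<scene>" in text:
        parts = re.split(r"<scene>", text)
    elif "<act>" in text:
        parts = re.split(r"<act>", text)
    else:
        # Plays with neither tag are skipped for the scene subset.
        return []
    # Drop the (possibly empty) chunk before the first tag.
    return [p.strip() for p in parts if p.strip()]

print(segment_play("<scene>HAMLET. To be.<scene>OPHELIA. My lord."))
# → ['HAMLET. To be.', 'OPHELIA. My lord.']
```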

## Dataset Statistics

We randomly split each subset 80/10/10 into train, validation, and test.

| Subset    | Split | Segments | Utterances | Queries | Targets/Query (avg) |
|-----------|-------|----------|------------|---------|---------------------|
| **Scene** | Train | 1507     | 263270     | 5392    | 5.0                 |
|           | Val   | 240      | 50670      | 1557    | 8.8                 |
|           | Test  | 203      | 41830      | 1319    | 8.7                 |
| **Play**  | Train | 226      | 449407     | 4109    | 90.7                |
|           | Val   | 30       | 63934      | 917     | 55.1                |
|           | Test  | 31       | 74738      | 1214    | 108.5               |


# Usage

## Loading the dataset

```python
from datasets import load_dataset

# Load the scene subset
scene_data = load_dataset("gasmichel/DramaCV", "scene")
print(scene_data)

# DatasetDict({
#    train: Dataset({
#        features: ['query', 'true_target', 'play_index', 'act_index'],
#        num_rows: 1507
#    })
#    validation: Dataset({
#        features: ['query', 'true_target', 'play_index', 'act_index'],
#        num_rows: 1557
#    })
#    test: Dataset({
#        features: ['query', 'true_target', 'play_index', 'act_index'],
#        num_rows: 1319
#    })
#})


# Load the play subset
play_data = load_dataset("gasmichel/DramaCV", "play")
```

## Train vs Val/Test

The train splits contain only *queries*, which are collections of utterances spoken by the same character within a segmentation unit (a *scene* for the *scene* subset, or the *full play* for the *play* subset).

The validation and test data contain both *queries* and *targets*:

- *Queries* contain half of a character's utterances, randomly sampled within the same segmentation unit.
- *Targets* contain the other half of those utterances.
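
The query/target construction can be sketched as follows. This is a minimal illustration of the half/half split described above, not the authors' exact code; the utterances are invented.

```python
import random

def make_query_target(utterances, seed=0):
    """Split one character's utterances into query/target halves,
    mirroring the validation/test construction (illustrative sketch)."""
    rng = random.Random(seed)
    idx = list(range(len(utterances)))
    rng.shuffle(idx)
    half = len(idx) // 2
    query = [utterances[i] for i in sorted(idx[:half])]
    target = [utterances[i] for i in sorted(idx[half:])]
    return query, target

lines = ["To be, or not to be.", "Ay, there's the rub.",
         "The rest is silence.", "Words, words, words."]
query, target = make_query_target(lines)
# Together, query and target cover all of the character's utterances.
```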

## Act and Play Index
Each collection of utterances is assigned an `act_index` and a `play_index`, specifying respectively the act/scene and the play it was taken from.
DramaCV can be used to train Authorship Verification models by restricting training pairs to share the same `act_index` and `play_index`. In other words, an Authorship Verification model can be trained to distinguish utterances of characters within the same `play` or `scene`.
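
The restriction above can be sketched as follows. The toy `instances` mimic the train schema (`query`, `play_index`, `act_index`), and the pairing logic is an illustrative assumption, not the authors' training code.

```python
from collections import defaultdict
from itertools import combinations

def same_segment_pairs(instances):
    """Group train queries by (play_index, act_index) and pair up
    different characters from the same segment (illustrative sketch)."""
    by_segment = defaultdict(list)
    for inst in instances:
        by_segment[(inst["play_index"], inst["act_index"])].append(inst["query"])
    pairs = []
    for queries in by_segment.values():
        # Only characters sharing a segment are contrasted with each other.
        pairs.extend(combinations(queries, 2))
    return pairs

toy = [
    {"query": ["A line."], "play_index": 0, "act_index": 0},
    {"query": ["Another line."], "play_index": 0, "act_index": 0},
    {"query": ["Elsewhere."], "play_index": 1, "act_index": 0},
]
print(len(same_segment_pairs(toy)))  # → 1
```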