---
language: en
license: cc-by-nc-4.0
tags:
- movies
- screenplays
- oscar
- text-classification
- embeddings
size_categories:
- 1K<n<10K
task_categories:
- text-classification
pretty_name: Movie-O-Label
dataset_info:
  features:
    - name: movie_name
      dtype: string
    - name: imdb_id
      dtype: string
    - name: title
      dtype: string
    - name: year
      dtype: int64
    - name: summary
      dtype: string
    - name: script
      dtype: string
    - name: script_plain
      dtype: string
    - name: script_clean
      dtype: string
    - name: nominated
      dtype: int64
    - name: winner
      dtype: int64
---





# Movie-O-Label

**Movie-O-Label** is a dataset created by merging the [MovieSum](https://huggingface.co/datasets/rohitsaxena/MovieSum) screenplay collection with Oscar nomination labels derived from [David V. Lu’s Oscar Data](https://github.com/DLu/oscar_data).  
It provides **screenplays, summaries, and metadata** together with binary labels indicating whether a movie’s screenplay received an **Oscar nomination** and whether it **won**.

---

## Contents

Each entry includes:


| column          | type    | description                                                                 |
|-----------------|---------|-----------------------------------------------------------------------------|
| `movie_name`    | string  | Title and year combined, e.g. `The Social Network_2010`                     |
| `title`         | string  | Movie title                                                                 |
| `year`          | int     | Release year                                                                |
| `imdb_id`       | string  | IMDb identifier (e.g. `tt1285016`)                                          |
| `summary`       | string  | Plot summary of the movie                                                   |
| `script_clean`  | string  | `script_plain` after cleaning: Unicode normalization, stage directions and scene transitions stripped where possible, whitespace reduced |
| `script_plain`  | string  | Screenplay text with only the XML tags removed from `script`                |
| `script`        | string  | Raw script field from MovieSum (for reference)                              |
| `nominated`     | int     | `1` if the screenplay was nominated for an Academy Award (Writing), else `0` |
| `winner`        | int     | `1` if the screenplay won an Academy Award (Writing), else `0`              |
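
For orientation, a record follows the shape below. The values are hypothetical placeholders (only the IMDb id, naming convention, and label semantics are taken from the table above):

```python
record = {
    "movie_name": "The Social Network_2010",  # title and year joined by "_"
    "imdb_id": "tt1285016",
    "title": "The Social Network",
    "year": 2010,
    "nominated": 1,  # 1 = nominated for the Writing award
    "winner": 1,     # 1 = won the Writing award
}

# The movie_name field follows the "<title>_<year>" convention.
assert record["movie_name"] == f"{record['title']}_{record['year']}"
```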

---

## Splits

The dataset is provided as a **`DatasetDict`** with:

- `train` — 60% (1320 movies)
- `validation` — 20% (440 movies)
- `test` — 20% (440 movies)

Splits were created with **stratification** on the `nominated` label to preserve class balance.

A file `split_60_20_20.npz` with the exact index arrays (`idx_train`, `idx_val`, `idx_test`) is also provided for full reproducibility.
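
For reference, a stratified 60/20/20 split of this shape can be generated with scikit-learn. The sketch below uses synthetic labels and writes a demo file; the real `split_60_20_20.npz` stores its arrays under the same `idx_*` keys:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the `nominated` column (2200 movies, ~12% positives).
rng = np.random.default_rng(0)
y = (rng.random(2200) < 0.12).astype(int)
idx = np.arange(len(y))

# 60/20/20, stratified on the label: first split off 40%, then halve it.
idx_train, idx_tmp = train_test_split(idx, test_size=0.4, stratify=y, random_state=42)
idx_val, idx_test = train_test_split(idx_tmp, test_size=0.5, stratify=y[idx_tmp], random_state=42)

np.savez("split_60_20_20_demo.npz",
         idx_train=idx_train, idx_val=idx_val, idx_test=idx_test)
```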

---

## Additional Resources

To fully reproduce the experiments described in the paper:

- [Paper (PDF)](./assets/FrancisGross_PredictionNominatedScreenplays_2025.pdf)
- [Fixed Train/Validation/Test split (split_60_20_20.npz)](./assets/splits/split_60_20_20.npz)
- [Script embeddings (emb_script.joblib)](./assets/embeddings/emb_script.joblib)
- [Summary embeddings (emb_summary.joblib)](./assets/embeddings/emb_summary.joblib)
- [Title embeddings (emb_title.joblib)](./assets/embeddings/emb_title.joblib)
- [Model configuration (model_config.json)](./assets/config/best-performing-model_config.json)
- [Project code (Jupyter notebook)](./assets/code/FrancisGross_screenplay_pred_nom.ipynb)







## License & Attribution

- **MovieSum dataset**:  
  Created and published by [Rohit Saxena](https://huggingface.co/datasets/rohitsaxena/MovieSum) (with Frank Keller).  
  Licensed under the **Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)** license.  
  If you use this dataset, please cite:  
  *Rohit Saxena and Frank Keller. "MovieSum: An Abstractive Summarization Dataset for Movie Screenplays." Findings of ACL 2024. arXiv:2408.06281.*

- **Oscar nominations**:  
  Data adapted from [David V. Lu!!’s Oscar Data](https://github.com/DLu/oscar_data)  
  Licensed under the **BSD 2-Clause License** © 2022 David V. Lu!!.

- **Movie-O-Label**:  
  Created and processed by [Francis Gross](https://huggingface.co/datasets/Francis2003/Movie-O-Label), based on cleaned MovieSum screenplay texts enriched with Oscar nomination and winner labels.  
  Released under the **Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)** license.
  If you use this dataset, please cite:  
  *Francis Gross. "Movie-O-Label: Predicting Oscar-Nominated Screenplays with Sentence Embeddings." Findings of ACL 2025 on Hugging Face.*


## Baseline Workflow

This work provides a simple baseline for predicting whether a screenplay receives an Oscar nomination
in the *Writing/Screenplays* category.

1. **Load the dataset**  
   ```python
   from datasets import load_dataset
   ds = load_dataset("Francis2003/Movie-O-Label")
   ```

   The dataset includes a predefined **60/20/20 train/validation/test split** (`split_60_20_20.npz`).

2. **Text preparation**
   Use one or more of the available feature fields:

   * `script_clean` (recommended for embeddings)
   * `summary`
   * `title`

3. **Embeddings**
   Encode the texts with [**intfloat/e5-base-v2**](https://huggingface.co/intfloat/e5-base-v2).
   Each screenplay can be chunked (e.g., into 400-word windows with 80-word overlap), each chunk encoded,
   and the chunk embeddings mean+max pooled and L2-normalized per movie.

4. **Classifier**
   Train a logistic regression classifier with:

   ```python
   from sklearn.linear_model import LogisticRegression
   clf = LogisticRegression(max_iter=5000, class_weight="balanced", C=1.0)
   ```

   Select the threshold on the **validation set** to maximize F1 for the positive class (nominated).

5. **Evaluation**
   Report metrics such as Accuracy, ROC-AUC, PR-AUC, F1 (positive/negative) and Macro-F1.

> The best-performing baseline used **script_clean + summary + title** embeddings
> and achieved **ROC-AUC ≈ 0.79** and **Macro-F1 ≈ 0.68** on the test set.
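
The workflow above can be sketched end to end. The sketch is illustrative only: real `e5-base-v2` embeddings are replaced with synthetic vectors so it runs without model downloads, and `chunk_words` is a hypothetical helper matching the 400/80 chunking described in step 3:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score

def chunk_words(text, size=400, overlap=80):
    """Split text into overlapping word windows (step = size - overlap)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

rng = np.random.default_rng(0)

# Stand-in for pooled screenplay embeddings: in the real pipeline each chunk is
# encoded with e5-base-v2, then mean+max pooled and L2-normalized per movie.
X = rng.normal(size=(600, 32))
y = (rng.random(600) < 0.15).astype(int)
X[y == 1] += 0.8  # shift positives so the toy problem is learnable

X_train, y_train = X[:360], y[:360]      # 60%
X_val, y_val = X[360:480], y[360:480]    # 20%
X_test, y_test = X[480:], y[480:]        # 20%

clf = LogisticRegression(max_iter=5000, class_weight="balanced", C=1.0)
clf.fit(X_train, y_train)

# Choose the decision threshold that maximizes F1 on the validation set.
val_scores = clf.predict_proba(X_val)[:, 1]
best_t = max(np.linspace(0.05, 0.95, 19),
             key=lambda t: f1_score(y_val, (val_scores >= t).astype(int)))

test_scores = clf.predict_proba(X_test)[:, 1]
test_pred = (test_scores >= best_t).astype(int)
print("ROC-AUC :", round(roc_auc_score(y_test, test_scores), 3))
print("Macro-F1:", round(f1_score(y_test, test_pred, average="macro"), 3))
```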



## Usage

```python
from datasets import load_dataset

# Public dataset:
ds = load_dataset("Francis2003/Movie-O-Label")

print(ds)
print(ds["train"][0])
```