---
language: en
license: cc-by-nc-4.0
tags:
- movies
- screenplays
- oscar
- text-classification
- embeddings
size_categories:
- 1K<n<10K
task_categories:
- text-classification
pretty_name: Movie-O-Label
dataset_info:
  features:
  - name: movie_name
    dtype: string
  - name: imdb_id
    dtype: string
  - name: title
    dtype: string
  - name: year
    dtype: int64
  - name: summary
    dtype: string
  - name: script
    dtype: string
  - name: script_plain
    dtype: string
  - name: script_clean
    dtype: string
  - name: nominated
    dtype: int64
  - name: winner
    dtype: int64
---
# Movie-O-Label

**Movie-O-Label** is a dataset created by merging the [MovieSum](https://huggingface.co/datasets/rohitsaxena/MovieSum) screenplay collection with Oscar nomination labels derived from [David V. Lu’s Oscar Data](https://github.com/DLu/oscar_data).
It provides **screenplays, summaries, and metadata** together with binary labels indicating whether a movie’s screenplay received an **Oscar nomination** and whether it **won**.

---

## Contents

Each entry includes:

| column         | type   | description                                                                                                                              |
|----------------|--------|------------------------------------------------------------------------------------------------------------------------------------------|
| `movie_name`   | string | Title and year combined, e.g. `The Social Network_2010`                                                                                    |
| `title`        | string | Movie title                                                                                                                                |
| `year`         | int    | Release year                                                                                                                               |
| `imdb_id`      | string | IMDb identifier (e.g. `tt1285016`)                                                                                                         |
| `summary`      | string | Plot summary of the movie                                                                                                                  |
| `script_clean` | string | `script_plain` after cleaning (Unicode normalization, stage directions and scene transitions stripped where possible, whitespace reduced)  |
| `script_plain` | string | Original screenplay text with only XML tags removed                                                                                        |
| `script`       | string | Raw script field from MovieSum (for reference)                                                                                             |
| `nominated`    | int    | `1` if the screenplay was nominated for an Academy Award (Writing), else `0`                                                               |
| `winner`       | int    | `1` if the screenplay won an Academy Award (Writing), else `0`                                                                             |

---

## Splits

The dataset is provided as a **`DatasetDict`** with:

- `train` — 60% (1320 movies)
- `validation` — 20% (440 movies)
- `test` — 20% (440 movies)

Splits were created with **stratification** on the `nominated` label to preserve class balance.

A file `split_60_20_20.npz` containing the exact index arrays (`idx_train`, `idx_val`, `idx_test`) is also provided for full reproducibility.
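
For illustration, a split of this shape can be regenerated and the `.npz` file read back as follows. This is a minimal sketch with toy labels: the class ratio, random seed, and the file name `split_demo.npz` are assumptions; only the array key names match the provided `split_60_20_20.npz`.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-in for the real `nominated` column (2200 movies, ~10% positive).
labels = np.array([0] * 1980 + [1] * 220)
idx = np.arange(len(labels))

# Carve off 60% train, then split the remaining 40% in half,
# stratifying on the label both times.
idx_train, idx_tmp = train_test_split(
    idx, test_size=0.4, stratify=labels, random_state=42
)
idx_val, idx_test = train_test_split(
    idx_tmp, test_size=0.5, stratify=labels[idx_tmp], random_state=42
)

np.savez("split_demo.npz", idx_train=idx_train, idx_val=idx_val, idx_test=idx_test)

# Loading works the same way for the provided split_60_20_20.npz.
splits = np.load("split_demo.npz")
assert len(splits["idx_train"]) == 1320
assert len(splits["idx_val"]) == len(splits["idx_test"]) == 440
```

The two-stage `train_test_split` keeps the positive-class share identical across all three partitions.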

---

## Additional Resources

To fully reproduce the experiments described in the paper:

- [Paper (PDF)](./assets/FrancisGross_PredictionNominatedScreenplays_2025.pdf)
- [Fixed train/validation/test split (split_60_20_20.npz)](./assets/splits/split_60_20_20.npz)
- [Script embeddings (emb_script.joblib)](./assets/embeddings/emb_script.joblib)
- [Summary embeddings (emb_summary.joblib)](./assets/embeddings/emb_summary.joblib)
- [Title embeddings (emb_title.joblib)](./assets/embeddings/emb_title.joblib)
- [Model configuration (model_config.json)](./assets/config/best-performing-model_config.json)
- [Project code (Jupyter notebook)](./assets/code/FrancisGross_screenplay_pred_nom.ipynb)
## License & Attribution

- **MovieSum dataset**:
  Created and published by [Rohit Saxena](https://huggingface.co/datasets/rohitsaxena/MovieSum) (with Frank Keller).
  Licensed under the **Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)** license.
  If you use this dataset, please cite:
  *Rohit Saxena and Frank Keller. "MovieSum: An Abstractive Summarization Dataset for Movie Screenplays." Findings of ACL 2024. arXiv:2408.06281.*

- **Oscar nominations**:
  Data adapted from [David V. Lu!!’s Oscar Data](https://github.com/DLu/oscar_data).
  Licensed under the **BSD 2-Clause License** © 2022 David V. Lu!!.

- **Movie-O-Label**:
  Created and processed by [Francis Gross](https://huggingface.co/datasets/Francis2003/Movie-O-Label), based on cleaned MovieSum screenplay texts enriched with Oscar nomination and winner labels.
  Released under the **Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)** license.
  If you use this dataset, please cite:
  *Francis Gross. "Movie-O-Label: Predicting Oscar-Nominated Screenplays with Sentence Embeddings." Findings of ACL 2025 on Hugging Face.*
## Baseline Workflow

This work provides a simple baseline for predicting whether a screenplay receives an Oscar nomination in the *Writing/Screenplays* category.

1. **Load the dataset**

   ```python
   from datasets import load_dataset

   ds = load_dataset("Francis2003/Movie-O-Label")
   ```

   The dataset includes a predefined **60/20/20 train/validation/test split** (`split_60_20_20.npz`).

2. **Text preparation**
   Use one or more of the available feature fields:

   * `script_clean` (recommended for embeddings)
   * `summary`
   * `title`

3. **Embeddings**
   Encode the texts with [**intfloat/e5-base-v2**](https://huggingface.co/intfloat/e5-base-v2).
   Each screenplay can be chunked (e.g., 400 words with 80-word overlap), encoded chunk by chunk, then aggregated with mean+max pooling and L2-normalized.

4. **Classifier**
   Train a logistic regression classifier with:

   ```python
   from sklearn.linear_model import LogisticRegression

   clf = LogisticRegression(max_iter=5000, class_weight="balanced", C=1.0)
   ```

   Select the decision threshold on the **validation set** to maximize F1 for the positive class (nominated).

5. **Evaluation**
   Report metrics such as Accuracy, ROC-AUC, PR-AUC, F1 (positive/negative), and Macro-F1.

> The best-performing baseline used **script_clean + summary + title** embeddings
> and achieved **ROC-AUC ≈ 0.79** and **Macro-F1 ≈ 0.68** on the test set.
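
Steps 2–4 can be sketched end to end. This is a minimal illustration, not the paper's implementation: the encoder is replaced by a random stand-in (`fake_encode`), the corpus is synthetic, and the threshold is tuned on the training scores only because no real validation split exists in the toy setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def chunk_words(text, size=400, overlap=80):
    """Split a text into overlapping word chunks (step 3)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def fake_encode(chunks, dim=32):
    """Stand-in for the e5-base-v2 encoder: one random vector per chunk."""
    return rng.normal(size=(len(chunks), dim))

def embed(text):
    """Chunk, encode, mean+max pool, L2-normalize."""
    vecs = fake_encode(chunk_words(text))
    pooled = np.concatenate([vecs.mean(axis=0), vecs.max(axis=0)])
    return pooled / np.linalg.norm(pooled)

# Toy corpus standing in for screenplays (500–2000 words each).
texts = ["word " * int(n) for n in rng.integers(500, 2000, size=200)]
X = np.stack([embed(t) for t in texts])
y = rng.integers(0, 2, size=200)

clf = LogisticRegression(max_iter=5000, class_weight="balanced", C=1.0).fit(X, y)

# Scan candidate thresholds and keep the one maximizing positive-class F1;
# in the real workflow this scan runs on the validation split.
scores = clf.predict_proba(X)[:, 1]
thresholds = np.linspace(0.1, 0.9, 81)
best_t = max(thresholds, key=lambda t: f1_score(y, (scores >= t).astype(int)))
```

With real embeddings, `predict_proba` scores from the validation split replace `scores` above, and the chosen `best_t` is then applied unchanged to the test split.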
## Usage

```python
from datasets import load_dataset

# Public dataset:
ds = load_dataset("Francis2003/Movie-O-Label")

print(ds)
print(ds["train"][0])
```