---
language:
- en
pretty_name: "Plot Twists Over Time: How Movie Stories Have Changed Over 95 Years"
tags:
- movies
- embeddings
- semantic-analysis
- temporal-analysis
- text-embeddings
- plot-summaries
- genre-classification
- concept-extraction
- wikidata
- wikipedia
- tmdb
- bge-m3
- text
license: "cc-by-nc-4.0"
task_categories:
- text-classification
- sentence-similarity
- text-retrieval
- feature-extraction
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: movie_id
    dtype: string
  - name: title
    dtype: string
  - name: year
    dtype: int32
  - name: plot
    dtype: string
  - name: genre
    dtype: string
  - name: embedding
    dtype: float32
    shape: [1024]
---
# Movie Plot Embeddings Dataset
**Project Title**: Plot Twists Over Time: How Movie Stories Have Changed Over 95 Years
## Dataset Summary
This dataset contains movie metadata, plot summaries, and semantic embeddings for ~92,000 feature-length films (1930-2024). Created for temporal semantic drift analysis, it includes metadata from Wikidata, TMDb, and Wikipedia, along with dense (1024-dim) and sparse embeddings generated using BAAI/bge-m3.
**Key Statistics**: ~92,000 movies | Year range: 1930-2024 | Embedding dimension: 1024 | Average plot length: ~1,500 characters
## Dataset Structure
### Core Files
- **`final_dataset.csv`** (200 MB): Movie metadata with 39 columns (title, year, genre, plot, directors, actors, TMDb metrics, etc.)
- **`final_dense_embeddings.npy`** (380 MB): Dense embeddings array `(N, 1024)` using BGE-M3 CLS token aggregation
- **`final_dense_movie_ids.npy`** (7 MB): Wikidata QIDs corresponding to embeddings (index-aligned: `embeddings[i]` ↔ `movie_ids[i]`)
- **`final_sparse_embeddings.npz`** (500 MB): Sparse lexical weights for concept extraction (token_indices, weights, movie_ids)
### Additional Files
- **`knn_faiss_novelty.csv`**: Novelty scores and neighbor information
- **`umap_cluster_trajectories.png`**: UMAP visualization of embeddings
- **`concept_space/`**: WordNet-based concept vocabulary and embeddings for semantic mapping
## Data Collection Pipeline
Four-step pipeline:
1. **Wikidata**: SPARQL queries for movie metadata (1930-2024, ~8K movies per year), filtered to feature films
2. **TMDb**: Enrichment with popularity, vote counts, and ratings via the TMDb API
3. **Wikipedia**: Plot summaries extracted from Wikidata sitelinks
4. **Embeddings**: Dense and sparse embeddings generated with BGE-M3 (CLS token aggregation), parallelized across GPUs
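The Wikidata step can be sketched as below. The SPARQL endpoint URL is real, but the query itself is a minimal illustration: the class `wd:Q11424` (film), the properties used, and the helper names are assumptions, not the project's actual query or code.

```python
import json
import urllib.parse
import urllib.request

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"  # public endpoint

def build_movie_query(year: int, limit: int = 8000) -> str:
    """Build a SPARQL query for films released in a given year.

    wd:Q11424 (film) and the property choices are assumptions; the
    dataset's real query (and its feature-film filtering) may differ.
    """
    return f"""SELECT ?movie ?movieLabel ?date WHERE {{
  ?movie wdt:P31 wd:Q11424;   # instance of: film
         wdt:P577 ?date.      # publication date
  FILTER(YEAR(?date) = {year})
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
LIMIT {limit}"""

def fetch_movies(year: int) -> list:
    """Run the query against the public endpoint and return JSON bindings."""
    params = urllib.parse.urlencode(
        {"query": build_movie_query(year), "format": "json"}
    )
    req = urllib.request.Request(
        f"{WIKIDATA_SPARQL}?{params}",
        headers={"User-Agent": "movie-plots-dataset-example/0.1"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["results"]["bindings"]
```

The `User-Agent` header matters in practice: the Wikimedia endpoints throttle or reject anonymous clients.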
## Data Fields
**Core**: `movie_id` (Wikidata QID), `title`, `year`, `imdb_id`, `tmdb_id`
**Metadata**: `release_date`, `country`, `duration`, `wikidata_class`, `wikipedia_link`
**Creative Team**: `directors`, `directors_id`, `actors`, `actors_id` (pipe-separated for multiple)
**Genre**: `genre` (raw, comma-separated), `genre_id`, `genre_cluster_ids`, `genre_cluster_names` (processed)
**Plot**: `plot` (full text), `plot_section`, `plot_length_chars`, `plot_length_tokens`, `num_different_tokens`, `token_shannon_entropy`
**TMDb Metrics**: `popularity`, `vote_average`, `vote_count`
**Financial** (if available): `budget`, `budget_currency`, `box_office`, `box_office_currency`, `box_office_worldwide`
**Other**: `set_in_period`, `awards`, `summary`
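Since multi-valued fields such as `directors` and `actors` are pipe-separated strings, splitting them is a common first step. `parse_multi` below is a hypothetical helper (not part of the dataset's codebase), and the values shown are illustrative:

```python
import pandas as pd

def parse_multi(value) -> list:
    """Split a pipe-separated field (e.g. `directors`, `actors`) into a
    list of strings, returning [] for NaN/None entries."""
    if pd.isna(value):
        return []
    return [part.strip() for part in str(value).split("|") if part.strip()]

print(parse_multi("Lana Wachowski|Lilly Wachowski"))  # → ['Lana Wachowski', 'Lilly Wachowski']
print(parse_multi(float("nan")))                      # → []
```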
## Loading the Dataset
### Basic Python Example
```python
import numpy as np
import pandas as pd
# Load metadata and embeddings
df = pd.read_csv('final_dataset.csv', low_memory=False)
embeddings = np.load('final_dense_embeddings.npy')
movie_ids = np.load('final_dense_movie_ids.npy', allow_pickle=True)
# Merge embeddings with metadata
embeddings_df = pd.DataFrame({'movie_id': movie_ids, 'embedding': list(embeddings)})
combined_df = pd.merge(df, embeddings_df, on='movie_id', how='inner')
```
### Using Utilities
```python
from src.data_utils import load_final_data_with_embeddings

df = load_final_data_with_embeddings(
    csv_path="data/data_final/final_dataset.csv",
    data_dir="data/data_final",
    verbose=True,
)
```
## Embedding Details
- **Model**: BAAI/bge-m3 | **Dimension**: 1024 | **Method**: CLS token aggregation | **Normalization**: L2-normalized
- **Alignment**: `final_dense_embeddings[i]` corresponds to `final_dense_movie_ids[i]`
- **Alternative chunking**: Supports mean pooling, chunk-first, late chunking (stored with suffixes)
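Given the index alignment between the embeddings array and the QID array, building a QID-to-index dictionary once avoids repeated scans. This is a minimal sketch using synthetic data in place of the real `.npy` files; the QIDs are placeholders:

```python
import numpy as np

# Synthetic stand-ins for final_dense_embeddings.npy / final_dense_movie_ids.npy
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 1024)).astype(np.float32)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)  # L2-normalize
movie_ids = np.array(["Q100001", "Q100002", "Q100003"])  # placeholder QIDs

# Build the QID -> row-index map once; each lookup is then O(1)
qid_to_idx = {qid: i for i, qid in enumerate(movie_ids)}

def get_embedding(qid: str) -> np.ndarray:
    """Return the 1024-dim dense vector aligned with a Wikidata QID."""
    return embeddings[qid_to_idx[qid]]

vec = get_embedding("Q100002")
print(vec.shape)  # shape (1024,), unit norm
```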
## Concept Space
Pre-computed WordNet-based concept vocabulary: top 20K nouns (Zipf ≥ 4.0), embedded with BGE-small-en-v1.5. Files parameterized by Zipf threshold, vocabulary size, and model name.
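The vocabulary construction might look like the sketch below. The hardcoded frequency table keeps it runnable without NLTK or `wordfreq` (which a real pipeline would presumably use, e.g. `wordfreq.zipf_frequency(word, "en")`); the values and helper are illustrative only:

```python
# Hypothetical Zipf frequencies (log10 occurrences per billion words)
ZIPF = {
    "murder": 4.6, "detective": 4.3, "spaceship": 3.4,
    "love": 5.5, "zymurgy": 1.2,
}

def build_vocab(candidates: dict, zipf_min: float = 4.0,
                max_size: int = 20_000) -> list:
    """Keep nouns at or above the Zipf threshold, most frequent first."""
    kept = [(z, w) for w, z in candidates.items() if z >= zipf_min]
    return [w for z, w in sorted(kept, reverse=True)][:max_size]

print(build_vocab(ZIPF))  # → ['love', 'murder', 'detective']
```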
## Data Quality
**Completeness**: All movies have core fields; plot summaries cover ~92K films, TMDb data ~91K, directors ~75K, and genres ~81K.
**Filtering**: Feature films only; plots filtered by length/entropy; explicit content excluded; duplicates removed.
**Cleaning**: Plot text normalized; genres clustered; Shannon entropy threshold: 4.8398.
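The token-level Shannon entropy used for filtering is H = -Σ p(t) log₂ p(t) over the token frequency distribution. The sketch below is one plausible reading of `token_shannon_entropy`, not the project's exact implementation (its tokenizer and log base may differ):

```python
import math
from collections import Counter

def token_shannon_entropy(text: str) -> float:
    """Shannon entropy (bits) of the whitespace-token distribution.

    Assumes whitespace tokenization and log base 2; the dataset's
    actual tokenizer may differ.
    """
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Degenerate, repetitive plots score near 0 and fall below the threshold;
# ordinary prose scores higher
print(token_shannon_entropy("spam spam spam spam"))  # → 0.0
print(token_shannon_entropy("a detective investigates a mysterious murder"))
```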
## Use Cases
**Temporal Analysis**: Compute decade centroids to analyze semantic drift over time.
**Genre Classification**: Use embeddings for clustering or classification tasks.
**Similarity Search**: Find similar movies using cosine similarity on embeddings.
**Concept Extraction**: Map plot nouns to concept space using sparse lexical weights.
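Because the embeddings are L2-normalized, cosine similarity reduces to a dot product, which covers both the similarity-search and decade-centroid use cases. This sketch runs on synthetic vectors in place of the real arrays; the years and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: 5 L2-normalized "plot embeddings" with release years
embeddings = rng.normal(size=(5, 1024)).astype(np.float32)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)
years = np.array([1931, 1938, 1994, 1997, 1999])

# Similarity search: dot product == cosine similarity for unit vectors
query = embeddings[0]
sims = embeddings @ query
nearest = np.argsort(-sims)[1]  # index 0 is the query itself

# Temporal analysis: per-decade centroids, re-normalized for comparison
decades = (years // 10) * 10
centroids = {}
for d in np.unique(decades):
    c = embeddings[decades == d].mean(axis=0)
    centroids[d] = c / np.linalg.norm(c)

drift = centroids[1930] @ centroids[1990]  # cosine similarity between decades
print(f"nearest to query: index {nearest}, decade similarity: {drift:.3f}")
```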
## Citation
```bibtex
@dataset{movie_plot_embeddings_2026,
  title={Plot Twists Over Time: How Movie Stories Have Changed Over 95 Years},
  author={Cheung, Ansel and Villa, Alessio and Markovinović, Bartol and López de Ipiña, Martín and Abraham, Niklas},
  year={2026},
  note={Dataset collected from Wikidata, TMDb, and Wikipedia for temporal semantic analysis of movie plots}
}
```
## License
This dataset inherits licenses from source data:
- **Wikidata**: CC0 1.0 (Public Domain)
- **Wikipedia**: CC BY-SA 4.0
- **TMDb**: CC BY-NC 4.0 (non-commercial)
**Dataset License**: CC BY-NC 4.0 for non-commercial research. Commercial use requires a separate TMDb license and compliance with Wikipedia's CC BY-SA terms.
**Code**: MIT License (see main repository).
**Attribution Required**: Cite Wikidata, Wikipedia, TMDb contributors, and this dataset.
## Acknowledgments
Wikidata, TMDb, Wikipedia, BAAI (BGE-M3), WordNet
## Notes
- Embeddings are L2-normalized
- Movie IDs are Wikidata QIDs (format: "Q####")
- Plot text cleaned and normalized
- Genres may be multi-label (pipe-separated)
- Some fields may be NaN for older/less popular films