---
language:
- en
pretty_name: 'Plot Twists Over Time: How Movie Stories Have Changed Over 95 Years'
tags:
- movies
- embeddings
- semantic-analysis
- temporal-analysis
- text-embeddings
- plot-summaries
- genre-classification
- concept-extraction
- wikidata
- wikipedia
- tmdb
- bge-m3
- text
license: cc-by-nc-4.0
task_categories:
- text-classification
- sentence-similarity
- text-retrieval
- feature-extraction
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: movie_id
    dtype: string
  - name: title
    dtype: string
  - name: year
    dtype: int32
  - name: plot
    dtype: string
  - name: genre
    dtype: string
  - name: embedding
    sequence: float32
    length: 1024
---

# Movie Plot Embeddings Dataset

**Project Title:** Plot Twists Over Time: How Movie Stories Have Changed Over 95 Years
## Dataset Summary

This dataset contains movie metadata, plot summaries, and semantic embeddings for ~92,000 feature-length films (1930-2024). Created for temporal semantic drift analysis, it includes metadata from Wikidata, TMDb, and Wikipedia, along with dense (1024-dim) and sparse embeddings generated using BAAI/bge-m3.

**Key Statistics:** ~92,000 movies | Year range: 1930-2024 | Embedding dimension: 1024 | Average plot length: ~1,500 characters
## Dataset Structure

### Core Files

- `final_dataset.csv` (200 MB): Movie metadata with 39 columns (title, year, genre, plot, directors, actors, TMDb metrics, etc.)
- `final_dense_embeddings.npy` (380 MB): Dense embedding array of shape `(N, 1024)`, produced with BGE-M3 CLS token aggregation
- `final_dense_movie_ids.npy` (7 MB): Wikidata QIDs corresponding to the embeddings (index-aligned: `embeddings[i]` ↔ `movie_ids[i]`)
- `final_sparse_embeddings.npz` (500 MB): Sparse lexical weights for concept extraction (`token_indices`, `weights`, `movie_ids`)

### Additional Files

- `knn_faiss_novelty.csv`: Novelty scores and neighbor information
- `umap_cluster_trajectories.png`: UMAP visualization of the embeddings
- `concept_space/`: WordNet-based concept vocabulary and embeddings for semantic mapping
## Data Collection Pipeline

Four-step pipeline:

1. **Wikidata**: query the SPARQL endpoint for movie metadata (1930-2024, ~8K movies/year) and filter to feature films.
2. **TMDb**: enrich with popularity, vote counts, and ratings via the API.
3. **Wikipedia**: extract plot summaries from sitelinks.
4. **Embeddings**: generate dense and sparse embeddings with BGE-M3 (CLS token aggregation, parallel GPU processing).
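The pipeline's actual queries are not included in this card. As an illustration only, a feature-film query against the Wikidata SPARQL endpoint might look like the sketch below; `P31` (instance of), `P577` (publication date), and `Q24869` (feature film) are standard Wikidata identifiers, but the real query, pagination, and filters are assumptions.

```python
# Hypothetical sketch of step (1): one query per year against
# https://query.wikidata.org/sparql (send with Accept: application/json).
FEATURE_FILM_QUERY = """
SELECT ?movie ?movieLabel ?date WHERE {
  ?movie wdt:P31 wd:Q24869 ;     # instance of: feature film
         wdt:P577 ?date .        # publication date
  FILTER(YEAR(?date) >= 1930 && YEAR(?date) <= 2024)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 100
"""
```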
## Data Fields

- **Core:** movie_id (Wikidata QID), title, year, imdb_id, tmdb_id
- **Metadata:** release_date, country, duration, wikidata_class, wikipedia_link
- **Creative Team:** directors, directors_id, actors, actors_id (pipe-separated when multiple)
- **Genre:** genre (raw, comma-separated), genre_id, genre_cluster_ids, genre_cluster_names (processed)
- **Plot:** plot (full text), plot_section, plot_length_chars, plot_length_tokens, num_different_tokens, token_shannon_entropy
- **TMDb Metrics:** popularity, vote_average, vote_count
- **Financial (if available):** budget, budget_currency, box_office, box_office_currency, box_office_worldwide
- **Other:** set_in_period, awards, summary
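Multi-valued fields such as `directors` are pipe-separated strings. A minimal sketch of parsing them with pandas (the toy rows below are illustrative, not from the dataset):

```python
import pandas as pd

# Toy rows mimicking the pipe-separated convention for multi-valued fields
df = pd.DataFrame({
    'movie_id': ['Q100', 'Q200'],
    'directors': ['Lana Wachowski|Lilly Wachowski', 'Stanley Kubrick'],
})

# Split each cell into a Python list ('|' is treated literally by default)
df['directors_list'] = df['directors'].str.split('|')

# Or explode to one row per director for group-by analyses
exploded = df.explode('directors_list')
```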
## Loading the Dataset

### Basic Python Example

```python
import numpy as np
import pandas as pd

# Load metadata and embeddings
df = pd.read_csv('final_dataset.csv', low_memory=False)
embeddings = np.load('final_dense_embeddings.npy')
movie_ids = np.load('final_dense_movie_ids.npy', allow_pickle=True)

# Merge embeddings with metadata (inner join keeps only movies with embeddings)
embeddings_df = pd.DataFrame({'movie_id': movie_ids, 'embedding': list(embeddings)})
combined_df = pd.merge(df, embeddings_df, on='movie_id', how='inner')
```
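The 380 MB embedding file does not have to be read into memory at once; NumPy's `mmap_mode` maps it lazily from disk. A small sketch using a synthetic stand-in file (`demo_embeddings.npy` is not part of the dataset):

```python
import numpy as np

# Synthetic stand-in for final_dense_embeddings.npy
np.save('demo_embeddings.npy', np.random.rand(100, 1024).astype(np.float32))

# mmap_mode='r' maps the file read-only; rows are fetched from disk on access
emb = np.load('demo_embeddings.npy', mmap_mode='r')

# Materialize a single row as a regular in-memory array
row = np.array(emb[42])
```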
### Using Utilities

```python
from src.data_utils import load_final_data_with_embeddings

df = load_final_data_with_embeddings(
    csv_path="data/data_final/final_dataset.csv",
    data_dir="data/data_final",
    verbose=True,
)
```
## Embedding Details

- Model: BAAI/bge-m3 | Dimension: 1024 | Method: CLS token aggregation | Normalization: L2-normalized
- Alignment: `final_dense_embeddings[i]` corresponds to `final_dense_movie_ids[i]`
- Alternative chunking: mean pooling, chunk-first, and late chunking are also supported (stored with file-name suffixes)
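Because the vectors are L2-normalized, cosine similarity reduces to a plain dot product. A minimal sketch with toy vectors (the random data below stands in for the real embeddings):

```python
import numpy as np

# Toy stand-ins for the stored embeddings, L2-normalized like the real ones
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 1024)).astype(np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# For unit vectors, the matrix of dot products IS the cosine-similarity matrix
cos = emb @ emb.T
```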
## Concept Space

Pre-computed WordNet-based concept vocabulary: the top 20K nouns (Zipf frequency ≥ 4.0), embedded with BGE-small-en-v1.5. File names are parameterized by Zipf threshold, vocabulary size, and model name.
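Semantic mapping against the concept space amounts to a nearest-neighbor lookup over normalized concept vectors. A sketch under stated assumptions: the concept names and vectors below are synthetic, and 384 is the output dimension of BGE-small-en-v1.5.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical vocabulary entries and unit-norm concept vectors (dim 384)
concepts = ['war', 'love', 'space']
concept_emb = rng.normal(size=(3, 384)).astype(np.float32)
concept_emb /= np.linalg.norm(concept_emb, axis=1, keepdims=True)

# A query vector close to the 'space' concept, plus a little noise
plot_emb = concept_emb[2] + 0.1 * rng.normal(size=384).astype(np.float32)
plot_emb /= np.linalg.norm(plot_emb)

# Nearest concept by cosine similarity (dot product on unit vectors)
nearest = concepts[int(np.argmax(concept_emb @ plot_emb))]
```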
## Data Quality

- **Completeness:** all movies have core fields; ~92K plots, ~91K TMDb records, ~75K director lists, ~81K genre labels.
- **Filtering:** feature films only; plots filtered by length and entropy; explicit content excluded; duplicates removed.
- **Cleaning:** plot text normalized; genres clustered; token Shannon entropy threshold: 4.8398.
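The `token_shannon_entropy` field measures how varied a plot's vocabulary is. A minimal sketch of the standard formula, H = -Σ p·log₂(p) over token frequencies; the pipeline's exact tokenizer is not specified in this card, so whitespace splitting here is an assumption:

```python
from collections import Counter
import math

def token_shannon_entropy(text: str) -> float:
    """Shannon entropy (bits) of the token frequency distribution.

    Whitespace tokenization is an assumption; the dataset's own
    tokenizer may differ.
    """
    tokens = text.lower().split()
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Plots whose entropy falls below the 4.8398 threshold (e.g. highly repetitive stub text) are filtered out.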
## Use Cases

- **Temporal Analysis:** compute decade centroids to analyze semantic drift over time.
- **Genre Classification:** use embeddings for clustering or classification tasks.
- **Similarity Search:** find similar movies via cosine similarity on the embeddings.
- **Concept Extraction:** map plot nouns to the concept space using sparse lexical weights.
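The temporal-analysis use case can be sketched as follows: group embeddings by decade, average each group into a centroid, and track the cosine distance between consecutive centroids. The data below is synthetic (random 64-dim vectors stand in for the real 1024-dim embeddings):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: 500 movies with years and unit-norm embeddings
years = rng.integers(1930, 2025, size=500)
emb = rng.normal(size=(500, 64)).astype(np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# Average each decade's embeddings into a re-normalized centroid
decades = (years // 10) * 10
centroids = {}
for d in np.unique(decades):
    c = emb[decades == d].mean(axis=0)
    centroids[int(d)] = c / np.linalg.norm(c)

# Cosine distance between consecutive decade centroids = semantic drift
keys = sorted(centroids)
drift = [1.0 - float(centroids[a] @ centroids[b]) for a, b in zip(keys, keys[1:])]
```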
## Citation

```bibtex
@dataset{movie_plot_embeddings_2026,
  title={Plot Twists Over Time: How Movie Stories Have Changed Over 95 Years},
  author={Cheung, Ansel and Villa, Alessio and Markovinović, Bartol and López de Ipiña, Martín and Abraham, Niklas},
  year={2026},
  note={Dataset collected from Wikidata, TMDb, and Wikipedia for temporal semantic analysis of movie plots}
}
```
## License

This dataset inherits licenses from its source data:
- Wikidata: CC0 1.0 (Public Domain)
- Wikipedia: CC BY-SA 4.0
- TMDb: CC BY-NC 4.0 (non-commercial)
**Dataset License:** CC BY-NC 4.0, for non-commercial research. Commercial use requires TMDb licensing and compliance with Wikipedia's terms.

**Code:** MIT License (see the main repository).

**Attribution Required:** cite Wikidata, Wikipedia, and TMDb contributors, and this dataset.
## Acknowledgments

Wikidata, TMDb, Wikipedia, BAAI (BGE-M3), WordNet
## Notes

- Embeddings are L2-normalized
- Movie IDs are Wikidata QIDs (format: "Q####")
- Plot text is cleaned and normalized
- Genres may be multi-label (pipe-separated)
- Some fields may be NaN for older or less popular films