## Dataset Summary

This dataset contains movie metadata, plot summaries, and semantic embeddings for ~92,000 feature-length films (1930-2024).

**Key Statistics**: ~92,000 movies | Year range: 1930-2024 | Embedding dimension: 1024 | Average plot length: ~1,500 characters

**Primary Use Case**: Temporal semantic analysis, genre classification, concept extraction, and embedding-based movie similarity analysis

## Dataset Structure

### Core Files

- **`final_dataset.csv`**
  - Contains 39 columns with movie metadata (title, year, genre, plot, etc.)
  - Includes movies from 1930-2024
  - All movies have corresponding embeddings in the embedding files
- **`final_dense_embeddings.npy`**
  - Dense embedding matrix of shape `(N, 1024)`
  - Each embedding corresponds to a movie plot summary
- **`final_dense_movie_ids.npy`**
  - Shape: `(N,)`
  - Contains Wikidata QIDs (e.g., "Q1931001") corresponding to each embedding
  - Index alignment: `final_dense_embeddings[i]` corresponds to `final_dense_movie_ids[i]`
- **`final_sparse_embeddings.npz`**
  - Contains token-level lexical weights from the BGE-M3 model
  - Format: NPZ file with keys:
    - `token_indices`: List of arrays, one per movie, containing token IDs with non-zero weights
    - `weights`: List of arrays, one per movie, containing corresponding lexical weights
    - `movie_ids`: Array of movie IDs corresponding to the lexical weights
  - Used for concept extraction and fine-grained semantic analysis

### Additional Files

- **`knn_faiss_novelty.csv`**
  - Contains novelty scores and neighbor information for movies
- **`umap_cluster_trajectories.png`**
  - Shows temporal trajectories and genre clusters
- **`concept_space/`**
  - `concept_words_*.npy`: WordNet-based concept vocabulary (nouns)
  - `concept_vecs_*.npy`: Pre-computed embeddings for concept words
  - Used for concept extraction and semantic mapping
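
The NPZ layout described above can be exercised end to end with a small synthetic file; `allow_pickle=True` is needed on load because the ragged per-movie arrays are stored as object arrays. A minimal sketch with dummy QIDs and weights (not real data):

```python
import io
import numpy as np

# Tiny stand-in for final_sparse_embeddings.npz: two movies with ragged
# token-index/weight arrays stored as object arrays (dummy values)
token_indices = np.empty(2, dtype=object)
token_indices[:] = [np.array([5, 42, 99]), np.array([7, 13])]
weights = np.empty(2, dtype=object)
weights[:] = [np.array([0.8, 0.3, 0.1]), np.array([0.6, 0.2])]
movie_ids = np.array(["Q1931001", "Q2345678"])

buf = io.BytesIO()
np.savez(buf, token_indices=token_indices, weights=weights, movie_ids=movie_ids)
buf.seek(0)

npz = np.load(buf, allow_pickle=True)  # object arrays require allow_pickle
assert set(npz.files) == {"token_indices", "weights", "movie_ids"}
# Entry i of every key describes the same movie
for i, _qid in enumerate(npz["movie_ids"]):
    assert len(npz["token_indices"][i]) == len(npz["weights"][i])
```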

## Data Collection Pipeline

### Step 1: Wikidata Collection
- Queries the Wikidata SPARQL endpoint to fetch movie metadata
- Filters for feature-length films (excludes TV series, short films, etc.)
- Collects: titles, release years, genres, directors, actors, countries, Wikidata IDs
- Year range: 1930-2024
- Target: ~8,000 movies per year

### Step 2: TMDb Enrichment
- Enriches Wikidata entries with TMDb metadata via API lookup
- Retrieves: popularity scores, vote counts, average ratings, TMDb IDs
- Links movies using Wikidata IDs

### Step 3: Wikipedia Plot Retrieval
- Fetches full plot summaries from Wikipedia using Wikidata sitelinks
- Handles redirects and extracts main plot sections
- Filters out plots that are too short or contain insufficient narrative content
- Stores plot text and section title

### Step 4: Embedding Generation
- Generates dense embeddings using the BAAI/bge-m3 model
- Uses CLS token aggregation method (default)
- Supports multiple chunking strategies (CLS token, mean pooling, chunk-first, late chunking)
- Generates sparse lexical weights for concept extraction
- Processes embeddings in batches with parallel GPU support
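
The exact SPARQL used in Step 1 is not included in this README; as an illustration only, a query of roughly this shape (standard Wikidata identifiers: `wdt:P31` = instance of, `wd:Q11424` = film, `wdt:P577` = publication date) could fetch one year's films:

```python
# Illustrative sketch only; the pipeline's actual query is not shown here.
year = 1994
query = f"""
SELECT ?film ?filmLabel ?date WHERE {{
  ?film wdt:P31 wd:Q11424 ;      # instance of: film
        wdt:P577 ?date .         # publication date
  FILTER(YEAR(?date) = {year})
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
LIMIT 8000
"""
print(query)
```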

## Data Fields

### Core Identifiers
- **`movie_id`**: Wikidata QID (e.g., "Q1931001") - primary identifier
- **`title`**: Movie title
- **`year`**: Release year (1930-2024)
- **`imdb_id`**: IMDb identifier
- **`tmdb_id`**: TMDb identifier

### Metadata
- **`release_date`**: Full release date
- **`country`**: Production country
- **`duration`**: Runtime in minutes
- **`duration_all`**: All duration information
- **`wikidata_class`**: Wikidata entity class
- **`wikipedia_link`**: Link to the Wikipedia article

### Creative Team
- **`directors`**: Director names (pipe-separated if multiple)
- **`directors_id`**: Director Wikidata IDs (pipe-separated)
- **`actors`**: Actor names (pipe-separated if multiple)
- **`actors_id`**: Actor Wikidata IDs (pipe-separated)

### Genre Information
- **`genre`**: Raw genre strings (comma-separated)
- **`genre_id`**: Genre Wikidata IDs (comma-separated)
- **`genre_cluster_ids`**: Clustered genre IDs (after processing)
- **`genre_cluster_names`**: Clustered genre names (after processing)

### Plot Data
- **`plot`**: Full plot summary text from Wikipedia
- **`plot_section`**: Section title from which the plot was extracted
- **`plot_length_chars`**: Character count of the plot
- **`plot_length_tokens`**: Token count of the plot
- **`num_different_tokens`**: Number of unique tokens
- **`token_shannon_entropy`**: Shannon entropy of the token distribution

### TMDb Metrics
- **`popularity`**: TMDb popularity score
- **`vote_average`**: Average user rating
- **`vote_count`**: Number of votes

### Financial Data (if available)
- **`budget`**: Production budget
- **`budget_currency`**: Budget currency
- **`box_office`**: Box office revenue
- **`box_office_currency`**: Box office currency
- **`box_office_worldwide`**: Worldwide box office
- **`box_office_worldwide_currency`**: Worldwide box office currency

### Additional Metadata
- **`set_in_period`**: Time period in which the movie is set
- **`awards`**: Awards information
- **`summary`**: Additional summary (usually empty)

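
The separator conventions above can be applied directly; a quick sketch with a made-up row (dummy names and QIDs):

```python
import pandas as pd

# Dummy row illustrating the multi-value separators described above
row = pd.Series({
    "directors": "Jane Doe|John Roe",              # pipe-separated
    "directors_id": "Q1|Q2",                       # dummy QIDs
    "genre": "science fiction film, action film",  # comma-separated
})

directors = row["directors"].split("|")
genres = [g.strip() for g in row["genre"].split(",")]
assert directors == ["Jane Doe", "John Roe"]
assert genres == ["science fiction film", "action film"]
```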
## Loading the Dataset

### Python Example

```python
import numpy as np
import pandas as pd

# Load metadata
df = pd.read_csv('final_dataset.csv', low_memory=False)
print(f"Loaded {len(df)} movies")
print(f"Year range: {df['year'].min()} to {df['year'].max()}")

# Load dense embeddings
embeddings = np.load('final_dense_embeddings.npy')
movie_ids = np.load('final_dense_movie_ids.npy', allow_pickle=True)

print(f"Embeddings shape: {embeddings.shape}")
print(f"Number of movie IDs: {len(movie_ids)}")

# Verify alignment: embeddings[i] corresponds to movie_ids[i]
assert len(embeddings) == len(movie_ids)

# Merge embeddings with metadata
embeddings_df = pd.DataFrame({
    'movie_id': movie_ids,
    'embedding': list(embeddings)
})
combined_df = pd.merge(df, embeddings_df, on='movie_id', how='inner')
print(f"Combined dataset: {len(combined_df)} movies with embeddings")
```
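
For the large dense embedding file, NumPy's `mmap_mode` avoids reading everything into RAM at once; a sketch demonstrated on a small temporary stand-in file:

```python
import os
import tempfile
import numpy as np

# Small stand-in for final_dense_embeddings.npy
path = os.path.join(tempfile.mkdtemp(), "emb.npy")
np.save(path, np.zeros((100, 1024), dtype=np.float32))

emb = np.load(path, mmap_mode="r")  # rows are read lazily from disk
assert emb.shape == (100, 1024)
assert float(emb[5].sum()) == 0.0
```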

### Using the Data Utilities

The dataset is designed to work with the utilities in `src/data_utils.py`:

```python
import numpy as np

from src.data_utils import (
    load_final_dataset,
    load_final_dense_embeddings,
    load_final_sparse_embeddings,
    load_final_data_with_embeddings
)

# Load dataset with embeddings merged
data_dir = "data/data_final"
df = load_final_data_with_embeddings(
    csv_path="data/data_final/final_dataset.csv",
    data_dir=data_dir,
    verbose=True
)

# Access embeddings
embeddings = np.array(df['embedding'].tolist())
movie_ids = df['movie_id'].values
```

### Loading Sparse Embeddings (Lexical Weights)

```python
from src.data_utils import load_final_sparse_embeddings

# Load sparse embeddings
token_indices_list, weights_list, movie_ids = load_final_sparse_embeddings(
    data_dir="data/data_final",
    verbose=True
)

# Access lexical weights for a specific movie
movie_idx = 0
token_indices = token_indices_list[movie_idx]  # Token IDs with non-zero weights
weights = weights_list[movie_idx]              # Corresponding lexical weights
movie_id = movie_ids[movie_idx]                # Wikidata QID
```

## Embedding Details

- **Model**: BAAI/bge-m3
- **Dimension**: 1024
- **Aggregation Method**: CLS token (default)
- **Normalization**: L2-normalized embeddings

### Alternative Chunking Methods
The pipeline supports multiple chunking strategies (stored with suffixes):
- `_cls_token`: CLS token aggregation (default)
- `_mean_pooling`: Global mean pooling
- `_chunk_first_{chunk_size}_{stride}`: Early chunking with the specified parameters
- `_late_chunking_{window_size}_{stride}`: Late chunking with the specified parameters

### Embedding Alignment
- Embeddings are indexed by `movie_id` (Wikidata QID)
- `final_dense_embeddings[i]` corresponds to `final_dense_movie_ids[i]`
- To find the embedding for a specific movie:

```python
movie_id = "Q1931001"
idx = np.where(movie_ids == movie_id)[0][0]
embedding = embeddings[idx]
```

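
For repeated lookups, building a `movie_id` to index dict once is cheaper than an `np.where` scan per query. A sketch using dummy IDs and random vectors in place of the real files:

```python
import numpy as np

# Dummy stand-ins for the real ID and embedding arrays
movie_ids = np.array(["Q1", "Q2", "Q3"])
embeddings = np.random.rand(3, 1024).astype(np.float32)

# One pass to build the index, O(1) per lookup afterwards
id_to_idx = {qid: i for i, qid in enumerate(movie_ids)}
vec = embeddings[id_to_idx["Q2"]]
assert vec.shape == (1024,)
```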
## Concept Space

- **Source**: WordNet noun synsets
- **Filtering**: Zipf frequency ≥ 4.0 (common words)
- **Size**: Top 20,000 most frequent nouns
- **Embedding Model**: BGE-small-en-v1.5 (L2-normalized)
- **Usage**: Concept extraction from movie plots using lexical weights

Files are parameterized by:
- Zipf threshold
- Vocabulary size
- Model name

Example: `concept_words_zipf2.5_vocab10000_BAAI_bge-m3.npy`

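
The parameterization can be recovered from a file name with a small regex (pattern inferred from the example file name above):

```python
import re

name = "concept_words_zipf2.5_vocab10000_BAAI_bge-m3.npy"
m = re.match(r"concept_words_zipf(?P<zipf>[\d.]+)_vocab(?P<vocab>\d+)_(?P<model>.+)\.npy", name)
assert m is not None
params = {"zipf": float(m["zipf"]), "vocab": int(m["vocab"]), "model": m["model"]}
assert params == {"zipf": 2.5, "vocab": 10000, "model": "BAAI_bge-m3"}
```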
## Data Quality

### Completeness
- All movies have: `movie_id`, `title`, `year`, `wikipedia_link`
- ~92,000 movies have plot summaries
- ~91,000 movies have TMDb metadata (popularity, votes)
- ~75,000 movies have director information
- ~81,000 movies have genre information

### Filtering
- Only feature-length films (excludes TV series, short films, video games)
- Plots filtered for minimum length and narrative content
- Non-movie entries removed based on Wikidata class

### Cleaning
- Plot text cleaned (removed line breaks, normalized whitespace)
- Genre strings normalized and clustered
- Duplicate entries removed based on `movie_id`

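
The `token_shannon_entropy` field used in plot filtering can be computed directly from token counts; a small self-contained example on a toy "plot":

```python
import math
from collections import Counter

# Shannon entropy of the token distribution of a toy plot
tokens = "the man sees the dog and the dog sees the man".split()
counts = Counter(tokens)
total = len(tokens)
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
# Bounded above by log2 of the number of distinct tokens
assert 0.0 < entropy <= math.log2(len(counts))
```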
## Use Cases

### 1. Temporal Semantic Analysis
Analyze how movie plot semantics evolve over time:
```python
# Group by decade and compute decade-level centroid embeddings
decades = (df['year'] // 10) * 10
decade_embeddings = df.groupby(decades)['embedding'].apply(
    lambda x: np.mean(np.array(x.tolist()), axis=0)
)
```

### 2. Genre Classification
Use embeddings for genre prediction or clustering:
```python
from sklearn.cluster import KMeans

embeddings_array = np.array(df['embedding'].tolist())
kmeans = KMeans(n_clusters=10)
clusters = kmeans.fit_predict(embeddings_array)
```

### 3. Movie Similarity Search
Find similar movies using cosine similarity:
```python
from sklearn.metrics.pairwise import cosine_similarity

target_embedding = embeddings[0].reshape(1, -1)  # example query embedding
similarities = cosine_similarity(target_embedding, embeddings)
similar_movies = np.argsort(similarities[0])[-10:][::-1]
```

### 4. Concept Extraction
Extract semantic concepts from plots using lexical weights:
```python
# Use sparse embeddings to map plot nouns to the concept space
# See src/analysis/concept_extraction.py for implementation
```

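
Because the embeddings are L2-normalized, cosine similarity reduces to a dot product, so one matrix-vector product yields a full similarity row (synthetic vectors below, standing in for the real embeddings):

```python
import numpy as np

# Synthetic normalized vectors standing in for the real embeddings
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

query = emb[0]
sims = emb @ query                 # cosine similarity == dot product here
top = np.argsort(sims)[::-1]
assert top[0] == 0                 # a vector is most similar to itself
assert np.isclose(sims[0], 1.0)
```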
## Citation

If you use this dataset, please cite:

```bibtex
@dataset{movie_plot_embeddings_2026,
  title={Plot Twists Over Time: How Movie Stories Have Changed Over 95 Years},
}
```

## License

This dataset inherits licenses from its source data:

- **Wikidata**: CC0 1.0 (Public Domain)
- **Wikipedia**: Text is licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
- **TMDb**: Data is licensed under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) for non-commercial use. Commercial use requires separate licensing from TMDb.

**Dataset License**: CC BY-NC 4.0 for non-commercial research. Commercial use requires TMDb licensing and Wikipedia license compliance.

**Code License**: MIT (see the main repository).

**Attribution**:
- Wikidata contributors
- Wikipedia contributors
- TMDb (The Movie Database)
- This dataset (see Citation section)

## Acknowledgments

- **Wikidata**: Movie metadata and identifiers
- **TMDb**: Popularity scores, ratings, and additional metadata
- **Wikipedia**: Plot summaries and descriptions
- **BAAI**: BGE-M3 embedding model
- **WordNet**: Concept vocabulary for semantic analysis

## Contact

For questions or issues, please refer to the main repository: [GroupDataLiteracy](https://github.com/your-repo/GroupDataLiteracy)

## Dataset Statistics

- **Total Movies**: ~92,000
- **Year Range**: 1930-2024
- **Movies with Plots**: ~92,000
- **Movies with TMDb Data**: ~91,000
- **Movies with Genres**: ~81,000
- **Embedding Dimension**: 1024
- **Average Plot Length**: ~1,500 characters
- **Unique Genres**: ~50 (after clustering)

## File Sizes (Approximate)

- `final_dataset.csv`: ~200 MB
- `final_dense_embeddings.npy`: ~380 MB (for 92K movies × 1024 dims)
- `final_dense_movie_ids.npy`: ~7 MB
- `final_sparse_embeddings.npz`: ~500 MB (variable, depends on sparsity)

## Notes

- Embeddings are L2-normalized
- Movie IDs are Wikidata QIDs (format: "Q####")
- Plot text has been cleaned and normalized
- Genres may be multi-label (pipe-separated)
- Some fields may be NaN for older or less popular films
- The dataset is designed for research in temporal semantic analysis and embedding-based movie analysis
|