---
license: cc-by-nc-4.0
task_categories:
- video-classification
tags:
- deepfakes
- temporal-motion-signatures
- video-analysis
- talking-faces
- celebrities
size_categories:
- n<1K
---
# TalkingCelebs Dataset
## Dataset Description
TalkingCelebs is a manually curated video dataset designed for Talking Motion Signatures (TMS) analysis and deepfake detection research. The dataset contains 500 high-quality video clips of five prominent public figures, addressing the limitations of existing datasets by providing more clips per identity and longer-duration clips with continuous speech.
### Dataset Summary
- **Total clips**: 500 (100 clips per identity)
- **Identities**: 5 public figures (Barack Obama, Angela Merkel, Volodymyr Zelenskyy, Elon Musk, Emma Watson)
- **Clip duration**: 30 seconds each
- **Video sources**: 13-18 unique videos per person
- **Resolution**: 720p
- **Frame rate**: 25 FPS
- **Split**: 80% training (400 clips) / 20% test (100 clips)
## Dataset Structure
### Data Instances
Each data instance represents a 30-second video clip containing continuous speech from one of the five identities. The clips capture diverse speaking environments, camera angles, and contexts.
### Data Fields
```
TalkingCelebs/
├── train/
│   ├── id00000/              # identity id
│   │   ├── ZNYmK19-d0U/      # 11-char YouTube video id
│   │   │   ├── 000.mp4       # clip id within a video
│   │   │   └── ...
│   │   └── ...
│   └── ...
└── test/
    ├── id00000/
    │   └── ...
    └── ...
```
Each identity folder contains subdirectories organized by source video ID, with individual clips numbered sequentially (e.g., `000.mp4`, `001.mp4`, etc.).
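The layout above can be traversed with a few lines of Python. The helper below is a hypothetical sketch (`list_clips` is not part of the dataset's own tooling) that assumes the `root/<split>/<identity>/<youtube_id>/<NNN>.mp4` structure shown:

```python
from pathlib import Path

def list_clips(root: Path, split: str):
    """Yield (identity, video_id, clip_path) for each clip in a split.

    Hypothetical helper assuming the layout root/<split>/<identity>/<youtube_id>/<NNN>.mp4.
    """
    for identity_dir in sorted((root / split).iterdir()):
        if not identity_dir.is_dir():
            continue
        for video_dir in sorted(identity_dir.iterdir()):
            if not video_dir.is_dir():
                continue
            for clip in sorted(video_dir.glob("*.mp4")):
                yield identity_dir.name, video_dir.name, clip
```

Sorting at each level gives a deterministic iteration order, which is useful when pairing clips with cached features.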
### Data Splits
| Split | Clips per Identity | Total Clips | Source Videos per Identity |
|-------|-------------------|-------------|--------------------------------|
| Train | 80 | 400 | 10-15 |
| Test | 20 | 100 | ≥3 (no overlap w/ train) |
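The no-overlap guarantee in the table can be verified from the directory tree alone. The following is a hypothetical sanity check (not shipped with the dataset) that reports, per identity, any source-video ids appearing in both splits; an empty set everywhere confirms the split is clean:

```python
from pathlib import Path

def source_overlap(root: Path) -> dict[str, set[str]]:
    """Return, per identity, source-video ids present in BOTH splits.

    Hypothetical sanity check; empty sets mean train/test use disjoint
    source videos, as the split table requires.
    """
    overlap = {}
    for identity_dir in sorted((root / "train").iterdir()):
        if not identity_dir.is_dir():
            continue
        train_vids = {d.name for d in identity_dir.iterdir() if d.is_dir()}
        test_dir = root / "test" / identity_dir.name
        test_vids = ({d.name for d in test_dir.iterdir() if d.is_dir()}
                     if test_dir.exists() else set())
        overlap[identity_dir.name] = train_vids & test_vids
    return overlap
```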
## Dataset Creation
### Curation Rationale
TalkingCelebs was created to overcome limitations in existing datasets for TMS analysis:
1. **Quantity**: Provides 100 clips per identity, a quantity rarely available for a single person under the following criteria
2. **Duration**: 30-second clips
3. **Quality**: All clips contain continuous speech throughout
4. **Diversity**: Multiple source videos per identity capture various contexts
### Source Data
- **Initial Data Collection**: YouTube videos of public speeches, interviews, and presentations
- **Data Processing**: Automated clip extraction using timestamps defined in `sources.yaml`
- **Quality Control**: Manual verification of speech continuity and video quality
### Preprocessing
1. Frame rescaling to 720p
2. Standardization to 25 FPS
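Both preprocessing steps can be expressed as a single FFmpeg filter chain. The snippet below is a sketch of an equivalent invocation; the exact flags used by the actual pipeline may differ:

```python
import subprocess

def build_ffmpeg_cmd(src: str, dst: str) -> list[str]:
    """Build an ffmpeg command matching the two preprocessing steps above
    (sketch; the dataset's real pipeline may use different flags)."""
    return [
        "ffmpeg", "-y",
        "-i", src,
        # scale=-2:720 rescales to 720p height with an even, aspect-preserving
        # width; fps=25 resamples to the target frame rate
        "-vf", "scale=-2:720,fps=25",
        "-c:a", "copy",  # leave the audio stream untouched
        dst,
    ]

def standardize_clip(src: str, dst: str) -> None:
    subprocess.run(build_ffmpeg_cmd(src, dst), check=True)
```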
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contains videos of real public figures and should be used responsibly:
- **Intended Use**: Research on deepfake detection, biometric authentication, and video analysis
- **Ethical Considerations**: Should not be used to create deepfakes or misleading content
- **Privacy**: All content is from publicly available sources
### Discussion of Biases
- **Representation and Demographics**: Limited to 5 individuals, not representative of global population
- **Context**: Clips primarily from formal speaking contexts (interviews, speeches)
- **Language**: English-language content
## Usage
To collect and process the TalkingCelebs dataset, follow these steps:
### Prerequisites
1. **Install uv package manager** (if not already installed):
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
2. **Setup the environment**:
```bash
uv sync
```
This will install all dependencies specified in the `uv.lock` file.
3. **Install FFmpeg** (if not already available):
**On Ubuntu/Debian:**
```bash
sudo apt update && sudo apt install ffmpeg
```
**On macOS:**
```bash
brew install ffmpeg
```
**On Windows:**
Download from [https://ffmpeg.org/download.html](https://ffmpeg.org/download.html) or use:
```bash
winget install FFmpeg
```
### Dataset Collection
Run the dataset collection script:
```bash
uv run collect_dataset.py
```
This will download and process the video clips according to the specifications in `sources.yaml`, creating the structured dataset ready for use in TMS analysis and deepfake detection research.
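After collection, a quick count confirms the dataset is complete. This hypothetical check (`count_clips` is not part of the collection script) should report 400 train and 100 test clips on a full run:

```python
from pathlib import Path

def count_clips(root: Path) -> dict[str, int]:
    """Count .mp4 clips per split; a fully collected dataset should
    report {'train': 400, 'test': 100}. (Hypothetical check.)"""
    return {split: sum(1 for _ in (root / split).rglob("*.mp4"))
            for split in ("train", "test")}
```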
## Additional Information
### Licensing Information
All video content in this dataset remains the intellectual property of the original content creators, the individuals depicted, and the platforms (YouTube) where the videos were originally published. The TalkingCelebs dataset provides only curated metadata and organizational structure for research purposes; no original video files are redistributed. Any use of the dataset must comply with YouTube’s Terms of Service and the rights of the individuals featured in the videos. The dataset annotations and structure are made available under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license for non-commercial research use only.
### Citation Information
```bibtex
@dataset{talkingcelebs2025,
  title={TalkingCelebs: A Dataset for Temporal Speech Motion Signatures Analysis},
  author={Ivan Samarskyi},
  year={2025},
  note={Dataset available at \url{https://huggingface.co/datasets/samarrik/talkingcelebs}}
}
```