MIT License

Copyright (c) 2025 Reeha Parkar

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
CMU-MOSEI Custom Unaligned Dataset
Dataset Description
This dataset represents a custom preprocessed version of the CMU-MOSEI (Multimodal Opinion Sentiment and Emotion Intensity) dataset with variable-length temporal sequences preserved for enhanced multimodal emotion recognition research. Unlike traditional fixed-alignment preprocessing approaches that truncate sequences to uniform lengths, this dataset maintains the natural temporal dynamics of multimodal expressions.
Key Features
- Variable-Length Processing: Preserves original temporal intervals across all modalities
- No Forced Alignment: Maintains authentic temporal asynchrony between modalities
- Enhanced Temporal Coverage: Up to 217x more temporal information than fixed-alignment approaches
- Multi-Hot Emotion Labels: 6-dimensional binary emotion vectors for comprehensive emotion modeling
- Research-Ready Format: Optimized for PyTorch dataloaders with custom collation functions
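Since sequences are variable-length, a custom collate function must pad each batch to its longest sequence and track which timesteps are real. A minimal numpy sketch of such a collate (the function name and signature are illustrative, not taken from the released code):

```python
import numpy as np

def pad_collate(batch, feat_dim):
    """Pad variable-length (timesteps, feat_dim) arrays to the batch maximum
    length, returning the padded tensor and a boolean mask of valid steps.
    Illustrative sketch; a PyTorch DataLoader would wrap this as collate_fn."""
    max_len = max(seq.shape[0] for seq in batch)
    padded = np.zeros((len(batch), max_len, feat_dim), dtype=np.float32)
    mask = np.zeros((len(batch), max_len), dtype=bool)
    for i, seq in enumerate(batch):
        padded[i, :seq.shape[0]] = seq
        mask[i, :seq.shape[0]] = True
    return padded, mask

# Example: two text sequences of different lengths (300-dim GloVe vectors)
batch = [np.ones((16, 300), dtype=np.float32),
         np.ones((32, 300), dtype=np.float32)]
padded, mask = pad_collate(batch, feat_dim=300)
```

The mask lets downstream models (e.g. attention layers) ignore padded positions instead of treating zeros as real input.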
Dataset Statistics
| Split | Segments | Text Length Range | Visual Length Range | Audio Length Range |
|---|---|---|---|---|
| Train | 16,322 | 16-374 timesteps | 126-3,140 timesteps | 400-10,891 timesteps |
| Validation | 1,871 | 22-330 timesteps | 162-2,850 timesteps | 539-9,200 timesteps |
| Test | 4,659 | 18-350 timesteps | 140-2,950 timesteps | 450-9,500 timesteps |
| Total | 22,852 | Average: ~55 | Average: ~535 | Average: ~1,781 |
Processing Summary
- Processed: 22,852 segments from 23,248 total segments (98.3% success rate)
- Missing Data: 8 segments (incomplete modality data)
- Wrong Splits: 388 segments (video ID not in standard splits)
- Quality Control: Zero dimension issues or empty features
Data Format
File Structure
cmu_mosei_unaligned_ree.pt
├── train/
│   ├── src-text: List[np.ndarray]    # Variable-length text sequences
│   ├── src-visual: List[np.ndarray]  # Variable-length visual sequences
│   ├── src-audio: List[np.ndarray]   # Variable-length audio sequences
│   └── tgt: List[np.ndarray]         # 6-dimensional emotion labels
├── val/
│   └── [same structure as train]
└── test/
    └── [same structure as train]
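Assuming the .pt file was written with torch.save, it loads with torch.load into the nested dict-of-lists layout shown above. The access pattern can be sketched with plain dicts and toy arrays (illustrative stand-in only; the real file holds 22,852 segments):

```python
import numpy as np

# Toy stand-in for torch.load("cmu_mosei_unaligned_ree.pt"): same nesting,
# tiny random arrays in place of the real features.
rng = np.random.default_rng(0)
data = {
    split: {
        'src-text':   [rng.standard_normal((55, 300)).astype(np.float32)],
        'src-visual': [rng.standard_normal((535, 35)).astype(np.float32)],
        'src-audio':  [rng.standard_normal((1781, 74)).astype(np.float32)],
        'tgt':        [np.array([1., 0., 0., 0., 0., 0.], dtype=np.float32)],
    }
    for split in ('train', 'val', 'test')
}

# Per-segment access: the same index selects one segment across all modalities.
text_i = data['train']['src-text'][0]   # (timesteps, 300)
label_i = data['train']['tgt'][0]       # (6,)
```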
Feature Specifications
Text Features (src-text)
- Source: GloVe word embeddings from CMU_MOSEI_TimestampedWordVectors
- Dimensions: (timesteps, 300) where timesteps ∈ [16, 374]
- Format: 300-dimensional GloVe embeddings per word
- Preprocessing: NaN/Inf values replaced with 0.0
Visual Features (src-visual)
- Source: FacetNet facial features from CMU_MOSEI_VisualFacet42
- Dimensions: (timesteps, 35) where timesteps ∈ [126, 3,140]
- Format: 35-dimensional facial expression features
- Sampling Rate: ~30 FPS from original video
- Preprocessing: NaN/Inf values replaced with 0.0
Audio Features (src-audio)
- Source: COVAREP acoustic features from CMU_MOSEI_COVAREP
- Dimensions: (timesteps, 74) where timesteps ∈ [400, 10,891]
- Format: 74-dimensional low-level acoustic features
- Sampling Rate: ~100 Hz (10ms windows)
- Preprocessing: NaN/Inf values replaced with 0.0, -Inf clipped to 0.0
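The NaN/Inf cleanup described for each modality can be done in a single call. A sketch using numpy's nan_to_num (an assumed equivalent; the exact cleaning code lives in the author's repository):

```python
import numpy as np

# Raw COVAREP-style features can contain NaN and +/-Inf entries.
raw = np.array([[0.5, np.nan],
                [np.inf, -np.inf]], dtype=np.float32)

# Replace NaN with 0.0 and clip +Inf/-Inf to 0.0, matching the stated preprocessing.
clean = np.nan_to_num(raw, nan=0.0, posinf=0.0, neginf=0.0)
```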
Emotion Labels (tgt)
- Dimensions: (6,) binary vector
- Emotions: [Happy, Sad, Anger, Surprise, Disgust, Fear]
- Encoding: Multi-hot binary (1.0 if emotion present, 0.0 otherwise)
- Source: Averaged annotations from 3 human annotators
- Threshold: Emotions with intensity > 0.0 marked as present
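The thresholding rule above maps averaged annotator intensities to a multi-hot vector; decoding back to emotion names follows the fixed ordering. A small sketch (the intensity values are made up for illustration):

```python
import numpy as np

EMOTIONS = ['Happy', 'Sad', 'Anger', 'Surprise', 'Disgust', 'Fear']

# Annotator-averaged continuous intensities -> multi-hot binary labels
# (any intensity strictly greater than 0.0 counts as present).
intensities = np.array([0.67, 0.33, 0.0, 0.0, 0.0, 1.0])
labels = (intensities > 0.0).astype(np.float32)

present = [name for name, on in zip(EMOTIONS, labels) if on]
# present -> ['Happy', 'Sad', 'Fear']
```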
Example Data Sample
# Sample from train split
sample_segment = {
    'text': np.array([[0.1, 0.2, ...], [0.3, 0.4, ...]]),    # Shape: (55, 300)
    'visual': np.array([[0.5, 0.6, ...], [0.7, 0.8, ...]]),  # Shape: (535, 35)
    'audio': np.array([[0.9, 1.0, ...], [1.1, 1.2, ...]]),   # Shape: (1781, 74)
    'label': np.array([1., 1., 0., 0., 0., 1.])              # Shape: (6,) - Happy, Sad, Fear present
}
Dataset Creation
Data Sources
- Original Dataset: CMU-MOSEI
- Processing SDK: CMU-MultimodalSDK
- Standard Splits: CMU-MOSEI official train/validation/test splits
Processing Pipeline
1. Raw Data Loading: Load .csd files using CMU-MultimodalSDK
2. Label Alignment: Align all modalities to emotion label timestamps
3. Quality Filtering: Remove segments with missing/corrupted data
4. Dimension Validation: Ensure consistent feature dimensions per modality
5. Label Processing: Convert continuous emotion scores to binary labels
6. Split Assignment: Assign segments to train/val/test using video IDs
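Steps 3 and 4 (quality filtering and dimension validation) amount to rejecting any segment with a missing or empty modality or a feature dimension that does not match the modality's specification. A sketch under those assumptions (function name and dict layout are illustrative; the released code may differ):

```python
import numpy as np

# Expected per-modality feature dimensions from the specifications above.
EXPECTED_DIMS = {'text': 300, 'visual': 35, 'audio': 74}

def passes_quality_check(segment):
    """Reject segments with a missing modality, empty features,
    or an unexpected feature dimension (illustrative sketch)."""
    for name, dim in EXPECTED_DIMS.items():
        feats = segment.get(name)
        if feats is None or feats.shape[0] == 0:
            return False  # missing or empty modality
        if feats.ndim != 2 or feats.shape[1] != dim:
            return False  # feature-dimension mismatch
    return True

good = {'text': np.zeros((55, 300)), 'visual': np.zeros((535, 35)),
        'audio': np.zeros((1781, 74))}
bad = {'text': np.zeros((55, 300)), 'visual': None,
       'audio': np.zeros((1781, 74))}
```

Applied over all 23,248 raw segments, filters of this kind account for the 8 missing-data rejections reported above.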
Processing Code
Available at: Author's GitHub Repository
Key processing steps:
# Preserve variable lengths without collapse functions
dataset.align(label_field, collapse_functions=None)
# Extract features maintaining original temporal intervals
text_features = dataset[text_field][segment_key]['features'].astype(np.float32)
visual_features = dataset[visual_field][segment_key]['features'].astype(np.float32)
acoustic_features = dataset[acoustic_field][segment_key]['features'].astype(np.float32)
# Process emotion labels to binary format
emotion_labels = (label_features.flatten()[1:7] > 0.0).astype(np.float32)
Technical Details
Temporal Preservation
Unlike traditional approaches that pad or truncate to fixed lengths (typically 50 timesteps), this dataset:
- Preserves Natural Asynchrony: Text, visual, and audio modalities maintain their original temporal relationships
- Captures Complete Expressions: Full emotional expressions are preserved without truncation
- Enables Dynamic Processing: Models can learn from complete temporal dynamics
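The "up to 217x" figure quoted earlier can be sanity-checked directly from the dataset statistics: it is the ratio of the longest audio sequence to a fixed 50-timestep alignment.

```python
# Longest audio sequence vs. a fixed 50-timestep alignment
# (both numbers taken from the statistics above).
max_audio_len = 10891
fixed_len = 50

ratio = max_audio_len / fixed_len     # ~217.8x more temporal information
retained = fixed_len / max_audio_len  # fixed alignment keeps < 0.5% of steps
```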
Citation
If you use this dataset in your research, please cite:
@misc{parkar2025mosei_unaligned,
  title={CMU-MOSEI Custom Unaligned Dataset for Variable-Length Multimodal Emotion Recognition},
  author={Reeha Parkar},
  year={2025},
  institution={King's College London},
  note={Custom preprocessing of CMU-MOSEI dataset preserving temporal authenticity}
}

@inproceedings{zadeh2018multimodal,
  title={Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph},
  author={Zadeh, AmirAli Bagher and Liang, Paul Pu and Poria, Soujanya and Cambria, Erik and Morency, Louis-Philippe},
  booktitle={Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics},
  pages={2236--2246},
  year={2018}
}
Related Work
Original CARAT Paper
- Title: "CARAT: Contrastive Feature Reconstruction and Aggregation for Multi-Modal Multi-Label Emotion Recognition" (AAAI 2024)
- Focus: Contrastive learning and cross-modal feature reconstruction
Dataset Motivation
This preprocessing addresses limitations in traditional multimodal datasets:
- Fixed-Alignment Bias: Standard preprocessing loses temporal authenticity
- Information Loss: Truncation discards valuable temporal information
- Unrealistic Assumptions: Real expressions don't follow fixed timing
License
This dataset is derived from CMU-MOSEI and follows the same licensing terms. The preprocessing code and documentation are released under MIT License.
Contact
For questions about this dataset or preprocessing approach:
- Author: Reeha Parkar
- Institution: King's College London
- Repository: multimodal-emotion-recognition
Acknowledgments
- CMU Multicomp Lab: Original CMU-MOSEI dataset creators
- King's College London: Computing resources and academic support
- CMU-MultimodalSDK: Data processing infrastructure
- CARAT Authors: Original model architecture inspiration