# MediaPipe Face Emotion Classifier

This model classifies facial expressions into 7 categories using MediaPipe Face Landmarker blendshape scores and a Random Forest classifier.
## Performance Summary
This model was trained on the FER-2013 dataset. Given the difficulty and label noise of FER-2013's $48 \times 48$ grayscale images, the model reaches a solid baseline by relying on high-level muscle-movement features (blendshapes) rather than raw pixels.
- Overall Accuracy: 51.36%
- Baseline (Random Guess): 14.28%
- Human Baseline (FER-2013): ~65-68%
## Detailed Metrics
| Emotion | Precision | Recall | F1-Score |
|---|---|---|---|
| Happy | 0.66 | 0.80 | 0.72 |
| Surprise | 0.68 | 0.71 | 0.70 |
| Disgust | 0.71 | 0.50 | 0.59 |
| Neutral | 0.41 | 0.48 | 0.44 |
| Angry | 0.40 | 0.41 | 0.41 |
| Fear | 0.38 | 0.27 | 0.32 |
| Sad | 0.32 | 0.30 | 0.31 |
[Image of a Confusion Matrix for facial emotion recognition]
## Model Architecture
### 1. Feature Extraction
Instead of raw pixel data, we use the 52 blendshape scores provided by MediaPipe. These scores represent specific facial muscle activations (e.g., `browInnerUp`, `mouthSmileLeft`). This makes the model robust to different lighting conditions and head poses.
### 2. Classifier
- Algorithm: Random Forest Classifier
- Estimators: 200
- Optimization: Trained with `class_weight='balanced'` to ensure fair representation of rare emotions like Disgust
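The configuration above can be sketched with scikit-learn. The random features and labels below are stand-ins for illustration only; real training uses blendshape vectors extracted from FER-2013 faces.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in data: 500 faces x 52 blendshape scores in [0, 1],
# each labelled with one of the 7 emotion classes.
X = rng.random((500, 52))
y = rng.integers(0, 7, size=500)

clf = RandomForestClassifier(
    n_estimators=200,          # as listed above
    class_weight="balanced",   # upweights rare classes such as Disgust
    random_state=0,
)
clf.fit(X, y)
```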
## How to use

You can load this model using the `skops` library:
```python
import skops.io as sio

# Load the model
model = sio.load("model.skops", trusted=True)

# Expects an array of 52 blendshape scores
# prediction = model.predict([blendshape_list])
```
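If the classifier returns integer class indices, a lookup table can map them to emotion names. The label order below is a hypothetical example, not confirmed by this card; check the trained model's `classes_` attribute. A small randomly trained forest stands in for `model.skops` so the sketch is self-contained.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical label order -- verify against model.classes_
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

# Stand-in for the real model: a small forest fit on random data
rng = np.random.default_rng(0)
model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(rng.random((100, 52)), rng.integers(0, 7, size=100))

blendshape_list = rng.random(52).tolist()  # stand-in for real scores
pred = int(model.predict([blendshape_list])[0])
print(EMOTIONS[pred])
```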