---
license: apache-2.0
---
# Vision Transformer (ViT) for Facial Expression Recognition Model Card

## Model Overview
- Model Name: trpakov/vit-face-expression
- Task: Facial Expression/Emotion Recognition
- Dataset: FER2013
- Model Architecture: Vision Transformer (ViT)
- Fine-tuned from model: google/vit-base-patch16-224-in21k
## Model Description
The vit-face-expression model is a Vision Transformer fine-tuned for facial emotion recognition. It was trained on the FER2013 dataset, which consists of facial images labeled with one of seven emotion categories:
- Angry
- Disgust
- Fear
- Happy
- Sad
- Surprise
- Neutral
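The order of class indices in the checkpoint does not necessarily match the list above; the index-to-label mapping ships with the model configuration and can be inspected directly (a small sketch using only the public transformers API):

```python
from transformers import AutoConfig

# Print the checkpoint's index-to-label mapping; the index order is
# defined by the checkpoint's config, not by the list above.
config = AutoConfig.from_pretrained("trpakov/vit-face-expression")
print(config.id2label)
```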
## Data Preprocessing
The input images are preprocessed before being fed to the model; a code sketch of these steps follows the list:
- Resizing: Images are resized to 224x224 pixels, the input resolution expected by the ViT.
- Normalization: Pixel values are normalized using the ImageNet mean and standard deviation.
- Data Augmentation: Random transformations such as rotations, flips, and zooms are applied to augment the training set.
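A minimal sketch of such a pipeline with torchvision; the specific augmentation parameters (rotation range, zoom scale) are illustrative assumptions, not the values used to train the released checkpoint:

```python
from torchvision import transforms

# Illustrative training-time pipeline; rotation/zoom parameters are assumed.
# Only the 224x224 size and ImageNet statistics come from the card above.
train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),  # resize + random "zoom"
    transforms.RandomHorizontalFlip(),                    # random flip
    transforms.RandomRotation(degrees=15),                # random rotation
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],      # ImageNet mean
                         std=[0.229, 0.224, 0.225]),      # ImageNet std
])
```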
## Evaluation Metrics
- Validation set accuracy: 0.7113
- Test set accuracy: 0.7116
## Usage

```python
from transformers import pipeline
from PIL import Image

# Load the model
pipe = pipeline("image-classification", model="trpakov/vit-face-expression")

# Load an image (must contain a face)
image = Image.open("your_image.jpg").convert("RGB")

# Run inference; output is a list of dicts with 'label' and 'score', e.g.
# [{'label': 'happy', 'score': 0.98}, {'label': 'neutral', 'score': 0.01}, ...]
results = pipe(image)
print(results)
```
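Note: by default the image-classification pipeline returns only the top five labels; passing `top_k=7` (e.g. `pipe(image, top_k=7)`) should return scores for all seven classes.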
## Limitations
- Dataset bias: FER2013 was collected via Google Image Search and is known to contain noisy and mislabeled samples, which affects model reliability.
- Class imbalance: The dataset is heavily skewed toward "happy" and "neutral", making the model less reliable for underrepresented classes like "disgust" and "fear".
- Skin tone bias: The model may perform worse on darker skin tones due to underrepresentation in the training data.
- Input requirements: The model expects a cropped, frontal face image. Performance degrades significantly on profile faces, occluded faces, or images where the face is not the primary subject; running a face detector first helps (a crop step is sketched after this list).
- Input resolution: FER2013 source images are low-resolution (48x48 grayscale), so even though inputs are resized to 224x224 pixels internally, the model was trained on upsampled, low-detail faces.
- Real-world generalization: Many training images show posed or exaggerated expressions, which differ from the subtle, spontaneous expressions encountered in the wild.
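Because the model expects a tight, frontal face crop (see the input-requirements item above), detecting and cropping the face before classification helps on unconstrained images. A minimal sketch using OpenCV's bundled Haar cascade; the detector choice and its parameters are illustrative assumptions, not part of the released model:

```python
import cv2
from PIL import Image
from transformers import pipeline

pipe = pipeline("image-classification", model="trpakov/vit-face-expression")

# Detect faces with OpenCV's bundled frontal-face Haar cascade
# (any face detector would work here; this one is just an example)
img = cv2.imread("your_image.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Crop each detected face and convert BGR (OpenCV) to RGB (PIL) before classifying
    face = Image.fromarray(cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2RGB))
    print(pipe(face))
```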