---
license: apache-2.0
---
# Vision Transformer (ViT) for Facial Expression Recognition Model Card

## Model Overview
- **Model Name:** [trpakov/vit-face-expression](https://huggingface.co/trpakov/vit-face-expression)
- **Task:** Facial Expression/Emotion Recognition
- **Dataset:** [FER2013](https://www.kaggle.com/datasets/msambare/fer2013)
- **Model Architecture:** [Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)
- **Finetuned from model:** [vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k)

## Model Description
The vit-face-expression model is a Vision Transformer fine-tuned for facial emotion recognition.
It is trained on the FER2013 dataset, which consists of facial images labeled with one of seven emotions:
- Angry
- Disgust
- Fear
- Happy
- Sad
- Surprise
- Neutral

## Data Preprocessing
The input images are preprocessed before being fed into the model:
- **Resizing:** Images are resized to 224x224 pixels.
- **Normalization:** Pixel values are normalized with the ImageNet mean and standard deviation.
- **Data Augmentation:** Random transformations such as rotations, flips, and zooms are applied to the training set (a sketch of the full pipeline follows this list).
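
A minimal sketch of this pipeline using torchvision is shown below. FER2013 images are 48x48 grayscale, so they are first expanded to three channels; the specific augmentation parameters (rotation degrees, crop scale) are assumptions for illustration, since the card does not state them.

```python
from torchvision import transforms

# ImageNet statistics, per the normalization step described above
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

# Training-time preprocessing; augmentation parameters are illustrative guesses
train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),          # FER2013 is grayscale
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),  # zoom-like crop
    transforms.RandomHorizontalFlip(),                    # random flips
    transforms.RandomRotation(degrees=10),                # random rotations
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])

# Evaluation-time preprocessing: resize and normalize only, no augmentation
eval_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])
```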

## Evaluation Metrics
- **Validation set accuracy:** 0.7113
- **Test set accuracy:** 0.7116

## Usage
```python
from transformers import pipeline
from PIL import Image

# Load the model
pipe = pipeline("image-classification", model="trpakov/vit-face-expression")

# Load an image (must contain a face)
image = Image.open("your_image.jpg").convert("RGB")

# Run inference
results = pipe(image)

# Output: list of dicts with 'label' and 'score'
# Example: [{'label': 'happy', 'score': 0.98}, {'label': 'neutral', 'score': 0.01}, ...]
print(results)
```
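
For finer-grained control than the pipeline offers (for example, access to the raw logits), the same inference can be written against the lower-level transformers API. This is a sketch, not the author's exact code:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

# Load the processor (handles resizing/normalization) and the model
processor = AutoImageProcessor.from_pretrained("trpakov/vit-face-expression")
model = AutoModelForImageClassification.from_pretrained("trpakov/vit-face-expression")

image = Image.open("your_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and look up the predicted label
probs = torch.softmax(logits, dim=-1)[0]
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], float(probs[pred_id]))
```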

## Limitations
- **Dataset bias:** FER2013 is collected from Google Image Search and is known 
  to contain noisy and mislabelled samples, which affects model reliability.
- **Class imbalance:** The dataset is heavily skewed toward "happy" and "neutral", 
  making the model less reliable for underrepresented classes like "disgust" and "fear".
- **Skin tone bias:** The model may perform worse on darker skin tones due to 
  underrepresentation in the training data.
- **Input requirements:** The model expects a cropped, frontal face image 
  (see the detection sketch after this list). Performance degrades significantly 
  on profile faces, occluded faces, or images where the face is not the primary subject.
- **Image size:** Input images are resized to 224x224 pixels internally.
- **Real-world generalization:** Lab-posed expressions in training data differ 
  from natural spontaneous expressions in the wild.
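
Because of the input requirement above, it usually pays to detect and crop the face before inference. A minimal sketch using OpenCV's bundled Haar cascade (any face detector would do; this choice is an assumption for illustration):

```python
import cv2
from PIL import Image

img = cv2.imread("your_image.jpg")               # BGR ndarray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Frontal-face Haar cascade shipped with the opencv-python package
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:
    x, y, w, h = faces[0]                        # take the first detection
    face = cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
    face_image = Image.fromarray(face)           # pass this to the pipeline above
```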