---
license: mit
---

# Facial Expression Recognition Challenge (ICML 2013)

## Overview

This dataset was created for the **[Challenges in Representation Learning: Facial Expression Recognition Challenge](https://www.kaggle.com/competitions/challenges-in-representation-learning-facial-expression-recognition-challenge)**, part of the ICML 2013 Workshop. The challenge focused on evaluating how well learning algorithms generalize to newly introduced data, particularly for facial expression classification.

- **Start Date**: April 13, 2013
- **End Date**: May 25, 2013

The dataset contains facial images labeled with one of **seven emotion classes**, and participants were challenged to develop models that accurately classify these expressions.

> One motivation for representation learning is that learning algorithms can design features better and faster than humans can. To this end, we introduce an entirely new dataset and invite competitors to build models that work effectively on unseen data.

## Files

- `train.csv`: Training data
- `test.csv`: Public test data used for leaderboard scoring
- `train.csv.zip`: Zipped version of the training data
- `test.csv.zip`: Zipped version of the test data

## Dataset Details

The dataset consists of grayscale face images, each represented as a flattened array of pixel intensities, paired with an emotion label in the training set. The images were gathered from the internet and curated specifically for this challenge.
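
As a rough sketch of how the training data might be loaded, assuming the commonly documented FER2013 layout (an integer `emotion` column and a `pixels` column of space-separated intensities for 48×48 images) — verify the column names and image size against the actual `train.csv` before relying on this:

```python
import io

import numpy as np
import pandas as pd

# Hypothetical two-row sample mimicking the assumed train.csv layout;
# a real run would use pd.read_csv("train.csv") instead.
sample_csv = "emotion,pixels\n" + "\n".join(
    f"{label},{' '.join('128' for _ in range(48 * 48))}" for label in (0, 3)
)
df = pd.read_csv(io.StringIO(sample_csv))

# Reshape each flattened pixel string into a 48x48 grayscale image.
images = np.stack(
    [np.array(p.split(), dtype=np.uint8).reshape(48, 48) for p in df["pixels"]]
)
labels = df["emotion"].to_numpy()

# FER2013 labels are commonly documented as:
# 0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral
print(images.shape)  # → (2, 48, 48)
```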

## Evaluation

Participants are evaluated based on the **accuracy** of their models in predicting the correct facial expression out of the seven available classes.

- **Public Leaderboard**: Uses `test.csv`
- **Final Evaluation**: Uses a hidden test set released 72 hours before the contest closes
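
The accuracy metric above is simply the fraction of test images whose predicted class matches the true label. A minimal sketch, using made-up labels and predictions over the seven classes rather than real competition data:

```python
import numpy as np

# Illustrative ground-truth labels and model predictions (classes 0-6);
# these values are invented for demonstration only.
y_true = np.array([0, 3, 6, 3, 4, 2, 5, 1])
y_pred = np.array([0, 3, 6, 4, 4, 2, 0, 1])

# Accuracy = number of correct predictions / total predictions.
accuracy = float(np.mean(y_true == y_pred))
print(accuracy)  # → 0.75 (6 of 8 correct)
```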

To ensure fair play:

- Manual labeling of the test set is strictly prohibited.
- Preliminary winners must release their **code** and **methodology** under an OSI-approved open-source license.

## Timeline

- **April 12, 2013**: Competition launched
- **May 17, 2013**: Final test set released
- **May 24, 2013 (11:59 PM UTC)**: Final submission deadline
- **May 31, 2013**: Code release deadline for preliminary winners
- **June 20–21, 2013**: ICML Workshop and winner presentations

## Prizes

- 🥇 **First Prize**: $350 + invitation to speak at the ICML 2013 Workshop
- 🥈 **Second Prize**: $150

_Prize funding provided by Google Inc._

## Baseline Code

Baseline models are provided as part of the [`pylearn2`](https://github.com/lisa-lab/pylearn2) Python package.