---
title: Emotion Recognition API
emoji: 👨‍💻
colorFrom: indigo
colorTo: pink
sdk: docker
sdk_version: '{{sdkVersion}}'
app_file: app.py
pinned: false
---
# Emotion Recognition – Real‑Time Facial Emotion Detection
This project is a real‑time facial emotion recognition system.
It uses a PyTorch MobileNetV2 model on the backend and a browser-based webcam UI on the frontend to detect emotions such as Angry, Fear, Happy, Sad, and Surprise from a live camera feed.
## Features
- Real‑time detection from your webcam (browser or local OpenCV).
- Pre‑trained deep learning model (`emotion_recognition_model.pth`) based on MobileNetV2.
- FastAPI REST API endpoint for image-based emotion prediction.
- Simple HTML/JS frontend that streams frames to the backend and displays the predicted emotion.
- Standalone webcam script (`webcam.py`) if you prefer running everything locally without the browser.
## Project Structure
- `frontend/`
  - `index.html` – Minimal UI with a video element and live emotion text.
  - `script.js` – Grabs webcam frames, sends them to the backend (`/api/predict`), and updates the UI.
  - `style.css` – Basic styling for the page and video element.
- `model/`
  - `api.py` – FastAPI app exposing `POST /api/predict` and serving the frontend.
  - `inference.py` – Loads the trained model and defines the `predict(image)` function.
  - `model.py` – Helper to construct the MobileNetV2 architecture.
  - `webcam.py` – OpenCV-based real‑time emotion recognition from your local webcam.
  - `emotion_recognition_model.pth` – Trained PyTorch checkpoint (model weights + class labels).
  - `Data/` – Dataset folders (`Angry`, `Fear`, `Happy`, `Sad`, `Suprise`) used for training.
- `requirements.txt` – Python dependencies.
## Requirements
- Python 3.8+ (recommended)
- pip for installing dependencies
- A webcam
- (Optional) GPU with CUDA for faster inference, otherwise CPU will be used.
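Once PyTorch is installed, you can quickly check which device inference will use (a minimal sketch; the repo's own scripts handle device selection themselves):

```python
import torch

# Prefer a CUDA GPU when one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Inference device: {device}")
```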
Python packages (also listed in `requirements.txt`):

- `torch`, `torchvision`
- `numpy`
- `opencv-python`
- `matplotlib`
- `pillow`
- `tqdm`
- `requests`
- `fastapi`, `uvicorn` (install explicitly if missing)
You can install everything with:

```bash
pip install -r requirements.txt fastapi uvicorn
```
## How to Run – Web API + Frontend
1. Navigate to the model folder:

   ```bash
   cd model
   ```

2. Start the FastAPI server (with Uvicorn):

   ```bash
   uvicorn api:app --host 0.0.0.0 --port 8000 --reload
   ```

3. Open the frontend. The API mounts the `frontend` folder as static files, so open your browser and go to `http://localhost:8000/`. If needed, ensure the working directory is such that `../frontend` (from `model/api.py`) points to the `frontend` folder in this repo.

4. Allow camera access in the browser when prompted.

You should see the live video and the predicted emotion updating underneath.
## How to Run – Local Webcam Script (No Browser)
If you prefer a pure Python / OpenCV pipeline:
1. Make sure dependencies are installed:

   ```bash
   pip install -r requirements.txt
   ```

2. From the `model` directory, run:

   ```bash
   python webcam.py
   ```

A window called “Emotion Recognition” will appear:

- Detected faces will be highlighted with a bounding box.
- The predicted emotion label will be shown next to each face.
- Press `q` to quit.
## Model & Inference Details
- The model is a MobileNetV2 classifier with the final layer adapted to the number of emotion classes.
- The trained weights and class names are stored in `emotion_recognition_model.pth`.
- Images are preprocessed with:
  - Resize to 224 × 224
  - Conversion to tensor
  - Normalization with ImageNet mean and std
- Inference is done by `inference.py` via `predict(pil_image)`, which returns a string label, e.g. `"Happy"`.
You can test the model directly with a static image (from the `model` directory):

```bash
python inference.py
```

This will load `Image.jpg` and print the predicted emotion.
## API Reference
### `GET /`

- Returns a simple JSON status: `{"status": "API running"}`.

### `POST /api/predict`

- Body: `multipart/form-data` with a single field:
  - `file`: image file (e.g. JPEG/PNG).
- Response:

  ```json
  { "emotion": "Happy" }
  ```
The frontend uses this endpoint to send frames from your webcam (as blobs) approximately once per second.
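The same endpoint can be exercised from Python with `requests` (the file name `face.jpg` and the host/port are illustrative; adjust them to your deployment):

```python
import requests


def predict_emotion(path, url="http://localhost:8000/api/predict"):
    """POST an image as multipart/form-data and return the predicted label."""
    with open(path, "rb") as f:
        resp = requests.post(url, files={"file": (path, f, "image/jpeg")})
    resp.raise_for_status()
    return resp.json()["emotion"]


# Example (requires the server to be running):
# print(predict_emotion("face.jpg"))
```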
## Dataset & Training (High Level)
- The `Data` directory contains labeled images organized by emotion: `Angry/`, `Fear/`, `Happy/`, `Sad/`, `Suprise/`.
- The model was trained externally (not fully captured in this repo) using this dataset and saved into `emotion_recognition_model.pth`.
- You can adapt the model for new datasets by:
  - Updating the `Data` folders and class list.
  - Adjusting `model.py` and the training script (not included here).
## Troubleshooting
### Camera access denied (browser)

- Check browser permissions and ensure you’re using `http://localhost` (not `file://`).

### “Backend not reachable” in the frontend

- Confirm the FastAPI server is running on the same host/port that `script.js` expects (`/api/predict` → default `http://localhost:8000/api/predict`).
- Check for CORS issues or port conflicts.

### Model file not found

- Ensure `emotion_recognition_model.pth` is present in the `model` directory when running any Python scripts there.
## License & Credits
- The project uses PyTorch, FastAPI, OpenCV, and PIL under their respective licenses.
- Dataset images in `Data/` should respect their original source licenses (not provided here).
- Feel free to modify or extend this project for research, learning, or personal use.