---
title: Emotion Recognition API
emoji: 👨‍💻
colorFrom: indigo
colorTo: pink
sdk: docker
sdk_version: "{{sdkVersion}}"
app_file: app.py
pinned: false
---
## Emotion Recognition – Real‑Time Facial Emotion Detection

This project is a **real‑time facial emotion recognition system**. It uses a **PyTorch MobileNetV2 model** on the backend and a **browser-based webcam UI** on the frontend to detect emotions such as *Angry, Fear, Happy, Sad,* and *Surprise* from a live camera feed.

### Features

- **Real‑time detection** from your webcam (browser or local OpenCV).
- **Pre‑trained deep learning model** (`emotion_recognition_model.pth`) based on MobileNetV2.
- **FastAPI REST API** endpoint for image-based emotion prediction.
- **Simple HTML/JS frontend** that streams frames to the backend and displays the predicted emotion.
- **Standalone webcam script** (`webcam.py`) if you prefer running everything locally without the browser.

---
### Project Structure

- **`frontend/`**
  - `index.html` – Minimal UI with a video element and live emotion text.
  - `script.js` – Grabs webcam frames, sends them to the backend (`/api/predict`), and updates the UI.
  - `style.css` – Basic styling for the page and video element.
- **`model/`**
  - `api.py` – FastAPI app exposing `POST /api/predict` and serving the frontend.
  - `inference.py` – Loads the trained model and defines the `predict(image)` function.
  - `model.py` – Helper to construct the MobileNetV2 architecture.
  - `webcam.py` – OpenCV-based real‑time emotion recognition from your local webcam.
  - `emotion_recognition_model.pth` – Trained PyTorch checkpoint (model weights + class labels).
  - `Data/` – Dataset folders (`Angry`, `Fear`, `Happy`, `Sad`, `Suprise`) used for training.
- **`requirements.txt`** – Python dependencies.

---
### Requirements

- **Python** 3.8+ (recommended)
- **pip** for installing dependencies
- A **webcam**
- (Optional) **GPU with CUDA** for faster inference; otherwise the CPU is used.

Python packages (also listed in `requirements.txt`):

- `torch`, `torchvision`
- `numpy`
- `opencv-python`
- `matplotlib`
- `pillow`
- `tqdm`
- `requests`
- `fastapi`, `uvicorn` (install explicitly if missing)

You can install everything with:

```bash
pip install -r requirements.txt fastapi uvicorn
```

---
### How to Run – Web API + Frontend

1. **Navigate to the model folder**:
   ```bash
   cd model
   ```
2. **Start the FastAPI server** (with Uvicorn):
   ```bash
   uvicorn api:app --host 0.0.0.0 --port 8000 --reload
   ```
3. **Open the frontend**:
   - The API mounts the `frontend` folder as static files, so open your browser and go to:
     ```text
     http://localhost:8000/
     ```
   - If the page does not load, make sure you launched the server from the `model` directory, so that the relative path `../frontend` used by `model/api.py` resolves to this repo's `frontend` folder.
4. **Allow camera access** in the browser when prompted.
5. You should see the **live video** and the **predicted emotion** updating underneath.
---
### How to Run – Local Webcam Script (No Browser)

If you prefer a pure Python / OpenCV pipeline:

1. Make sure dependencies are installed:
   ```bash
   pip install -r requirements.txt
   ```
2. From the `model` directory, run:
   ```bash
   python webcam.py
   ```
3. A window called **“Emotion Recognition”** will appear:
   - Detected faces will be highlighted with a bounding box.
   - The predicted emotion label will be shown next to each face.
   - Press **`q`** to quit.

---
### Model & Inference Details

- The model is a **MobileNetV2** classifier with the final layer adapted to the number of emotion classes.
- The trained weights and class names are stored in `emotion_recognition_model.pth`.
- Images are preprocessed with:
  - Resize to 224 × 224
  - Conversion to tensor
  - Normalization with ImageNet mean and std
- Inference is done by `inference.py` via:
  - `predict(pil_image)` → returns a string label, e.g. `"Happy"`.
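The preprocessing steps above can be illustrated without torch, using only Pillow and numpy. This is a dependency-light sketch of what the standard `torchvision.transforms` pipeline computes (the `preprocess` name is hypothetical, not the repo's actual function):

```python
import numpy as np
from PIL import Image

# ImageNet normalization constants used by torchvision's standard pipeline
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(pil_image: Image.Image) -> np.ndarray:
    """Resize -> scale to [0, 1] -> normalize -> CHW batch of one."""
    img = pil_image.convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32) / 255.0   # HWC, values in [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD          # channel-wise normalization
    return x.transpose(2, 0, 1)[np.newaxis]         # shape (1, 3, 224, 224)
```

The resulting `(1, 3, 224, 224)` array matches the layout a MobileNetV2 forward pass expects for a single image.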
You can test the model directly with a static image (from the `model` directory):

```bash
python inference.py
```

This will load `Image.jpg` and print the predicted emotion.

---
### API Reference

- **`GET /`**
  - Returns a simple JSON status: `{"status": "API running"}`.
- **`POST /api/predict`**
  - **Body**: `multipart/form-data` with a single field:
    - `file`: image file (e.g. JPEG/PNG).
  - **Response**:
    ```json
    { "emotion": "Happy" }
    ```

The frontend uses this endpoint to send frames from your webcam (as blobs) approximately once per second.
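You can also call the endpoint from Python. A minimal client sketch using `requests` (the `predict_emotion` helper is hypothetical; it assumes the server from the previous section is running on `localhost:8000`):

```python
import requests

def predict_emotion(image_path: str,
                    url: str = "http://localhost:8000/api/predict") -> str:
    """Send one image to the API and return the predicted emotion label."""
    with open(image_path, "rb") as f:
        # The endpoint expects multipart/form-data with a single "file" field
        resp = requests.post(url, files={"file": ("frame.jpg", f, "image/jpeg")})
    resp.raise_for_status()
    return resp.json()["emotion"]
```

For example, `predict_emotion("Image.jpg")` should return a label such as `"Happy"` when the server is up.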
---
### Dataset & Training (High Level)

- The `Data` directory contains labeled images organized by emotion:
  - `Angry/`, `Fear/`, `Happy/`, `Sad/`, `Suprise/`.
- The model was trained externally (not fully captured in this repo) using this dataset and saved into `emotion_recognition_model.pth`.
- You can adapt the model for new datasets by:
  - Updating the `Data` folders and class list.
  - Adjusting `model.py` and the training script (not included here).
---
### Troubleshooting

- **Camera access denied (browser)**
  - Check browser permissions and ensure you’re using `http://localhost` (not `file://`).
- **“Backend not reachable” in the frontend**
  - Confirm the FastAPI server is running on the same host/port that `script.js` expects (`/api/predict` → default `http://localhost:8000/api/predict`).
  - Check for CORS issues or port conflicts.
- **Model file not found**
  - Ensure `emotion_recognition_model.pth` is present in the `model` directory when running any Python scripts there.
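A quick way to confirm the checkpoint is visible from your current working directory (a small standalone helper, not part of the repo):

```python
from pathlib import Path

def checkpoint_present(path: str = "emotion_recognition_model.pth") -> bool:
    """True if the checkpoint file exists relative to the current directory."""
    return Path(path).is_file()

if not checkpoint_present():
    # Scripts in model/ load the checkpoint by relative path, so run them
    # from inside model/ (or copy the .pth file next to the script).
    print("Checkpoint not found - run from the model/ directory.")
```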
---
### License & Credits

- The project uses **PyTorch**, **FastAPI**, **OpenCV**, and **Pillow (PIL)** under their respective licenses.
- Dataset images in `Data/` should respect their original source licenses (not provided here).
- Feel free to modify or extend this project for research, learning, or personal use.