---
license: mit
language:
- en
metrics:
- accuracy
base_model:
- microsoft/resnet-50
new_version: google/vit-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- pytorch
- emotion-detection
- facial-expression
- image-classification
- deep-learning
- cnn
---
# Face Expression Detector

A deep learning model that classifies facial expressions in grayscale images into one of seven core emotions. Designed for applications in **emotion analytics**, **human-computer interaction**, and **psychological research**.
---
## Model Overview

This model takes **48x48 grayscale face images** and classifies them into:

- Angry
- Disgust
- Fear
- Happy
- Sad
- Surprise
- Neutral

**Dataset**: [FER2013](https://www.kaggle.com/datasets/msambare/fer2013)
**Training Samples**: 28,709
**Testing Samples**: 3,589
---
## Model Architecture

- **Custom CNN**
  - 3 Convolutional Layers
  - Batch Normalization
  - ReLU Activation
  - Dropout for regularization
- Optimizer: `Adam`
- Loss Function: `Categorical Crossentropy`
- Epochs: `100`
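The layer layout above can be sketched in PyTorch for illustration (the released weights are Keras `model.h5`; the filter counts, dropout rate, and pooling choices here are assumptions, not the trained model's exact configuration):

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Sketch of the described custom CNN: 3 conv blocks with batch
    normalization, ReLU, max pooling, and dropout before the classifier.
    Filter counts (32/64/128) are assumptions."""

    def __init__(self, num_classes: int = 7):
        super().__init__()

        def block(cin: int, cout: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )

        self.features = nn.Sequential(block(1, 32), block(32, 64), block(64, 128))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                      # dropout for regularization
            nn.Linear(128 * 6 * 6, num_classes),  # 48 -> 24 -> 12 -> 6 after 3 pools
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One 48x48 grayscale image in, 7 emotion logits out
logits = EmotionCNN()(torch.zeros(1, 1, 48, 48))
print(logits.shape)
```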
---
## Performance

> *Add your actual performance metrics here:*

- Accuracy on FER2013 test set: *(to be measured)*
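When filling in the metric above, top-1 accuracy over the 3,589 test images reduces to a simple helper (a sketch; `y_true` and `y_pred` are hypothetical lists of class indices):

```python
def top1_accuracy(y_true, y_pred):
    """Fraction of predicted class indices matching the ground-truth labels."""
    if len(y_true) != len(y_pred):
        raise ValueError("label lists must have the same length")
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical example with class indices 0-6: 3 of 4 predictions correct
print(top1_accuracy([3, 0, 6, 4], [3, 0, 2, 4]))  # 0.75
```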
---
## Required Files

- `model.h5` – Model weights
- `config.json` – Configuration file *(Transformers-based)*
- `preprocessor_config.json` – Preprocessing setup *(if needed)*
- `requirements.txt` – Python dependencies
---
## Use Cases

- Real-time emotion feedback in games or virtual assistants
- Emotion analysis for psychological and behavioral studies
- Enhancing video-based UX with dynamic emotion tracking
---
## Limitations

- Works best with **centered 48x48 grayscale faces**
- **Face detection (e.g., MTCNN)** is required before prediction
- FER2013's demographic diversity is limited, so predictions may be biased
---
## Installation

Follow these steps to set up the environment and dependencies:

### 1. Clone the Repository

```bash
git clone https://github.com/TRavi8688/Mood-Based-Music-Player
cd mood_detector
```

### 2. Install Dependencies

```bash
pip install -r requirements.txt
```

`requirements.txt` includes:

```
torch>=1.9.0
transformers>=4.20.0
pillow>=8.0.0
```
## How to Use (Transformers-based)

Follow these steps to preprocess an image and predict a facial expression with the pre-trained Transformers-based model:

```python
from transformers import AutoModelForImageClassification, AutoImageProcessor
from PIL import Image
import torch
```
### 1. Load Model and Preprocessor

```python
# Load the Transformers-based model and processor from the Hub
model = AutoModelForImageClassification.from_pretrained("ravi86/mood_detector")
processor = AutoImageProcessor.from_pretrained("ravi86/mood_detector")

# Alternatively, download the standalone Keras weights (`my_model.h5`)
# directly (requires: pip install tensorflow pillow numpy requests):
import requests

model_url = "https://huggingface.co/ravi86/mood_detector/resolve/main/my_model.h5"
model_path = "my_model.h5"

response = requests.get(model_url)
with open(model_path, "wb") as f:
    f.write(response.content)
print("Model downloaded successfully!")
```
### 2. Load and Preprocess the Image

```python
image_path = "your_image.jpg"  # Replace with your image file
image = Image.open(image_path).convert("L").resize((48, 48))  # Grayscale, 48x48
inputs = processor(images=image.convert("RGB"), return_tensors="pt")  # Processors typically expect RGB
```
### 3. Make Predictions

```python
with torch.no_grad():
    outputs = model(**inputs)
probs = torch.softmax(outputs.logits, dim=-1)  # Convert logits to probabilities
predicted_class = probs.argmax().item()       # Get the predicted class index
```
### 4. Interpret the Result

```python
emotions = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]
print(f"Predicted Emotion: {emotions[predicted_class]}")
```
## Deploy to Hugging Face Hub

Use these commands to prepare and push your model to the Hugging Face Hub:

```bash
# Step 1: Install & log in
pip install huggingface_hub
huggingface-cli login
```

```python
# Step 2: Upload the model folder
from huggingface_hub import upload_folder

upload_folder(
    folder_path="path/to/mood_detector",
    repo_id="ravi86/mood_detector",
    repo_type="model",
    commit_message="Upload mood detection model",
)
```
## Ethical Considerations

- **Bias**: The FER2013 dataset may exhibit biases in demographic representation. Exercise caution when interpreting results across diverse populations.
- **Privacy**: Ensure strict compliance with data privacy laws (e.g., GDPR, CCPA) when using this model on personal or sensitive images. Do not use it without explicit consent.
- **Misuse**: This model is not intended for unauthorized surveillance, profiling, or any other unethical applications.
## Contact

For questions, support, or collaborations:

- Hugging Face: @ravi86
- Email: travikumar6789@gmail.com

If you find this project useful, consider giving it a star or contributing!