Mahir1426 committed · verified
Commit fe047f5 · 1 Parent(s): f64caf8

Upload 13 files
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+face_landmarker_v2_with_blendshapes.task filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,48 @@
+# Environment variables
+.env
+.env.local
+.env.production
+
+# Python
+__pycache__/
+*.py[cod]
+*$py.class
+*.so
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+
+# Virtual environments
+venv/
+env/
+ENV/
+
+# IDE
+.vscode/
+.idea/
+*.swp
+*.swo
+
+# OS
+.DS_Store
+Thumbs.db
+
+# Logs
+*.log
+
+# Temporary files
+*.tmp
+*.temp
DEPLOYMENT.md ADDED
@@ -0,0 +1,103 @@
+# Hugging Face Spaces Deployment Guide
+
+## Quick Deployment Steps
+
+1. **Create a new Space on Hugging Face**
+   - Go to https://huggingface.co/new-space
+   - Choose "Docker" as the SDK
+   - Set the visibility (public/private)
+
+2. **Upload your files**
+   - Upload all files from this directory to your Space
+   - Make sure these files are included:
+     - `app.py` (main Flask application)
+     - `database.py` (database functions)
+     - `requirements.txt` (Python dependencies)
+     - `Dockerfile` (container configuration)
+     - `README.md` (Space configuration)
+     - `Ultra_Optimized_RandomForest.joblib` (ML model)
+     - `ultra_optimized_scaler.joblib` (scaler)
+     - `face_landmarker_v2_with_blendshapes.task` (MediaPipe model)
+     - `templates/` folder with HTML files
+
+3. **Configure Environment Variables (Optional)**
+   - Go to Settings → Repository secrets
+   - Add these if you want full functionality:
+     - `CLOUDINARY_CLOUD_NAME`
+     - `CLOUDINARY_API_KEY`
+     - `CLOUDINARY_API_SECRET`
+     - `MONGO_URI`
+
+4. **Deploy**
+   - The Space will build and deploy automatically
+   - Check the logs for any errors
+
+## Key Changes Made for HF Spaces
+
+### 1. README.md Configuration
+- Added the required HF Spaces metadata
+- Set `sdk: docker`
+- Set `app_port: 7860`
+
+### 2. Flask App Configuration
+- Changed the port from 5000 to 7860 (the HF Spaces default)
+- Set the host to `0.0.0.0` for external access
+- Disabled debug mode for production
+
+### 3. Dockerfile Optimization
+- Simplified for HF Spaces deployment
+- Uses gunicorn for production serving
+- Proper port configuration (7860)
+
+### 4. Error Handling
+- Added graceful handling for missing environment variables
+- The models can run without Cloudinary/MongoDB
+- Health check endpoint for monitoring
+
+### 5. Requirements
+- Pinned the protobuf version for compatibility
+- All dependencies pinned to stable versions
+
+## Troubleshooting
+
+### Common Issues
+
+1. **"This Space has encountered a config error"**
+   - Check that README.md has the proper metadata
+   - Ensure the Dockerfile exists and is correct
+   - Verify all required files are uploaded
+
+2. **Model loading errors**
+   - Ensure the model files are in the root directory
+   - Check file permissions
+   - Verify file sizes are under the HF Spaces limits
+
+3. **Port binding errors**
+   - Make sure the app runs on port 7860
+   - Check that the Dockerfile exposes the correct port
+
+4. **Memory issues**
+   - HF Spaces enforces memory limits
+   - Consider model optimization if needed
+
+### Health Check
+Visit the `/health` endpoint to check:
+- Model loading status
+- Environment variable availability
+- Overall system health
+
+## Features Available Without Environment Variables
+
+- Face shape detection
+- Image analysis
+- Confidence scores
+- Facial measurements
+- Web interface
+
+## Features Requiring Environment Variables
+
+- Image storage (Cloudinary)
+- Data persistence (MongoDB)
+- Analysis history
+
+The core analysis works without these services; only image storage and analysis history are unavailable.
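The health check described above can be exercised with a small client script. This is a hypothetical sketch using only the standard library: `BASE_URL` is a placeholder you must replace with your own Space URL, and `allowed_upload` simply mirrors the server-side extension check from `app.py` so bad files can be rejected before uploading.

```python
import json
import urllib.request

BASE_URL = "https://your-space.hf.space"  # placeholder: replace with your Space URL

# Mirrors app.config['ALLOWED_EXTENSIONS'] in app.py
ALLOWED = {"png", "jpg", "jpeg", "gif", "bmp", "webp"}

def allowed_upload(filename: str) -> bool:
    """Client-side copy of the server's file-extension check."""
    return "." in filename and filename.rsplit(".", 1)[1].lower() in ALLOWED

def check_health() -> dict:
    """GET /health and return the parsed status JSON."""
    with urllib.request.urlopen(f"{BASE_URL}/health") as resp:
        return json.load(resp)
```

A response from `check_health()` should report `mediapipe_loaded`, `ml_model_loaded`, `cloudinary_available`, and `mongodb_available`, matching the keys returned by the `/health` route.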
Dockerfile ADDED
@@ -0,0 +1,41 @@
+# Use the Python 3.10 slim image for compatibility with mediapipe
+FROM python:3.10-slim
+
+# Set the working directory
+WORKDIR /app
+
+# Install system dependencies for building Python packages and OpenCV
+RUN apt-get update && apt-get install -y \
+    build-essential \
+    libgl1-mesa-dri \
+    libglib2.0-0 \
+    libsm6 \
+    libxext6 \
+    libxrender-dev \
+    libgomp1 \
+    libgtk-3-0 \
+    libavcodec-dev \
+    libavformat-dev \
+    libswscale-dev \
+    libgstreamer1.0-dev \
+    libgstreamer-plugins-base1.0-dev \
+    && rm -rf /var/lib/apt/lists/*
+
+# Copy requirements and install Python dependencies
+COPY requirements.txt .
+RUN pip install --upgrade pip
+RUN pip install --no-cache-dir -r requirements.txt
+
+# Copy the application code
+COPY . .
+
+# Set environment variables
+ENV FLASK_APP=app.py
+ENV FLASK_ENV=production
+ENV PORT=7860
+
+# Expose port 7860 (Hugging Face Spaces default)
+EXPOSE 7860
+
+# Start the app with gunicorn
+CMD ["gunicorn", "--bind", "0.0.0.0:7860", "--workers", "1", "--timeout", "120", "app:app"]
README.md CHANGED
@@ -1,12 +1,70 @@
 ---
-title: Face7
-emoji:
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-sdk_version: 5.45.0
-app_file: app.py
+title: Face Shape Detection
+emoji: 👤
+colorFrom: blue
+colorTo: purple
+sdk: docker
 pinned: false
+license: mit
+app_port: 7860
+short_description: AI face shape detection with MediaPipe & ML
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+# Face Shape Detection
+
+An AI-powered application that analyzes facial features to determine face shape using MediaPipe landmarks and machine learning.
+
+## Features
+
+- **Real-time face shape detection** from uploaded images
+- **5 face shape categories**: Heart, Oval, Round, Square, Oblong
+- **Facial measurements** with confidence scores
+- **Interactive web interface** with image upload
+- **RESTful API** for integration
+
+## How it Works
+
+1. **Face Detection**: Uses MediaPipe to detect and extract facial landmarks
+2. **Feature Extraction**: Calculates key facial measurements and ratios
+3. **ML Classification**: Uses a trained Random Forest model to predict face shape
+4. **Results**: Returns face shape, confidence scores, and facial measurements
+
+## API Endpoints
+
+- `POST /analyze` - Upload an image for face shape analysis
+- `GET /` - Web interface for image upload
+- `GET /video_feed` - Real-time video feed (if a camera is available)
+
+## Usage
+
+1. Upload an image using the web interface
+2. The system will analyze the face and return:
+   - Detected face shape
+   - Confidence scores for all categories
+   - Facial measurements (length, width, etc.)
+   - Annotated image with landmarks
+
+## Technical Details
+
+- **Framework**: Flask
+- **Computer Vision**: MediaPipe, OpenCV
+- **ML Model**: Random Forest (scikit-learn)
+- **Image Processing**: Smart preprocessing with face detection
+- **Deployment**: Docker container optimized for Hugging Face Spaces
+
+## Model Performance
+
+The model uses optimized features extracted from 468 facial landmarks and achieves high accuracy in face shape classification across diverse face types.
+
+## Requirements
+
+- Python 3.10+
+- MediaPipe
+- OpenCV
+- scikit-learn
+- Flask
+- NumPy
+
+## License
+
+MIT License - feel free to use and modify for your projects.
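The ratio-based feature extraction described in "How it Works" can be sketched in a few lines. This is an illustrative reduction, not the production extractor in `app.py` (which uses fixed MediaPipe landmark indices and additional features); the landmark points here are plain `(x, y, z)` tuples supplied by the caller.

```python
import math

def shape_ratios(forehead_top, chin, cheek_l, cheek_r,
                 jaw_l, jaw_r, forehead_l, forehead_r):
    """Compute scale-invariant face-shape ratios from 3D landmark points."""
    face_height = math.dist(forehead_top, chin)
    face_width = math.dist(cheek_l, cheek_r)
    jaw_width = math.dist(jaw_l, jaw_r)
    forehead_width = math.dist(forehead_l, forehead_r)
    return {
        "width_to_height": face_width / face_height,
        "jaw_to_forehead": jaw_width / forehead_width,
        "jaw_to_face": jaw_width / face_width,
    }
```

Because every feature is a ratio of two distances, the result is independent of image resolution and face scale, which is what lets the classifier work on uncalibrated normalized coordinates.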
Ultra_Optimized_RandomForest.joblib ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a54fa289d61e99546b3c9c6a6e1812b7f52a7cc399c3e691aa6ca9de7f729673
+size 304641
__pycache__/database.cpython-312.pyc ADDED
Binary file (2.25 kB).
app.py ADDED
@@ -0,0 +1,423 @@
+from flask import Flask, request, render_template, Response, jsonify
+import cv2
+import numpy as np
+import joblib
+import os
+import warnings
+import mediapipe as mp
+from mediapipe.tasks import python
+from mediapipe.tasks.python import vision
+from mediapipe.framework.formats import landmark_pb2
+from flask_cors import CORS
+from dotenv import load_dotenv
+# Import database functions
+from database import upload_image_to_cloudinary, save_analysis_to_db
+
+# Load environment variables
+load_dotenv()
+
+# Suppress specific deprecation warnings from protobuf
+warnings.filterwarnings("ignore", category=UserWarning, module='google.protobuf')
+app = Flask(__name__, template_folder='templates')
+CORS(app)  # Enable CORS for all routes
+
+# Configure Flask for production
+app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024  # 16 MB max upload size
+app.config['ALLOWED_EXTENSIONS'] = {'png', 'jpg', 'jpeg', 'gif', 'bmp', 'webp'}
+
+# Initialize the MediaPipe Face Landmarker (only once)
+try:
+    base_options = python.BaseOptions(model_asset_path='face_landmarker_v2_with_blendshapes.task')
+    options = vision.FaceLandmarkerOptions(base_options=base_options,
+                                           output_face_blendshapes=True,
+                                           output_facial_transformation_matrixes=True,
+                                           num_faces=1)
+    face_landmarker = vision.FaceLandmarker.create_from_options(options)
+    print("✅ MediaPipe Face Landmarker initialized successfully!")
+except Exception as e:
+    print(f"❌ Error initializing MediaPipe: {e}")
+    face_landmarker = None
+
+# Initialize MediaPipe drawing utilities
+mp_drawing = mp.solutions.drawing_utils
+mp_drawing_styles = mp.solutions.drawing_styles
+
+# Load the ultra-optimized model and scaler
+print("Loading ultra-optimized model...")
+try:
+    face_shape_model = joblib.load('Ultra_Optimized_RandomForest.joblib')
+    scaler = joblib.load('ultra_optimized_scaler.joblib')
+    print("✅ Ultra-optimized model loaded successfully!")
+except Exception as e:
+    print(f"❌ Error loading model files: {e}")
+    face_shape_model = None
+    scaler = None
+
+def distance_3d(p1, p2):
+    """Calculate the 3D Euclidean distance between two points."""
+    return np.linalg.norm(np.array(p1) - np.array(p2))
+
+def smart_preprocess_image(image):
+    """
+    Smart image preprocessing to get the best face region.
+    This addresses the issue of users not providing perfectly framed images.
+    """
+    h, w = image.shape[:2]
+
+    # Fall back to a plain resize if MediaPipe failed to initialize
+    if face_landmarker is None:
+        return cv2.resize(image, (224, 224))
+
+    # First, try to detect a face and get its bounding box
+    rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
+    mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb_image)
+    detection_result = face_landmarker.detect(mp_image)
+
+    if detection_result.face_landmarks:
+        # Get the face landmarks
+        face_landmarks = detection_result.face_landmarks[0]
+
+        # Calculate the face bounding box
+        x_coords = [landmark.x for landmark in face_landmarks]
+        y_coords = [landmark.y for landmark in face_landmarks]
+
+        # Convert normalized coordinates to pixel coordinates
+        x_min = int(min(x_coords) * w)
+        x_max = int(max(x_coords) * w)
+        y_min = int(min(y_coords) * h)
+        y_max = int(max(y_coords) * h)
+
+        # Add generous padding around the face (40% for better context)
+        face_width = x_max - x_min
+        face_height = y_max - y_min
+        pad_x = int(face_width * 0.4)
+        pad_y = int(face_height * 0.4)
+
+        # Calculate crop coordinates, clamped to the image bounds
+        x1 = max(0, x_min - pad_x)
+        x2 = min(w, x_max + pad_x)
+        y1 = max(0, y_min - pad_y)
+        y2 = min(h, y_max + pad_y)
+
+        # Crop the face region
+        face_crop = image[y1:y2, x1:x2]
+
+        # Resize to a standard size while maintaining aspect ratio
+        target_size = 224
+        crop_h, crop_w = face_crop.shape[:2]
+
+        # Calculate the scale to fit within the target size
+        scale = min(target_size / crop_w, target_size / crop_h)
+        new_w = int(crop_w * scale)
+        new_h = int(crop_h * scale)
+
+        # Resize, maintaining aspect ratio
+        resized = cv2.resize(face_crop, (new_w, new_h))
+
+        # Create the final image, padded to the exact target size
+        final_image = np.zeros((target_size, target_size, 3), dtype=np.uint8)
+
+        # Center the resized image
+        start_y = (target_size - new_h) // 2
+        start_x = (target_size - new_w) // 2
+        final_image[start_y:start_y + new_h, start_x:start_x + new_w] = resized
+
+        return final_image
+    else:
+        # If no face is detected, just resize to the standard size
+        return cv2.resize(image, (224, 224))
+
+def extract_optimized_features(coords):
+    """
+    Extract optimized features for face shape detection.
+    Uses only the most important landmarks for efficiency.
+    """
+    # Key landmarks for face shape analysis
+    landmark_indices = {
+        'forehead_top': 10,
+        'forehead_left': 21,
+        'forehead_right': 251,
+        'cheek_left': 234,
+        'cheek_right': 454,
+        'jaw_left': 172,
+        'jaw_right': 397,
+        'chin': 152,
+    }
+
+    # Extract the chosen points
+    lm = {name: coords[idx] for name, idx in landmark_indices.items()}
+
+    # Calculate key measurements
+    face_height = distance_3d(lm['forehead_top'], lm['chin'])
+    face_width = distance_3d(lm['cheek_left'], lm['cheek_right'])
+    jaw_width = distance_3d(lm['jaw_left'], lm['jaw_right'])
+    forehead_width = distance_3d(lm['forehead_left'], lm['forehead_right'])
+
+    # Calculate ratios (scale-invariant features)
+    width_to_height = face_width / face_height
+    jaw_to_forehead = jaw_width / forehead_width
+    jaw_to_face = jaw_width / face_width
+    forehead_to_face = forehead_width / face_width
+
+    # Additional shape features
+    face_area = face_width * face_height
+    jaw_angle = np.arctan2(lm['jaw_right'][1] - lm['jaw_left'][1],
+                           lm['jaw_right'][0] - lm['jaw_left'][0])
+
+    # Return the optimized feature vector
+    features = np.array([
+        width_to_height,
+        jaw_to_forehead,
+        jaw_to_face,
+        forehead_to_face,
+        face_area,
+        jaw_angle
+    ])
+
+    return features
+
+def get_face_shape_label(label):
+    shapes = ["Heart", "Oval", "Round", "Square", "Oblong"]
+    return shapes[label]
+
+def draw_landmarks_on_image(rgb_image, detection_result):
+    face_landmarks_list = detection_result.face_landmarks
+    annotated_image = np.copy(rgb_image)
+
+    for idx in range(len(face_landmarks_list)):
+        face_landmarks = face_landmarks_list[idx]
+
+        # Create the landmark proto
+        face_landmarks_proto = landmark_pb2.NormalizedLandmarkList()
+        face_landmarks_proto.landmark.extend([
+            landmark_pb2.NormalizedLandmark(x=landmark.x, y=landmark.y, z=landmark.z) for landmark in face_landmarks
+        ])
+
+        # Draw the face landmarks
+        mp_drawing.draw_landmarks(
+            image=annotated_image,
+            landmark_list=face_landmarks_proto,
+            connections=mp.solutions.face_mesh.FACEMESH_TESSELATION,
+            landmark_drawing_spec=None,
+            connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_tesselation_style())
+        mp_drawing.draw_landmarks(
+            image=annotated_image,
+            landmark_list=face_landmarks_proto,
+            connections=mp.solutions.face_mesh.FACEMESH_CONTOURS,
+            landmark_drawing_spec=None,
+            connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_contours_style())
+        mp_drawing.draw_landmarks(
+            image=annotated_image,
+            landmark_list=face_landmarks_proto,
+            connections=mp.solutions.face_mesh.FACEMESH_IRISES,
+            landmark_drawing_spec=None,
+            connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_iris_connections_style())
+
+    return annotated_image
+
+def allowed_file(filename):
+    return '.' in filename and filename.rsplit('.', 1)[1].lower() in app.config['ALLOWED_EXTENSIONS']
+
+def generate_frames():
+    cap = cv2.VideoCapture(0)
+    while True:
+        ret, frame = cap.read()
+        if not ret:
+            break
+        rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
+        image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb_frame)
+        detection_result = face_landmarker.detect(image)
+
+        if detection_result.face_landmarks:
+            for face_landmarks in detection_result.face_landmarks:
+                landmarks = np.array([[lm.x, lm.y, lm.z] for lm in face_landmarks])
+                face_features = extract_optimized_features(landmarks)
+
+                # Normalize the features using the scaler
+                face_features_scaled = scaler.transform(face_features.reshape(1, -1))
+
+                face_shape_label = face_shape_model.predict(face_features_scaled)[0]
+                face_shape = get_face_shape_label(face_shape_label)
+                annotated_image = draw_landmarks_on_image(rgb_frame, detection_result)
+                cv2.putText(annotated_image, f"Face Shape: {face_shape}", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
+        else:
+            annotated_image = rgb_frame
+
+        # Convert back to BGR before JPEG encoding so the colors are not swapped
+        frame_bgr = cv2.cvtColor(annotated_image, cv2.COLOR_RGB2BGR)
+        ret, buffer = cv2.imencode('.jpg', frame_bgr)
+        frame = buffer.tobytes()
+        yield (b'--frame\r\n'
+               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
+    cap.release()
+
+@app.route('/')
+def index():
+    return render_template('index.html')
+
+@app.route('/health')
+def health_check():
+    """Health check endpoint for Hugging Face Spaces."""
+    status = {
+        "status": "healthy",
+        "mediapipe_loaded": face_landmarker is not None,
+        "ml_model_loaded": face_shape_model is not None and scaler is not None,
+        "cloudinary_available": os.getenv("CLOUDINARY_CLOUD_NAME") is not None,
+        "mongodb_available": os.getenv("MONGO_URI") is not None
+    }
+    return jsonify(status)
+
+@app.route('/analyze', methods=['POST'])
+def analyze_face():
+    # Check that the models are loaded
+    if face_landmarker is None:
+        return jsonify({"error": "MediaPipe model not loaded"}), 500
+    if face_shape_model is None or scaler is None:
+        return jsonify({"error": "ML model not loaded"}), 500
+
+    if 'file' not in request.files:
+        return jsonify({"error": "No file part"}), 400
+
+    file = request.files['file']
+    if file.filename == '':
+        return jsonify({"error": "No selected file"}), 400
+
+    try:
+        # --- 1. Read the image and apply smart preprocessing ---
+        img_bytes = file.read()
+        nparr = np.frombuffer(img_bytes, np.uint8)
+        img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
+        if img is None:
+            return jsonify({"error": "Invalid or unreadable image file"}), 400
+
+        # Smart preprocessing: detect the face and crop optimally
+        processed_img = smart_preprocess_image(img)
+
+        # Convert to RGB for MediaPipe
+        rgb_image = cv2.cvtColor(processed_img, cv2.COLOR_BGR2RGB)
+        mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb_image)
+
+        detection_result = face_landmarker.detect(mp_image)
+
+        if not detection_result.face_landmarks:
+            return jsonify({"error": "No face detected"}), 400
+
+        # --- 2. Get the landmark data, calculate features, and predict the shape ---
+        face_landmarks = detection_result.face_landmarks[0]
+
+        # First, calculate the optimized features
+        landmarks_normalized = np.array([[lm.x, lm.y, lm.z] for lm in face_landmarks])
+        face_features = extract_optimized_features(landmarks_normalized)
+
+        # Normalize the features using the scaler
+        face_features_scaled = scaler.transform(face_features.reshape(1, -1))
+
+        # Then, predict the shape from the scaled features
+        face_shape_label = face_shape_model.predict(face_features_scaled)[0]
+        face_shape = get_face_shape_label(face_shape_label)
+
+        # Get the confidence scores
+        confidence_scores = face_shape_model.predict_proba(face_features_scaled)[0]
+        confidence = confidence_scores[face_shape_label]
+
+        # --- 3. Draw landmarks on the image ---
+        annotated_image_rgb = draw_landmarks_on_image(rgb_image, detection_result)
+
+        # --- 4. Upload the PROCESSED image to Cloudinary ---
+        annotated_image_bgr = cv2.cvtColor(annotated_image_rgb, cv2.COLOR_RGB2BGR)
+        _, buffer = cv2.imencode('.jpg', annotated_image_bgr)
+        processed_image_url = upload_image_to_cloudinary(buffer.tobytes())
+
+        if not processed_image_url:
+            return jsonify({"error": "Failed to upload processed image"}), 500
+
+        # --- 5. Calculate measurements using CALIBRATED values ---
+        # Landmark points used for the measurements
+        p_iris_l = landmarks_normalized[473]       # Left iris
+        p_iris_r = landmarks_normalized[468]       # Right iris
+
+        p_forehead_top = landmarks_normalized[10]  # Top of forehead hairline
+        p_chin_tip = landmarks_normalized[152]     # Bottom of chin
+
+        p_cheek_l = landmarks_normalized[234]      # Left cheekbone edge
+        p_cheek_r = landmarks_normalized[454]      # Right cheekbone edge
+
+        p_jaw_l = landmarks_normalized[172]        # Left jaw point
+        p_jaw_r = landmarks_normalized[397]        # Right jaw point
+
+        p_forehead_l = landmarks_normalized[63]    # Left forehead edge
+        p_forehead_r = landmarks_normalized[293]   # Right forehead edge
+
+        # IPD-based calibration: scale normalized units to cm using the
+        # average interpupillary distance
+        AVG_IPD_CM = 6.3
+        dist_iris = distance_3d(p_iris_l, p_iris_r)
+        cm_per_unit = AVG_IPD_CM / dist_iris if dist_iris != 0 else 0
+
+        # Calculate all distances
+        dist_face_length = distance_3d(p_forehead_top, p_chin_tip)
+        dist_cheek_width = distance_3d(p_cheek_l, p_cheek_r)
+        dist_jaw_width = distance_3d(p_jaw_l, p_jaw_r)
+        dist_forehead_width = distance_3d(p_forehead_l, p_forehead_r)
+
+        # Convert to cm and apply fixed calibration offsets
+        face_length_cm = (dist_face_length * cm_per_unit) + 5.0        # +5.0 cm offset
+        cheekbone_width_cm = (dist_cheek_width * cm_per_unit) + 4.0    # +4.0 cm offset
+        jaw_width_cm = (dist_jaw_width * cm_per_unit) + 0.5            # +0.5 cm offset
+        forehead_width_cm = (dist_forehead_width * cm_per_unit) + 6.0  # +6.0 cm offset
+
+        # This ratio is a relative measure, so it needs no cm conversion
+        jaw_curve_ratio = dist_face_length / dist_cheek_width if dist_cheek_width != 0 else 0
+
+        measurements = {
+            "face_length_cm": float(face_length_cm),
+            "cheekbone_width_cm": float(cheekbone_width_cm),
+            "jaw_width_cm": float(jaw_width_cm),
+            "forehead_width_cm": float(forehead_width_cm),
+            "jaw_curve_ratio": float(jaw_curve_ratio)
+        }
+
+        # --- 6. Save the analysis to MongoDB ---
+        analysis_id = save_analysis_to_db(processed_image_url, face_shape, measurements)
+        if not analysis_id:
+            return jsonify({"error": "Failed to save analysis"}), 500
+
+        # --- 7. Return the complete result ---
+        return jsonify({
+            "message": "Analysis successful",
+            "analysis_id": analysis_id,
+            "image_url": processed_image_url,
+            "face_shape": face_shape,
+            "confidence": float(confidence),
+            "all_probabilities": {
+                "Heart": float(confidence_scores[0]),
+                "Oval": float(confidence_scores[1]),
+                "Round": float(confidence_scores[2]),
+                "Square": float(confidence_scores[3]),
+                "Oblong": float(confidence_scores[4])
+            },
+            "measurements": measurements,
+            "calibration_applied": {
+                "face_length_adjustment": "+5.0cm",
+                "cheekbone_width_adjustment": "+4.0cm",
+                "jaw_width_adjustment": "+0.5cm",
+                "forehead_width_adjustment": "+6.0cm",
+                "note": "Fixed offsets applied on top of IPD-based scaling, increased based on user feedback"
+            }
+        })
+
+    except Exception as e:
+        print(f"An error occurred: {e}")
+        return jsonify({"error": f"An error occurred: {str(e)}"}), 500
+
+@app.route('/video_feed')
+def video_feed():
+    return Response(generate_frames(), mimetype='multipart/x-mixed-replace; boundary=frame')
+
+@app.route('/real_time')
+def real_time():
+    return render_template('real_time.html')
+
+if __name__ == '__main__':
+    # Get the port from the environment (required for Hugging Face Spaces)
+    port = int(os.environ.get('PORT', 7860))
+    # Bind to 0.0.0.0 to accept external connections (required for HF Spaces)
+    app.run(host='0.0.0.0', port=port, debug=False)
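The measurement step in `/analyze` converts MediaPipe's normalized coordinates to centimetres by assuming an average interpupillary distance of 6.3 cm, then adds a fixed per-feature offset. A minimal sketch of that calibration, with illustrative numbers rather than real landmark data:

```python
import math

AVG_IPD_CM = 6.3  # population-average interpupillary distance assumed by app.py

def cm_per_unit(iris_left, iris_right):
    """Scale factor from normalized landmark units to centimetres."""
    d = math.dist(iris_left, iris_right)
    return AVG_IPD_CM / d if d else 0.0

def to_cm(distance_units, scale, offset_cm=0.0):
    """Convert a landmark distance to cm, then apply a fixed calibration offset."""
    return distance_units * scale + offset_cm
```

Because every face in the image shares one `cm_per_unit` scale derived from the irises, the conversion is robust to image resolution, but it inherits the error of assuming every subject has an exactly average IPD; the fixed offsets compensate empirically.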
database.py ADDED
@@ -0,0 +1,69 @@
+import os
+import cloudinary
+import cloudinary.uploader
+from pymongo import MongoClient
+from dotenv import load_dotenv
+from datetime import datetime
+
+# Load environment variables from the .env file
+load_dotenv()
+
+# --- Cloudinary Configuration ---
+# Check whether the Cloudinary credentials are available
+CLOUDINARY_AVAILABLE = all([
+    os.getenv("CLOUDINARY_CLOUD_NAME"),
+    os.getenv("CLOUDINARY_API_KEY"),
+    os.getenv("CLOUDINARY_API_SECRET")
+])
+
+if CLOUDINARY_AVAILABLE:
+    cloudinary.config(
+        cloud_name=os.getenv("CLOUDINARY_CLOUD_NAME"),
+        api_key=os.getenv("CLOUDINARY_API_KEY"),
+        api_secret=os.getenv("CLOUDINARY_API_SECRET")
+    )
+
+# --- MongoDB Configuration ---
+MONGO_URI = os.getenv("MONGO_URI")
+MONGO_AVAILABLE = MONGO_URI is not None
+
+if MONGO_AVAILABLE:
+    client = MongoClient(MONGO_URI)
+    db = client.get_database("face_shape_db")
+    analyses_collection = db.get_collection("analyses")
+else:
+    client = None
+    db = None
+    analyses_collection = None
+
+def upload_image_to_cloudinary(image_file):
+    """Upload an image file to Cloudinary and return the secure URL."""
+    if not CLOUDINARY_AVAILABLE:
+        print("Cloudinary not configured, returning placeholder URL")
+        return "https://via.placeholder.com/400x400/cccccc/666666?text=Image+Upload+Disabled"
+
+    try:
+        upload_result = cloudinary.uploader.upload(image_file)
+        return upload_result.get("secure_url")
+    except Exception as e:
+        print(f"Error uploading to Cloudinary: {e}")
+        return "https://via.placeholder.com/400x400/cccccc/666666?text=Upload+Failed"
+
+def save_analysis_to_db(image_url, face_shape, measurements):
+    """Save the analysis results to MongoDB."""
+    if not MONGO_AVAILABLE:
+        print("MongoDB not configured, returning mock ID")
+        return "mock_analysis_id_" + str(datetime.utcnow().timestamp())
+
+    try:
+        analysis_data = {
+            "image_url": image_url,
+            "face_shape": face_shape,
+            "measurements": measurements,
+            "created_at": datetime.utcnow()
+        }
+        result = analyses_collection.insert_one(analysis_data)
+        return str(result.inserted_id)
+    except Exception as e:
+        print(f"Error saving to MongoDB: {e}")
+        return "mock_analysis_id_" + str(datetime.utcnow().timestamp())
face_landmarker_v2_with_blendshapes.task ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64184e229b263107bc2b804c6625db1341ff2bb731874b0bcc2fe6544e0bc9ff
+size 3758596
requirements.txt ADDED
@@ -0,0 +1,14 @@
+flask==2.3.3
+flask-cors==4.0.0
+opencv-python==4.8.0.76
+mediapipe==0.10.8
+numpy==1.24.3
+scikit-learn==1.3.2
+joblib==1.3.2
+python-dotenv==1.0.1
+cloudinary==1.40.0
+pymongo==4.7.2
+werkzeug==3.0.3
+gunicorn==22.0.0
+watchdog==3.0.0
+protobuf==3.20.3
runtime.txt ADDED
@@ -0,0 +1 @@
+python-3.10.12
templates/index.html ADDED
@@ -0,0 +1,12 @@
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>Face Shape Analysis</title>
+</head>
+<body>
+    <h1>Face Shape Analysis API</h1>
+    <p>This is the Flask backend API. Use the Next.js frontend for the full application.</p>
+</body>
+</html>
ultra_optimized_scaler.joblib ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0690f84a2c10184ecb9c289fcc61938d88acb44846d0c19147f888254d3ff23
+size 586