Mahir1426 committed
Commit 2aee2df · verified · 1 parent: 7453067

Upload 11 files
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ face_landmarker_v2_with_blendshapes.task filter=lfs diff=lfs merge=lfs -text
Dockerfile ADDED
@@ -0,0 +1,42 @@
+ # Use Python 3.10 slim image for compatibility with mediapipe
+ FROM python:3.10-slim
+
+ # Set working directory
+ WORKDIR /app
+
+ # Install system dependencies for building Python packages and OpenCV
+ RUN apt-get update && apt-get install -y \
+     build-essential \
+     libgl1-mesa-glx \
+     libglib2.0-0 \
+     curl \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Copy requirements and install Python dependencies
+ COPY requirements.txt .
+ RUN pip install --upgrade pip
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy the application code (excluding large files)
+ COPY . .
+
+ # Remove the large model file if it exists (we'll download it at runtime)
+ RUN rm -f Best_RandomForest.pkl
+
+ # Create a startup script
+ RUN echo '#!/bin/bash\n\
+ if [ ! -f Best_RandomForest.pkl ]; then\n\
+ echo "Downloading model file..."\n\
+ curl -o Best_RandomForest.pkl "$MODEL_URL"\n\
+ fi\n\
+ exec gunicorn --bind 0.0.0.0:5000 app:app' > /app/start.sh && chmod +x /app/start.sh
+
+ # Expose port 5000 (Flask default)
+ EXPOSE 5000
+
+ # Set environment variables for Flask
+ ENV FLASK_APP=app.py
+ ENV FLASK_ENV=production
+
+ # Start the app using our startup script
+ CMD ["/app/start.sh"]
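The download-if-missing logic in the generated start.sh can be sketched in Python for local testing. This is an illustrative stand-in, not a file in the repo; the filename and the `MODEL_URL`-style parameter are taken from the Dockerfile above:

```python
import os
import urllib.request


def ensure_model(path: str, url: str) -> bool:
    """Download the model file only if it is missing (mirrors start.sh).

    Returns True if a download was performed, False if the file already existed.
    """
    if os.path.exists(path):
        return False
    urllib.request.urlretrieve(url, path)
    return True
```

Run locally with something like `ensure_model("Best_RandomForest.pkl", os.environ["MODEL_URL"])`; a second call is a no-op, just as the `if [ ! -f ... ]` guard in the container script.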
README.md ADDED
@@ -0,0 +1,122 @@
+ # Face Shape Analysis Application
+
+ This is a full-stack application that analyzes face shapes using AI. It consists of a Flask backend for face detection and analysis, and a Next.js frontend with two different result display components.
+
+ ## Features
+
+ - **Face Shape Detection**: Uses MediaPipe and a trained Random Forest model to detect face shapes (Heart, Oval, Round, Square, Oblong)
+ - **Dual Result Display**:
+   - **AnalysisCard**: Beautiful, animated display with personality insights and styling recommendations
+   - **ResultSection**: Detailed facial measurements and technical data
+ - **Real-time Processing**: Processes uploaded images and displays results immediately
+ - **Modern UI**: Next.js frontend with beautiful animations and responsive design
+
+ ## Project Structure
+
+ ```
+ Face_Detection/
+ ├── app.py                    # Flask backend with face analysis logic
+ ├── app/                      # Next.js frontend
+ │   ├── page.tsx              # Main application page
+ │   └── api/                  # API routes for frontend-backend communication
+ ├── components/               # React components
+ │   ├── result-section.tsx    # Detailed measurements display
+ │   ├── analysis-card.tsx     # Enhanced result display with personality insights
+ │   └── upload-section.tsx    # File upload component
+ ├── uploads/                  # Directory for uploaded images
+ ├── templates/                # Flask templates
+ └── requirements.txt          # Python dependencies
+ ```
+
+ ## Setup Instructions
+
+ ### 1. Install Python Dependencies
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ ### 2. Install Node.js Dependencies
+
+ ```bash
+ npm install
+ # or
+ pnpm install
+ ```
+
+ ### 3. Run the Application
+
+ #### Start the Flask Backend
+ ```bash
+ python app.py
+ ```
+ The Flask server will run on `http://localhost:5000`
+
+ #### Start the Next.js Frontend
+ ```bash
+ npm run dev
+ # or
+ pnpm dev
+ ```
+ The Next.js app will run on `http://localhost:3000`
+
+ ## How It Works
+
+ 1. **Image Upload**: Users upload images through the Next.js frontend
+ 2. **Backend Processing**: Flask backend processes images using MediaPipe face detection
+ 3. **Face Shape Analysis**: The trained Random Forest model predicts face shape
+ 4. **Dual Display**: Results are shown in two formats:
+    - **AnalysisCard**: Enhanced display with personality traits, characteristics, and styling advice
+    - **ResultSection**: Technical measurements and facial proportions
+ 5. **Processed Images**: Shows original image with facial landmarks and face shape label
+
+ ## API Endpoints
+
+ - `POST /analyze` - Analyzes a face image and returns face shape results with measurements
+ - `GET /uploads/<filename>` - Serves processed images
+ - `POST /upload` - Handles file uploads (Next.js API route)
+
+ ## Face Shapes Supported
+
+ - **Heart**: Wider forehead, pointed chin, romantic silhouette
+ - **Oval**: Balanced proportions, most versatile for styling
+ - **Round**: Soft curves, full cheeks, warm appearance
+ - **Square**: Strong jawline, defined angles, commanding presence
+ - **Oblong**: Elongated face, noticeably longer than it is wide
+
+ ## Components
+
+ ### AnalysisCard
+ - Beautiful animated display
+ - Personality insights and characteristics
+ - Styling recommendations
+ - Career compatibility suggestions
+ - Confidence meter and visual effects
+
+ ### ResultSection
+ - Detailed facial measurements
+ - Technical data (face length, cheekbone width, etc.)
+ - Processed image with landmarks
+ - Jaw curve ratio and proportions
+
+ ## Technologies Used
+
+ - **Backend**: Flask, MediaPipe, OpenCV, scikit-learn
+ - **Frontend**: Next.js, React, TypeScript, Tailwind CSS
+ - **AI/ML**: Random Forest model for face shape classification
+ - **Computer Vision**: MediaPipe for facial landmark detection
+
+ ## Testing
+
+ Run the test script to verify the backend is working:
+
+ ```bash
+ python test_setup.py
+ ```
+
+ ## Notes
+
+ - Both result components display the same analysis data in different formats
+ - The AnalysisCard provides a more user-friendly, personality-focused experience
+ - The ResultSection provides detailed technical measurements for analysis
+ - All processed images are saved with landmarks drawn on them
+ - CORS is enabled for frontend-backend communication
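For frontend code consuming `/analyze`, the JSON field names come from the Flask backend in this commit; a minimal sketch of summarizing a response (the sample values are made up, and the helper is illustrative, not part of the repo):

```python
def summarize_analysis(resp: dict) -> str:
    """One-line summary of an /analyze JSON response."""
    # Rank classes by predicted probability, highest first
    ranked = sorted(resp["all_probabilities"].items(),
                    key=lambda kv: kv[1], reverse=True)
    runner_up = ranked[1][0]
    return (f"{resp['face_shape']} "
            f"({resp['confidence']:.0%}, runner-up: {runner_up})")
```

A frontend would typically show `face_shape` and `confidence` in the AnalysisCard and the `measurements` object in the ResultSection.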
Ultra_Optimized_RandomForest.joblib ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a54fa289d61e99546b3c9c6a6e1812b7f52a7cc399c3e691aa6ca9de7f729673
+ size 304641
__pycache__/database.cpython-312.pyc ADDED
Binary file (2.25 kB)
app.py ADDED
@@ -0,0 +1,388 @@
+ from flask import Flask, request, render_template, Response, jsonify, url_for, send_from_directory
+ import cv2
+ import numpy as np
+ import joblib
+ import os
+ import warnings
+ from werkzeug.utils import secure_filename
+ import mediapipe as mp
+ from mediapipe.tasks import python
+ from mediapipe.tasks.python import vision
+ from mediapipe.framework.formats import landmark_pb2
+ from flask_cors import CORS
+ from dotenv import load_dotenv
+ # Import database functions
+ from database import upload_image_to_cloudinary, save_analysis_to_db
+
+ # Load environment variables
+ load_dotenv()
+
+ # Suppress specific deprecation warnings from protobuf
+ warnings.filterwarnings("ignore", category=UserWarning, module='google.protobuf')
+ app = Flask(__name__, template_folder='templates')
+ CORS(app)  # Enable CORS for all routes
+ app.config['ALLOWED_EXTENSIONS'] = {'png', 'jpg', 'jpeg'}  # used by allowed_file()
+
+ # Initialize MediaPipe Face Landmarker (only once)
+ base_options = python.BaseOptions(model_asset_path='face_landmarker_v2_with_blendshapes.task')
+ options = vision.FaceLandmarkerOptions(base_options=base_options,
+                                        output_face_blendshapes=True,
+                                        output_facial_transformation_matrixes=True,
+                                        num_faces=1)
+ face_landmarker = vision.FaceLandmarker.create_from_options(options)
+
+ # Initialize MediaPipe drawing utilities
+ mp_drawing = mp.solutions.drawing_utils
+ mp_drawing_styles = mp.solutions.drawing_styles
+
+ # Load the ultra-optimized model and scaler
+ print("Loading ultra-optimized model...")
+ face_shape_model = joblib.load('Ultra_Optimized_RandomForest.joblib')
+ scaler = joblib.load('ultra_optimized_scaler.joblib')
+ print("✅ Ultra-optimized model loaded successfully!")
+
+ def distance_3d(p1, p2):
+     """Calculate 3D Euclidean distance between two points."""
+     return np.linalg.norm(np.array(p1) - np.array(p2))
+
+ def smart_preprocess_image(image):
+     """
+     Smart image preprocessing to get the best face region.
+     This addresses the issue of users not providing perfect images.
+     """
+     h, w = image.shape[:2]
+
+     # First, try to detect face and get bounding box
+     rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
+     mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb_image)
+     detection_result = face_landmarker.detect(mp_image)
+
+     if detection_result.face_landmarks:
+         # Get face landmarks
+         face_landmarks = detection_result.face_landmarks[0]
+
+         # Calculate face bounding box with padding
+         x_coords = [landmark.x for landmark in face_landmarks]
+         y_coords = [landmark.y for landmark in face_landmarks]
+
+         # Convert normalized coordinates to pixel coordinates
+         x_min = int(min(x_coords) * w)
+         x_max = int(max(x_coords) * w)
+         y_min = int(min(y_coords) * h)
+         y_max = int(max(y_coords) * h)
+
+         # Add generous padding around face (40% padding for better context)
+         face_width = x_max - x_min
+         face_height = y_max - y_min
+         pad_x = int(face_width * 0.4)  # 40% padding
+         pad_y = int(face_height * 0.4)
+
+         # Calculate crop coordinates
+         x1 = max(0, x_min - pad_x)
+         x2 = min(w, x_max + pad_x)
+         y1 = max(0, y_min - pad_y)
+         y2 = min(h, y_max + pad_y)
+
+         # Crop the face region
+         face_crop = image[y1:y2, x1:x2]
+
+         # Resize to standard size while maintaining aspect ratio
+         target_size = 224
+         crop_h, crop_w = face_crop.shape[:2]
+
+         # Calculate scale to fit in target size
+         scale = min(target_size / crop_w, target_size / crop_h)
+         new_w = int(crop_w * scale)
+         new_h = int(crop_h * scale)
+
+         # Resize maintaining aspect ratio
+         resized = cv2.resize(face_crop, (new_w, new_h))
+
+         # Create final image with padding to exact target size
+         final_image = np.zeros((target_size, target_size, 3), dtype=np.uint8)
+
+         # Center the resized image
+         start_y = (target_size - new_h) // 2
+         start_x = (target_size - new_w) // 2
+         final_image[start_y:start_y + new_h, start_x:start_x + new_w] = resized
+
+         return final_image
+     else:
+         # If no face detected, just resize to standard size
+         return cv2.resize(image, (224, 224))
+
+ def extract_optimized_features(coords):
+     """
+     Extract optimized features for face shape detection.
+     Uses only the most important landmarks for efficiency.
+     """
+     # Key landmarks for face shape analysis
+     landmark_indices = {
+         'forehead_top': 10,
+         'forehead_left': 21,
+         'forehead_right': 251,
+         'cheek_left': 234,
+         'cheek_right': 454,
+         'jaw_left': 172,
+         'jaw_right': 397,
+         'chin': 152,
+     }
+
+     # Extract chosen points
+     lm = {name: coords[idx] for name, idx in landmark_indices.items()}
+
+     # Calculate key measurements
+     face_height = distance_3d(lm['forehead_top'], lm['chin'])
+     face_width = distance_3d(lm['cheek_left'], lm['cheek_right'])
+     jaw_width = distance_3d(lm['jaw_left'], lm['jaw_right'])
+     forehead_width = distance_3d(lm['forehead_left'], lm['forehead_right'])
+
+     # Calculate ratios (scale-invariant features)
+     width_to_height = face_width / face_height
+     jaw_to_forehead = jaw_width / forehead_width
+     jaw_to_face = jaw_width / face_width
+     forehead_to_face = forehead_width / face_width
+
+     # Additional shape features
+     face_area = face_width * face_height
+     jaw_angle = np.arctan2(lm['jaw_right'][1] - lm['jaw_left'][1],
+                            lm['jaw_right'][0] - lm['jaw_left'][0])
+
+     # Return optimized feature vector
+     features = np.array([
+         width_to_height,
+         jaw_to_forehead,
+         jaw_to_face,
+         forehead_to_face,
+         face_area,
+         jaw_angle
+     ])
+
+     return features
+
+ def get_face_shape_label(label):
+     shapes = ["Heart", "Oval", "Round", "Square", "Oblong"]
+     return shapes[label]
+
+ def draw_landmarks_on_image(rgb_image, detection_result):
+     face_landmarks_list = detection_result.face_landmarks
+     annotated_image = np.copy(rgb_image)
+
+     for idx in range(len(face_landmarks_list)):
+         face_landmarks = face_landmarks_list[idx]
+
+         # Create landmark proto
+         face_landmarks_proto = landmark_pb2.NormalizedLandmarkList()
+         face_landmarks_proto.landmark.extend([
+             landmark_pb2.NormalizedLandmark(x=landmark.x, y=landmark.y, z=landmark.z) for landmark in face_landmarks
+         ])
+
+         # Draw face landmarks
+         mp_drawing.draw_landmarks(
+             image=annotated_image,
+             landmark_list=face_landmarks_proto,
+             connections=mp.solutions.face_mesh.FACEMESH_TESSELATION,
+             landmark_drawing_spec=None,
+             connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_tesselation_style())
+         mp_drawing.draw_landmarks(
+             image=annotated_image,
+             landmark_list=face_landmarks_proto,
+             connections=mp.solutions.face_mesh.FACEMESH_CONTOURS,
+             landmark_drawing_spec=None,
+             connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_contours_style())
+         mp_drawing.draw_landmarks(
+             image=annotated_image,
+             landmark_list=face_landmarks_proto,
+             connections=mp.solutions.face_mesh.FACEMESH_IRISES,
+             landmark_drawing_spec=None,
+             connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_iris_connections_style())
+
+     return annotated_image
+
+ def allowed_file(filename):
+     return '.' in filename and filename.rsplit('.', 1)[1].lower() in app.config['ALLOWED_EXTENSIONS']
+
+ def generate_frames():
+     cap = cv2.VideoCapture(0)
+     while True:
+         ret, frame = cap.read()
+         if not ret:
+             break
+         rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
+         image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb_frame)
+         detection_result = face_landmarker.detect(image)
+
+         if detection_result.face_landmarks:
+             for face_landmarks in detection_result.face_landmarks:
+                 landmarks = [[lm.x, lm.y, lm.z] for lm in face_landmarks]
+                 landmarks = np.array(landmarks)
+                 face_features = extract_optimized_features(landmarks)
+
+                 # Normalize features using the scaler
+                 face_features_scaled = scaler.transform(face_features.reshape(1, -1))
+
+                 face_shape_label = face_shape_model.predict(face_features_scaled)[0]
+                 face_shape = get_face_shape_label(face_shape_label)
+                 annotated_image = draw_landmarks_on_image(rgb_frame, detection_result)
+                 cv2.putText(annotated_image, f"Face Shape: {face_shape}", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
+         else:
+             annotated_image = rgb_frame
+
+         # Convert back to BGR so the encoded JPEG colors are not swapped
+         annotated_image = cv2.cvtColor(annotated_image, cv2.COLOR_RGB2BGR)
+         ret, buffer = cv2.imencode('.jpg', annotated_image)
+         frame = buffer.tobytes()
+         yield (b'--frame\r\n'
+                b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
+
+ @app.route('/')
+ def index():
+     return render_template('index.html')
+
+ @app.route('/analyze', methods=['POST'])
+ def analyze_face():
+     if 'file' not in request.files:
+         return jsonify({"error": "No file part"}), 400
+
+     file = request.files['file']
+     if file.filename == '':
+         return jsonify({"error": "No selected file"}), 400
+
+     try:
+         # --- 1. Read image and smart preprocessing ---
+         img_bytes = file.read()
+         nparr = np.frombuffer(img_bytes, np.uint8)
+         img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
+
+         # Smart preprocessing: detect face and crop optimally
+         processed_img = smart_preprocess_image(img)
+
+         # Convert to RGB for MediaPipe
+         rgb_image = cv2.cvtColor(processed_img, cv2.COLOR_BGR2RGB)
+         mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb_image)
+
+         detection_result = face_landmarker.detect(mp_image)
+
+         if not detection_result.face_landmarks:
+             return jsonify({"error": "No face detected"}), 400
+
+         # --- 2. Get data, calculate features, and predict shape ---
+         face_landmarks = detection_result.face_landmarks[0]
+
+         # First, calculate the optimized features
+         landmarks_normalized = np.array([[lm.x, lm.y, lm.z] for lm in face_landmarks])
+         face_features = extract_optimized_features(landmarks_normalized)
+
+         # Normalize features using the scaler
+         face_features_scaled = scaler.transform(face_features.reshape(1, -1))
+
+         # Then, predict the shape using the scaled features
+         face_shape_label = face_shape_model.predict(face_features_scaled)[0]
+         face_shape = get_face_shape_label(face_shape_label)
+
+         # Get confidence scores
+         confidence_scores = face_shape_model.predict_proba(face_features_scaled)[0]
+         confidence = confidence_scores[face_shape_label]
+
+         # --- 3. Draw landmarks on the image ---
+         annotated_image_rgb = draw_landmarks_on_image(rgb_image, detection_result)
+         # cv2.putText(annotated_image_rgb, f"Face Shape: {face_shape}", (20, 50),
+         #             cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
+         # cv2.putText(annotated_image_rgb, f"Confidence: {confidence:.3f}", (20, 90),
+         #             cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
+
+         # --- 4. Upload the PROCESSED image to Cloudinary ---
+         annotated_image_bgr = cv2.cvtColor(annotated_image_rgb, cv2.COLOR_RGB2BGR)
+         _, buffer = cv2.imencode('.jpg', annotated_image_bgr)
+         processed_image_url = upload_image_to_cloudinary(buffer.tobytes())
+
+         if not processed_image_url:
+             return jsonify({"error": "Failed to upload processed image"}), 500
+
+         # --- 5. Calculate Measurements using CALIBRATED values ---
+         landmarks_normalized = np.array([[lm.x, lm.y, lm.z] for lm in face_landmarks])
+
+         # Define more accurate landmark points for measurements
+         p_iris_l = landmarks_normalized[473]       # Left Iris
+         p_iris_r = landmarks_normalized[468]       # Right Iris
+
+         p_forehead_top = landmarks_normalized[10]  # Top of forehead hairline
+         p_chin_tip = landmarks_normalized[152]     # Bottom of chin
+
+         p_cheek_l = landmarks_normalized[234]      # Left cheekbone edge
+         p_cheek_r = landmarks_normalized[454]      # Right cheekbone edge
+
+         p_jaw_l = landmarks_normalized[172]        # Left jaw point
+         p_jaw_r = landmarks_normalized[397]        # Right jaw point
+
+         p_forehead_l = landmarks_normalized[63]    # Left forehead edge
+         p_forehead_r = landmarks_normalized[293]   # Right forehead edge
+
+         # IPD-based calibration
+         AVG_IPD_CM = 6.3
+         dist_iris = distance_3d(p_iris_l, p_iris_r)
+         cm_per_unit = AVG_IPD_CM / dist_iris if dist_iris != 0 else 0
+
+         # Calculate all distances
+         dist_face_length = distance_3d(p_forehead_top, p_chin_tip)
+         dist_cheek_width = distance_3d(p_cheek_l, p_cheek_r)
+         dist_jaw_width = distance_3d(p_jaw_l, p_jaw_r)
+         dist_forehead_width = distance_3d(p_forehead_l, p_forehead_r)
+
+         # Convert to cm and apply calibration adjustments
+         face_length_cm = (dist_face_length * cm_per_unit) + 5.0        # +5.0cm calibration
+         cheekbone_width_cm = (dist_cheek_width * cm_per_unit) + 4.0    # +4.0cm calibration
+         jaw_width_cm = (dist_jaw_width * cm_per_unit) + 0.5            # +0.5cm calibration
+         forehead_width_cm = (dist_forehead_width * cm_per_unit) + 6.0  # +6.0cm calibration
+
+         # Jaw curve ratio is a relative measure, so it doesn't need cm conversion
+         jaw_curve_ratio = dist_face_length / dist_cheek_width if dist_cheek_width != 0 else 0
+
+         measurements = {
+             "face_length_cm": float(face_length_cm),
+             "cheekbone_width_cm": float(cheekbone_width_cm),
+             "jaw_width_cm": float(jaw_width_cm),
+             "forehead_width_cm": float(forehead_width_cm),
+             "jaw_curve_ratio": float(jaw_curve_ratio)
+         }
+
+         # --- 6. Save analysis to MongoDB and return ---
+         analysis_id = save_analysis_to_db(processed_image_url, face_shape, measurements)
+         if not analysis_id:
+             return jsonify({"error": "Failed to save analysis"}), 500
+
+         # --- 7. Return the complete result ---
+         return jsonify({
+             "message": "Analysis successful",
+             "analysis_id": analysis_id,
+             "image_url": processed_image_url,
+             "face_shape": face_shape,
+             "confidence": float(confidence),
+             "all_probabilities": {
+                 "Heart": float(confidence_scores[0]),
+                 "Oval": float(confidence_scores[1]),
+                 "Round": float(confidence_scores[2]),
+                 "Square": float(confidence_scores[3]),
+                 "Oblong": float(confidence_scores[4])
+             },
+             "measurements": measurements,
+             "calibration_applied": {
+                 "face_length_adjustment": "+5.0cm",
+                 "forehead_width_adjustment": "+6.0cm",
+                 "cheekbone_width_adjustment": "+4.0cm",
+                 "jaw_width_adjustment": "+0.5cm",
+                 "note": "Calibration adjustments increased based on user feedback"
+             }
+         })
+
+     except Exception as e:
+         print(f"An error occurred: {e}")
+         return jsonify({"error": f"An error occurred: {str(e)}"}), 500
+
+ @app.route('/video_feed')
+ def video_feed():
+     return Response(generate_frames(), mimetype='multipart/x-mixed-replace; boundary=frame')
+
+ @app.route('/real_time')
+ def real_time():
+     return render_template('real_time.html')
+
+ if __name__ == '__main__':
+     app.run(debug=True, port=5000)
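The IPD calibration step in `/analyze` can be checked in isolation: the backend scales every normalized landmark distance by `AVG_IPD_CM / dist_iris`, using the 6.3 cm average inter-pupillary distance as the ruler. A minimal sketch with made-up landmark values (the helper names are illustrative, only the constant and formula come from `app.py`):

```python
import numpy as np

AVG_IPD_CM = 6.3  # same constant as in app.py


def distance_3d(p1, p2):
    """3D Euclidean distance between two points."""
    return float(np.linalg.norm(np.array(p1) - np.array(p2)))


def calibrated_cm(raw_dist: float, iris_dist: float) -> float:
    """Convert a normalized landmark distance to cm via the IPD scale."""
    cm_per_unit = AVG_IPD_CM / iris_dist if iris_dist != 0 else 0.0
    return raw_dist * cm_per_unit
```

For example, with an iris-to-iris distance of 0.063 in normalized units, one unit corresponds to 100 cm, so a face length of 0.18 units maps to 18 cm before the fixed per-measurement offsets are added.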
database.py ADDED
@@ -0,0 +1,46 @@
+ import os
+ import cloudinary
+ import cloudinary.uploader
+ from pymongo import MongoClient
+ from dotenv import load_dotenv
+ from datetime import datetime
+
+ # Load environment variables from .env file
+ load_dotenv()
+
+ # --- Cloudinary Configuration ---
+ cloudinary.config(
+     cloud_name=os.getenv("CLOUDINARY_CLOUD_NAME"),
+     api_key=os.getenv("CLOUDINARY_API_KEY"),
+     api_secret=os.getenv("CLOUDINARY_API_SECRET")
+ )
+
+ # --- MongoDB Configuration ---
+ MONGO_URI = os.getenv("MONGO_URI")
+ client = MongoClient(MONGO_URI)
+ db = client.get_database("face_shape_db")            # You can name your database
+ analyses_collection = db.get_collection("analyses")  # You can name your collection
+
+ def upload_image_to_cloudinary(image_file):
+     """Uploads an image file to Cloudinary and returns the secure URL."""
+     try:
+         upload_result = cloudinary.uploader.upload(image_file)
+         return upload_result.get("secure_url")
+     except Exception as e:
+         print(f"Error uploading to Cloudinary: {e}")
+         return None
+
+ def save_analysis_to_db(image_url, face_shape, measurements):
+     """Saves the analysis results to MongoDB."""
+     try:
+         analysis_data = {
+             "image_url": image_url,
+             "face_shape": face_shape,
+             "measurements": measurements,
+             "created_at": datetime.utcnow()
+         }
+         result = analyses_collection.insert_one(analysis_data)
+         return str(result.inserted_id)
+     except Exception as e:
+         print(f"Error saving to MongoDB: {e}")
+         return None
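The document shape that `save_analysis_to_db` inserts can be built and inspected without a live MongoDB connection; a hypothetical helper for that, not a function in the repo (using a timezone-aware timestamp rather than the deprecated `datetime.utcnow`):

```python
from datetime import datetime, timezone


def build_analysis_doc(image_url: str, face_shape: str, measurements: dict) -> dict:
    """Assemble the analysis document that gets inserted into MongoDB."""
    return {
        "image_url": image_url,
        "face_shape": face_shape,
        "measurements": measurements,
        "created_at": datetime.now(timezone.utc),  # aware UTC timestamp
    }
```

Keeping the measurements nested under one key means new measurement fields can be added later without a schema migration.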
face_landmarker_v2_with_blendshapes.task ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:64184e229b263107bc2b804c6625db1341ff2bb731874b0bcc2fe6544e0bc9ff
+ size 3758596
requirements.txt ADDED
@@ -0,0 +1,13 @@
+ flask==2.3.3
+ flask-cors==4.0.0
+ opencv-python==4.8.0.76
+ mediapipe==0.10.8
+ numpy==1.24.3
+ scikit-learn==1.3.2
+ joblib==1.3.2
+ python-dotenv==1.0.1
+ cloudinary==1.40.0
+ pymongo==4.7.2
+ werkzeug==3.0.3
+ gunicorn==22.0.0
+ watchdog==3.0.0
runtime.txt ADDED
@@ -0,0 +1 @@
+ python-3.10.12
templates/index.html ADDED
@@ -0,0 +1,12 @@
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+     <meta charset="UTF-8">
+     <meta name="viewport" content="width=device-width, initial-scale=1.0">
+     <title>Face Shape Analysis</title>
+ </head>
+ <body>
+     <h1>Face Shape Analysis API</h1>
+     <p>This is the Flask backend API. Use the Next.js frontend for the full application.</p>
+ </body>
+ </html>
ultra_optimized_scaler.joblib ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0690f84a2c10184ecb9c289fcc61938d88acb44846d0c19147f888254d3ff23
+ size 586