---
tags:
- image-to-image
- pytorch
- computer-vision
- face-verification
---
# AdvancedFaceVerifyAI Model - QuantaSparkLabs

This model, `AdvancedFaceVerifyAI`, is designed for multi-task facial analysis, including age prediction, gender classification, and emotion classification.

## Model Architecture
The model features a convolutional backbone followed by shared fully connected layers and separate output heads for each task.

## Training Details
- **Dataset**: Custom Structured Synthetic Face Dataset
- **Number of Samples**: 8000 training, 2000 validation
- **Backbone**: Custom CNN architecture
- **Number of Epochs**: 50
- **Optimizer**: Adam with learning rate 0.0001
- **Learning Rate Scheduler**: ReduceLROnPlateau (mode='min', factor=0.5, patience=5)
- **Loss Functions**:
  - Age: Mean Squared Error (MSE)
  - Gender: Cross-Entropy Loss
  - Emotion: Cross-Entropy Loss

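The optimizer, scheduler, and per-task losses above can be sketched as follows; the placeholder model and the equal loss weighting are illustrative assumptions, not taken from the original training script:

```python
import torch
import torch.nn as nn

# Placeholder network standing in for AdvancedFaceVerifyAI (assumption)
model = nn.Linear(10, 5)

# Adam with lr=0.0001, as listed in the training details
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)

# ReduceLROnPlateau with the documented settings: halves the learning rate
# after 5 epochs with no improvement in the monitored (validation) loss
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5, patience=5)

mse_loss = nn.MSELoss()          # age regression
ce_loss = nn.CrossEntropyLoss()  # gender and emotion classification

# Per batch, the three task losses would be combined, e.g. with equal
# weights (an assumption):
#   total = mse_loss(age_out.squeeze(1), age_target) \
#         + ce_loss(gender_out, gender_target) \
#         + ce_loss(emotion_out, emotion_target)
# and the scheduler stepped once per epoch on the validation loss:
#   scheduler.step(val_loss)
```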
## Performance Metrics (Final Validation Epoch)
- **Gender Accuracy**: 100.00%
- **Emotion Accuracy**: 100.00%
- **Age MAE**: 0.0990

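As a reference for how such numbers are computed, a minimal sketch of the metric calculations (argmax over logits for accuracy, mean absolute error for the age head); the tensors here are made-up examples, not model outputs:

```python
import torch

# Illustrative outputs for a batch of two samples (made-up values)
age_out = torch.tensor([[25.1], [40.3]])             # age head, shape (N, 1)
gender_out = torch.tensor([[2.0, 0.1], [0.3, 1.5]])  # gender logits, shape (N, 2)
age_target = torch.tensor([25.0, 40.0])
gender_target = torch.tensor([0, 1])

# Age MAE: mean absolute difference between predictions and targets
age_mae = (age_out.squeeze(1) - age_target).abs().mean().item()

# Classification accuracy: fraction of argmax predictions matching targets
gender_acc = (gender_out.argmax(dim=1) == gender_target).float().mean().item()
```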
## Usage
To load and use this model (after downloading `best_advanced_face_verify_ai_model.pth` from the Hugging Face repo):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from huggingface_hub import hf_hub_download

# Define the model architecture (as it was during training)
class AdvancedFaceVerifyAI(nn.Module):
    def __init__(self, num_gender_classes=2, num_emotion_classes=3):
        super(AdvancedFaceVerifyAI, self).__init__()
        # Convolutional backbone: three conv/BN blocks with 2x2 max pooling
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(32)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(64)
        self.conv3 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
        self.bn3 = nn.BatchNorm2d(128)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.dropout = nn.Dropout(0.3)
        # Flattened size after three 2x downsamplings of a 64x64 input
        self._to_linear = 128 * (64 // (2**3)) * (64 // (2**3))
        # Shared fully connected layers
        self.fc1 = nn.Linear(self._to_linear, 512)
        self.fc_bn1 = nn.BatchNorm1d(512)
        self.fc2 = nn.Linear(512, 256)
        self.fc_bn2 = nn.BatchNorm1d(256)
        # Separate output heads for each task
        self.age_head = nn.Linear(256, 1)
        self.gender_head = nn.Linear(256, num_gender_classes)
        self.emotion_head = nn.Linear(256, num_emotion_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.bn1(self.conv1(x))))
        x = self.dropout(x)
        x = self.pool(F.relu(self.bn2(self.conv2(x))))
        x = self.dropout(x)
        x = self.pool(F.relu(self.bn3(self.conv3(x))))
        x = self.dropout(x)
        x = x.view(-1, self._to_linear)
        x = F.relu(self.fc_bn1(self.fc1(x)))
        x = self.dropout(x)
        x = F.relu(self.fc_bn2(self.fc2(x)))
        x = self.dropout(x)
        age_out = self.age_head(x)
        gender_out = self.gender_head(x)
        emotion_out = self.emotion_head(x)
        return age_out, gender_out, emotion_out

# Instantiate the model
loaded_model = AdvancedFaceVerifyAI(num_gender_classes=2, num_emotion_classes=3)

# Download the checkpoint and load the state dictionary
model_path = hf_hub_download(repo_id="QuantaSparkLabs/FaceVerifyAI-Advanced",
                             filename="best_advanced_face_verify_ai_model.pth")
loaded_model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu')))
loaded_model.eval()

print("Model loaded successfully!")

# Example prediction (assuming 'image_tensor' is your preprocessed input image)
# with torch.no_grad():
#     age_pred, gender_pred, emotion_pred = loaded_model(image_tensor)
#     print(f"Predicted Age: {age_pred.item():.2f}")
#     print(f"Predicted Gender: {torch.argmax(gender_pred).item()}")
#     print(f"Predicted Emotion: {torch.argmax(emotion_pred).item()}")
```