# Abaya & Thobe Image Classifier
A fine-tuned MobileNetV2 model that classifies garment images as Abaya or Thobe.
## Model Details
| Property | Value |
|---|---|
| Base model | MobileNetV2 (ImageNet pretrained) |
| Task | Binary image classification |
| Framework | PyTorch |
| Input size | 224 × 224 RGB |
| Output classes | Abaya, Thobe |
## Architecture
The backbone (MobileNetV2) was frozen. Only the custom classifier head was trained:
`Dropout(0.3) → Linear(1280 → 128) → ReLU → Dropout(0.2) → Linear(128 → 2)`
## Training
| Setting | Value |
|---|---|
| Epochs | 15 |
| Optimizer | Adam |
| Learning rate | 1e-3 |
| Weight decay | 1e-4 |
| Loss | CrossEntropyLoss |
| Dataset | ~500 crawled garment images (Abaya & Thobe) |
## Labels

| Index | Class |
|---|---|
| 0 | Abaya |
| 1 | Thobe |
## Usage
```python
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as transforms
from huggingface_hub import hf_hub_download
from PIL import Image

# Download the fine-tuned weights from the Hub.
weights = hf_hub_download("Resham2987/abaya-and-thobes-classifier", "pytorch_model.bin")

# Rebuild the architecture: MobileNetV2 backbone with the custom classifier head.
model = models.mobilenet_v2(weights=None)
model.classifier = nn.Sequential(
    nn.Dropout(0.3), nn.Linear(1280, 128),
    nn.ReLU(), nn.Dropout(0.2), nn.Linear(128, 2),
)
model.load_state_dict(torch.load(weights, map_location="cpu"))
model.eval()

# Standard ImageNet preprocessing, matching the 224 x 224 input size.
tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = Image.open("your_image.jpg").convert("RGB")
with torch.no_grad():
    probs = torch.softmax(model(tf(img).unsqueeze(0)), dim=1)[0]

labels = ["Abaya", "Thobe"]
print(f"{labels[probs.argmax()]}: {probs.max():.1%} confidence")
```