---
license: gpl-3.0
language:
- en
metrics:
- accuracy
- sensitivity
- specificity
base_model: google/mobilenet_v2_1.0_224
pipeline_tag: image-classification
library_name: pytorch
tags:
- medical
- oral-cancer
- healthcare
- mobilenet
- image-classification
- pytorch
eval_results:
  accuracy: 0.95
  sensitivity: 0.93
  specificity: 0.91
---

# Umlomo – Oral Cancer Detection Model

This model is a fine‑tuned **MobileNetV2** for binary classification of oral cavity images into **Normal** or **Oral Cancer**. It is part of the MySmile project, an AI‑powered oral health screening tool designed to empower individuals with early risk assessment.

## Model Details

- **Base Architecture:** MobileNetV2 (pretrained on ImageNet)
- **Fine‑tuned Dataset:** Curated oral images (normal and cancerous)
- **Input Size:** 224×224 RGB
- **Output:** Two classes – `Normal` and `Oral Cancer`
- **Framework:** PyTorch

## Intended Use

This model is intended for research and educational purposes within the MySmile screening application. It provides a preliminary risk assessment and is **not a substitute for professional medical diagnosis**.

## How to Use

### Installation
```bash
pip install torch torchvision pillow
```

### Inference
```python
import torch
from torchvision import transforms
from PIL import Image

# Load model
model = torch.hub.load('mysmile/umlomo', 'model', trust_repo=True)
model.eval()

# Preprocess image
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])

image = Image.open('oral_photo.jpg').convert('RGB')
input_tensor = transform(image).unsqueeze(0)

# Inference
with torch.no_grad():
    outputs = model(input_tensor)
    probs = torch.softmax(outputs, dim=1)
    pred_idx = torch.argmax(probs, dim=1).item()

class_names = ['Normal', 'Oral Cancer']
print(f"Prediction: {class_names[pred_idx]}, Confidence: {probs[0][pred_idx]:.2f}")
```
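
The metadata above reports accuracy, sensitivity, and specificity. If you evaluate the model on your own labeled set, these can be computed from the binary confusion matrix; below is a minimal sketch (the helper `binary_metrics` and the convention that label `1` means `Oral Cancer` are assumptions for illustration):

```python
def binary_metrics(y_true, y_pred, positive=1):
    """Accuracy, sensitivity (recall on the positive class), specificity.

    Assumes the positive class (default: 1) denotes 'Oral Cancer'.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # true positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0  # true negative rate
    return accuracy, sensitivity, specificity
```

For a screening tool, sensitivity (catching true cancer cases) is typically weighted more heavily than specificity, which is why both are reported alongside accuracy.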
