# HRNet Cephalometric Landmark Detection

This model automatically detects 19 anatomical landmarks in lateral cephalometric radiographs using the HRNet-W32 architecture.
## 🦷 Model Description
- Architecture: HRNet-W32 (High-Resolution Network)
- Task: 19-point cephalometric landmark detection
- Dataset: ISBI Lateral Cephalograms
- Input Size: 768×768 pixels
- Output: 19 landmark coordinates (x, y)
- Model Size: 331.1 MB
## Landmarks Detected
- Sella turcica - Center of pituitary fossa
- Nasion - Frontonasal suture
- Orbitale - Lowest point of orbital cavity
- Porion - Highest point of the external acoustic meatus
- Subspinale (Point A) - Deepest midline point on maxilla
- Supramentale (Point B) - Deepest midline point on mandible
- Pogonion - Most prominent midline point of chin
- Menton - Lowest point of mandibular symphysis
- Gnathion - Midpoint between Pogonion and Menton
- Gonion - Corner of the jaw angle
- Lower Incisor Tip - Tip of lower central incisor
- Upper Incisor Tip - Tip of upper central incisor
- Upper Lip - Most prominent point of upper lip
- Lower Lip - Most prominent point of lower lip
- Subnasale - Junction between nose and upper lip
- Soft Tissue Pogonion - Most prominent point of chin in profile
- Posterior Nasal Spine - Tip of posterior nasal spine
- Anterior Nasal Spine - Tip of anterior nasal spine
- Articulare - Junction of temporal bone and mandible
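To map the model's 19-point output back to names, the list above can be encoded as an index table. This is a convenience sketch: the ordering below simply mirrors the list above, and is assumed (not verified) to match the model's output channel order, so check it against the repository's training code before relying on it.

```python
# Names of the 19 landmarks, in the order listed above.
# ASSUMPTION: this matches the model's output channel order --
# confirm against the repository's training code.
LANDMARK_NAMES = [
    "Sella turcica", "Nasion", "Orbitale", "Porion",
    "Subspinale (Point A)", "Supramentale (Point B)",
    "Pogonion", "Menton", "Gnathion", "Gonion",
    "Lower Incisor Tip", "Upper Incisor Tip",
    "Upper Lip", "Lower Lip", "Subnasale",
    "Soft Tissue Pogonion", "Posterior Nasal Spine",
    "Anterior Nasal Spine", "Articulare",
]

def name_landmarks(coords):
    """Pair each predicted (x, y) with its landmark name."""
    assert len(coords) == len(LANDMARK_NAMES)
    return dict(zip(LANDMARK_NAMES, coords))
```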
## Usage
### Quick Start with Streamlit
```python
import streamlit as st
import torch
from huggingface_hub import hf_hub_download

@st.cache_resource
def load_model():
    # Download the checkpoint from the Hub
    model_path = hf_hub_download(
        repo_id="cwlachap/hrnet-cephalometric-landmark-detection",
        filename="best_model.pth"
    )
    # Build the HRNet-W32 model (get_hrnet_w32 and config come
    # from this repository's model code)
    model = get_hrnet_w32(config)
    checkpoint = torch.load(model_path, map_location='cpu')
    model.load_state_dict(checkpoint['model_state_dict'])
    model.eval()
    return model

model = load_model()
```
### Python API
```python
import torch
from huggingface_hub import hf_hub_download

# Download the checkpoint from the Hub
model_path = hf_hub_download(
    repo_id="cwlachap/hrnet-cephalometric-landmark-detection",
    filename="best_model.pth"
)

# Build and load the model (get_hrnet_w32 and config come from
# this repository's model code)
model = get_hrnet_w32(config)
checkpoint = torch.load(model_path, map_location='cpu')
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()

# Perform inference on a preprocessed 768x768 input tensor
with torch.no_grad():
    landmarks = model(input_image)
```
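Depending on how the repository defines its model head, the raw network output may be a stack of 19 per-landmark heatmaps rather than coordinates directly. Below is a minimal NumPy sketch of the common argmax decoding, assuming a heatmap output of shape `(19, H, W)`; the shape and the need for this step are assumptions to verify against the repository's inference code. If the model already returns `(x, y)` pairs, skip this.

```python
import numpy as np

def decode_heatmaps(heatmaps, orig_w, orig_h):
    """Convert (19, H, W) heatmaps to (x, y) pixel coordinates
    in the original image via per-channel argmax."""
    n, h, w = heatmaps.shape
    coords = []
    for k in range(n):
        idx = np.argmax(heatmaps[k])      # flat index of the peak
        y, x = divmod(idx, w)             # recover row/column
        # Rescale from heatmap grid to original image resolution
        coords.append((x * orig_w / w, y * orig_h / h))
    return coords
```

A more refined decoder would interpolate around the peak for sub-pixel accuracy, but the argmax version shows the idea.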
## Performance

- Mean Radial Error (MRE): ~1.2-1.6 mm
- Successful Detection Rate (SDR@2mm): ~80-85%
- Successful Detection Rate (SDR@2.5mm): ~88-92%
- Training Time: ~15-20 hours on RTX 4070 Ti SUPER
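The metrics above have standard definitions: MRE is the mean Euclidean distance between predicted and ground-truth landmarks in millimeters, and SDR@t is the fraction of landmarks falling within t mm. A sketch for reproducing them on your own data (the default pixel spacing is an assumption; ISBI cephalograms are commonly quoted at 0.1 mm/pixel, but use your images' actual spacing):

```python
import numpy as np

def mre_sdr(pred, gt, pixel_spacing_mm=0.1, thresholds=(2.0, 2.5)):
    """Mean Radial Error (mm) and SDR@t for landmark arrays
    of shape (N, 19, 2), given in pixels."""
    # Euclidean distance per landmark, converted to mm
    dists = np.linalg.norm(pred - gt, axis=-1) * pixel_spacing_mm
    mre = dists.mean()
    sdr = {t: (dists <= t).mean() for t in thresholds}
    return mre, sdr
```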
## Applications
- Orthodontic Treatment Planning: Automated cephalometric analysis
- Research: Large-scale cephalometric studies
- Education: Teaching cephalometric landmark identification
- Clinical Decision Support: Assisting radiological assessment
## ⚠️ Limitations
- Designed for lateral cephalometric radiographs only
- Performance may vary on images with different acquisition parameters
- Intended for research and educational purposes
- Clinical use requires validation by qualified professionals
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{hrnet-cephalometric-2024,
  title={HRNet for Cephalometric Landmark Detection},
  author={cwlachap},
  year={2024},
  url={https://huggingface.co/cwlachap/hrnet-cephalometric-landmark-detection}
}
```
## License
This model is released under the MIT License, making it free for both academic and commercial use.
## Contributing
This is an open-source project! Contributions, issues, and feature requests are welcome.
- Repository: [GitHub Repository URL]
- Issues: [GitHub Issues URL]
- Discussions: Use the Community tab above
## Acknowledgments
- ISBI Challenge for providing the cephalometric dataset
- HRNet authors for the excellent architecture
- The medical imaging community for advancing automated analysis techniques
*Built with ❤️ for the medical imaging community*