Research Project 🧠
Models for my research project on breast cancer detection.
Image Classification • Note: Using VGG16
keanteng/densenet201-breast-cancer-classification-0603
Image Classification • Note: Using DenseNet-201
keanteng/swin-v2-breast-cancer-classification-0602
Image Classification • Note: First run. Swin-V2-Base pretrained on ImageNet-1K
keanteng/swin-v2-breast-cancer-classification-0603
Image Classification • Note: Second run. Swin-V2-Base pretrained on ImageNet-1K | different learning rate than 0602
keanteng/swin-v2-large-ft-breast-cancer-classification-0603
Image Classification • Note: Base model pretrained on ImageNet-22K and fine-tuned on ImageNet-1K
keanteng/swin-v2-large-breast-cancer-classification-0603
Image Classification • Note: Base model pretrained on ImageNet-22K only
keanteng/efficientnet-b7-breast-cancer-classification-0603
Image Classification • Note: Poor performance for unknown reasons; possibly an architectural issue ⚠️. Validation loss was infinite when using auto_grad, and stayed high after the fix
keanteng/efficientnet-b7-breast-cancer-classification-0603-2
Image Classification • Note: Poor performance for unknown reasons; possibly an architectural issue ⚠️. Validation loss was infinite when using auto_grad, and stayed high after the fix
keanteng/efficientnet-b7-breast-cancer-classification-0603-3
Image Classification • Note: Poor performance for unknown reasons; possibly an architectural issue ⚠️. Validation loss was infinite when using auto_grad, and stayed high after the fix
keanteng/efficientnet-b0-breast-cancer-classification-0604-1
Image Classification • Note: Slight performance increase. EfficientNet-B7's poor performance was due to an incorrect image size. Added a weight penalty, adjusted parameters, and improved image preprocessing
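The image-size issue above is worth guarding against: each EfficientNet variant was designed for a specific square input resolution under the paper's compound scaling (224 px for B0, 600 px for B7), so feeding B0-sized crops to B7 is a mismatch. A minimal sketch of such a guard, using a hypothetical `check_input_size` helper (not part of the actual training code):

```python
# Native square input resolutions per variant, from the EfficientNet
# paper's compound-scaling table (only the two variants used here).
INPUT_SIZE = {
    "efficientnet-b0": 224,
    "efficientnet-b7": 600,
}

def check_input_size(variant: str, height: int, width: int) -> bool:
    """Return True if an image matches the variant's expected resolution."""
    expected = INPUT_SIZE[variant]
    return height == expected and width == expected

# 224x224 crops are fine for B0 but wrong for B7:
assert check_input_size("efficientnet-b0", 224, 224)
assert not check_input_size("efficientnet-b7", 224, 224)
```

Running such a check once before training would have surfaced the B7 mismatch immediately instead of after three failed runs.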
keanteng/efficientnet-b0-breast-cancer-classification-0604-2
Image Classification • Note: Better, but not the best yet. LR = 1e-4 with an accumulation step of 4 seems to be the sweet spot for this model. Validation accuracy can sometimes cross 50%, which is encouraging
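Why an accumulation step of 4 helps: gradient accumulation emulates a 4x larger batch by scaling each micro-batch gradient by 1/4 and summing before the optimizer step, which for a mean loss reproduces the full-batch gradient exactly. A minimal pure-Python sketch with a toy linear model and a hypothetical `grad` helper (not the actual training loop):

```python
LR = 1e-4        # learning rate from the note above
ACCUM_STEPS = 4  # accumulation step from the note above

def grad(w, batch):
    """Gradient of mean squared error for the toy model y_hat = w * x."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w = 0.0

# One full-batch gradient...
g_full = grad(w, data)

# ...equals the sum of micro-batch gradients, each scaled by 1/ACCUM_STEPS.
micro_batches = [data[i:i + 1] for i in range(len(data))]
g_accum = sum(grad(w, mb) / ACCUM_STEPS for mb in micro_batches)

assert abs(g_full - g_accum) < 1e-12

# The optimizer then steps once every ACCUM_STEPS micro-batches.
w -= LR * g_accum
```

The same arithmetic is what frameworks do when you divide the loss by the accumulation steps and call the optimizer only every fourth micro-batch.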