
# 🫧 Microbubble Distilled Student Model (v2)

TinyBubbleNet — a 389K-parameter distilled student for microbubble segmentation.

| Property | Value |
|---|---|
| Architecture | Depthwise-separable U-Net (Cellpose-compatible) |
| Parameters | 388,580 |
| Size | ~4.6 MB checkpoint (fp32 weights alone ≈ 1.5 MB) |
| `base_ch` | 16 |
| Input | Grayscale microscopy images |
| Output | 4 channels: `dY`, `dX`, `cell_prob`, `dist_transform` |

## Training Summary

- Teacher: Cellpose-SAM-FT (300M params, 1.1 GB) — fine-tuned on 2-3 annotated microbubble images
- Dataset: 50 lab images from callumtilbury/microbubble-images
- Pseudo-labels: generated by the teacher on all 50 images
- Student: `TinyBubbleNet(base_ch=16, depthwise=True)`
- Epochs: 300 (best at epoch 282)
- Best val loss: 0.00083
- Training time: ~30 min on an NVIDIA A10G
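
The actual loss, dataset, and training loop live in `model_and_train.py`. As a rough sketch of the setup — assuming a plain per-pixel MSE against the teacher's pseudo-labels, with an illustrative optimizer and learning rate rather than the trained configuration:

```python
import torch
import torch.nn.functional as F

from model_and_train import TinyBubbleNet


def distill_loss(student_out, pseudo_labels):
    # Per-pixel MSE across all 4 channels (dY, dX, cell_prob, dist_transform).
    # Illustrative only; the real objective is defined in model_and_train.py.
    return F.mse_loss(student_out, pseudo_labels)


model = TinyBubbleNet(in_channels=1, base_ch=16, out_channels=4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)  # assumed, not the trained config

# One training step on a dummy batch, standing in for the pseudo-label DataLoader:
images = torch.randn(2, 1, 256, 256)         # grayscale crops
pseudo_labels = torch.randn(2, 4, 256, 256)  # teacher outputs for the same crops

optimizer.zero_grad()
loss = distill_loss(model(images), pseudo_labels)
loss.backward()
optimizer.step()
```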

## Pipeline

Cellpose-SAM-FT → pseudo-labels on 50 lab images → TinyBubbleNet distillation
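
A hedged sketch of the pseudo-labelling step using the Cellpose Python API — the teacher weights filename is a placeholder, the `flows` unpacking follows Cellpose's documented `[flow_rgb, dP, cellprob, ...]` layout (check it against your installed version), and the `dist_transform` channel is derived separately in `model_and_train.py`, so it is omitted here:

```python
import numpy as np
from cellpose import io, models

# Load the fine-tuned teacher; the weights name below is a placeholder.
teacher = models.CellposeModel(gpu=True, pretrained_model="cellpose_sam_ft")

img = io.imread("lab_image_001.tif")  # one grayscale lab frame
masks, flows, styles = teacher.eval(img)

dY_dX = flows[1]      # (2, H, W) flow field
cellprob = flows[2]   # (H, W) cell probability map
pseudo_label = np.concatenate([dY_dX, cellprob[None]], axis=0)
np.save("pseudo_label_001.npy", pseudo_label)
```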

## Usage

```python
import torch

from model_and_train import TinyBubbleNet  # the model class is defined in model_and_train.py

# Instantiate the student and load the distilled weights.
model = TinyBubbleNet(in_channels=1, base_ch=16, out_channels=4)
checkpoint = torch.load("best_model.pt", map_location="cpu")
model.load_state_dict(checkpoint["model_state_dict"])
model.eval()

# For a full inference walkthrough, see:
# https://huggingface.co/callumtilbury/bubble-distill
```
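
As a minimal sketch of a forward pass with the model loaded above — input normalization, whether `cell_prob` is a logit, and the mask-reconstruction step are assumptions; instance masks need Cellpose-style flow-following, covered in the linked repo:

```python
import numpy as np
import torch

# Dummy grayscale frame standing in for a real microscopy image, scaled to [0, 1].
img = np.random.rand(256, 256).astype(np.float32)
x = torch.from_numpy(img)[None, None]  # (1, 1, H, W)

with torch.no_grad():
    dY, dX, cell_prob, dist = model(x)[0]  # unpack the 4 output channels

# Crude foreground mask; whether cell_prob needs a sigmoid depends on how
# the loss in model_and_train.py treats that channel.
foreground = torch.sigmoid(cell_prob) > 0.5
# Instance masks require Cellpose-style flow-following on (dY, dX).
```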

## Files

- `best_model.pt` — trained model weights (epoch 282, val_loss=0.00083)
- `model_and_train.py` — full training script with model, loss, and dataset definitions

## Why This Matters

| Model | Params | Size | Inference @ 256² |
|---|---|---|---|
| Cellpose-SAM | ~300M | 1.1 GB | ~100 ms |
| TinyBubbleNet | 389K | ~1.5 MB | ~3 ms |

~750× smaller, ~33× faster — specialized for your exact lab domain.
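
Latency depends on your hardware, so treat the table above as indicative. A quick way to measure the student's 256² latency yourself, reusing `model` from the Usage snippet:

```python
import time

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()
x = torch.randn(1, 1, 256, 256, device=device)

with torch.no_grad():
    for _ in range(10):  # warm-up iterations
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(100):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()

elapsed_ms = (time.perf_counter() - start) / 100 * 1e3
print(f"~{elapsed_ms:.2f} ms per 256x256 frame on {device}")
```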

When the image style changes, re-annotate 2-3 images, re-finetune the teacher, regenerate the pseudo-labels, and re-train the student in ~30 minutes.


Distilled from Cellpose-SAM-FT using pseudo-labels on microbubble-images.
