# Masked Autoencoders are Scalable Learners of Cellular Morphology
Official repo for Recursion's two recently accepted papers:
- Spotlight full-length paper at CVPR 2024: Masked Autoencoders for Microscopy are Scalable Learners of Cellular Biology
  - Paper: link to be shared soon!
- Spotlight workshop paper at the NeurIPS 2023 Generative AI & Biology workshop
## Provided code
See the repo for the ingredients required to define our MAEs. Users seeking to re-implement training will need to stitch together the Encoder and Decoder modules according to their use case; one way that wiring could look is sketched below.
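The following is a minimal sketch of that wiring, following the standard MAE pretraining recipe. The `Encoder`/`Decoder` interfaces shown here (visible-patch masking, `ids_restore`, a `patchify` helper) are assumptions for illustration, not the repo's exact signatures:

```python
import torch
import torch.nn as nn


class MAE(nn.Module):
    """Hypothetical glue module pairing an Encoder with a Decoder."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # embeds only the visible (unmasked) patches
        self.decoder = decoder  # reconstructs all patches from the latents

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # Assumed interface: the encoder returns latents for visible patches,
        # the binary mask (1 = masked), and the ids needed to restore order.
        latents, mask, ids_restore = self.encoder(images)
        pred = self.decoder(latents, ids_restore)   # per-patch pixel predictions
        target = self.encoder.patchify(images)      # assumed patchify helper
        # MAE computes the reconstruction loss on masked patches only.
        loss = ((pred - target) ** 2).mean(dim=-1)  # [B, num_patches]
        return (loss * mask).sum() / mask.sum()
```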
Furthermore, the baseline Vision Transformer backbone used in this work can be built with the following code snippet using timm:
```python
import timm.models.vision_transformer as vit


def vit_base_patch16_256(**kwargs):
    """ViT-B/16 backbone for 6-channel 256x256 microscopy images."""
    default_kwargs = dict(
        img_size=256,
        in_chans=6,               # six-channel microscopy input
        num_classes=0,            # no classification head; return features
        fc_norm=None,
        class_token=True,
        drop_path_rate=0.1,
        init_values=0.0001,       # LayerScale init value
        block_fn=vit.ParallelScalingBlock,
        qkv_bias=False,
        qk_norm=True,
    )
    for k, v in kwargs.items():
        default_kwargs[k] = v
    return vit.vit_base_patch16_224(**default_kwargs)
```
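As a quick sanity check (assuming a timm version >= 0.9, which provides `ParallelScalingBlock`), the backbone can be instantiated and run on a random 6-channel batch; with `num_classes=0` the forward pass returns pooled embeddings rather than class logits:

```python
import torch

model = vit_base_patch16_256()
model.eval()

x = torch.randn(2, 6, 256, 256)  # batch of two 6-channel 256x256 images
with torch.no_grad():
    emb = model(x)  # pooled features, since num_classes=0
print(emb.shape)  # torch.Size([2, 768]) for the ViT-B width
```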
## Provided models
A publicly available model for research can be found via NVIDIA's BioNeMo platform, which handles inference and auto-scaling for you: https://www.rxrx.ai/phenom
We are not able to release model weights at this time.