---
license: apache-2.0
tags:
- chemistry
- drug-discovery
- molecular-modeling
- mumo
pipeline_tag: graph-ml
library_name: transformers
---
# mumo-pretrain
This model was trained with the MuMo (Multi-Modal Molecular) framework presented in the paper [Structure-Aware Fusion with Progressive Injection for Multimodal Molecular Representation Learning](https://huggingface.co/papers/2510.23640).
The official code repository is available at: https://github.com/selmiss/MuMo
## Model Description
- **Model Type**: MuMo Pretrained Model
- **Training Data**: Molecular structures and properties
- **Framework**: PyTorch + Transformers
## Usage
```python
from transformers import AutoConfig, AutoTokenizer, AutoModel
# Load model
model_path = "zihaojing/mumo-pretrain"
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
# Example usage
smiles = "CCO" # Ethanol
inputs = tokenizer(smiles, return_tensors="pt")
outputs = model(**inputs)
```
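The card does not specify the model's output format. As a minimal sketch, assuming the model returns a standard `last_hidden_state` tensor and the tokenizer produces an `attention_mask`, a molecule-level embedding could be obtained by mean pooling; check the official repository, since the custom MuMo architecture may expose a dedicated pooled output instead.
```python
import torch

# Hypothetical pooling sketch (not part of the official API): mean-pool token
# embeddings into one vector per molecule, ignoring padding positions.
with torch.no_grad():
    outputs = model(**inputs)

mask = inputs["attention_mask"].unsqueeze(-1).float()        # (batch, seq_len, 1)
summed = (outputs.last_hidden_state * mask).sum(dim=1)       # (batch, hidden)
embedding = summed / mask.sum(dim=1).clamp(min=1e-9)         # (batch, hidden)
print(embedding.shape)
```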
## Training Details
- Training script: See the [official GitHub repository](https://github.com/selmiss/MuMo) for details.
- Framework: Transformers + DeepSpeed
## Citation
If you use this model or the MuMo framework, please cite our paper:
```bibtex
@inproceedings{jing2025mumo,
  title     = {MuMo: Multimodal Molecular Representation Learning via Structural Fusion and Progressive Injection},
  author    = {Jing, Zihao and Sun, Yan and Li, Yan Yi and Janarthanan, Sugitha and Deng, Alana and Hu, Pingzhao},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2025}
}
```