---
license: apache-2.0
tags:
- chemistry
- drug-discovery
- molecular-modeling
- mumo
pipeline_tag: graph-ml
library_name: transformers
---

# mumo-pretrain

This model was trained using the MuMo (Multi-Modal Molecular) framework, as presented in the paper [Structure-Aware Fusion with Progressive Injection for Multimodal Molecular Representation Learning](https://huggingface.co/papers/2510.23640).
The official code repository is available at: https://github.com/selmiss/MuMo

## Model Description

- **Model Type**: MuMo Pretrained Model
- **Training Data**: Molecular structures and properties
- **Framework**: PyTorch + Transformers + Mamba-ssm

## Usage

### Loading the Model

MuMo uses a custom loading function provided in the official code repository, so clone it first:

```shell
git clone https://github.com/selmiss/MuMo.git
```
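
The Python example below imports the `load_model` helper from the cloned repository, so run it from the repository root with the project's dependencies installed. A minimal setup sketch (the exact install command is an assumption and may differ from the repository's instructions):

```shell
cd MuMo
# Install dependencies as documented in the repository README; the requirements
# file name below is an assumption and may not match the actual repo layout.
pip install -r requirements.txt
```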

```python
from transformers import AutoConfig, AutoTokenizer
from model.load_model import load_model
from dataclasses import dataclass
from typing import Optional

# Load configuration and tokenizer
repo = "zihaojing/MuMo-Pretrained"
config = AutoConfig.from_pretrained(repo, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo)

# Set up model arguments
@dataclass
class ModelArgs:
    model_name_or_path: str = repo
    model_class: str = "MuMoFinetune"  # or "MuMoPretrain" for pretraining
    cache_dir: Optional[str] = None
    model_revision: str = "main"
    use_auth_token: bool = False
    task_type: Optional[str] = None  # e.g., "classification" or "regression" for finetuning

model_args = ModelArgs()

# Load the model
model = load_model(config, tokenizer=tokenizer, model_args=model_args)
```

**Notes:**
- Use `model_class="MuMoPretrain"` for pretraining or inference
- Use `model_class="MuMoFinetune"` for finetuning tasks
- Set `task_type` to `"classification"` or `"regression"` when using `MuMoFinetune` (see the sketch after these notes)
- The model supports loading from both Hugging Face Hub (e.g., `"zihaojing/MuMo-Pretrained"`) and local paths (e.g., `"/path/to/model"`)
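
Building on the notes above, here is a minimal sketch of the two loading variants. It reuses the `ModelArgs` dataclass and `load_model` call from the example above; the `task_type` value is only an illustration:

```python
# Finetuning setup (sketch): switch the model class and declare the task head.
finetune_args = ModelArgs(
    model_class="MuMoFinetune",
    task_type="regression",  # or "classification"
)
finetune_model = load_model(config, tokenizer=tokenizer, model_args=finetune_args)

# Pretraining / inference setup (sketch): use the pretraining class, no task head.
pretrain_args = ModelArgs(model_class="MuMoPretrain")
pretrain_model = load_model(config, tokenizer=tokenizer, model_args=pretrain_args)
```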

## Training Details

- Training script: See the [official GitHub repository](https://github.com/selmiss/MuMo) for details.
- Framework: Transformers + DeepSpeed (an illustrative launch command is sketched below)
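
The repository's own launch scripts are authoritative. As a generic illustration only, a Transformers training script is typically launched with DeepSpeed along these lines; the script name, config file, and flags below are hypothetical and will differ in the actual repo:

```shell
# Illustrative only: the real entry point and DeepSpeed config live in the MuMo repo.
deepspeed --num_gpus=4 train.py \
  --deepspeed ds_config.json \
  --model_name_or_path zihaojing/MuMo-Pretrained \
  --output_dir ./outputs
```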

## Citation

If you use this model or the MuMo framework, please cite our paper:

```bibtex
@inproceedings{jing2025mumo,
  title        = {MuMo: Multimodal Molecular Representation Learning via Structural Fusion and Progressive Injection},
  author       = {Jing, Zihao and Sun, Yan and Li, Yan Yi and Janarthanan, Sugitha and Deng, Alana and Hu, Pingzhao},
  booktitle    = {Advances in Neural Information Processing Systems (NeurIPS)},
  year         = {2025}
}
```