---
license: apache-2.0
tags:
  - chemistry
  - drug-discovery
  - molecular-modeling
  - mumo
---

# mumo-pin1

This model was trained using the MuMo (Multi-Modal Molecular) framework.

## Model Description

- **Model Type:** MuMo Pretrained Model
- **Training Data:** Molecular structures and properties
- **Framework:** PyTorch + Transformers

## Usage

### Loading the Model

MuMo uses a custom loading function. First, clone the MuMo repository so that its `model.load_model` module is available, then load the pretrained model as shown below:

```bash
git clone https://github.com/selmiss/MuMo.git
```
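The snippet below shows one way to make the cloned repository importable so that `from model.load_model import load_model` resolves; the clone path is illustrative, and running Python from the repository root works just as well.

```python
import sys

# Add the cloned MuMo repository to the import path (adjust the path to your clone).
sys.path.insert(0, "./MuMo")
```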

```python
from transformers import AutoConfig, AutoTokenizer
from model.load_model import load_model
from dataclasses import dataclass

# Load configuration and tokenizer
repo = "zihaojing/MuMo-pin1"
config = AutoConfig.from_pretrained(repo, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo)

# Set up model arguments
@dataclass
class ModelArgs:
    model_name_or_path: str = repo
    model_class: str = "MuMoFinetunePairwise"  # or "MuMoPretrain" for pretraining
    cache_dir: str = None
    model_revision: str = "main"
    use_auth_token: bool = False
    task_type: str = None  # e.g., "classification" or "regression" for finetuning

model_args = ModelArgs()

# Load the model
model = load_model(config, tokenizer=tokenizer, model_args=model_args)
```
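Once the model is loaded, inputs can be tokenized as with any Hugging Face tokenizer. The following is a minimal inference sketch under the assumption that the model's forward pass accepts the standard `input_ids` / `attention_mask` keyword arguments produced by the tokenizer; consult the model classes in the MuMo repository for the exact expected inputs.

```python
import torch

# Hypothetical example molecule (aspirin) as a SMILES string.
smiles = "CC(=O)Oc1ccccc1C(=O)O"

# Tokenize the molecule.
inputs = tokenizer(smiles, return_tensors="pt")

# Forward pass; assumes standard keyword arguments from the tokenizer.
model.eval()
with torch.no_grad():
    outputs = model(**inputs)
```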

Notes:

- Use `model_class="MuMoPretrain"` for pretraining or inference.
- Use `model_class="MuMoFinetune"` or `"MuMoFinetunePairwise"` for finetuning tasks (see the sketch after this list).
- Set `task_type` to `"classification"` or `"regression"` when using `MuMoFinetune`.
- The model supports loading from both the Hugging Face Hub (e.g., `"zihaojing/MuMo-pin1"`) and local paths (e.g., `"/path/to/model"`).
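As a sketch of the finetuning configuration described above, the snippet below reuses the `ModelArgs` dataclass from the loading example and assumes the same `load_model` signature; the regression task is illustrative.

```python
# Sketch: load the model in finetune mode for a regression task.
finetune_args = ModelArgs(
    model_name_or_path="zihaojing/MuMo-pin1",  # or a local path, e.g. "/path/to/model"
    model_class="MuMoFinetune",
    task_type="regression",
)
finetune_model = load_model(config, tokenizer=tokenizer, model_args=finetune_args)
```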

## Training Details

- **Training script:** See repository for details
- **Framework:** Transformers + DeepSpeed

## Citation

If you use this model, please cite the original MuMo paper.