---
language:
- en
tags:
- chemistry
- molecules
- drug-discovery
- molecular-generation
- multimodal
base_model: Qwen/Qwen2.5-3B
library_name: transformers
datasets:
- language-plus-molecules/mCLM_Pretrain_1k
license: cc-by-nc-nd-4.0
---
# mCLM_1k-3b

**mCLM: A Modular Chemical Language Model that Generates Functional and Makeable Molecules**

## Relevant Links

:globe_with_meridians: Website | :octocat: Code | :hugs: Data and Model | :desktop_computer: Demo | :page_with_curl: Paper
## Architecture

- Base Model: Qwen2.5-3B
- Molecular Encoder: GNN with 5 message-passing layers (see the illustrative sketch after this list)
- Molecular Tokenizer: Custom block-based tokenizer for SMILES representations
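For readers unfamiliar with message passing, the toy encoder below shows the general pattern of a 5-layer message-passing GNN in plain PyTorch. It is an illustrative sketch only, not the actual mCLM encoder: the layer widths, adjacency-based aggregation, mean pooling, and random atom features are assumptions made for exposition.

```python
import torch
import torch.nn as nn


class ToyMessagePassingEncoder(nn.Module):
    """Minimal message-passing GNN sketch -- NOT the mCLM encoder, just the general idea."""

    def __init__(self, node_dim: int = 64, hidden_dim: int = 256, num_layers: int = 5):
        super().__init__()
        self.embed = nn.Linear(node_dim, hidden_dim)
        self.layers = nn.ModuleList(
            [nn.Linear(2 * hidden_dim, hidden_dim) for _ in range(num_layers)]
        )

    def forward(self, node_feats: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_atoms, node_dim) atom features
        # adjacency:  (num_atoms, num_atoms) 0/1 bond matrix
        h = self.embed(node_feats)
        for layer in self.layers:
            messages = adjacency @ h  # sum each atom's neighbor states
            h = torch.relu(layer(torch.cat([h, messages], dim=-1)))  # update from (self, neighbors)
        return h.mean(dim=0)  # pool atom states into one molecule embedding


# Quick check on a hypothetical 6-atom ring with random features.
feats = torch.randn(6, 64)
adj = torch.eye(6).roll(1, dims=0) + torch.eye(6).roll(-1, dims=0)
print(ToyMessagePassingEncoder()(feats, adj).shape)  # torch.Size([256])
```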
## Usage

Please follow the installation instructions from the GitHub repository.
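The example below loads the checkpoint from the Hugging Face Hub and runs an interactive loop: each instruction is converted to input IDs, decoded with diverse beam search (5 beams, 5 beam groups), and the top beam is turned back into a text message plus the SMILES strings of any generated molecules.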
```python
from mCLM.model.models import mCLM
from mCLM.tokenizer.utils import convert_instruction_to_input, message_ids_to_string, get_processor

import torch

# ===========================
# Settings
# ===========================
DTYPE = torch.bfloat16
DEVICE = torch.device("cpu")

if __name__ == "__main__":
    model = mCLM.from_pretrained("language-plus-molecules/mCLM_1k-3b")
    tokenizer = model.tokenizer
    molecule_tokenizer = model.molecule_tokenizer
    bad_words_ids = None

    model.to(DEVICE).to(DTYPE)  # This is important for the HF model

    while True:
        user_input = input("Enter an instruction (type 'quit' to exit): ")
        if user_input == 'quit':
            break
        user_input = user_input.strip()

        message_tokens = convert_instruction_to_input(user_input, model, molecule_tokenizer, tokenizer)

        ################## Generate results ###################################
        beam_size = 5
        input_ids = message_tokens.to(DEVICE)
        processor = get_processor(molecule_tokenizer, tokenizer)  # Rebuilt every time in case the vocab was expanded

        generated = model.generate(
            input_ids=input_ids,
            attention_mask=torch.ones_like(input_ids),  # Avoids the attention-mask warning
            pad_token_id=tokenizer.eos_token_id,        # Avoids the pad-token warning
            max_new_tokens=32,
            num_beams=beam_size,
            num_return_sequences=beam_size,
            logits_processor=processor,
            do_sample=False,
            bad_words_ids=bad_words_ids,
            diversity_penalty=1.0,
            num_beam_groups=beam_size,
        )

        for i in [0]:  # use range(beam_size) to print every beam
            message_ids = generated[i, message_tokens.shape[1]:]
            mol_msg, smiles_msg, mol_list, smiles_list = message_ids_to_string(message_ids, molecule_tokenizer, tokenizer)
            if smiles_msg is not None:
                print(mol_msg)
                if len(smiles_list) > 0:
                    print("SMILES list:", smiles_list)
            else:
                print(mol_msg)
            print()
```
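As an optional sanity check, the returned `smiles_list` can be parsed with RDKit. This is an illustrative addition, not part of the mCLM example above; `rdkit` is an extra dependency you would need to install separately.

```python
# Optional, illustrative post-processing (assumes RDKit is installed separately).
from rdkit import Chem

# smiles_list comes from message_ids_to_string(...) in the loop above.
valid = [s for s in smiles_list if Chem.MolFromSmiles(s) is not None]
print(f"{len(valid)}/{len(smiles_list)} generated SMILES parsed successfully")
```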
## Training Data

The model was trained on:

- Molecular instruction-following data from activity cliffs
- General text instruction data, SMolInstruct, and Mol-Instructions (Biomedical)
## Citation

If you use this model, please cite:

```bibtex
@misc{edwards2025mclmmodularchemicallanguage,
  title={mCLM: A Modular Chemical Language Model that Generates Functional and Makeable Molecules},
  author={Carl Edwards and Chi Han and Gawon Lee and Thao Nguyen and Sara Szymkuć and Chetan Kumar Prasad and Bowen Jin and Jiawei Han and Ying Diao and Ge Liu and Hao Peng and Bartosz A. Grzybowski and Martin D. Burke and Heng Ji},
  year={2025},
  eprint={2505.12565},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2505.12565},
}
```
## Model Card Contact

For questions or issues, please open an issue in the repository.