molcrawl-compounds-guacamol-gpt2-xl (part of the MolCrawl/compounds collection)
GPT-2 XL (1.5B parameters) fine-tuned on GuacaMol SMILES data, starting from the molcrawl-compounds-gpt2-xl pre-trained model.
The tokenizer is a character-level BPE tokenizer (vocab_size=612). Input SMILES strings should be passed without spaces. The [SEP] token (id=13) is used as the end-of-sequence marker.
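A quick sanity check of these tokenizer properties (a minimal sketch; it assumes the tokenizer loads with AutoTokenizer as in the usage example below):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kojima-lab/molcrawl-compounds-guacamol-gpt2-xl")
print(tokenizer.vocab_size)                      # 612 per this card
print(tokenizer.convert_tokens_to_ids("[SEP]"))  # 13, the end-of-sequence marker
print(tokenizer.encode("CC(=O)O"))               # SMILES input is passed without spaces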
Fine-tuning Dataset: GuacaMol (https://github.com/BenevolentAI/guacamol)
Model Type: gpt2
Data Type: Molecule/Compound
Training Date: 2026-04-24
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained("kojima-lab/molcrawl-compounds-guacamol-gpt2-xl")
tokenizer = AutoTokenizer.from_pretrained("kojima-lab/molcrawl-compounds-guacamol-gpt2-xl")

# Generate a SMILES continuation from a prompt (acetic acid)
prompt = "CC(=O)O"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_new_tokens=50,
        do_sample=True,
        temperature=0.8,
        eos_token_id=tokenizer.convert_tokens_to_ids("[SEP]"),  # [SEP] is EOS for compounds
        pad_token_id=0,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
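To draw several candidate completions per prompt, the same generate call accepts num_return_sequences (a sketch building on the snippet above; the sampling settings are illustrative, not tuned values from this card):

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_new_tokens=50,
        do_sample=True,
        temperature=0.8,
        num_return_sequences=5,  # sample five completions in one call
        eos_token_id=tokenizer.convert_tokens_to_ids("[SEP]"),
        pad_token_id=0,
    )
for ids in output_ids:
    print(tokenizer.decode(ids, skip_special_tokens=True))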
Training pipeline, configuration files, and data preparation scripts are available in the MolCrawl GitHub repository: https://github.com/mmai-framework-lab/MolCrawl
This model is released under the Apache 2.0 license.
If you use this model, please cite:
@misc{molcrawl_compounds_guacamol_gpt2_xl,
  title={molcrawl-compounds-guacamol-gpt2-xl},
  author={{RIKEN}},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/kojima-lab/molcrawl-compounds-guacamol-gpt2-xl}
}