---
pretty_name: MLM-Scaling-datasets
license: mit
tags:
- chemistry
- biology
- drug-discovery
- molecular-language-modeling
- smiles
- deepsmiles
- safe
- fragseq
- text
---
# MLM-Scaling-datasets

## Dataset Summary
MLM-Scaling-datasets is the companion dataset repository for the paper "Unveiling Scaling Behaviors in Molecular Language Models: Effects of Model Size, Data, and Representation". It packages the molecular corpora used to study how model size, token budget, and molecular representation affect autoregressive molecular language modeling.
The source molecules are collected from ZINC and UniChem and serialized into multiple molecular string representations. The repository is designed for:
- compute-controlled scaling studies
- pretraining GPT-style molecular language models
- controlled comparison across molecular representations
- downstream transfer studies for molecular property prediction
## Dataset Source

According to the paper, the pretraining corpus is built from a large-scale collection of unlabeled molecules drawn from:
- ZINC
- UniChem
Each molecule is then converted into one of the molecular string representations used in the study.
## Molecular Representations
The paper studies five string representations:
- SMILES
- DeepSMILES
- SAFE
- FragSeq
- FragLink
## Repository Contents

The current repository layout contains the following subsets.

### DeepSMILES
- DeepSMILES-100M
- DeepSMILES-300M
- DeepSMILES-1B
- DeepSMILES-3B

### FragSeq
- FragSeq-100M
- FragSeq-300M
- FragSeq-1B
- FragSeq-3B

### FragLink
- FragLink-100M
- FragLink-300M
- FragLink-1B
- FragLink-3B

### SAFE
- SAFE-100M
- SAFE-300M
- SAFE-1B
- SAFE-3B

### SMILES
- SMILES-100M
- SMILES-300M
- SMILES-1B
- SMILES-3B
## What the Scale Labels Mean

The scale label in each subset name refers to the token budget after tokenization under the given representation.
For the main scaling grid in the paper, the following token budgets are used:
- 100M
- 300M
- 1B
- 3B
This point matters: a "1B-token" subset under one molecular representation is not the same thing as a "1B-token" subset under another representation, because sequence length and token statistics change with the representation.
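This effect can be sketched with a toy tokenizer. The regex below is a common atom-level SMILES tokenization pattern, not necessarily the one used in the paper, and the benzene strings are standard textbook examples (`c1ccccc1` in SMILES, `cccccc6` in DeepSMILES):

```python
import re

# Toy atom-level tokenizer (an assumption; the paper's actual tokenizers
# are not documented in this card). Two-letter atoms and bracket atoms
# must be matched before the single-character fallback.
TOKEN = re.compile(r"Cl|Br|\[[^\]]+\]|%\d{2}|.")

def count_tokens(s: str) -> int:
    return len(TOKEN.findall(s))

# The same molecule, benzene, under two serializations:
print(count_tokens("c1ccccc1"))  # SMILES: 8 tokens
print(count_tokens("cccccc6"))   # DeepSMILES: 7 tokens
```

Even this toy example shows that a fixed token budget buys a different number of molecules under different representations.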
## Dataset Creation and Experimental Role
The paper uses these corpora in a structured scaling study with:
- 8 model sizes: 1M, 4M, 16M, 43M, 85M, 152M, 278M, 650M parameters
- 4 token budgets: 100M, 300M, 1B, 3B tokens
- 5 molecular representations
The main training grid uses single-epoch from-scratch runs for compute-controlled comparison.
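The grid above is easy to enumerate. The axis values come from this card; whether every single cell is trained is my reading of "structured scaling study", not a claim from the paper:

```python
from itertools import product

model_sizes = ["1M", "4M", "16M", "43M", "85M", "152M", "278M", "650M"]
token_budgets = ["100M", "300M", "1B", "3B"]
representations = ["SMILES", "DeepSMILES", "SAFE", "FragSeq", "FragLink"]

# Full cross product of the three axes: 8 * 4 * 5 = 160 configurations.
grid = list(product(model_sizes, token_budgets, representations))
print(len(grid))  # 160
```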
## Dataset Structure
At a conceptual level, each subset contains molecular strings serialized under a specific representation and scale.
Examples of what varies across subsets:
- the molecular string representation
- the token budget
- the token statistics and sequence lengths
- the downstream behavior of models trained on that subset
This repository is therefore best understood as a family of matched corpora rather than a single flat dataset.
## Intended Use
This dataset repository is suitable for:
- pretraining autoregressive molecular language models
- scaling-law studies under matched compute
- representation comparison in molecular language modeling
- initialization studies for downstream molecular property prediction
## Out-of-Scope Use
This repository is not intended to be used as:
- a clinical knowledge base
- a stand-alone benchmark for safety or efficacy claims
- a substitute for chemistry-specific filtering, synthesis planning, docking, or wet-lab validation
## How to Download
For a multi-folder release like this one, the most robust way is to download the repository snapshot and then select the subset you need.
```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="SZU-ADDG/MLM-Scaling-datasets",
    repo_type="dataset",
)
print(local_dir)
```
After downloading, choose the folder that matches your target representation and token budget, for example:
- SMILES-1B
- DeepSMILES-300M
- FragSeq-3B
- FragLink-1B
- SAFE-100M
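Once the snapshot is on disk, a small helper can collect the files of one subset folder. The folder names come from this card; the file format inside each folder is not documented here, so the helper simply lists whatever is present:

```python
from pathlib import Path

def subset_files(local_dir: str, subset: str) -> list:
    """List the files under one subset folder of a downloaded snapshot.

    `subset` is a folder name such as "SMILES-1B". The on-disk file
    format inside each folder is an assumption, so no parsing is done.
    """
    folder = Path(local_dir) / subset
    if not folder.is_dir():
        raise FileNotFoundError(f"subset folder not found: {folder}")
    return sorted(p for p in folder.rglob("*") if p.is_file())
```

`huggingface_hub.snapshot_download` also accepts an `allow_patterns` argument (for example `allow_patterns=["SMILES-1B/*"]`) if you prefer to fetch only one subset instead of the whole snapshot.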
## Companion Resources
- Model repository: SZU-ADDG/MLM-Scaling-Model
- Code: SZU-ADDG/MLM-Scaling
- Paper: arXiv:2601.22757
## Citation

If you use this dataset repository in your research, please cite:

```bibtex
@article{xu2026mlmscaling,
  title={Unveiling Scaling Behaviors in Molecular Language Models: Effects of Model Size, Data, and Representation},
  author={Xu, Dong and Pan, Qihua and Yuan, Sisi and Li, Jianqiang and Zhu, Zexuan and Ji, Junkai},
  journal={arXiv preprint arXiv:2601.22757},
  year={2026}
}
```