
MLM-Scaling-datasets

Dataset Summary

MLM-Scaling-datasets is the companion dataset repository for the paper "Unveiling Scaling Behaviors in Molecular Language Models: Effects of Model Size, Data, and Representation". It packages the molecular corpora used to study how model size, token budget, and molecular representation affect autoregressive molecular language modeling.

The source molecules are collected from ZINC and UniChem and serialized into multiple molecular string representations. The repository is designed for:

  • compute-controlled scaling studies
  • pretraining GPT-style molecular language models
  • controlled comparison across molecular representations
  • downstream transfer studies for molecular property prediction

Dataset Source

According to the paper, the pretraining corpus is constructed from large-scale unlabeled molecules collected from:

  • ZINC
  • UniChem

Each molecule is then converted into one of the molecular string representations used in the study.

Molecular Representations

The paper studies five string representations:

  • SMILES
  • DeepSMILES
  • SAFE
  • FragSeq
  • FragLink

Repository Contents

The current repository layout contains the following subsets.

DeepSMILES

  • DeepSMILES-100M
  • DeepSMILES-300M
  • DeepSMILES-1B
  • DeepSMILES-3B

FragSeq

  • FragSeq-100M
  • FragSeq-300M
  • FragSeq-1B
  • FragSeq-3B

FragLink

  • FragLink-100M
  • FragLink-300M
  • FragLink-1B
  • FragLink-3B

SAFE

  • SAFE-100M
  • SAFE-300M
  • SAFE-1B
  • SAFE-3B

SMILES

  • SMILES-100M
  • SMILES-300M
  • SMILES-1B
  • SMILES-3B

What the Scale Labels Mean

The scale labels in the subset names refer to the token budget, i.e. the number of tokens the subset contains after tokenization under the given representation.

For the main scaling grid in the paper, the following token budgets are used:

  • 100M
  • 300M
  • 1B
  • 3B

This point matters: a "1B-token" subset under one molecular representation is not equivalent to a "1B-token" subset under another, because the same molecules yield different sequence lengths and token statistics under different representations, so a fixed token budget covers a different number of molecules.
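To make this concrete, here is a toy sketch showing that the same molecule produces different string lengths, and hence different token counts, under different representations. The character-level counting and the example strings below are illustrative assumptions, not the paper's actual tokenizer.

```python
# Toy illustration: the same molecule yields different token counts
# under different string representations. Character-level counting
# stands in for a real tokenizer here (an assumption, not the
# paper's tokenization scheme).

def n_tokens(s: str) -> int:
    """Token count under a toy character-level tokenization."""
    return len(s)

smiles = "CC(=O)Oc1ccccc1C(=O)O"   # aspirin in SMILES
deepsmiles = "CC=O)Occcccc6C=O)O"  # the same molecule in DeepSMILES

print(n_tokens(smiles), n_tokens(deepsmiles))  # 21 18
```

Under a real subword tokenizer the gap would differ, but the direction of the effect is the same: a fixed token budget buys a different number of molecules per representation.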

Dataset Creation and Experimental Role

The paper uses these corpora in a structured scaling study with:

  • 8 model sizes: 1M, 4M, 16M, 43M, 85M, 152M, 278M, 650M parameters
  • 4 token budgets: 100M, 300M, 1B, 3B tokens
  • 5 molecular representations

The main training grid uses single-epoch from-scratch runs for compute-controlled comparison.
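The full grid above can be enumerated directly; a quick sketch of its size, using the labels as they appear in this repository:

```python
from itertools import product

# Enumerate the paper's scaling grid: 8 model sizes x 4 token
# budgets x 5 representations = 160 single-epoch runs.
model_sizes = ["1M", "4M", "16M", "43M", "85M", "152M", "278M", "650M"]
token_budgets = ["100M", "300M", "1B", "3B"]
representations = ["SMILES", "DeepSMILES", "SAFE", "FragSeq", "FragLink"]

grid = list(product(representations, token_budgets, model_sizes))
print(len(grid))  # 160
```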

Dataset Structure

At a conceptual level, each subset contains molecular strings serialized under a specific representation and scale.

Examples of what varies across subsets:

  • the molecular string representation
  • the token budget
  • the token statistics and sequence lengths
  • the downstream behavior of models trained on that subset

This repository is therefore best understood as a family of matched corpora rather than a single flat dataset.
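As a concrete sketch, the rows visible in the hosted dataset viewer carry two fields: input (the serialized molecule) and n_tokens (its tokenized length). A minimal row check under that assumed schema:

```python
# Minimal row-validation sketch. The field names `input` and
# `n_tokens` are taken from the hosted viewer preview; some raw
# files appear to carry extra columns (e.g. `mc_labels`), so treat
# this schema as an assumption rather than a guarantee.

def validate_row(row: dict) -> bool:
    return (
        isinstance(row.get("input"), str)
        and isinstance(row.get("n_tokens"), int)
        and row["n_tokens"] > 0
    )

assert validate_row({"input": "CCO", "n_tokens": 3})
assert not validate_row({"input": "CCO"})  # missing token count
```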

Intended Use

This dataset repository is suitable for:

  1. pretraining autoregressive molecular language models
  2. scaling-law studies under matched compute
  3. representation comparison in molecular language modeling
  4. initialization studies for downstream molecular property prediction
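For use case 1, a toy sketch of turning molecular strings into next-token training pairs. The character-level vocabulary and the <bos> convention below are illustrative assumptions; the paper's actual tokenizer is not reproduced here.

```python
# Toy preprocessing for autoregressive (next-token) pretraining.
# A character-level vocabulary is an illustrative assumption, not
# the tokenization used in the paper.

def build_vocab(corpus):
    chars = sorted({ch for s in corpus for ch in s})
    return {ch: i + 1 for i, ch in enumerate(chars)}  # 0 reserved for <bos>

def encode(s, vocab):
    return [0] + [vocab[ch] for ch in s]  # prepend <bos> id 0

corpus = ["CCO", "c1ccccc1", "CC(=O)O"]
vocab = build_vocab(corpus)
ids = encode("CCO", vocab)

# shift by one: the model reads `inputs` and predicts `targets`
inputs, targets = ids[:-1], ids[1:]
print(inputs, targets)
```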

Out-of-Scope Use

This repository is not intended to be used as:

  • a clinical knowledge base
  • a stand-alone benchmark for safety or efficacy claims
  • a substitute for chemistry-specific filtering, synthesis planning, docking, or wet-lab validation

How to Download

For a multi-folder release like this one, the most robust way is to download the repository snapshot and then select the subset you need.

from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="SZU-ADDG/MLM-Scaling-datasets",
    repo_type="dataset",
    # optionally restrict the download to a single subset, e.g.:
    # allow_patterns=["SMILES-1B/*"],
)

print(local_dir)

After downloading, choose the folder that matches your target representation and token budget, for example:

  • SMILES-1B
  • DeepSMILES-300M
  • FragSeq-3B
  • FragLink-1B
  • SAFE-100M

Citation

If you use this dataset repository in your research, please cite:

@article{xu2026mlmscaling,
  title={Unveiling Scaling Behaviors in Molecular Language Models: Effects of Model Size, Data, and Representation},
  author={Xu, Dong and Pan, Qihua and Yuan, Sisi and Li, Jianqiang and Zhu, Zexuan and Ji, Junkai},
  journal={arXiv preprint arXiv:2601.22757},
  year={2026}
}