---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
library_name: clara
---

## Model Overview

The code for using the ReaSyn model checkpoint is available in the [official Github repository](https://github.com/NVIDIA-Digital-Bio/ReaSyn).

### Description

ReaSyn is a model for predicting the synthesis pathway, the sequence of reaction steps from reactants to the final product(s), for a target product molecule. When the target molecule cannot be synthesized directly using known reaction steps, ReaSyn generates pathways for the most structurally similar synthesizable analog of the target molecule. The model uses an encoder-decoder Transformer architecture in which a full synthetic pathway is represented as a text sequence. ReaSyn v2 improves on the reconstruction and projection capabilities of ReaSyn v1 with a more advanced search (combining top-down and bottom-up tree traversal) and an Edit Flow model that refines generated pathways via deletion, substitution, and insertion operations. This approach allows the model to achieve state-of-the-art performance in tasks such as synthesis planning and incorporating synthesizability into goal-directed molecular property optimization.

This model is ready for commercial use.

### License/Terms of Use

GOVERNING TERMS: Use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). ReaSyn source code is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).

Deployment Geography: Global

Use Case: <br>
ReaSyn v2 is a model for predicting the synthetic pathway, the sequence of reaction steps from reactants to the final product(s), for a target product molecule. The model can be used in the pharmaceutical and chemical industries and in academic research to identify how to synthesize a molecule, help chemists plan a first-time synthesis, optimize an existing synthesis pathway, or filter candidate molecules by ease of synthesis. <br>

Release Date:  <br>
Github 1/8/2026 via https://github.com/NVIDIA-Digital-Bio/ReaSyn <br>
NGC 1/8/2026 via https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/resources/reasyn?version=2.0 <br> 
Hugging Face 1/8/2026 via:
- https://huggingface.co/nvidia/NV-ReaSyn-AR-166M-v2
- https://huggingface.co/nvidia/NV-ReaSyn-EB-174M-v2 <br>

### References
Research paper: "Exploring Synthesizable Chemical Space with Iterative Pathway Refinements" https://arxiv.org/abs/2509.16084

### Model Architecture

Architecture Type: Encoder-decoder<br>
Network Architecture: Encoder-decoder Transformer<br>

ReaSyn v2 uses an encoder-decoder Transformer architecture that takes a molecular SMILES string as input and outputs its synthetic pathway autoregressively. The encoder contains 6 layers and the decoder contains 10 layers; both use a hidden size of 768, 16 attention heads, and a feed-forward dimension of 4096.

ReaSyn v2 additionally includes an Edit Flow model, which uses the same encoder-decoder Transformer architecture as its backbone but adds three extra heads. The Edit Flow model takes a molecular SMILES string and the synthetic pathway generated by the autoregressive model as input, and outputs the probabilities of the edit operations (insertion, deletion, and substitution) that yield a more refined synthetic pathway.

The autoregressive model has 166M parameters and the Edit Flow model has 174M parameters.
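The stated dimensions can be instantiated with a generic PyTorch `nn.Transformer` for illustration. This is a hedged sketch, not ReaSyn's actual implementation: the vocabulary size, embedding layer, and output head are placeholder assumptions, and the real parameter counts depend on the tokenizer.

```python
import torch
import torch.nn as nn

# Placeholder vocabulary size; the real ReaSyn tokenizer differs.
VOCAB_SIZE = 1024

# Backbone with the hyperparameters stated above:
# 6 encoder layers, 10 decoder layers, d_model=768, 16 heads, FFN 4096.
backbone = nn.Transformer(
    d_model=768,
    nhead=16,
    num_encoder_layers=6,
    num_decoder_layers=10,
    dim_feedforward=4096,
    batch_first=True,
)
embed = nn.Embedding(VOCAB_SIZE, 768)   # illustrative token embedding
lm_head = nn.Linear(768, VOCAB_SIZE)    # illustrative output head

src = embed(torch.randint(0, VOCAB_SIZE, (1, 256)))  # SMILES tokens (max 256)
tgt = embed(torch.randint(0, VOCAB_SIZE, (1, 512)))  # pathway tokens (max 512)
logits = lm_head(backbone(src, tgt))
print(logits.shape)  # torch.Size([1, 512, 1024])
```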

### Autoregressive model

#### Input

Input Types: Text<br>
Input Formats: SMILES string<br>
Input Parameters: One-Dimensional (1D)<br>
Other Properties Related to Input: Maximum input length is 256 tokens.

#### Output

Output Types: Text<br>
Output Formats: Molecular synthetic pathway<br>
Output Parameters: One-Dimensional (1D)<br>
Other Properties Related to Output: Maximum output length is 512 tokens.
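The autoregressive generation loop with the 512-token output cap can be sketched in plain Python. Here `next_token` is a toy stand-in: the real model scores the full vocabulary conditioned on the encoded SMILES, and the token values below are illustrative.

```python
BOS, EOS, MAX_LEN = 0, 1, 512  # special tokens and output cap (assumed ids)

def next_token(prefix):
    # Toy stand-in for the decoder: emits tokens 2, 3, 4 and then EOS.
    last = prefix[-1]
    if last == BOS:
        return 2
    return last + 1 if last < 4 else EOS

def decode():
    # Append one token at a time, stopping at EOS or the 512-token cap.
    out = [BOS]
    while len(out) < MAX_LEN:
        tok = next_token(out)
        out.append(tok)
        if tok == EOS:
            break
    return out

print(decode())  # [0, 2, 3, 4, 1]
```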

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

### Edit Flow model

#### Input

Input Types: Text<br>
Input Formats: SMILES string, molecular synthetic pathway<br>
Input Parameters: One-Dimensional (1D)<br>
Other Properties Related to Input: Maximum input length of SMILES string is 256 tokens. Maximum input length of molecular synthetic pathway is 512 tokens.

#### Output

Output Types: Text<br>
Output Formats: Molecular synthetic pathway<br>
Output Parameters: One-Dimensional (1D)<br>
Other Properties Related to Output: Maximum output length is 512 tokens.
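The three edit operations (insertion, deletion, substitution) can be illustrated with a minimal, framework-free sketch. In ReaSyn v2 the Edit Flow model predicts which operations to apply; here they are supplied by hand, and the token values are illustrative only.

```python
def apply_edits(tokens, edits):
    """Apply (op, position, value) edits right-to-left so that earlier
    positions stay valid while the sequence length changes."""
    out = list(tokens)
    for op, pos, value in sorted(edits, key=lambda e: e[1], reverse=True):
        if op == "substitute":
            out[pos] = value
        elif op == "delete":
            del out[pos]
        elif op == "insert":
            out.insert(pos, value)
    return out

# Illustrative draft pathway tokens and hand-written refinements.
draft = ["C", "C", "(", "=", "O", ")", "N"]
refined = apply_edits(draft, [
    ("substitute", 6, "O"),     # replace terminal N with O
    ("delete", 1, None),        # drop a duplicate C
    ("insert", 0, "c1ccccc1"),  # prepend a ring fragment
])
print(refined)  # ['c1ccccc1', 'C', '(', '=', 'O', ')', 'O']
```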

### Software Integration

Runtime Engine: Torch<br>
Supported Hardware Microarchitecture Compatibility: NVIDIA Ampere<br>
Preferred Operating System: Linux, Windows

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

### Model Versions

ReaSyn v2

## Training and Evaluation Datasets

### Training Datasets

SynFormer Reaction Templates<br>
Link: https://github.com/wenhao-gao/synformer/blob/main/data/rxn_templates/comprehensive.txt<br>
Data Modality: Text<br>
Text Training Data Size: 1 Billion to 10 Trillion Tokens<br>
Data Collection Method by dataset: Human<br>
Labeling Method by dataset: Automated<br>
Properties: 115 molecular reaction templates in the SMARTS format

Building Blocks in Enamine US Stock retrieved in October 2023<br>
Link: https://enamine.net/building-blocks/building-blocks-catalog<br>
Data Modality: Text<br>
Text Training Data Size: 1 Billion to 10 Trillion Tokens<br>
Data Collection Method by dataset: Human<br>
Labeling Method by dataset: N/A<br>
Properties: Purchasable building-block molecules from the Enamine US Stock catalog, represented as SMILES strings

### Evaluation Dataset

Enamine REAL Test Set<br>
Link: https://github.com/wenhao-gao/synformer/blob/main/data/enamine_smiles_1k.txt<br>
https://enamine.net/compound-collections/real-compounds/real-database<br>
Data Collection Method by dataset: Human<br>
Labeling Method by dataset: N/A<br>
Properties: Randomly selected 1k test molecules from Enamine REAL to evaluate synthesizable molecule reconstruction.<br>

ChEMBL Test Set<br>
Link: https://github.com/wenhao-gao/synformer/blob/main/data/chembl_filtered_1k.txt<br>
https://www.ebi.ac.uk/chembl<br>
Data Collection Method by dataset: Human<br>
Labeling Method by dataset: N/A<br>
Properties: Randomly selected 1k test molecules from ChEMBL to evaluate synthesizable molecule reconstruction.<br>

ZINC250k Test Set<br>
Link: https://www.kaggle.com/datasets/basu369victor/zinc250k<br>
Data Collection Method by dataset: Synthetic<br>
Labeling Method by dataset: N/A <br>
Properties: Randomly selected 1k test molecules from ZINC250k to evaluate synthesizable molecule reconstruction.<br>

### Inference

Engine: Torch<br>
Test Hardware: Ampere / NVIDIA A100

### Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications.  When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Users are responsible for ensuring the physical properties of model-generated molecules are appropriately evaluated and comply with applicable safety regulations and ethical standards.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://app.intigriti.com/programs/nvidia/nvidiavdp/detail).