mishig (HF Staff) committed commit 4d7d073 (verified; parent: b0a73d9): Add 1 file

Files changed (1): 2311/2311.06295.md (+3043)
Title: Gradual Optimization Learning for Conformational Energy Minimization

URL Source: https://arxiv.org/html/2311.06295

Markdown Content:

1 Introduction
2 Related work
3 Notation and preliminaries
4 Conformation optimization with neural networks
6 Experiments
7 Conclusion

License: CC BY 4.0
arXiv:2311.06295v2 [physics.chem-ph] 12 Mar 2024

Gradual Optimization Learning for Conformational Energy Minimization

Artem Tsypin¹, Leonid Ugadiarov²,⁴, Kuzma Khrabrov¹, Alexander Telepov¹, Egor Rumiantsev¹, Alexey Skrynnik¹,², Aleksandr Panov¹,²,⁴, Dmitry Vetrov⁵, Elena Tutubalina¹,³,⁶, Artur Kadurin¹,⁷

¹AIRI, Moscow; ²FRC CSC RAS, Moscow; ³Sber AI, Moscow; ⁴MIPT, Dolgoprudny; ⁵Constructor University, Bremen; ⁶ISP RAS Research Center for Trusted Artificial Intelligence, Moscow; ⁷Kuban State University, Krasnodar

✉ {Tsypin, Kadurin}@airi.net (corresponding authors)

Abstract

Molecular conformation optimization is crucial to computer-aided drug discovery and materials design. Traditional energy minimization techniques rely on iterative optimization methods that use molecular forces calculated by a physical simulator (oracle) as anti-gradients. However, this is a computationally expensive approach that requires many interactions with a physical simulator. One way to accelerate this procedure is to replace the physical simulator with a neural network. Despite recent progress in neural networks for molecular conformation energy prediction, such models are prone to errors due to distribution shift, leading to inaccurate energy minimization. We find that the quality of energy minimization with neural networks can be improved by providing optimization trajectories as additional training data. Still, obtaining complete optimization trajectories demands a lot of additional computations. To reduce the required additional data, we present the Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks. The framework consists of an efficient data-collecting scheme and an external optimizer. The external optimizer utilizes gradients from the energy prediction model to generate optimization trajectories, and the data-collecting scheme selects additional training data to be processed by the physical simulator. Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules using significantly less additional data.
1 Introduction

Numerical quantum chemistry methods are essential for modern computer-aided drug discovery and materials design pipelines. They are used to predict the physical and chemical properties of candidate structures (Matta & Boyd, 2007; Oglic et al., 2017; Tielker et al., 2021). An ab initio property prediction framework for a specific molecule or material can be divided into three main steps: (1) find a low-energy conformation of a given atom system, (2) compute its electron structure with quantum chemistry methods, and (3) calculate the properties of interest based on the latter. The computational cost of steps (1) and (2) is defined by the specific physical simulator (oracle $\mathcal{O}$), varying from linear to exponential complexity w.r.t. the number of atoms or electrons in the system (Sousa et al., 2007). Overall, the more accurate the oracle is, the more computationally expensive its operations become.

The traditional approach to the problem of obtaining low-energy molecular conformations is to run an iterative optimization process using physical approximations, such as those provided by Density Functional Theory (DFT) methods (Kohn & Sham, 1965), as they are reasonably accurate. However, for large molecules, even a single iteration may take several hours of CPU compute (Gilmer et al., 2017). Therefore, it is crucial to develop alternative approaches (such as neural-network-based ones) that reduce the computational complexity of the iterative optimization.

The recent growth in computational power has led to the emergence of molecular databases with computed quantum properties (Ruddigkeit et al., 2012; Ramakrishnan et al., 2014; Isert et al., 2022; Khrabrov et al., 2022; Jain et al., 2013). For example, nablaDFT (Khrabrov et al., 2022) consists of more than $5 \times 10^6$ conformations for around $10^6$ drug-like molecules. This data enabled deep learning research for many molecule-related problems, such as conformational potential energy and quantum properties prediction with Neural Network Potentials (NNPs) (Chmiela et al., 2017; Schütt et al., 2017; Chmiela et al., 2018; 2020; Schütt et al., 2021; Shuaibi et al., 2021; Gasteiger et al., 2020; 2021; Chmiela et al., 2023), and conformational distribution estimation (Simm & Hernández-Lobato, 2019; Xu et al., 2021; Ganea et al., 2021; Xu et al., 2022; Jing et al., 2022; Shi et al., 2021; Luo et al., 2021). Naturally, there have been several works that utilize deep learning to tackle the problem of obtaining low-energy conformations. One approach is to reformulate this task as a conditional generation task (Guan et al., 2021; Lu et al., 2023); see Section 2 for further details. Another solution is to train an NNP to predict the potential energy of a molecular conformation and use it as a force field for relaxation (Unke et al., 2021). Assuming the NNP accurately predicts the energy, its gradients can be used as interatomic forces (Schütt et al., 2017). Such a technique allows for gradient-based optimization without a physical simulator, significantly reducing computational complexity.

In this work, we aim to improve the training of NNPs for obtaining low-energy conformations. We trained NNPs on a subset of the nablaDFT dataset (Khrabrov et al., 2022) and observed that such models suffer from distribution shift when used in the optimization task (see Figure 1). To alleviate the distribution shift and improve the quality of energy minimization, we enriched the training dataset with optimization trajectories (see Section 4) generated by the oracle. Our experiments demonstrate that it requires more than $5 \times 10^5$ additional oracle interactions to match the quality of a physical simulator (see Table 1). These models trained on enriched datasets are used as baselines for our proposed approach.

In this paper, we propose GOLF, the Gradual Optimization Learning Framework, for training NNPs to generate low-energy conformations. GOLF consists of three components: (i) a genuine oracle $\mathcal{O}_G$, (ii) an optimizer, and (iii) a surrogate oracle $\mathcal{O}_S$ that is computationally inexpensive. The $\mathcal{O}_G$ is an accurate but computationally expensive method used to calculate ground-truth energies and forces, and we consider a setting with a limited budget on $\mathcal{O}_G$ interactions. The optimizer (e.g., Adam (Kingma & Ba, 2014) or L-BFGS (Liu & Nocedal, 1989)) utilizes NNP gradients to produce optimization trajectories. The $\mathcal{O}_S$ determines which conformations are added to the training set. We use Psi4 (Smith et al., 2020), a popular software package for DFT-based computations, as the $\mathcal{O}_G$, and RDKit's (Landrum et al., 2022) MMFF (Halgren, 1996) as the $\mathcal{O}_S$. The NNP training cycle consists of three steps. First, we generate a batch of optimization trajectories and evaluate all conformations with $\mathcal{O}_S$. Then we select the first conformation from each trajectory for which the NNP poorly predicts interatomic forces w.r.t. $\mathcal{O}_S$ (see Section 5), calculate its ground-truth energy and forces with the $\mathcal{O}_G$, and add it to the training set. Lastly, we update the NNP by training on batches sampled from the initial and collected data. We train the model until we exceed the computational budget for additional $\mathcal{O}_G$ interactions. We show (see Section 6.2) that NNPs trained with GOLF on nablaDFT (Khrabrov et al., 2022) perform on par with $\mathcal{O}_G$ while using 50x less additional data compared to the straightforward approach described in the previous paragraph. We also show similar results on SPICE (Eastman et al., 2023), another diverse dataset of drug-like molecules. We publish the source code for GOLF along with optimization-trajectory datasets, training, and evaluation scripts.

Our contributions can be summarized as follows:

- We study the task of conformational optimization and find that NNPs trained on existing datasets are prone to distribution shift, leading to inaccurate energy minimization.
- We propose a straightforward approach to deal with the distribution shift by enriching the training dataset with optimization trajectories (see Figure 1). Our experiments show that $5 \times 10^5$ additional conformations make the NNP perform comparably with the DFT-based oracle $\mathcal{O}_G$ on the task of conformational optimization.
- We propose a novel framework (GOLF) for data-efficient training of NNPs, which includes a data-collecting scheme along with an external optimizer. We show that models trained with GOLF perform on par with the physical simulator on the task of conformational optimization while using 50x less additional data than the straightforward approach.

2 Related work

Conformation generation

Several recent papers have proposed different approaches for predicting a molecule's 3D conformers. Xu et al. (2021) utilize normalizing flows to predict pairwise distances between atoms for a given molecular structure, with subsequent relaxation of the generated conformation. Ganea et al. (2021) construct the molecular conformation by iteratively assembling it from smaller substructures. Xu et al. (2022); Wu et al. (2022); Jing et al. (2022); Huang et al. (2023); Fan et al. (2023) address the conformation generation task with diffusion models (Sohl-Dickstein et al., 2015). Other works employ variational approximations (Zhu et al., 2022; Swanson et al., 2023) and Markov Random Fields (Wang et al., 2022). We evaluate these approaches in Section 6.1. Despite showing promising geometrical metrics, such as the root-mean-square deviation of atomic positions (RMSD), on the tasks reported in the respective papers, these models perform poorly in terms of geometry and potential energy on the optimization task. In most cases, additional optimization with a physical simulator is necessary to obtain a valid conformation.

Geometry optimization

Guan et al. (2021); Lu et al. (2023) frame the conformation optimization problem as a conditional generation task and train the model to generate low-energy conformations conditioned on RDKit-generated conformations (or ones randomly sampled from a pseudo-optimization trajectory) by minimizing the RMSD between the corresponding atom coordinates. As RMSD may not be an ideal objective for the conformation optimization task (see Section 6.1), we focus on accurately predicting the interatomic forces along the optimization trajectories in our work.

Additional oracle interactions

Zhang et al. (2018) show that additional data from the oracle may increase the energy prediction precision of NNP models. Following this idea, Kulichenko et al. (2023) propose an active learning approach based on the uncertainty of the energy prediction to reduce the number of additional oracle interactions. The main limitation of this approach is that it requires training a separate NNP ensemble for every single molecule. Chan et al. (2019) parametrize the molecule as a set of rotatable bonds and utilize Bayesian Optimization with a Gaussian Process prior to efficiently search for low-energy conformations. However, this method requires using the oracle during inference, which limits its applications. The OC22 dataset (Tran et al., 2022) provides relaxation trajectories for catalyst-adsorbate pairs. However, no in-depth analysis of the effect of such additional data on the quality of optimization with NNPs is provided.

To sum up, we believe it necessary to further explore the ability of NNPs to optimize molecular conformations according to their energy. Our experiments (see Section 6) show that additional oracle information significantly increases the optimization quality. Since this information may be expensive, we aim to reduce the number of additional interactions while maintaining quality on par with the oracle.

3 Notation and preliminaries

We define the conformation $s = \{\boldsymbol{z}, \boldsymbol{X}\}$ of a molecule as a pair of atomic numbers $\boldsymbol{z} = \{z_1, \dots, z_n\}$, $z_i \in \mathbb{N}$, and atomic coordinates $\boldsymbol{X} = \{\boldsymbol{x}_1, \dots, \boldsymbol{x}_n\}$, $\boldsymbol{x}_i \in \mathbb{R}^3$, where $n$ is the number of atoms in the molecule. We define the oracle $\mathcal{O}$ as a function that takes a conformation $s$ as input and outputs its potential energy $E_s^{\text{oracle}} \in \mathbb{R}$ and interatomic forces $\boldsymbol{F}_s^{\text{oracle}} \in \mathbb{R}^{n \times 3}$: $E_s^{\text{oracle}}, \boldsymbol{F}_s^{\text{oracle}} = \mathcal{O}(s)$. To denote the ground-truth interatomic force acting on the $i$-th atom, we use $\boldsymbol{F}_{s,i}^{\text{oracle}}$. We use different superscripts to denote energies and forces calculated by different physical simulators. For example, we denote the RDKit MMFF-calculated energy as $E_s^{\text{MMFF}}$ and the Psi4-calculated energy as $E_s^{\text{DFT}}$.

We denote the NNP for the prediction of the potential energy of a conformation, parametrized by weights $\boldsymbol{\theta}$, as $f(s; \boldsymbol{\theta}): \{\boldsymbol{z}, \boldsymbol{X}\} \to \mathbb{R}$. Following (Schütt et al., 2017; 2021), we derive forces from the predicted energies:

$$\boldsymbol{F}_i(s; \boldsymbol{\theta}) = -\frac{\partial f(s; \boldsymbol{\theta})}{\partial \boldsymbol{x}_i}, \qquad (1)$$

where $\boldsymbol{F}_i \in \mathbb{R}^3$ is the force acting on the $i$-th atom as predicted by the NNP. We follow the standard procedure (Schütt et al., 2017; 2021; Gasteiger et al., 2020; Musaelian et al., 2022) and train the NNP to minimize the MSE between predicted and ground-truth energies and forces:

$$\mathcal{L}(s, E_s^{\text{oracle}}, \boldsymbol{F}_s^{\text{oracle}}; \boldsymbol{\theta}) = \rho \left\| E_s^{\text{oracle}} - f(s; \boldsymbol{\theta}) \right\|^2 + \frac{1}{n} \sum_{i=1}^{n} \left\| \boldsymbol{F}_{s,i}^{\text{oracle}} - \boldsymbol{F}_i(s; \boldsymbol{\theta}) \right\|^2, \qquad (2)$$

where $\mathcal{L}(s, E_s^{\text{oracle}}, \boldsymbol{F}_s^{\text{oracle}}; \boldsymbol{\theta})$ is the loss function for a single conformation $s$, and $\rho$ is a hyperparameter accounting for the different scales of energies and forces.
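Equations (1) and (2) can be sketched in plain NumPy. The harmonic pair potential standing in for the NNP, and the default $\rho = 0.01$, are illustrative assumptions of ours rather than the paper's model; the point is only how forces arise as negative coordinate gradients and how the two loss terms combine.

```python
import numpy as np

def toy_energy_and_forces(X, k=1.0, r0=1.5):
    """Stand-in for an NNP f(s; theta): a harmonic energy over all atom
    pairs (an illustrative assumption, not the paper's model).
    Returns the energy and the forces F_i = -d f / d x_i of Eq. (1)."""
    n = len(X)
    energy = 0.0
    forces = np.zeros_like(X)
    for i in range(n):
        for j in range(i + 1, n):
            d = X[i] - X[j]
            r = np.linalg.norm(d)
            energy += 0.5 * k * (r - r0) ** 2
            grad_i = k * (r - r0) * d / r   # d(energy)/d(x_i) for this pair
            forces[i] -= grad_i             # force is the negative gradient
            forces[j] += grad_i
    return energy, forces

def conformation_loss(E_pred, F_pred, E_oracle, F_oracle, rho=0.01):
    """Single-conformation loss of Eq. (2): energy error scaled by rho
    plus the mean squared per-atom force error (rho=0.01 is our guess)."""
    n = len(F_pred)
    energy_term = rho * (E_oracle - E_pred) ** 2
    force_term = np.sum((F_oracle - F_pred) ** 2) / n
    return energy_term + force_term
```

In a real NNP, the gradient in Eq. (1) is obtained by automatic differentiation of the predicted energy w.r.t. the input coordinates; the analytic gradient above plays that role for the toy energy.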

To collect the ground-truth optimization trajectories (see Section 4), we use the optimize method from Psi4 and run the optimization until convergence. The optimizer $\mathbf{Opt}$ (L-BFGS, Adam, SGD-momentum) utilizes the forces $\boldsymbol{F}(s; \boldsymbol{\theta}) \in \mathbb{R}^{n \times 3}$ to get NNP-optimization trajectories $s_0, \dots, s_T$, where $s_0$ is the initial conformation:

$$s_{t+1} = s_t + \alpha \, \mathbf{Opt}(\boldsymbol{F}(s_t; \boldsymbol{\theta})). \qquad (3)$$

Here, $\alpha$ is the optimization-rate hyperparameter, and $T$ is the total number of NNP optimization steps.
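A minimal sketch of the update in Eq. (3), with the simplest possible $\mathbf{Opt}$ (a step straight along the predicted forces, i.e. steepest descent; the paper's experiments use L-BFGS) and the same kind of harmonic toy stand-in for the NNP forces:

```python
import numpy as np

def nnp_forces(X, k=1.0, r0=1.5):
    """Stand-in for the NNP forces F(s; theta): negative gradient of a
    harmonic pair energy (an illustrative assumption)."""
    n = len(X)
    forces = np.zeros_like(X)
    for i in range(n):
        for j in range(i + 1, n):
            d = X[i] - X[j]
            r = np.linalg.norm(d)
            grad_i = k * (r - r0) * d / r
            forces[i] -= grad_i
            forces[j] += grad_i
    return forces

def nnp_optimize(X0, alpha=0.1, T=100):
    """Eq. (3) with Opt as the identity on the forces (steepest descent)."""
    trajectory = [X0.copy()]
    X = X0.copy()
    for _ in range(T):
        X = X + alpha * nnp_forces(X)   # s_{t+1} = s_t + alpha * Opt(F(s_t))
        trajectory.append(X.copy())
    return trajectory
```

Swapping in a quasi-Newton $\mathbf{Opt}$ such as L-BFGS only changes how the force field is turned into a step; the trajectory $s_0, \dots, s_T$ is produced the same way.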

In this work, we use NNPs trained on different data. To train the baseline model $f_{\text{baseline}}(\cdot\,; \boldsymbol{\theta})$, we use a fixed subset $\mathcal{D}_0$ of nablaDFT (see Appendix D for more details). It consists of approximately 10000 triplets of the form $\{s, E_s^{\text{DFT}}, \boldsymbol{F}_s^{\text{DFT}}\}$. The $\mathcal{D}_0$ can be extended with the ground-truth optimization trajectories obtained with Psi4 to get datasets denoted according to the total number of additional conformations: $\mathcal{D}_{\text{traj-10k}}$, $\mathcal{D}_{\text{traj-100k}}$, and so on. The resulting NNPs are dubbed $f_{\text{traj-1k}}(\cdot\,; \boldsymbol{\theta})$, $f_{\text{traj-10k}}(\cdot\,; \boldsymbol{\theta})$, and so on, respectively. We call the models trained with GOLF (see Section 5) $f_{\text{GOLF-1k}}(\cdot\,; \boldsymbol{\theta})$, $f_{\text{GOLF-10k}}(\cdot\,; \boldsymbol{\theta})$, etc.

To evaluate the quality of optimization with NNPs, we use a fixed subset $\mathcal{D}_{\text{test}}$ of the nablaDFT dataset that shares no molecules with $\mathcal{D}_0$. For each conformation $s \in \mathcal{D}_{\text{test}}$, we perform the optimization with the $\mathcal{O}_G$ to get the ground-truth optimal conformation $s_{\mathbf{opt}}$ and its energy $E_{s_{\mathbf{opt}}}^{\text{DFT}}$. The quality of the NNP-optimization for $s_t \in \{s_0, \dots, s_T\}$ is evaluated with the percentage of minimized energy:

$$\text{pct}(s_t) = 100\% \times \frac{E_{s_0}^{\text{DFT}} - E_{s_t}^{\text{DFT}}}{E_{s_0}^{\text{DFT}} - E_{s_{\mathbf{opt}}}^{\text{DFT}}}. \qquad (4)$$

By aggregating $\text{pct}(s_t)$ over $s \in \mathcal{D}_{\text{test}}$, we get the average percentage of minimized energy at step $t$:

$$\overline{\text{pct}}_t = \frac{1}{|\mathcal{D}_{\text{test}}|} \sum_{s \in \mathcal{D}_{\text{test}}} \text{pct}(s_t). \qquad (5)$$

Another metric is the residual energy in state $s_t$, $E_{\text{res}}(s_t)$, calculated as the difference between $E_{s_t}^{\text{DFT}}$ and the optimal energy:

$$E_{\text{res}}(s_t) = E_{s_t}^{\text{DFT}} - E_{s_{\mathbf{opt}}}^{\text{DFT}}. \qquad (6)$$

Similar to $\overline{\text{pct}}_t$, this metric can also be aggregated over the evaluation dataset:

$$\overline{E_{\text{res}}}_t = \frac{1}{|\mathcal{D}_{\text{test}}|} \sum_{s \in \mathcal{D}_{\text{test}}} E_{\text{res}}(s_t). \qquad (7)$$

The generally accepted chemical precision is 1 kcal/mol (Helgaker et al., 2004). Thus, another important metric is the percentage of conformations for which the residual energy is less than the chemical precision. We consider optimizations with such residual energies successful:

$$\text{pct}_{\text{success}} = \frac{1}{|\mathcal{D}_{\text{test}}|} \sum_{s \in \mathcal{D}_{\text{test}}} I\left[ E_{\text{res}}(s_T) < 1 \right]. \qquad (8)$$
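Given per-conformation DFT energies, the metrics of Eqs. (4)-(8) reduce to a few array operations. The sketch below assumes all energies are already in kcal/mol; the function names are ours, not from the paper's code.

```python
import numpy as np

def pct_minimized(E0, Et, Eopt):
    """Eq. (4): percentage of minimized energy per conformation."""
    return 100.0 * (E0 - Et) / (E0 - Eopt)

def aggregate_metrics(E0, ET, Eopt, chemical_precision=1.0):
    """Eqs. (5), (7), (8) over a test set, given arrays of initial,
    final-step, and DFT-optimal energies (assumed to be in kcal/mol)."""
    E0, ET, Eopt = map(np.asarray, (E0, ET, Eopt))
    pct_bar = np.mean(pct_minimized(E0, ET, Eopt))                # Eq. (5)
    residual = ET - Eopt                                          # Eq. (6)
    e_res_bar = np.mean(residual)                                 # Eq. (7)
    pct_success = 100.0 * np.mean(residual < chemical_precision)  # Eq. (8)
    return pct_bar, e_res_bar, pct_success
```

For example, two conformations with initial energies 10, optimal energies 0, and final energies 0.5 and 2.0 give an average minimized percentage of 87.5%, an average residual energy of 1.25 kcal/mol, and a 50% success rate.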
4 Conformation optimization with neural networks

Figure 1: Mean squared error (MSE) of energy and force predictions for NNPs trained on $\mathcal{D}_0$, $\mathcal{D}_{\text{traj-10k}}$, $\mathcal{D}_{\text{traj-100k}}$, and $\mathcal{D}_{\text{traj-500k}}$. To compute the MSE, we collect NNP-optimization trajectories of length $T = 100$ and calculate the ground-truth energies and forces at steps $t = 1, 2, 3, 5, 8, 13, 21, 30, 50, 75, 100$. Solid lines indicate the median MSE, and the shaded regions indicate the 10th and 90th percentiles. Both the x-axis and the y-axis are log-scaled.
Energy prediction models such as SchNet, DimeNet, and PaiNN can achieve near-perfect quality on the tasks of energy and interatomic force prediction when trained on datasets of molecular conformations (Schütt et al., 2017; Gasteiger et al., 2020; Schütt et al., 2021; Ying et al., 2021; Shuaibi et al., 2021; Gasteiger et al., 2021; Batzner et al., 2022; Musaelian et al., 2022). In theory, the gradients of these models can be utilized by an external optimizer to perform conformational optimization, replacing the computationally expensive physical simulator. However, in our experiments (see Section 6), this scheme often leads to suboptimal performance in terms of the potential energy of the resulting conformations. We attribute this effect to the distribution shift that naturally occurs during the optimization: as most existing datasets (Isert et al., 2022; Khrabrov et al., 2022; Eastman et al., 2023; Nakata & Maeda, 2023) do not contain conformations sampled from optimization trajectories, the prediction accuracy deteriorates as the conformation changes along the optimization process. The lack of such conformations in the training data can result in either divergence of the optimization (the initial potential energy is lower than the final potential energy) or convergence to a conformation with a higher final potential energy than the optimization with the oracle.

To alleviate the effect of the distribution shift, we propose enriching the training dataset for NNPs with ground-truth optimization trajectories obtained from the $\mathcal{O}_G$. To illustrate the effectiveness of our approach, we conduct a series of experiments. First, we train a baseline model $f_{\text{baseline}}(\cdot\,; \boldsymbol{\theta})$ on a fixed subset $\mathcal{D}_0$ of small molecules from the nablaDFT dataset. The $\mathcal{D}_0$ ($|\mathcal{D}_0| \approx 10000$) contains conformations for 4000 molecules, with sizes ranging from 17 to 35 atoms and an average size of 32.6 atoms. Then we train NNPs $f_{\text{traj-}*}(\cdot\,; \boldsymbol{\theta})$ on the enriched datasets $\mathcal{D}_{\text{traj-10k}}$, $\mathcal{D}_{\text{traj-100k}}$, $\mathcal{D}_{\text{traj-500k}}$, containing approximately $10^4$, $10^5$, and $5 \times 10^5$ additional conformations, respectively. The additional data consists of ground-truth optimization trajectories obtained from the $\mathcal{O}_G$. Then, we evaluate the NNPs by performing the NNP-optimization on all conformations in $\mathcal{D}_{\text{test}}$ ($|\mathcal{D}_{\text{test}}| \approx 20000$, containing ~10000 molecules) and calculating the MSE between ground-truth and predicted energies and forces. We use L-BFGS as $\mathbf{Opt}$ due to its superior performance compared to other optimizers (see Appendix B). We run the optimization with an NNP for a fixed number of steps $T = 100$, as we observe that this number is sufficient for the optimization to converge (see Figure 3). Figure 1 illustrates the effect of the distribution shift on $f_{\text{baseline}}(\cdot\,; \boldsymbol{\theta})$ (the prediction error increases as the optimization progresses) and its gradual alleviation with the addition of new training data.

Table 1: Optimization metrics for NNPs trained on enriched datasets

| NNP | $f_{\text{baseline}}$ | $f_{\text{traj-10k}}$ | $f_{\text{traj-100k}}$ | $f_{\text{traj-500k}}$ |
| --- | --- | --- | --- | --- |
| $\overline{\text{pct}}_T$ (%) | 77.9 ± 21.3 | 95.1 ± 7.6 | 96.2 ± 8.6 | 98.8 ± 7.6 |
| $\overline{E_{\text{res}}}_T$ (kcal/mol) | 8.6 | 2.0 | 1.5 | 0.5 |
| $\text{pct}_{\text{success}}$ (%) | 8.2 | 37.0 | 52.7 | 73.4 |

Table 1 presents the optimization metrics $\overline{\text{pct}}_T$, $\overline{E_{\text{res}}}_T$, and $\text{pct}_{\text{success}}$ for $T = 100$. Note that the potential energy surfaces of molecules often contain a large number of local minima (Tsai & Jordan, 1993). Due to this fact and the noise in the predicted forces, the NNP-optimization can converge to a better local minimum than the $\mathcal{O}_G$, resulting in a percentage of minimized energy greater than one hundred: $\text{pct}(s_T) > 100\%$ (see Appendix H for examples). This explains the range of values in Table 1 and the violin plots in Figure 2. We say that the NNP matches the optimization quality of the $\mathcal{O}_G$ if its average residual energy $\overline{E_{\text{res}}}_T$ is less than the chemical precision. Table 1 shows that it takes approximately $5 \times 10^5$ additional oracle interactions to match the optimization quality of the $\mathcal{O}_G$. However, it takes on average 590 CPU-seconds to perform a single DFT calculation for a conformation from $\mathcal{D}_0$ at the ωB97X-D/def2-SVP level of theory on our cluster with a total of 960 Intel(R) Xeon(R) Gold 2.60 GHz CPU cores (assuming there are 240 parallel workers, each using four threads). This amounts to approximately 9.36 CPU-years of compute for $5 \times 10^5$ additional conformations.
1029
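The compute estimate above follows from simple arithmetic; a quick sanity check (a back-of-the-envelope sketch, not the authors' accounting):

```python
# 590 CPU-seconds per DFT single-point calculation, times 5e5 additional
# conformations, converted to CPU-years.
seconds_per_dft = 590
n_conformations = 5 * 10 ** 5
total_cpu_seconds = seconds_per_dft * n_conformations
cpu_years = total_cpu_seconds / (365 * 24 * 3600)
print(round(cpu_years, 2))  # roughly 9.4 CPU-years, matching the ~9.36 figure above
```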
5 GOLF

Motivated by the desire to reduce the amount of additional data (and compute) required to match the optimization quality of the $\mathcal{O}_G$, we propose GOLF. Following the idea of Active Learning, we want to enrich the training dataset with conformations on which the NNP's prediction quality deteriorates. We propose to select such conformations by identifying pairs of consecutive conformations $s_t, s_{t+1}$ in NNP-optimization trajectories for which the potential energy does not decrease: $E^{\text{DFT}}_{s_t} < E^{\text{DFT}}_{s_{t+1}}$. This type of error indicates that the NNP poorly predicts forces in $s_t$, so we add this conformation to the training dataset.
Algorithm 1: GOLF

1: training dataset $\mathcal{D}_0$, genuine oracle $\mathcal{O}_G$, surrogate oracle $\mathcal{O}_S$, optimizer $\mathbf{Opt}$, optimization rate $\alpha$, NNP $f(\cdot;\boldsymbol{\theta})$, number of additional $\mathcal{O}_G$ interactions $K$, timelimit $T$, update-to-data ratio $U$
2: Initialize the NNP $f(\cdot;\boldsymbol{\theta})$ with the weights of the baseline NNP model
3: Set $\mathcal{D} \leftarrow Copy(\mathcal{D}_0)$, set $t \leftarrow 0$
4: Sample $s \sim \mathcal{D}$, and calculate its energy with $\mathcal{O}_S$: $E_{prev} \leftarrow E^{\text{MMFF}}_{s}$
5: repeat
6:     $s' \leftarrow s + \alpha \cdot \mathbf{Opt}(\boldsymbol{F}(s;\boldsymbol{\theta}))$  ▷ Get next conformation using NNP
7:     Calculate new energy with the $\mathcal{O}_S$: $E_{cur} \leftarrow E^{\text{MMFF}}_{s'}$
8:     if $E_{cur} > E_{prev}$ or $t \geq T$ then  ▷ Incorrect forces predicted in $s$, or $T$ reached
9:         Calculate $E^{\text{DFT}}_{s}, \boldsymbol{F}^{\text{DFT}}_{s} = \mathcal{O}_G(s)$
10:        $\mathcal{D} \xleftarrow{\text{add}} \{s, E^{\text{DFT}}_{s}, \boldsymbol{F}^{\text{DFT}}_{s}\}$  ▷ Add new data to $\mathcal{D}$
11:        Train $f(\cdot;\boldsymbol{\theta})$ on $\mathcal{D}$ using Eq. 2 $U$ times
12:        Set $t \leftarrow 0$
13:        Sample $s \sim \mathcal{D}$, and calculate its energy with $\mathcal{O}_S$: $E_{prev} \leftarrow E^{\text{MMFF}}_{s}$
14:    else
15:        $s \leftarrow s'$
16:        $E_{prev} \leftarrow E_{cur}$
17:        $t \leftarrow t + 1$
18:    end if
19: until $|\mathcal{D}| - |\mathcal{D}_0| \geq K$
However, this scheme requires estimating the energy for all conformations in the generated NNP-optimization trajectories, which makes it computationally intractable. To cope with that, we employ a computationally inexpensive surrogate oracle $\mathcal{O}_S$ to determine which conformations to evaluate with the $\mathcal{O}_G$ and add to the training set. Although the energy estimation provided by the $\mathcal{O}_S$ is less accurate, this simplification allows us to efficiently collect the additional training data and successfully train the NNPs. We chose RDKit's (Landrum et al., 2022) MMFF (Halgren, 1996) as the $\mathcal{O}_S$ due to its efficiency. In our experiments, it takes 120 microseconds on average on a single CPU core to evaluate a single conformation with MMFF, which is about $5 \times 10^6$ times faster than the average DFT calculation time.

Algorithm 1 describes the GOLF training procedure. We start with an NNP $f(\cdot;\boldsymbol{\theta})$ pretrained on $\mathcal{D}_0$. On every iteration, we calculate a new optimization trajectory using forces from the current NNP and choose a conformation from this trajectory to extend the training set. Then, we update the NNP on batches sampled from the extended training set $\mathcal{D}$. This approach helps the NNP learn the conformational space by gradually descending towards minimal conformations.
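The training procedure can be made concrete with a minimal Python sketch of Algorithm 1 (our reading of the loop, not the authors' implementation). Here `nnp_forces`, `mmff_energy`, `dft_energy_forces`, and `train_step` are hypothetical stand-ins for $\boldsymbol{F}(\cdot;\boldsymbol{\theta})$, the surrogate oracle $\mathcal{O}_S$, the genuine oracle $\mathcal{O}_G$, and one gradient update on $\mathcal{D}$, and a plain scaled force step replaces $\mathbf{Opt}$.

```python
# Minimal sketch of the GOLF data-collecting loop (Algorithm 1), with
# hypothetical stand-ins for the NNP, the surrogate oracle O_S (MMFF), and
# the genuine oracle O_G (DFT). Training-set entries are (s, E, F) tuples.
import random

def golf(D0, nnp_forces, mmff_energy, dft_energy_forces, train_step,
         alpha=0.01, K=10, T=100, U=50):
    D = list(D0)                        # D <- Copy(D0)
    s = random.choice(D)[0]             # sample a conformation from D
    e_prev = mmff_energy(s)             # surrogate energy of s
    t = 0
    while len(D) - len(D0) < K:         # until K extra oracle interactions
        # One optimization step driven by NNP-predicted forces.
        s_next = [x + alpha * f for x, f in zip(s, nnp_forces(s))]
        e_cur = mmff_energy(s_next)
        if e_cur > e_prev or t >= T:    # surrogate energy went up, or timelimit
            e_dft, f_dft = dft_energy_forces(s)  # query the genuine oracle
            D.append((s, e_dft, f_dft))          # enrich the training set
            for _ in range(U):                   # update-to-data ratio
                train_step(D)
            t = 0
            s = random.choice(D)[0]
            e_prev = mmff_energy(s)
        else:
            s, e_prev, t = s_next, e_cur, t + 1
    return D
```

On a toy quadratic energy the loop behaves as expected: the surrogate energy decreases along the trajectory, so $\mathcal{O}_G$ is queried only when the timelimit $T$ is reached, and the loop stops after exactly $K$ additional oracle interactions.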
6 Experiments

We evaluate NNPs and baseline models on a subset of nablaDFT, $\mathcal{D}_{\text{test}}$, $|\mathcal{D}_{\text{test}}| = 19477$, containing conformations for 10273 molecules. The evaluation dataset $\mathcal{D}_{\text{test}}$ shares no molecules with either $\mathcal{D}_0$ or the additional training data. We use PaiNN (Schütt et al., 2021) for all NNP experiments. First, we train a baseline NNP $f_{\text{baseline}}(\cdot;\boldsymbol{\theta})$ on $\mathcal{D}_0$ for $5 \times 10^5$ training steps. To train $f_{\text{traj-}\cdot}(\cdot;\boldsymbol{\theta})$, we first initialize the weights of the network with $f_{\text{baseline}}(\cdot;\boldsymbol{\theta})$ and then train it on the corresponding dataset ($\mathcal{D}_{\text{traj-10k}}$, $\mathcal{D}_{\text{traj-100k}}$, $\mathcal{D}_{\text{traj-500k}}$) concatenated with $\mathcal{D}_0$ for an additional $5 \times 10^5$ training steps. The only exception is $f_{\text{traj-500k}}(\cdot;\boldsymbol{\theta})$, which is trained for $10^6$ training steps due to the larger dataset.
To train the $f_{\text{GOLF-}\cdot}(\cdot;\boldsymbol{\theta})$ models, we select the total number of additional $\mathcal{O}_G$ interactions $K$ and adjust the update-to-data ratio $U$ to keep the total number of updates equal to $5 \times 10^5$. For example, if $K$ is set to $10^4$, we perform $U = 50$ updates for each additional conformation collected (see line 11 of Algorithm 1). Algorithm 1 describes a non-parallel version of GOLF with a single $\mathcal{O}_G$. To parallelize the $\mathcal{O}_G$ calculations (line 9), we use a batched version of Algorithm 1, where a batch of NNP-optimization trajectories is generated and then processed by a large number of parallel DFT oracles.
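The budget rule above is simple arithmetic; a quick check over a few settings (illustrative values, only $K = 10^4$, $U = 50$ appears in the text):

```python
# Keep the total number of NNP updates fixed at 5e5 by trading off the number
# of collected conformations K against the update-to-data ratio U.
total_updates = 5 * 10 ** 5
for K in (10 ** 3, 10 ** 4, 10 ** 5):
    U = total_updates // K
    print(f"K={K}: U={U}")  # K=10**4 gives U=50, as in the text
```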
To evaluate NNPs, we use them to generate optimization trajectories $s_0, \ldots, s_T$, $T = 100$, for all $s \in \mathcal{D}_{\text{test}}$. We then calculate $E^{\text{DFT}}$ at steps $t = \{1, 2, 3, 5, 8, 13, 21, 30, 50, 75, 100\}$, as calculating it at every step is computationally expensive. Having calculated $E^{\text{DFT}}_{s_T}$ for all $s \in \mathcal{D}_{\text{test}}$, we can compute $\text{pct}(s_T)$ and $E^{\text{res}}(s_t)$ for all $s \in \mathcal{D}_{\text{test}}$, along with $\overline{\text{pct}}_t$, $\overline{E^{\text{res}}}_t$, and $\text{pct}_{\text{success}}$. In all our experiments, we use L-BFGS as $\mathbf{Opt}$, except for Appendix B, where we test the effect of different external optimizers on the model's performance. We run the optimization with an NNP for a fixed number of steps $T = 100$, as we observe that this number is sufficient for the optimization to converge (see Figure 3). We report the optimization quality of RDKit's MMFF as a non-neural baseline. If $E^{\text{DFT}}_{s_T} > E^{\text{DFT}}_{s_0}$, we say that the optimization has diverged and do not take such conformations into account when computing $\overline{\text{pct}}_t$, $\overline{E^{\text{res}}}_t$, and $\text{pct}_{\text{success}}$. We denote the percentage of diverged optimizations as $\text{pct}_{\text{div}}$. We also report the well-known metrics COV and MAT (Xu et al., 2021). More information on these metrics can be found in Appendix F. We present all metrics in Table 2.
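A sketch of how these metrics can be computed from per-trajectory DFT energies. The definitions are assumptions here, since this section does not restate them: $\text{pct}(s_T)$ is taken as the recovered fraction of the ground truth energy gap, $E^{\text{res}}(s_T)$ as the energy left above the $\mathcal{O}_G$ optimum, and $\text{pct}_{\text{success}}$ as the share of non-diverged runs with residual energy below chemical precision ($\approx 1$ kcal/mol).

```python
# Hedged sketch of the evaluation metrics; energies in kcal/mol.
# Each run is (E_DFT(s_0), E_DFT(s_T) after NNP-optimization, E_DFT of the
# optimum reached by the genuine oracle O_G).
def eval_metrics(runs, precision=1.0):
    pcts, residuals, n_div = [], [], 0
    for e0, e_nnp, e_opt in runs:
        if e_nnp > e0:                  # diverged: energy went up
            n_div += 1
            continue
        # Can exceed 100% when the NNP finds a better minimum than O_G.
        pcts.append(100.0 * (e0 - e_nnp) / (e0 - e_opt))
        residuals.append(e_nnp - e_opt)
    n = len(pcts)
    return {
        "pct_mean": sum(pcts) / n,
        "E_res_mean": sum(residuals) / n,
        "pct_success": 100.0 * sum(r < precision for r in residuals) / n,
        "pct_div": 100.0 * n_div / len(runs),
    }

m = eval_metrics([(10.0, 2.0, 0.0), (10.0, 0.5, 0.0),
                  (10.0, 12.0, 0.0), (8.0, 0.2, 0.0)])
print(m)
```

The third toy run is excluded as diverged, so one run out of four contributes to $\text{pct}_{\text{div}}$ and the remaining three to the averages.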
6.1 Generative baselines

Table 2: Optimization and recall-based metrics. We set $\delta = 0.5$ Å when computing the COV. We use bold for the best value in each column.

| Methods | $\overline{\text{pct}}_T$ (%) | $\text{pct}_{\text{div}}$ (%) | $\overline{E^{\text{res}}}_T$ (kcal/mol) | $\text{pct}_{\text{success}}$ (%) | COV (%) | MAT (Å) |
| --- | --- | --- | --- | --- | --- | --- |
| RDKit | 85.5 ± 8.8 | **0.6** | 5.5 | 4.1 | 54.9 | 0.61 |
| TD | 23.8 ± 19.8 | 61.4 | 33.8 | 0.0 | 10.0 | 1.42 |
| ConfOpt | 39.1 ± 22.8 | 71.1 | 27.9 | 0.2 | 25.0 | 1.13 |
| Uni-Mol+ | 54.6 ± 20.4 | 8.1 | 18.6 | 0.2 | 56.3 | 0.53 |
| $f_{\text{baseline}}$ | 77.9 ± 21.3 | 7.5 | 8.6 | 8.2 | 58.8 | 0.55 |
| $f_{\text{rdkit}}$ | 93.0 ± 11.6 | 4.4 | 2.8 | 35.4 | 63.8 | 0.51 |
| $f_{\text{traj-10k}}$ | 95.1 ± 7.6 | 4.5 | 2.0 | 37.0 | 63.3 | 0.52 |
| $f_{\text{traj-100k}}$ | 96.2 ± 8.6 | 2.8 | 1.5 | 52.7 | 65.6 | 0.49 |
| $f_{\text{traj-500k}}$ | **98.8 ± 7.6** | 2.0 | **0.5** | 73.4 | 67.0 | 0.48 |
| $f_{\text{GOLF-1k}}$ | 97.3 ± 5.1 | 3.9 | 1.1 | 62.9 | 71.0 | **0.42** |
| $f_{\text{GOLF-10k}}$ | **98.8 ± 5.0** | 3.0 | **0.5** | **77.3** | **71.2** | **0.42** |
To compare our approach with other NN-based methods, we adapt ConfOpt (Guan et al., 2021), Torsional Diffusion (TD) (Jing et al., 2022), and Uni-Mol+ (Lu et al., 2023) for the task of conformational optimization. The training dataset is composed of a single conformation for each of the 4000 molecules in $\mathcal{D}_0$. We first optimize the geometry of each conformation with $\mathcal{O}_G$ and then train the generative models to map initial conformations to the final conformations from the corresponding optimization trajectories. Table 2 reports the best metrics for each model type. Refer to Appendix G for an in-depth discussion of the results. The training details and metrics for all variants of the models are also reported in Appendix G.
6.2 NNPs trained on nablaDFT dataset

(a) Distribution of $\text{pct}(s_T)$ for NNPs on nablaDFT. (b) Distribution of $\text{pct}(s_T)$ for NNPs on SPICE.

Figure 2: Violin plots of the percentage of optimized energy $\text{pct}(s_T)$ calculated for various NNPs on $\mathcal{D}_{\text{test}}$ and $\mathcal{D}^{\text{SPICE}}_{\text{test}}$. Blue marks denote the mean percentage of optimized energy $\overline{\text{pct}}_T$, the 10th, and the 90th quantile.
To illustrate the performance of various NNPs trained on molecules from the nablaDFT dataset (Khrabrov et al., 2022), we plot the distribution of $\text{pct}(s_T)$ using a violin plot (see Figure 2(a)). To highlight the data efficiency of the proposed GOLF framework, we report $f_{\text{GOLF-1k}}(\cdot;\boldsymbol{\theta})$ as well as our primary model $f_{\text{GOLF-10k}}(\cdot;\boldsymbol{\theta})$. To demonstrate the significance of our proposed data-collecting scheme, we compare the NNPs trained with GOLF against an NNP trained on $\mathcal{D}_{\text{rdkit}} = \{s^{\text{MMFF}}_{\mathbf{Opt}}\}_{s \in \mathcal{D}_0}$, which is composed of the optimal conformations obtained by the $\mathcal{O}_S$.
As shown in Figure 2(a) and in Table 2, the NNPs benefit from additional training data and outperform the baseline in terms of all optimization metrics. The $\overline{\text{pct}}_T$ and $\text{pct}_{\text{success}}$ gradually increase with the amount of additional training data for both the $f_{\text{traj-}\cdot}(\cdot;\boldsymbol{\theta})$ and $f_{\text{GOLF-}\cdot}(\cdot;\boldsymbol{\theta})$ models. However, the NNPs trained with GOLF require significantly less additional training data: $f_{\text{GOLF-1k}}(\cdot;\boldsymbol{\theta})$ outperforms $f_{\text{traj-100k}}(\cdot;\boldsymbol{\theta})$ while using 100 times less data; our main model, $f_{\text{GOLF-10k}}(\cdot;\boldsymbol{\theta})$, outperforms $f_{\text{traj-500k}}(\cdot;\boldsymbol{\theta})$ in terms of $\text{pct}_{\text{success}}$ while using 50 times less data. NNPs trained with GOLF also outperform $f_{\text{rdkit}}(\cdot;\boldsymbol{\theta})$, which shows the importance of enriching the dataset with conformations selected by the proposed Active Learning-inspired data-collecting scheme.
6.3 NNPs trained on SPICE dataset

To demonstrate the generalization ability of our approach, we perform a similar set of experiments on another diverse dataset of small molecules called SPICE (Eastman et al., 2023). Namely, we select a subset $\mathcal{D}^{\text{SPICE}}_0$ (see Appendix E for a detailed description) from the SPICE dataset to be roughly the same size as $\mathcal{D}_0$ and train a baseline model $f^{\text{SPICE}}_{\text{baseline}}(\cdot;\boldsymbol{\theta})$. We then use the same DFT-based oracle $\mathcal{O}_G$ to get ground truth optimization trajectories and obtain the enriched training datasets $\mathcal{D}^{\text{SPICE}}_{\text{traj-10k}}$, $\mathcal{D}^{\text{SPICE}}_{\text{traj-100k}}$, $\mathcal{D}^{\text{SPICE}}_{\text{traj-220k}}$. Finally, we train the $f^{\text{SPICE}}_{\text{traj-}\cdot}(\cdot;\boldsymbol{\theta})$ models and the $f^{\text{SPICE}}_{\text{GOLF-10k}}(\cdot;\boldsymbol{\theta})$ model. All models are evaluated on the $\mathcal{D}^{\text{SPICE}}_{\text{test}}$ dataset ($|\mathcal{D}^{\text{SPICE}}_{\text{test}}| = 17724$), which shares no molecules with $\mathcal{D}^{\text{SPICE}}_0$. The results are presented in Figure 2(b) and Table 3. It should be noted that the hyperparameters used in these experiments were not specifically optimized for the SPICE dataset, suggesting potential for further improvement of the metrics with tailored adjustments.
Table 3: Optimization metrics for NNPs trained on $\mathcal{D}^{\text{SPICE}}_0$

| NNP | $f_{\text{baseline}}$ | $f_{\text{traj-10k}}$ | $f_{\text{traj-100k}}$ | $f_{\text{traj-220k}}$ | $f_{\text{GOLF-10k}}$ |
| --- | --- | --- | --- | --- | --- |
| $\overline{\text{pct}}_T$ (%) | 90.4 ± 12.0 | 93.4 ± 10.0 | 94.3 ± 9.4 | 93.9 ± 9.6 | 94.2 ± 8.9 |
| $\text{pct}_{\text{div}}$ (%) | 4.7 | 6.8 | 2.4 | 2.4 | 3.2 |
| $\overline{E^{\text{res}}}_T$ (kcal/mol) | 3.6 | 2.4 | 2.1 | 2.3 | 2.1 |
| $\text{pct}_{\text{success}}$ (%) | 19.7 | 37.4 | 44.2 | 41.6 | 40.9 |
6.4 Large Molecules

Finally, we test the ability of our models trained on $\mathcal{D}_0$ to generalize to unseen molecules of bigger size. To do that, we collect a dataset $\mathcal{D}_{\text{LM}}$ (LM for Large Molecules) of 2000 molecules from the nablaDFT dataset. The sizes of molecules in $\mathcal{D}_{\text{LM}}$ range from 36 to 57 atoms, with an average size of 41.8 atoms.
Table 4: Optimization metrics for NNPs trained on $\mathcal{D}_0$

| NNP | $f_{\text{baseline}}$ | $f_{\text{traj-500k}}$ | $f_{\text{GOLF-10k}}$ |
| --- | --- | --- | --- |
| $\overline{\text{pct}}_T$ (%) | 77.7 ± 19.7 | 97.4 ± 6.7 | 97.7 ± 4.1 |
| $\text{pct}_{\text{div}}$ (%) | 5.1 | 1.9 | 2.7 |
| $\overline{E^{\text{res}}}_T$ (kcal/mol) | 9.6 | 1.1 | 1.0 |
| $\text{pct}_{\text{success}}$ (%) | 4.8 | 58.2 | 61.4 |
As can be seen in Table 4, $f_{\text{GOLF-10k}}(\cdot;\boldsymbol{\theta})$ matches the quality of ground truth optimization ($\overline{E^{\text{res}}}_T < 1$), the only downside being a lower $\text{pct}_{\text{success}}$ compared to the results in Table 2. We hypothesize that this percentage can be increased by adding a small number of larger molecules to $\mathcal{D}_0$, but we leave this for future work.
7 Conclusion

In this work, we have presented GOLF, a new framework for molecular conformation optimization learning. We show that additional information from the physical simulator can help NNPs overcome the distribution shift and increase their quality on energy prediction and optimization tasks. We thoroughly compare our approach with several baselines, including recent conformation generation models and an inexpensive physical simulator. Using GOLF, we achieve state-of-the-art performance on the optimization task while reducing the number of additional interactions with the physical simulator by a factor of 50 compared to the naive approach. The resulting model matches the DFT methods' optimization quality on a diverse set of drug-like molecules. In addition, we find that our models generalize to bigger molecules unseen during training. We consider the following two directions for future work. First, we plan to adopt the proposed approach for molecular dynamics simulations. Second, we plan to account for molecular environments such as a solvent or a protein binding pocket.
Acknowledgments

The work was supported by a grant for research centers in the field of artificial intelligence, provided by the Analytical Center in accordance with the subsidy agreement (agreement identifier 000000D730321P5Q0002) and the agreement with the Ivannikov Institute for System Programming dated November 2, 2021, No. 70-2021-00142.
References

Axelrod & Gomez-Bombarelli (2022) Simon Axelrod and Rafael Gomez-Bombarelli. Geom, energy-annotated molecular conformations for property prediction and molecular generation. Scientific Data, 9(1):185, 2022.

Barnard & Downs (1992) J. M. Barnard and G. M. Downs. Clustering of chemical structures on the basis of two-dimensional similarity measures. Journal of Chemical Information and Computer Sciences, 32(6):644–649, 1992. doi: 10.1021/ci00010a010. URL https://doi.org/10.1021/ci00010a010.

Batzner et al. (2022) Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E Smidt, and Boris Kozinsky. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nature Communications, 13(1):1–11, 2022.

Chan et al. (2019) Lucian Chan, Geoffrey R Hutchison, and Garrett M Morris. Bayesian optimization for conformer generation. J. Cheminform., 11(1):32, May 2019.

Chmiela et al. (2017) Stefan Chmiela, Alexandre Tkatchenko, Huziel E Sauceda, Igor Poltavsky, Kristof T Schütt, and Klaus-Robert Müller. Machine learning of accurate energy-conserving molecular force fields. Science Advances, 3(5):e1603015, 2017.

Chmiela et al. (2018) Stefan Chmiela, Huziel E. Sauceda, Klaus-Robert Müller, and Alexandre Tkatchenko. Towards exact molecular dynamics simulations with machine-learned force fields. Nature Communications, 9(1):3887, 2018. doi: 10.1038/s41467-018-06169-2.

Chmiela et al. (2020) Stefan Chmiela, Huziel E. Sauceda, Alexandre Tkatchenko, and Klaus-Robert Müller. Accurate molecular dynamics enabled by efficient physically-constrained machine learning approaches, pp. 129–154. Springer International Publishing, 2020. doi: 10.1007/978-3-030-40245-7_7.

Chmiela et al. (2023) Stefan Chmiela, Valentin Vassilev-Galindo, Oliver T. Unke, Adil Kabylda, Huziel E. Sauceda, Alexandre Tkatchenko, and Klaus-Robert Müller. Accurate global machine learning force fields for molecules with hundreds of atoms. Science Advances, 9(2):eadf0873, 2023. doi: 10.1126/sciadv.adf0873.

Eastman et al. (2023) Peter Eastman, Pavan Kumar Behara, David L Dotson, Raimondas Galvelis, John E Herr, Josh T Horton, Yuezhi Mao, John D Chodera, Benjamin P Pritchard, Yuanqing Wang, et al. Spice, a dataset of drug-like molecules and peptides for training machine learning potentials. Scientific Data, 10(1):11, 2023.

Fan et al. (2023) Zhiguang Fan, Yuedong Yang, Mingyuan Xu, and Hongming Chen. Ec-conf: A ultra-fast diffusion model for molecular conformation generation with equivariant consistency. arXiv preprint arXiv:2308.00237, 2023.

Ganea et al. (2021) Octavian Ganea, Lagnajit Pattanaik, Connor Coley, Regina Barzilay, Klavs Jensen, William Green, and Tommi Jaakkola. Geomol: Torsional geometric generation of molecular 3d conformer ensembles. Advances in Neural Information Processing Systems, 34:13757–13769, 2021.

Gasteiger et al. (2020) Johannes Gasteiger, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. arXiv preprint arXiv:2003.03123, 2020.

Gasteiger et al. (2021) Johannes Gasteiger, Florian Becker, and Stephan Günnemann. Gemnet: Universal directional graph neural networks for molecules. Advances in Neural Information Processing Systems, 34:6790–6802, 2021.

Gilmer et al. (2017) Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1263–1272. PMLR, 2017.

Guan et al. (2021) Jiaqi Guan, Wesley Wei Qian, Wei-Ying Ma, Jianzhu Ma, Jian Peng, et al. Energy-inspired molecular conformation optimization. In International Conference on Learning Representations, 2021.

Halgren (1996) Thomas A. Halgren. Merck molecular force field. I. Basis, form, scope, parameterization, and performance of MMFF94. Journal of Computational Chemistry, 17(5-6):490–519, 1996. doi: https://doi.org/10.1002/(SICI)1096-987X(199604)17:5/6<490::AID-JCC1>3.0.CO;2-P.

Helgaker et al. (2004) Trygve Helgaker, Torgeir A Ruden, Poul Jørgensen, Jeppe Olsen, and Wim Klopper. A priori calculation of molecular properties to chemical accuracy. Journal of Physical Organic Chemistry, 17(11):913–933, 2004.

Huang et al. (2023) Lei Huang, Hengtong Zhang, Tingyang Xu, and Ka-Chun Wong. Mdm: Molecular diffusion model for 3d molecule generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 5105–5112, 2023.

Isert et al. (2022) Clemens Isert, Kenneth Atz, José Jiménez-Luna, and Gisbert Schneider. Qmugs, quantum mechanical properties of drug-like molecules. Scientific Data, 9(1):273, 2022.

Jain et al. (2013) Anubhav Jain, Shyue Ping Ong, Geoffroy Hautier, Wei Chen, William Davidson Richards, Stephen Dacek, Shreyas Cholia, Dan Gunter, David Skinner, Gerbrand Ceder, et al. Commentary: The materials project: A materials genome approach to accelerating materials innovation. APL Materials, 1(1):011002, 2013.

Jing et al. (2022) Bowen Jing, Gabriele Corso, Jeffrey Chang, Regina Barzilay, and Tommi Jaakkola. Torsional diffusion for molecular conformer generation. Advances in Neural Information Processing Systems, 35:24240–24253, 2022.

Khrabrov et al. (2022) Kuzma Khrabrov, Ilya Shenbin, Alexander Ryabov, Artem Tsypin, Alexander Telepov, Anton Alekseev, Alexander Grishin, Pavel Strashnov, Petr Zhilyaev, Sergey Nikolenko, and Artur Kadurin. nablaDFT: Large-scale conformational energy and hamiltonian prediction benchmark and dataset. Phys. Chem. Chem. Phys., 24:25853–25863, 2022. doi: 10.1039/D2CP03966D. URL http://dx.doi.org/10.1039/D2CP03966D.

Kim et al. (2023) Sunghwan Kim, Jie Chen, Tiejun Cheng, Asta Gindulyte, Jia He, Siqian He, Qingliang Li, Benjamin A Shoemaker, Paul A Thiessen, Bo Yu, et al. Pubchem 2023 update. Nucleic Acids Research, 51(D1):D1373–D1380, 2023.

Kingma & Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Kohn & Sham (1965) Walter Kohn and Lu Jeu Sham. Self-consistent equations including exchange and correlation effects. Physical Review, 140(4A):A1133, 1965.

Kulichenko et al. (2023) Maksim Kulichenko, Kipton Barros, Nicholas Lubbers, Ying Wai Li, Richard Messerly, Sergei Tretiak, Justin S Smith, and Benjamin Nebgen. Uncertainty-driven dynamics for active learning of interatomic potentials. Nature Computational Science, 3(3):230–239, March 2023.

Landrum et al. (2022) Greg Landrum, Paolo Tosco, Brian Kelley, Ric, sriniker, gedeck, Riccardo Vianello, NadineSchneider, Eisuke Kawashima, Andrew Dalke, Dan N, David Cosgrove, Brian Cole, Matt Swain, Samo Turk, AlexanderSavelyev, Gareth Jones, Alain Vaucher, Maciej Wójcikowski, Ichiru Take, Daniel Probst, Kazuya Ujihara, Vincent F. Scalfani, guillaume godin, Axel Pahl, Francois Berenger, JLVarjo, strets123, JP, and DoliathGavid. rdkit/rdkit: 2022_03_1 (q1 2022) release, March 2022. URL https://doi.org/10.5281/zenodo.6388425.

Liu & Nocedal (1989) Dong C Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503–528, 1989.

Lu et al. (2023) Shuqi Lu, Zhifeng Gao, Di He, Linfeng Zhang, and Guolin Ke. Highly accurate quantum chemical property prediction with Uni-Mol+. arXiv preprint arXiv:2303.16982, 2023.

Luo et al. (2021) Shitong Luo, Chence Shi, Minkai Xu, and Jian Tang. Predicting molecular conformation via dynamic graph score matching. Advances in Neural Information Processing Systems, 34:19784–19795, 2021.

Matta & Boyd (2007) Chérif F Matta and Russell J Boyd. The Quantum Theory of Atoms in Molecules: From Solid State to DNA and Drug Design. John Wiley & Sons, April 2007.

Musaelian et al. (2022) Albert Musaelian, Simon Batzner, Anders Johansson, Lixin Sun, Cameron J Owen, Mordechai Kornbluth, and Boris Kozinsky. Learning local equivariant representations for large-scale atomistic dynamics. arXiv preprint arXiv:2204.05249, 2022.

Nakata & Maeda (2023) Maho Nakata and Toshiyuki Maeda. Pubchemqc b3lyp/6-31g*//pm6 data set: The electronic structures of 86 million molecules using b3lyp/6-31g* calculations. Journal of Chemical Information and Modeling, 63(18):5734–5754, 2023. doi: 10.1021/acs.jcim.3c00899. URL https://doi.org/10.1021/acs.jcim.3c00899. PMID: 37677147.

Oglic et al. (2017) Dino Oglic, Roman Garnett, and Thomas Gaertner. Active search in intensionally specified structured spaces. AAAI, 31(1), February 2017.

Ramakrishnan et al. (2014) Raghunathan Ramakrishnan, Pavlo O Dral, Matthias Rupp, and O Anatole Von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. Scientific Data, 1(1):1–7, 2014.

Rego & Koes (2015) Nicholas Rego and David Koes. 3dmol.js: molecular visualization with WebGL. Bioinformatics, 31(8):1322–1324, 2015.

Robbins & Monro (1951) Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pp. 400–407, 1951.

Ruddigkeit et al. (2012) Lars Ruddigkeit, Ruud van Deursen, Lorenz C. Blum, and Jean-Louis Reymond. Enumeration of 166 billion organic small molecules in the chemical universe database GDB-17. Journal of Chemical Information and Modeling, 52(11):2864–2875, 2012. doi: 10.1021/ci300415d. URL https://doi.org/10.1021/ci300415d. PMID: 23088335.

Schütt et al. (2017) Kristof Schütt, Pieter-Jan Kindermans, Huziel Enoc Sauceda Felix, Stefan Chmiela, Alexandre Tkatchenko, and Klaus-Robert Müller. Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. Advances in Neural Information Processing Systems, 30:992–1002, 2017.

Schütt et al. (2021) Kristof Schütt, Oliver Unke, and Michael Gastegger. Equivariant message passing for the prediction of tensorial properties and molecular spectra. In International Conference on Machine Learning, pp. 9377–9388. PMLR, 2021.

Schütt et al. (2023) Kristof T. Schütt, Stefaan S. P. Hessmann, Niklas W. A. Gebauer, Jonas Lederer, and Michael Gastegger. SchNetPack 2.0: A neural network toolbox for atomistic machine learning. The Journal of Chemical Physics, 158(14):144801, 04 2023. ISSN 0021-9606. doi: 10.1063/5.0138367. URL https://doi.org/10.1063/5.0138367.

Shi et al. (2021) Chence Shi, Shitong Luo, Minkai Xu, and Jian Tang. Learning gradient fields for molecular conformation generation. In International Conference on Machine Learning, pp. 9558–9568. PMLR, 2021.

Shuaibi et al. (2021) Muhammed Shuaibi, Adeesh Kolluru, Abhishek Das, Aditya Grover, Anuroop Sriram, Zachary W. Ulissi, and C. Lawrence Zitnick. Rotation invariant graph neural networks using spin convolutions. ArXiv, abs/2106.09575, 2021.

Simm & Hernández-Lobato (2019) Gregor N. C. Simm and José Miguel Hernández-Lobato. A generative model for molecular distance geometry. In International Conference on Machine Learning, 2019. URL https://api.semanticscholar.org/CorpusID:202749839.

Smith et al. (2020) Daniel GA Smith, Lori A Burns, Andrew C Simmonett, Robert M Parrish, Matthew C Schieber, Raimondas Galvelis, Peter Kraus, Holger Kruse, Roberto Di Remigio, Asem Alenaizan, et al. Psi4 1.4: Open-source software for high-throughput quantum chemistry. The Journal of Chemical Physics, 152(18), 2020.

Sohl-Dickstein et al. (2015) Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 2256–2265, Lille, France, 2015. PMLR.

Sousa et al. (2007) Sérgio Filipe Sousa, Pedro Alexandrino Fernandes, and Maria Joao Ramos. General performance of density functionals. The Journal of Physical Chemistry A, 111(42):10439–10452, 2007.

Swanson et al. (2023) Kirk Swanson, Jake Lawrence Williams, and Eric M Jonas. Von Mises mixture distributions for molecular conformation generation. In International Conference on Machine Learning, pp. 33319–33342. PMLR, 2023.

Tielker et al. (2021) Nicolas Tielker, Lukas Eberlein, Gerhard Hessler, K Friedemann Schmidt, Stefan Güssregen, and Stefan M Kast. Quantum-mechanical property prediction of solvated drug molecules: what have we learned from a decade of SAMPL blind prediction challenges? J. Comput. Aided Mol. Des., 35(4):453–472, April 2021.

Tran* et al. (2022) Richard Tran*, Janice Lan*, Muhammed Shuaibi*, Brandon Wood*, Siddharth Goyal*, Abhishek Das, Javier Heras-Domingo, Adeesh Kolluru, Ammar Rizvi, Nima Shoghi, Anuroop Sriram, Zachary Ulissi, and C. Lawrence Zitnick. The open catalyst 2022 (OC22) dataset and challenges for oxide electrocatalysis. arXiv preprint arXiv:2206.08917, 2022.
+ Tran* et al. (2022) Richard Tran*, Janice Lan*, Muhammed Shuaibi*, Brandon Wood*, Siddharth Goyal*, Abhishek Das, Javier Heras-Domingo, Adeesh Kolluru, Ammar Rizvi, Nima Shoghi, Anuroop Sriram, Zachary Ulissi, and C. Lawrence Zitnick.The open catalyst 2022 (oc22) dataset and challenges for oxide electrocatalysis.arXiv preprint arXiv:2206.08917, 2022.
2210
+ Tsai & Jordan (1993) CJ Tsai and KD Jordan.Use of an eigenmode method to locate the stationary points on the potential energy surfaces of selected argon and water clusters.The Journal of Physical Chemistry, 97(43):11227–11237, 1993.
2211
+ Unke et al. (2021) Oliver T. Unke, Stefan Chmiela, Huziel E. Sauceda, Michael Gastegger, Igor Poltavsky, Kristof T. Schütt, Alexandre Tkatchenko, and Klaus-Robert Müller.Machine learning force fields.Chemical Reviews, 121(16):10142–10186, 2021.doi: 10.1021/acs.chemrev.0c01111.URL https://doi.org/10.1021/acs.chemrev.0c01111.PMID: 33705118.
2212
+ Wang et al. (2022) Lihao Wang, Yi Zhou, Yiqun Wang, Xiaoqing Zheng, Xuanjing Huang, and Hao Zhou.Regularized molecular conformation fields.Advances in Neural Information Processing Systems, 35:18929–18941, 2022.
2213
+ Wang et al. (2020) Shuzhe Wang, Jagna Witek, Gregory A. Landrum, and Sereina Riniker.Improving conformer generation for small rings and macrocycles based on distance geometry and experimental torsional-angle preferences.Journal of Chemical Information and Modeling, 60(4):2044–2058, 2020.doi: 10.1021/acs.jcim.0c00025.URL https://doi.org/10.1021/acs.jcim.0c00025.PMID: 32155061.
2214
+ Wu et al. (2022) Lemeng Wu, Chengyue Gong, Xingchao Liu, Mao Ye, and Qiang Liu.Diffusion-based molecule generation with informative prior bridges.Advances in Neural Information Processing Systems, 35:36533–36545, 2022.
2215
+ Xu et al. (2021) Minkai Xu, Shitong Luo, Yoshua Bengio, Jian Peng, and Jian Tang.Learning neural generative dynamics for molecular conformation generation.In International Conference on Learning Representations, 2021.URL https://openreview.net/forum?id=pAbm1qfheGk.
2216
+ Xu et al. (2022) Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon, and Jian Tang.Geodiff: A geometric diffusion model for molecular conformation generation.In International Conference on Learning Representations, 2022.URL https://openreview.net/forum?id=PzcvxEMzvQC.
2217
+ Ying et al. (2021) Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu.Do transformers really perform badly for graph representation?Advances in Neural Information Processing Systems, 34:28877–28888, 2021.
2218
+ Zhang et al. (2018) Linfeng Zhang, Jiequn Han, Han Wang, Roberto Car, and EJPRL Weinan.Deep potential molecular dynamics: a scalable model with the accuracy of quantum mechanics.Physical review letters, 120(14):143001, 2018.
2219
+ Zhu et al. (2022) Jinhua Zhu, Yingce Xia, Chang Liu, Lijun Wu, Shufang Xie, Yusong Wang, Tong Wang, Tao Qin, Wengang Zhou, Houqiang Li, Haiguang Liu, and Tie-Yan Liu.Direct molecular conformation generation.Transactions on Machine Learning Research, 2022.ISSN 2835-8856.URL https://openreview.net/forum?id=lCPOHiztuw.
2220
Appendix A: Experimental Setup

Our implementation of GOLF is based on SchNetPack 2.0 (Schütt et al., 2023): we use its implementation of PaiNN and its data-processing pipeline. All experiments were carried out on a cluster with 2 Nvidia Tesla V100 GPUs and 960 Intel(R) Xeon(R) Gold 2.60GHz CPU cores, and the total computational cost is approximately 80 CPU-years and 1900 GPU-hours.

To train $f_{\text{GOLF-}*}(\cdot;\theta)$, we use a batched version of Algorithm 1 that simultaneously generates several NNP-optimization trajectories with the same NNP and calculates energies and forces using "number of parallel $\mathcal{O}_G$" DFT workers running in parallel. We use a smaller value of "number of parallel $\mathcal{O}_G$" $= 48$ for $f_{\text{GOLF-1k}}(\cdot;\theta)$ to reduce the number of correlated samples in the replay buffer. To prevent biasing the model towards newly collected conformations, we sample 10% of each mini-batch from the initial training dataset $\mathcal{D}_0$ during training.

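The mini-batch composition described above can be sketched as follows. This is a minimal illustration in plain Python, not the actual training code; the dataset representation and batch size are placeholder assumptions:

```python
import random

def sample_mixed_batch(initial_dataset, replay_buffer, batch_size, init_fraction=0.1, rng=None):
    """Sample a mini-batch that mixes freshly collected conformations from the
    replay buffer with a fixed fraction drawn from the initial dataset D_0."""
    rng = rng or random.Random(0)
    n_init = max(1, int(batch_size * init_fraction))  # 10% of the batch from D_0
    n_new = batch_size - n_init                       # the rest from the replay buffer
    batch = rng.sample(initial_dataset, n_init) + rng.sample(replay_buffer, n_new)
    rng.shuffle(batch)
    return batch

# Toy usage: items are tagged ids standing in for conformations.
d0 = [("d0", i) for i in range(100)]
buf = [("buf", i) for i in range(1000)]
batch = sample_mixed_batch(d0, buf, batch_size=64)
```

Mixing in samples from $\mathcal{D}_0$ acts as a simple rehearsal mechanism against forgetting the original data distribution.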
We list all the hyperparameters in Table 5. When evaluating the NNPs on new molecules, we do not employ the $\mathcal{O}_S$ to terminate the optimization trajectory and instead use a fixed timelimit $T_{\text{eval}} = 100$.

Table 5: Hyperparameter values for GOLF-10k.

NNP hyperparameters
  Backbone: PaiNN
  Number of interaction layers: 3
  Cutoff radius: 5.0 Å
  Number of radial basis functions: 50
  Hidden size (n_atom_basis): 128
Training hyperparameters
  Number of parallel $\mathcal{O}_G$: 120
  Batch size: 64
  Optimizer: Adam
  Learning rate scheduler: CosineAnnealing
  Initial learning rate: 1 × 10^-4
  Final learning rate: 1 × 10^-7
  Gradient clipping value: 1.0
  Weight coefficient $\rho$: 1 × 10^-2
  Total number of training steps: 5 × 10^5
  Number of additional GO interactions $K$: 10000
  Update-to-data ratio $U$: 50
  Timelimit $T_{\text{train}}$: 100
  Timelimit $T_{\text{eval}}$: 100
Conformation optimizer hyperparameters
  Conformation optimizer: L-BFGS
  Optimization rate $\alpha$: 1.0
  Max number of iterations in the inner cycle: 5

Figure 3: $\overline{\text{pct}}_t$ and $\text{pct}^{\text{div}}_t$ for $t = 1, 2, 3, 5, 8, 13, 21, 30, 50, 75, 100$. Shaded regions indicate the 10th and the 90th percentiles of the $\text{pct}(s_t),\ s \in \mathcal{D}_{\text{test}}$ distribution. The x-axis is log-scaled.

Table 6: Hyperparameter values for GOLF with different external optimizers.

                                        GOLF-LBFGS   GOLF-Adam   GOLF-SGD-momentum
Training hyperparameters
  Total number of training steps        2 × 10^5     2 × 10^5    2 × 10^5
  Update-to-data ratio $U$              20           20          20
  Timelimit $T$ (training)              100          200         200
  Timelimit $T$ (evaluation)            100          500         500
Conformation optimizer hyperparameters
  Conformation optimizer                L-BFGS       Adam        SGD
  Optimization rate $\alpha$            1.0          5 × 10^-3   5 × 10^-3
  Max iterations in the inner cycle     5            –           –
  Momentum                              –            –           0.9

Appendix B: External Optimizers

The external optimizer $\mathbf{Opt}$ is a crucial component of GOLF, as it generates the NNP-optimization trajectories from which we sample the additional training data. To test the effect of the external optimizer on the training and the evaluation of NNPs, we conduct a series of experiments with SGD with momentum (Robbins & Monro, 1951), Adam (Kingma & Ba, 2014), and L-BFGS (Liu & Nocedal, 1989). We use the same optimizer for the training and the evaluation of NNPs and dub the resulting models $f_{\text{GOLF-10k-SGD}}(\cdot;\theta)$, $f_{\text{GOLF-10k-Adam}}(\cdot;\theta)$, and $f_{\text{GOLF-10k-LBFGS}}(\cdot;\theta)$, respectively. As the PyTorch implementation of L-BFGS includes an inner cycle with up to 5 (an empirically chosen hyperparameter) NNP evaluations, we run $f_{\text{GOLF-10k-Adam}}(\cdot;\theta)$ and $f_{\text{GOLF-10k-SGD}}(\cdot;\theta)$ for 500 steps instead of the 100 used for $f_{\text{GOLF-10k-LBFGS}}(\cdot;\theta)$. To save computational resources, we train these models for $2 \times 10^5$ training steps instead of $5 \times 10^5$; this number of training steps is enough to demonstrate the superiority of the L-BFGS external optimizer over the alternatives. We provide training hyperparameters for $f_{\text{GOLF-10k-}*}(\cdot;\theta)$ with different external optimizers in Table 6 and omit hyperparameters identical to those in Table 5.

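To make the role of the external optimizer concrete, here is a minimal sketch of conformation optimization with SGD plus momentum, using the hyperparameters from Table 6 (step size 5 × 10^-3, momentum 0.9). A toy quadratic energy stands in for the NNP; the energy function and coordinate layout are illustrative assumptions only:

```python
def optimize_sgd_momentum(grad_fn, x0, lr=5e-3, momentum=0.9, steps=500):
    """Follow the (NNP-predicted) energy gradient with SGD + momentum.
    grad_fn returns the gradient of the energy w.r.t. the coordinates."""
    x = list(x0)
    velocity = [0.0] * len(x)
    for _ in range(steps):
        g = grad_fn(x)
        velocity = [momentum * v - lr * gi for v, gi in zip(velocity, g)]
        x = [xi + vi for xi, vi in zip(x, velocity)]
    return x

# Toy "energy": E(x) = sum(x_i^2), with its minimum at the origin.
grad = lambda x: [2.0 * xi for xi in x]
x_final = optimize_sgd_momentum(grad, [1.0, -2.0, 0.5], steps=500)
```

Swapping `optimize_sgd_momentum` for Adam or L-BFGS changes only the update rule; the surrounding trajectory-collection loop stays the same.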
As can be seen in Figure 3, $f_{\text{GOLF-10k-LBFGS}}(\cdot;\theta)$ outperforms the other optimizers in terms of $\overline{\text{pct}}_T$. However, $f_{\text{GOLF-10k-Adam}}(\cdot;\theta)$ performs better in terms of $\text{pct}^{\text{div}}$. We hypothesize that $f_{\text{GOLF-10k-Adam}}(\cdot;\theta)$ could be tuned to match the optimization quality of $f_{\text{GOLF-10k-LBFGS}}(\cdot;\theta)$ while retaining close-to-zero $\text{pct}^{\text{div}}$, but we leave this for future work.

Appendix C: MSE for GOLF

Figure 4: Mean squared error (MSE) of energy and force predictions for NNPs trained on $\mathcal{D}_0$ and $\mathcal{D}_{\text{traj-500k}}$, and the NNP trained with GOLF. To compute the MSE, we collect NNP-optimization trajectories of length $T = 100$ and calculate the ground-truth energies and forces at steps $t = 1, 2, 3, 5, 8, 13, 21, 30, 50, 75, 100$. Solid lines indicate the median MSE, and the shaded regions indicate the 10th and the 90th percentiles. Both the x-axis and the y-axis are log-scaled.

To further show that $f_{\text{GOLF-10k}}(\cdot;\theta)$ and $f_{\text{traj-500k}}(\cdot;\theta)$ perform similarly, we evaluate the prediction quality of $f_{\text{GOLF-10k}}(\cdot;\theta)$ along the NNP-generated trajectories and plot the MSE for predicted energies and forces in Figure 4.

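The evaluation behind Figure 4 reduces to computing the squared error between NNP predictions and ground-truth values at a fixed set of checkpoint steps along each trajectory. A simplified sketch with scalar "energies"; the prediction and oracle functions are placeholders, not the actual NNP or DFT interfaces:

```python
CHECKPOINTS = [1, 2, 3, 5, 8, 13, 21, 30, 50, 75, 100]

def mse_at_checkpoints(pred_energy, true_energy, trajectory, checkpoints=CHECKPOINTS):
    """Return {t: squared error of the energy prediction at step t} for one
    trajectory. Steps are 1-indexed, matching the figure."""
    return {t: (pred_energy(trajectory[t - 1]) - true_energy(trajectory[t - 1])) ** 2
            for t in checkpoints if t <= len(trajectory)}

# Toy trajectory of length 100 where the "NNP" is off by a constant 0.1.
traj = list(range(100))
errors = mse_at_checkpoints(lambda s: s + 0.1, lambda s: float(s), traj)
```

Aggregating these per-trajectory errors across the test set gives the median and percentile curves shown in the figure.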
Appendix D: nablaDFT Dataset

Throughout this work, we use several subsets of the nablaDFT dataset (Khrabrov et al., 2022). The nablaDFT dataset is based on the Molecular Sets (MOSES) dataset, a diverse subset of the ZINC dataset containing approximately one million drug-like molecules composed of C, N, S, O, F, Cl, Br, and H atoms. For each molecule in the dataset, the authors ran the conformation generation method of Wang et al. (2020) from the RDKit software (Landrum et al., 2022). Next, they clustered the resulting conformations with the Butina clustering method (Barnard & Downs, 1992). Lastly, they selected the smallest number of clusters that cover at least 95% of the conformations and used their centroids as the set of conformations for a given molecule. This procedure resulted in 1 to 62 unique conformations per molecule, with 5 340 152 total conformations in the full dataset. Finally, these conformations were evaluated with a DFT-based oracle. The baselines and GOLF models are trained on the training set $\mathcal{D}_0$: a subset of nablaDFT that contains 4 000 molecules and approximately 10 000 conformations (roughly 2.5 conformations per molecule). The test set $\mathcal{D}_{\text{test}}$ contains approximately 10 000 different molecules and 19 447 conformations. Optimization trajectories for $f_{\text{traj-}*}(\cdot;\theta)$ were obtained with a DFT-based oracle by optimizing conformations from the training set; the average trajectory length is approximately 100 steps. Finally, the generative baselines were trained to map conformations from $\mathcal{D}_0$ to their optimal counterparts.

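The cluster-selection step described above (take the smallest number of clusters whose members cover at least 95% of the conformations, then keep their centroids) can be sketched as a greedy largest-first pass; the cluster representation and the choice of "first member as centroid" are simplifying assumptions, not the nablaDFT implementation:

```python
def select_covering_centroids(clusters, coverage=0.95):
    """Greedily take the largest clusters until they cover `coverage` of all
    conformations; return a centroid (here: first member) per chosen cluster.
    Picking clusters largest-first yields the fewest clusters for a given coverage."""
    total = sum(len(c) for c in clusters)
    chosen, covered = [], 0
    for cluster in sorted(clusters, key=len, reverse=True):
        if covered / total >= coverage:
            break
        chosen.append(cluster[0])  # stand-in for the Butina cluster centroid
        covered += len(cluster)
    return chosen

# Toy clusters of conformation ids: sizes 6, 3, and 1 out of 10 total.
clusters = [[0, 1, 2, 3, 4, 5], [6, 7, 8], [9]]
centroids = select_covering_centroids(clusters)
```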
Appendix E: SPICE Dataset

Another dataset used in our work is SPICE (Eastman et al., 2023). It is a subset of the PubChem dataset (Kim et al., 2023) and contains a diverse set of drug-like molecules. The total number of molecules in SPICE is 14 644, with 25 high-energy conformations and 25 low-energy near-optimal conformations per molecule. Molecules contain the following atoms: C, N, S, O, F, Cl, Br, I, P, and H. To cross-validate models trained on SPICE and nablaDFT, we filtered out molecules containing I and P atoms, which resulted in 13 231 filtered molecules. To make the training setup consistent with nablaDFT, we selected approximately 3 500 molecules and approximately 9 500 conformations for the SPICE training set $\mathcal{D}_0^{\text{SPICE}}$. The training set contains only the high-energy conformations, as we observed that training on near-optimal conformations leads to instabilities. The test set $\mathcal{D}_{\text{test}}^{\text{SPICE}}$ includes approximately 7 000 molecules and approximately 18 000 conformations, with high-energy and low-energy conformations in equal parts. Note that $\mathcal{D}_{\text{test}}^{\text{SPICE}}$ was initially supposed to match the size of $\mathcal{D}_{\text{test}}$, but the DFT-based optimization did not converge for some molecules, so we excluded them from the test set. As in Appendix D, optimization trajectories were obtained with a DFT-based oracle by optimizing conformations from $\mathcal{D}_0^{\text{SPICE}}$; the only difference is that we used optimization in spherical coordinates instead of Cartesian coordinates. The change of coordinates resulted in shorter optimization trajectories (around 25 steps on average), so the biggest trajectories dataset for SPICE, $\mathcal{D}_{\text{traj-220k}}^{\text{SPICE}}$, contains only approximately 220 000 conformations.

Appendix F: Distribution Matching Metrics

Consider the evaluation of the NNP on the dataset $\mathcal{D}_{\text{test}}$. Let $\mathbb{S}_g = \{s_T\}_{s \in \mathcal{D}_{\text{test}}}$ denote the set of all final conformations in the NNP-optimization trajectories, and let $\mathbb{S}_r = \{s_{\mathbf{Opt}_{\text{DFT}}}\}_{s \in \mathcal{D}_{\text{test}}}$ denote the set of all ground-truth optimal conformations obtained by the GO. To measure the difference between $s \in \mathbb{S}_g$ and $\tilde{s} \in \mathbb{S}_r$, we use GetBestRMSD from the RDKit package and denote the root-mean-square deviation as $\text{RMSD}(s, \tilde{s})$. The recall-based coverage and matching scores are defined as follows:

$$\text{COV}(\mathbb{S}_g, \mathbb{S}_r) = \frac{1}{|\mathbb{S}_r|}\,\Big|\big\{ s \in \mathbb{S}_r : \exists\, \tilde{s} \in \mathbb{S}_g,\ \text{RMSD}(s, \tilde{s}) < \delta \big\}\Big|; \qquad (9)$$

$$\text{MAT}(\mathbb{S}_g, \mathbb{S}_r) = \frac{1}{|\mathbb{S}_r|} \sum_{s \in \mathbb{S}_r} \min_{\tilde{s} \in \mathbb{S}_g} \text{RMSD}(s, \tilde{s}).$$

$\text{COV}$ is the fraction of conformations in $\mathbb{S}_r$ that are "reasonably" close ($\text{RMSD} < \delta$) to some conformation from $\mathbb{S}_g$. $\text{MAT}$ is the average, over all $s \in \mathbb{S}_r$, of the $\text{RMSD}$ to the closest conformation from $\mathbb{S}_g$. Note that neither $\text{COV}$ nor $\text{MAT}$ is an ideal metric for the optimization task, because they do not consider the energy of the final conformation.

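Both metrics can be computed from a table of pairwise RMSD values. A minimal pure-Python sketch; the RMSD values are supplied as a precomputed dict rather than via RDKit's GetBestRMSD, and the conformation ids are illustrative:

```python
def coverage_and_matching(S_r, S_g, rmsd, delta=0.5):
    """COV: fraction of reference conformations within `delta` of some generated one.
    MAT: mean over references of the RMSD to the closest generated conformation.
    `rmsd[(s, s_tilde)]` holds precomputed pairwise RMSD values."""
    closest = [min(rmsd[(s, t)] for t in S_g) for s in S_r]
    cov = sum(1 for d in closest if d < delta) / len(S_r)
    mat = sum(closest) / len(S_r)
    return cov, mat

# Toy example: 2 reference and 2 generated conformations.
rmsd = {("r1", "g1"): 0.2, ("r1", "g2"): 1.0,
        ("r2", "g1"): 0.9, ("r2", "g2"): 0.7}
cov, mat = coverage_and_matching(["r1", "r2"], ["g1", "g2"], rmsd)
```

Here only "r1" has a generated neighbor within $\delta = 0.5$ Å, so COV is 0.5 while MAT averages the two nearest-neighbor distances.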
Appendix G: Generative Baselines

Table 7: Energy- and recall-based scores. We set $\delta = 0.5$ Å when computing $\text{COV}$. The first two columns report means.

Methods            pct̄_T (%)      pct^div (%)   COV (%)   MAT (Å)
TD                 24.04 ± 21.3    54.1          12.53     1.284
ConfOpt            33.36 ± 22.0    92.5          24.08     1.004
Uni-Mol+           –*              –*            13.49     1.25
TD_pr              25.63 ± 21.4    46.9          11.25     1.33
ConfOpt_pr         36.48 ± 23.0    84.5          19.88     1.05
Uni-Mol+_pr        69.9 ± 23.1     23.2          15.29     1.23
Uni-Mol+_init      54.92 ± 20.5    8.0           63.41     0.44
Uni-Mol+_pr+init   62.20 ± 17.2    2.8           68.79     0.407

* The energy-based metrics for the Uni-Mol+ model are not reported due to problems with energy computation.

In this section, we provide additional details on the training of the generative baselines and the corresponding metrics (see Table 7). We consider three architectures designed for conformation generation, namely energy-inspired molecular conformational optimization (ConfOpt) (Guan et al., 2021), torsional diffusion (TD) (Jing et al., 2022), and Uni-Mol+ (Lu et al., 2023), and adapt them to the task of geometry optimization. For the first two models, we follow the setups proposed in the corresponding papers and train the models to generate optimal conformations from the ones generated by RDKit. In the case of Uni-Mol+, we compare two setups: i) the model is trained to generate optimal conformations conditioned on geometries from RDKit; ii) the model is trained to generate optimal conformations conditioned on non-optimal conformations from nablaDFT. We add the subscript "init" in the latter case. Moreover, we also experiment with starting the training from randomly initialized weights and from pretrained checkpoints. We use a checkpoint obtained on the PCQM4MV2 dataset (Nakata & Maeda, 2023) in the case of Uni-Mol+ and on the GEOM-DRUGS dataset (Axelrod & Gomez-Bombarelli, 2022) otherwise. We add the subscript "pr" for pre-trained models.

To save computational resources, all models from Table 7 were evaluated on a subset of $\mathcal{D}_{\text{test}}$ that we call $\mathcal{D}_{\text{test}}^{\text{small}}$ ($|\mathcal{D}_{\text{test}}^{\text{small}}| = 2044$). Our findings are as follows. ConfOpt and TD perform much worse on the energy optimization task in our setup than on the tasks reported in the corresponding papers. Uni-Mol+ performs on par with the NNP baselines but worse than the models trained on additional data. We suspect that the reasons for this behavior of TD are the small amount of data and the necessity to model a discrete distribution over the optimal geometries instead of the whole conformational space. The TD authors also report that the resulting conformations differ by a large margin from the reference conformations in terms of energies and other quantum chemical properties and require additional optimization with the simulator. We hypothesize that in the case of ConfOpt, the main problems are the choice of architecture and the fact that the model generates optimal conformations from SMILES and does not use the initial geometries.

In Table 7, we observe that i) all generative baselines benefit in terms of $\text{pct}^{\text{div}}$ from using pre-trained weights, even though the pre-training was done on data generated by DFT-based methods with different levels of theory than in nablaDFT; and ii) starting from non-optimal conformations from nablaDFT greatly improves all metrics for Uni-Mol+, indicating that a reasonable initial conformation is crucial for generative baselines.

3004
Appendix H: Final Conformations Comparison

Figure 5: Visualization of the final conformations obtained by various models, the 2D view of the molecule, and the reference optimal conformation obtained with the $\mathcal{O}_G$.

To highlight the difference in conformation optimization quality, we visualize the final conformations for ConfOpt, Torsional Diffusion, Uni-Mol+, and our best-performing model (GOLF-10k) with py3Dmol (Rego & Koes, 2015). In Figure 5, we provide visualizations for 4 molecules from the test set $\mathcal{D}_{\text{test}}$. We also provide the 2D visualization of each molecule obtained with RDKit and a visualization of the reference optimal conformation obtained with the $\mathcal{O}_G$.

Molecules 1 and 3 are examples of cases where the conformation optimization with GOLF-10k converges to the same local minimum as the $\mathcal{O}_G$: the RMSD to the reference conformation is close to zero, $\overline{\text{pct}}_{100}$ is close to 100%, and it is hard to spot any visual differences. On the other hand, molecules 2 and 4 illustrate cases where the conformation optimization with GOLF-10k converges to a different local minimum: the RMSD is larger than zero, but $\overline{\text{pct}}_{100}$ is 100%, or even greater than 100% in the case of molecule 4. The visual difference between the resulting conformations is prominent.

Negative values of $\overline{\text{pct}}_{100}$ are often caused by distorted distances between atoms in cycles (ConfOpt optimization for molecules 3 and 4). Low positive values of $\overline{\text{pct}}_{100}$ generally indicate conformations with correct interatomic distances but incorrect dihedral angles between different parts of the molecule (ConfOpt optimization for molecule 2, Torsional Diffusion for molecule 1, Uni-Mol+ optimization for molecule 2).
