Add Batch 374c2040-b05a-4e7c-802e-6c0e397626e3
This view is limited to 50 files because it contains too many changes. See raw diff.
- 8bitoptimizersviablockwisequantization/4803c8c7-b0ba-4e0a-becc-2572a517fa8e_content_list.json +3 -0
- 8bitoptimizersviablockwisequantization/4803c8c7-b0ba-4e0a-becc-2572a517fa8e_model.json +3 -0
- 8bitoptimizersviablockwisequantization/4803c8c7-b0ba-4e0a-becc-2572a517fa8e_origin.pdf +3 -0
- 8bitoptimizersviablockwisequantization/full.md +387 -0
- 8bitoptimizersviablockwisequantization/images.zip +3 -0
- 8bitoptimizersviablockwisequantization/layout.json +3 -0
- abinitiopotentialenergysurfacesbypairinggnnswithneuralwavefunctions/f2f4139b-878c-4f54-9232-83badb2fceae_content_list.json +3 -0
- abinitiopotentialenergysurfacesbypairinggnnswithneuralwavefunctions/f2f4139b-878c-4f54-9232-83badb2fceae_model.json +3 -0
- abinitiopotentialenergysurfacesbypairinggnnswithneuralwavefunctions/f2f4139b-878c-4f54-9232-83badb2fceae_origin.pdf +3 -0
- abinitiopotentialenergysurfacesbypairinggnnswithneuralwavefunctions/full.md +449 -0
- abinitiopotentialenergysurfacesbypairinggnnswithneuralwavefunctions/images.zip +3 -0
- abinitiopotentialenergysurfacesbypairinggnnswithneuralwavefunctions/layout.json +3 -0
- adarlwhatwhereandhowtoadaptintransferreinforcementlearning/4a7f7977-6d9e-4d6f-8994-382f57f08d13_content_list.json +3 -0
- adarlwhatwhereandhowtoadaptintransferreinforcementlearning/4a7f7977-6d9e-4d6f-8994-382f57f08d13_model.json +3 -0
- adarlwhatwhereandhowtoadaptintransferreinforcementlearning/4a7f7977-6d9e-4d6f-8994-382f57f08d13_origin.pdf +3 -0
- adarlwhatwhereandhowtoadaptintransferreinforcementlearning/full.md +0 -0
- adarlwhatwhereandhowtoadaptintransferreinforcementlearning/images.zip +3 -0
- adarlwhatwhereandhowtoadaptintransferreinforcementlearning/layout.json +3 -0
- adversarialsupportalignment/bdd1c5f0-c535-4607-b0f0-86237a14a8df_content_list.json +3 -0
- adversarialsupportalignment/bdd1c5f0-c535-4607-b0f0-86237a14a8df_model.json +3 -0
- adversarialsupportalignment/bdd1c5f0-c535-4607-b0f0-86237a14a8df_origin.pdf +3 -0
- adversarialsupportalignment/full.md +0 -0
- adversarialsupportalignment/images.zip +3 -0
- adversarialsupportalignment/layout.json +3 -0
- ageneralanalysisofexampleselectionforstochasticgradientdescent/8e3a5072-3fc7-4170-9d70-10ef51362aa8_content_list.json +3 -0
- ageneralanalysisofexampleselectionforstochasticgradientdescent/8e3a5072-3fc7-4170-9d70-10ef51362aa8_model.json +3 -0
- ageneralanalysisofexampleselectionforstochasticgradientdescent/8e3a5072-3fc7-4170-9d70-10ef51362aa8_origin.pdf +3 -0
- ageneralanalysisofexampleselectionforstochasticgradientdescent/full.md +0 -0
- ageneralanalysisofexampleselectionforstochasticgradientdescent/images.zip +3 -0
- ageneralanalysisofexampleselectionforstochasticgradientdescent/layout.json +3 -0
- amortizedtreegenerationforbottomupsynthesisplanningandsynthesizablemoleculardesign/6804f67d-cd8c-4775-9ac5-7d50279291ac_content_list.json +3 -0
- amortizedtreegenerationforbottomupsynthesisplanningandsynthesizablemoleculardesign/6804f67d-cd8c-4775-9ac5-7d50279291ac_model.json +3 -0
- amortizedtreegenerationforbottomupsynthesisplanningandsynthesizablemoleculardesign/6804f67d-cd8c-4775-9ac5-7d50279291ac_origin.pdf +3 -0
- amortizedtreegenerationforbottomupsynthesisplanningandsynthesizablemoleculardesign/full.md +471 -0
- amortizedtreegenerationforbottomupsynthesisplanningandsynthesizablemoleculardesign/images.zip +3 -0
- amortizedtreegenerationforbottomupsynthesisplanningandsynthesizablemoleculardesign/layout.json +3 -0
- analyzingandimprovingtheoptimizationlandscapeofnoisecontrastiveestimation/0bcbe61b-1f79-4ef5-bba6-d757f2071a87_content_list.json +3 -0
- analyzingandimprovingtheoptimizationlandscapeofnoisecontrastiveestimation/0bcbe61b-1f79-4ef5-bba6-d757f2071a87_model.json +3 -0
- analyzingandimprovingtheoptimizationlandscapeofnoisecontrastiveestimation/0bcbe61b-1f79-4ef5-bba6-d757f2071a87_origin.pdf +3 -0
- analyzingandimprovingtheoptimizationlandscapeofnoisecontrastiveestimation/full.md +0 -0
- analyzingandimprovingtheoptimizationlandscapeofnoisecontrastiveestimation/images.zip +3 -0
- analyzingandimprovingtheoptimizationlandscapeofnoisecontrastiveestimation/layout.json +3 -0
- anomalytransformertimeseriesanomalydetectionwithassociationdiscrepancy/1a555d73-3897-4439-aa69-6ee02f262397_content_list.json +3 -0
- anomalytransformertimeseriesanomalydetectionwithassociationdiscrepancy/1a555d73-3897-4439-aa69-6ee02f262397_model.json +3 -0
- anomalytransformertimeseriesanomalydetectionwithassociationdiscrepancy/1a555d73-3897-4439-aa69-6ee02f262397_origin.pdf +3 -0
- anomalytransformertimeseriesanomalydetectionwithassociationdiscrepancy/full.md +481 -0
- anomalytransformertimeseriesanomalydetectionwithassociationdiscrepancy/images.zip +3 -0
- anomalytransformertimeseriesanomalydetectionwithassociationdiscrepancy/layout.json +3 -0
- assessinggeneralizationofsgdviadisagreement/9244bf2e-51d2-4901-8c9d-762b80ce69da_content_list.json +3 -0
- assessinggeneralizationofsgdviadisagreement/9244bf2e-51d2-4901-8c9d-762b80ce69da_model.json +3 -0
8bitoptimizersviablockwisequantization/4803c8c7-b0ba-4e0a-becc-2572a517fa8e_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2e3cfbda90c798a41a50b23f3d8a44a81534bb35d3b91c7ad03fe26c2232941d
size 112722
8bitoptimizersviablockwisequantization/4803c8c7-b0ba-4e0a-becc-2572a517fa8e_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fc443b19b3c0e7229cc4ff70ec88f3d8c16f0bdcc338edc6b0ed781c4cc07583
size 136366
8bitoptimizersviablockwisequantization/4803c8c7-b0ba-4e0a-becc-2572a517fa8e_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:34dd41c2aa127fa4083703f3abda945bceafe736c782134e1e021d38fd8775d3
size 934109
8bitoptimizersviablockwisequantization/full.md
ADDED
@@ -0,0 +1,387 @@
# 8-BIT OPTIMIZERS VIA BLOCK-WISE QUANTIZATION

Anonymous authors

Paper under double-blind review

# ABSTRACT

Stateful optimizers maintain gradient statistics over time, e.g., the exponentially smoothed sum (SGD with momentum) or squared sum (Adam) of past gradient values. This state can be used to accelerate optimization compared to plain stochastic gradient descent but uses memory that might otherwise be allocated to model parameters, thereby limiting the maximum size of models trained in practice. In this paper, we develop the first optimizers that use 8-bit statistics while maintaining the performance levels of using 32-bit optimizer states. To overcome the resulting computational, quantization, and stability challenges, we develop block-wise dynamic quantization. Block-wise quantization divides input tensors into smaller blocks that are independently quantized. Each block is processed in parallel across cores, yielding faster optimization and high precision quantization. To maintain stability and performance, we combine block-wise quantization with two additional changes: (1) dynamic quantization, a form of non-linear quantization that is precise for both large and small magnitude values, and (2) a stable embedding layer to reduce gradient variance that comes from the highly non-uniform distribution of input tokens in language models. As a result, our 8-bit optimizers maintain 32-bit performance with a small fraction of the memory footprint on a range of tasks, including 1.5B parameter language modeling, GLUE finetuning, ImageNet classification, WMT'14 machine translation, MoCo v2 contrastive ImageNet pretraining+finetuning, and RoBERTa pretraining, without changes to the original optimizer hyperparameters. We open-source our 8-bit optimizers as a drop-in replacement that only requires a two-line code change.

Increasing model size is an effective way to achieve better performance for given resources (Kaplan et al., 2020; Henighan et al., 2020; Raffel et al., 2019; Lewis et al., 2021). However, training such large models requires storing the model, gradient, and state of the optimizer (e.g., exponentially smoothed sum and squared sum of previous gradients for Adam), all in a fixed amount of available memory. Although significant research has focused on enabling larger model training by reducing or efficiently distributing the memory required for the model parameters (Shoeybi et al., 2019; Lepikhin et al., 2020; Fedus et al., 2021; Brown et al., 2020; Rajbhandari et al., 2020), reducing the memory footprint of optimizer gradient statistics is much less studied. This is a significant missed opportunity since these optimizer states use $33-75\%$ of the total memory footprint during training. For example, the Adam optimizer states for the largest GPT-2 (Radford et al., 2019) and T5 (Raffel et al., 2019) models are 11 GB and 41 GB in size. In this paper, we develop a fast, high-precision non-linear quantization method – block-wise dynamic quantization – that enables stable 8-bit optimizers (e.g., Adam, AdamW, and Momentum) which maintain 32-bit performance at a fraction of the memory footprint and without any changes to the original hyperparameters.$^{1}$

While most current work uses 32-bit optimizer states, recent high-profile efforts to use 16-bit optimizers report difficulty for large models with more than 1B parameters (Ramesh et al., 2021). Going from 16-bit optimizers to 8-bit optimizers reduces the range of possible values from $2^{16} = 65536$ values to just $2^{8} = 256$. To our knowledge, this has not been attempted before.

Effectively using this very limited range is challenging for three reasons: quantization accuracy, computational efficiency, and large-scale stability. To maintain accuracy, it is critical to introduce some form of non-linear quantization to reduce errors for both common small magnitude values and rare large ones. However, to be practical, 8-bit optimizers need to be fast enough to not slow down training, which is especially difficult for non-linear methods that require more complex data structures to maintain the quantization buckets. Finally, to maintain stability with huge models beyond 1B parameters, a quantization method needs to not only have a good mean error but excellent worst-case performance since a single large quantization error can cause the entire training run to diverge.

Figure 1: Schematic of 8-bit optimizers via block-wise dynamic quantization, see Section 2 for more details. After the optimizer update is performed in 32-bit, the state tensor is chunked into blocks, normalized by the absolute maximum value of each block. Then dynamic quantization is performed, and the index is stored. For dequantization, a lookup in the index is performed, with subsequent denormalization by multiplication with the block-wise absolute maximum value. Outliers are confined to a single block through block-wise quantization, and their effect on normalization is limited.

We introduce a new block-wise quantization approach that addresses all three of these challenges. Block-wise quantization splits input tensors into blocks and performs quantization on each block independently. This block-wise division reduces the effect of outliers on the quantization process since they are isolated to particular blocks, thereby improving stability and performance, especially for large-scale models. Block-wise processing also allows for high optimizer throughput since each normalization can be computed independently in each core. This contrasts with tensor-wide normalization, which requires slow cross-core synchronization that is highly dependent on task-core scheduling. We combine block-wise quantization with two novel methods for stable, high-performance 8-bit optimizers: dynamic quantization and a stable embedding layer. Dynamic quantization is an extension of dynamic tree quantization for unsigned input data. The stable embedding layer is a variation of a standard word embedding layer that supports more aggressive quantization by normalizing the highly non-uniform distribution of inputs to avoid extreme gradient variation.

Our 8-bit optimizers maintain 32-bit performance at a fraction of the original memory footprint. We show this for a broad range of tasks: 1.5B and 355M parameter language modeling, GLUE finetuning, ImageNet classification, WMT'14+WMT'16 machine translation, MoCo v2 contrastive image pretraining+finetuning, and RoBERTa pretraining. We also report additional ablations and sensitivity analysis showing that all components – block-wise quantization, dynamic quantization, and stable embedding layer – are crucial for these results and that 8-bit Adam can be used as a simple drop-in replacement for 32-bit Adam, with no hyperparameter changes. We open-source our custom CUDA kernels and provide a PyTorch implementation that enables 8-bit optimization by changing two lines of code.
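The two-line claim above is easiest to picture with a concrete snippet. This is a hedged illustration only: the anonymized draft does not name its released package, so the open-source bitsandbytes library, which provides block-wise 8-bit optimizers of this kind, is used here purely as a stand-in.

```python
import torch
import bitsandbytes as bnb  # public 8-bit optimizer package, used here as an illustrative stand-in

model = torch.nn.Linear(1024, 1024)

# Before: standard 32-bit Adam with the usual hyperparameters.
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.995))

# After: the "two lines" are the import above and this constructor swap;
# hyperparameters and the training loop stay exactly the same.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3, betas=(0.9, 0.995))

loss = model(torch.randn(8, 1024)).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```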
# 1 BACKGROUND

# 1.1 STATEFUL OPTIMIZERS

An optimizer updates the parameters $\mathbf{w}$ of a neural network by using the gradient of the loss with respect to the weight $\mathbf{g}_t = \frac{\partial\mathbf{L}}{\partial\mathbf{w}}$ at update iteration $t$. Stateful optimizers compute statistics of the gradient with respect to each parameter over time for accelerated optimization. Two of the most commonly used stateful optimizers are Adam (Kingma and Ba, 2014), and SGD with momentum (Qian, 1999) – or Momentum for short. Without damping and scaling constants, the update rules of these optimizers are given by:

$$
\operatorname{Momentum}(\mathbf{g}_t, \mathbf{w}_{t-1}, \mathbf{m}_{t-1}) = \left\{\begin{array}{ll} \mathbf{m}_0 = \mathbf{g}_0 & \text{Initialization} \\ \mathbf{m}_t = \beta_1 \mathbf{m}_{t-1} + \mathbf{g}_t & \text{State 1 update} \\ \mathbf{w}_t = \mathbf{w}_{t-1} - \alpha \cdot \mathbf{m}_t & \text{Weight update} \end{array}\right. \tag{1}
$$

$$
\operatorname{Adam}(\mathbf{g}_t, \mathbf{w}_{t-1}, \mathbf{m}_{t-1}, \mathbf{r}_{t-1}) = \left\{\begin{array}{ll} \mathbf{r}_0 = \mathbf{m}_0 = \mathbf{0} & \text{Initialization} \\ \mathbf{m}_t = \beta_1 \mathbf{m}_{t-1} + (1-\beta_1)\mathbf{g}_t & \text{State 1 update} \\ \mathbf{r}_t = \beta_2 \mathbf{r}_{t-1} + (1-\beta_2)\mathbf{g}_t^2 & \text{State 2 update} \\ \mathbf{w}_t = \mathbf{w}_{t-1} - \alpha \cdot \frac{\mathbf{m}_t}{\sqrt{\mathbf{r}_t} + \epsilon} & \text{Weight update,} \end{array}\right. \tag{2}
$$

where $\beta_{1}$ and $\beta_{2}$ are smoothing constants, $\epsilon$ is a small constant, and $\alpha$ is the learning rate.

For 32-bit states, Momentum and Adam consume 4 and 8 bytes per parameter. That is 4 GB and 8 GB for a 1B parameter model. Our 8-bit non-linear quantization reduces these costs to 1 GB and 2 GB.
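To make the state bookkeeping concrete, the following is a minimal sketch of Equations 1 and 2 (our own illustration; the function names are ours, and bias correction, damping, and weight decay are omitted here just as in the equations above):

```python
import torch

def momentum_step(w, g, m, alpha=0.01, beta1=0.9):
    # Eq. 1: one 32-bit state tensor m per parameter tensor (4 bytes per parameter).
    m.mul_(beta1).add_(g)            # m_t = beta1 * m_{t-1} + g_t
    w.add_(m, alpha=-alpha)          # w_t = w_{t-1} - alpha * m_t

def adam_step(w, g, m, r, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Eq. 2: two 32-bit state tensors m and r per parameter tensor (8 bytes per parameter).
    m.mul_(beta1).add_(g, alpha=1 - beta1)           # first moment
    r.mul_(beta2).addcmul_(g, g, value=1 - beta2)    # second moment of the gradient
    w.addcdiv_(m, r.sqrt().add_(eps), value=-alpha)  # w_t = w_{t-1} - alpha * m_t / (sqrt(r_t) + eps)

# For a 1B-parameter model, the two Adam states alone occupy
# 2 states * 4 bytes * 1e9 parameters = 8 GB in 32-bit,
# which the 8-bit quantization developed in this paper reduces to about 2 GB.
w = torch.zeros(1000)
g = torch.randn(1000)
m = torch.zeros_like(w)
r = torch.zeros_like(w)
adam_step(w, g, m, r)
```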
# 1.2 NON-LINEAR QUANTIZATION

Quantization compresses numeric representations to save space at the cost of precision. Quantization is the mapping of a $k$-bit integer to a real element in $D$, that is, $\mathbf{Q}^{\mathrm{map}}: [0, 2^k - 1] \mapsto D$. For example, the IEEE 32-bit floating point data type maps the indices $0\dots 2^{32} - 1$ to the domain $[-3.4\mathrm{e}38, +3.4\mathrm{e}38]$. We use the following notation: $\mathbf{Q}^{\mathrm{map}}(i) = \mathbf{Q}_i^{\mathrm{map}} = q_i$, for example $\mathbf{Q}^{\mathrm{map}}(2^{31} + 131072) = 2.03125$, for the IEEE 32-bit floating point data type.

To perform general quantization from one data type into another we require three steps. (1) Compute a normalization constant $N$ that transforms the input tensor $\mathbf{T}$ into the range of the domain $D$ of the target quantization data type $\mathbf{Q}^{\mathrm{map}}$, (2) for each element of $\mathbf{T}/N$ find the closest corresponding value $q_{i}$ in the domain $D$, (3) store the index $i$ corresponding to $q_{i}$ in the quantized output tensor $\mathbf{T}^{Q}$. To receive the dequantized tensor $\mathbf{T}^{D}$ we look up the index and denormalize: $\mathbf{T}_{i}^{D} = \mathbf{Q}^{\mathrm{map}}(\mathbf{T}_{i}^{Q}) \cdot N$.

To perform this procedure for dynamic quantization we first normalize into the range $[-1, 1]$ through division by the absolute maximum value: $N = \max(|\mathbf{T}|)$. Then we find the closest values via a binary search:

$$
\mathbf{T}_{i}^{Q} = \underset{j=0}{\arg\min}^{2^{n}} \left| \mathbf{Q}_{j}^{\mathrm{map}} - \frac{\mathbf{T}_{i}}{N} \right| \tag{3}
$$
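As a concrete sketch of these three steps and Equation 3 (our own illustrative code: it uses a toy linear map and a brute-force nearest-value search where the paper's implementation uses the non-linear dynamic data type and a binary search):

```python
import torch

def quantize(T, qmap):
    # Step 1: absmax normalization constant N = max(|T|).
    N = T.abs().max()
    # Steps 2-3 (Eq. 3): index of the closest map value for each element of T / N.
    dist = (qmap.unsqueeze(0) - (T / N).unsqueeze(1)).abs()
    TQ = dist.argmin(dim=1).to(torch.uint8)
    return TQ, N

def dequantize(TQ, N, qmap):
    # Lookup plus denormalization: T_D = Q_map(T_Q) * N.
    return qmap[TQ.long()] * N

# Toy 8-bit map: 256 equally spaced values in [-1, 1] (linear quantization).
# Sections 1.3 and 2.2 replace this with the non-linear dynamic (tree) map.
qmap = torch.linspace(-1, 1, 256)

T = torch.randn(10)
TQ, N = quantize(T, qmap)
print((T - dequantize(TQ, N, qmap)).abs().max())  # small quantization error
```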
# 1.3 DYNAMIC TREE QUANTIZATION

Dynamic tree quantization (Dettmers, 2016) is a method that yields low quantization error for both small and large magnitude values. Unlike data types with fixed exponent and fraction, dynamic tree quantization uses a data type with a dynamic exponent and fraction that can change with each number. It is made up of four parts, as seen in Figure 2: (1) The first bit of the data type is reserved for a sign. (2) The number of subsequent zero bits indicates the magnitude of the exponent. (3) The first bit that is set to one indicates that all following values are reserved for (4) linear quantization. By moving the indicator bit, numbers can have a large exponent $10^{-7}$ or precision as high as 1/63. Compared to linear quantization, dynamic tree quantization has better absolute and relative quantization errors for non-uniform distributions. Dynamic tree quantization is strictly defined to quantize numbers in the range $[-1.0, 1.0]$, which is ensured by performing tensor-level absolute max normalization.

Figure 2: Dynamic tree quantization.
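The sketch below decodes an 8-bit code under one plausible reading of the four parts just described (sign bit, a run of zero bits setting a power-of-ten exponent, an indicator bit, then linear quantization of the remaining bits). It is our illustration of the idea only, not the exact reference data type of Dettmers (2016).

```python
def decode_dynamic_tree(byte: int) -> float:
    """Illustrative decoder for an 8-bit dynamic-tree-style code into [-1, 1]."""
    assert 0 <= byte <= 255
    sign = -1.0 if (byte >> 7) & 1 else 1.0               # part (1): sign bit
    bits = [(byte >> i) & 1 for i in range(6, -1, -1)]    # remaining 7 bits, MSB first
    zeros = 0                                             # part (2): leading zeros -> exponent
    while zeros < 7 and bits[zeros] == 0:
        zeros += 1
    if zeros == 7:
        return 0.0                                        # all-zero tail: treat as zero
    tail = bits[zeros + 1:]                               # part (3): bits[zeros] is the indicator bit
    levels = (1 << len(tail)) - 1                         # part (4): linear fraction, e.g. 63 steps
    frac = 1.0 if levels == 0 else sum(b << i for i, b in enumerate(reversed(tail))) / levels
    return sign * frac * 10.0 ** (-zeros)

# Indicator bit right after the sign leaves 6 tail bits -> precision of 1/63;
# pushing the indicator bit toward the end yields magnitudes on the order of 10^-7.
print(decode_dynamic_tree(0b01111111))  # 1.0
```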
# 2 8-BIT OPTIMIZERS

Our 8-bit optimizers have three components: (1) block-wise quantization that isolates outliers and distributes the error more equally over all bits; (2) dynamic quantization, which quantizes both small and large values with high precision; and (3) a stable embedding layer to improve stability during optimization for models with word embeddings.

With these components, performing an optimizer update with 8-bit states is straightforward. We dequantize the 8-bit optimizer states to 32-bit, perform the update, and then quantize the states back to 8-bit for storage. We do this 8-bit to 32-bit conversion element-by-element in registers, which means no slow copies to GPU memory or additional temporary memory are needed to perform quantization and dequantization. For GPUs, this makes 8-bit optimizers faster than regular 32-bit optimizers, as we show in Section 3.
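A minimal sketch of that dequantize, update, re-quantize cycle, reusing the `quantize`/`dequantize` helpers from the Section 1.2 sketch (in the released CUDA kernels this happens per element in registers inside a single fused kernel, not on whole tensors as here):

```python
import torch

def adam_step_8bit(w, g, m_q, m_N, r_q, r_N, qmap_signed, qmap_unsigned,
                   alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # 1) Dequantize the stored 8-bit states to 32-bit.
    m = dequantize(m_q, m_N, qmap_signed)
    r = dequantize(r_q, r_N, qmap_unsigned)   # second state is strictly positive (Section 2.2)
    # 2) Perform the ordinary 32-bit Adam update (Eq. 2).
    m.mul_(beta1).add_(g, alpha=1 - beta1)
    r.mul_(beta2).addcmul_(g, g, value=1 - beta2)
    w.addcdiv_(m, r.sqrt().add_(eps), value=-alpha)
    # 3) Quantize the updated states back to 8-bit for storage.
    m_q, m_N = quantize(m, qmap_signed)
    r_q, r_N = quantize(r, qmap_unsigned)
    return m_q, m_N, r_q, r_N
```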
# 2.1 BLOCK-WISE QUANTIZATION

Our block-wise quantization reduces the cost of computing normalization and improves quantization precision by isolating outliers. In order to dynamically quantize a tensor, as defined in Section 1.2, we need to normalize the tensor into the range $[-1, 1]$. Such normalization requires a reduction over the entire tensor, which entails multiple synchronizations across GPU cores. Block-wise dynamic quantization reduces this cost by chunking an input tensor into small blocks of size $B = 2048$ and performing normalization independently in each core across this block.

More formally, using the notation introduced in Section 1.2, in block-wise quantization, we treat $\mathbf{T}$ as a one-dimensional sequence of elements that we chunk in blocks of size $B$. This means for an input tensor $\mathbf{T}$ with $n$ elements we have $n/B$ blocks. We proceed to compute a normalization constant for each block: $N_{b} = \max(|\mathbf{T}_{b}|)$, where $b$ is the index of the block $0..n/B$. With this block-wise normalization constant, each block can be quantized independently:

$$
\mathbf{T}_{bi}^{Q} = \underset{j=0}{\arg\min}^{2^{n}} \left| \mathbf{Q}_{j}^{\mathrm{map}} - \frac{\mathbf{T}_{bi}}{N_{b}} \right| \Bigg|_{0 < i < B} \tag{4}
$$
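A vectorized sketch of Equation 4 with block size $B = 2048$ (our own illustration; the released CUDA kernels assign one block per core instead of looping over blocks in Python, and use the dynamic data type rather than the linear stand-in map below):

```python
import torch
import torch.nn.functional as F

BLOCK_SIZE = 2048

def blockwise_quantize(T, qmap):
    flat = T.reshape(-1)
    n = flat.numel()
    flat = F.pad(flat, (0, (-n) % BLOCK_SIZE))             # pad so the tensor splits evenly
    blocks = flat.reshape(-1, BLOCK_SIZE)                   # shape (n/B, B)
    # One absmax normalization constant per block: N_b = max(|T_b|).
    N = blocks.abs().max(dim=1, keepdim=True).values.clamp_min(1e-12)
    # Eq. 4: nearest code in the map for each normalized element of each block.
    dist = ((blocks / N).unsqueeze(-1) - qmap.view(1, 1, -1)).abs()
    TQ = dist.argmin(dim=-1).to(torch.uint8)
    return TQ, N.squeeze(1), n

def blockwise_dequantize(TQ, N, n, qmap):
    blocks = qmap[TQ.long()] * N.unsqueeze(1)               # lookup, then per-block denormalization
    return blocks.reshape(-1)[:n]

qmap = torch.linspace(-1, 1, 256)                           # stand-in for the dynamic data type
T = torch.randn(10_000)
T[1234] = 50.0                                              # an outlier only degrades its own block
TQ, N, n = blockwise_quantize(T, qmap)
print((T - blockwise_dequantize(TQ, N, n, qmap)).abs().max())
```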
This approach has several advantages, both for stability and efficiency. First, each block normalization can be computed independently. Thus no synchronization between cores is required, and throughput is enhanced.

Secondly, it is also much more robust to outliers in the input tensor. For example, to contrast block-wise and regular quantization, if we create an input tensor with one million elements sampled from the standard normal distribution, we expect less than $1\%$ of elements of the tensor to be in the range $[3, +\infty)$. However, since we normalize the input tensor into the range $[-1, 1]$, the maximum values of the distribution determine the range of quantization buckets. This means that if the input tensor contains an outlier with magnitude 5, the quantization buckets reserved for numbers between 3 and 5 will mostly go unused since less than $1\%$ of numbers are in this range. With block-wise quantization, the effect of outliers is limited to a single block. As such, most bits are used effectively in other blocks.

Furthermore, because outliers represent the absolute maximum value in the input tensor, block-wise quantization approximates outlier values without any error. This guarantees that the largest optimizer states, arguably the most important, will always be quantized with full precision. This property makes block-wise dynamic quantization both robust and precise and is essential for good training performance in practice.

# 2.2 DYNAMIC QUANTIZATION

In this work, we extend dynamic tree quantization (Section 1.3) for non-signed input tensors by re-purposing the sign bit. Since the second Adam state is strictly positive, the sign bit is not needed. Instead of just removing the sign bit, we opt to extend dynamic tree quantization with a fixed bit for the fraction. This extension is motivated by the observation that the second Adam state varies around 3-5 orders of magnitude during the training of a language model. In comparison, dynamic tree quantization already has a range of 7 orders of magnitude. We refer to this quantization as dynamic quantization to distinguish it from dynamic tree quantization in our experiments. A study of additional quantization data types and their performance is detailed in Appendix E.

# 2.3 STABLE EMBEDDING LAYER

Our stable embedding layer is a standard word embedding layer variation (Devlin et al., 2019) designed to ensure stable training for NLP tasks. This embedding layer supports more aggressive quantization by normalizing the highly non-uniform distribution of inputs to avoid extreme gradient variation. See Appendix B for a discussion of why commonly adopted embedding layers (Ott et al., 2019) are so unstable.

We initialize the Stable Embedding layer with Xavier uniform initialization (Glorot and Bengio, 2010) and apply layer normalization (Ba et al., 2016) before adding position embeddings. This method maintains a variance of roughly one both at initialization and during training. Additionally, the uniform distribution initialization has less extreme values than a normal distribution, reducing maximum gradient size. Like Ramesh et al. (2021), we find that the stability of training improves significantly if we use 32-bit optimizer states for the embedding layers. This is the only layer that uses 32-bit optimizer states. We still use the standard precision for weights and gradients for the embedding layers – usually 16-bit. We show in our Ablation Analysis in Section 4 that this change is a necessary detail.
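A hedged PyTorch sketch of the layer as described: Xavier-uniform-initialized token embeddings with layer normalization applied before position embeddings are added. The class name and module layout are ours; keeping this layer's optimizer state in 32-bit is a property of the optimizer configuration and is not shown here.

```python
import torch
import torch.nn as nn

class StableEmbedding(nn.Module):
    """Word embedding with Xavier uniform init and LayerNorm before position embeddings."""
    def __init__(self, vocab_size: int, dim: int, max_len: int = 512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)
        nn.init.xavier_uniform_(self.tok.weight)   # uniform init: fewer extreme values than normal init
        self.norm = nn.LayerNorm(dim)              # keeps the output variance at roughly one
        self.pos = nn.Embedding(max_len, dim)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.norm(self.tok(token_ids))         # layer norm BEFORE adding position embeddings
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        return x + self.pos(positions)

emb = StableEmbedding(vocab_size=50_000, dim=1024)
out = emb(torch.randint(0, 50_000, (2, 128)))      # (batch=2, seq=128, dim=1024)
```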
# 3 8-BIT VS 32-BIT OPTIMIZER PERFORMANCE FOR COMMON BENCHMARKS

Experimental Setup We compare the performance of 8-bit optimizers to their 32-bit counterparts on a range of challenging public benchmarks. These benchmarks either use Adam (Kingma and Ba, 2014), AdamW (Loshchilov and Hutter, 2018), or Momentum (Qian, 1999).

We do not change any hyperparameters or the precision of weights, gradients, and activations/input gradients for each experimental setting compared to the public baseline – the only change is to replace 32-bit optimizers with 8-bit optimizers. This means that for most experiments, we train in 16-bit mixed-precision (Micikevicius et al., 2017). We also compare with Adafactor (Shazeer and Stern, 2018), with the time-independent formulation for $\beta_{2}$ (Shazeer and Stern, 2018) – which is the same formulation used in Adam. We also do not change any hyperparameters for Adafactor.

We report on benchmarks in neural machine translation (Ott et al., 2018)$^{2}$ trained on WMT'16 (Sennrich et al., 2016) and evaluated on en-de WMT'14 (Macháček and Bojar, 2014), large-scale language modeling (Lewis et al., 2021; Brown et al., 2020) and RoBERTa pretraining (Liu et al., 2019) on English CC-100 + RoBERTa corpus (Nagel, 2016; Gokaslan and Cohen, 2019; Zhu et al., 2015; Wenzek et al., 2020), finetuning the pretrained masked language model RoBERTa (Liu et al., 2019)$^{3}$ on GLUE (Wang et al., 2018a), ResNet-50 v1.5 image classification (He et al., 2016)$^{4}$ on ImageNet-1k (Deng et al., 2009), and MoCo v2 contrastive image pretraining and linear finetuning (Chen et al., 2020b)$^{5}$ on ImageNet-1k (Deng et al., 2009).

We use the stable embedding layer for all NLP tasks except for finetuning on GLUE. Beyond this, we follow the exact experimental setup outlined in the referenced papers and codebases. We consistently report replication results for each benchmark with public codebases and report median accuracy, perplexity, or BLEU over ten random seeds for GLUE, three random seeds for other tasks, and a single random seed for large-scale language modeling. While it is standard to report means and standard errors on some tasks, others use median performance. We opted to report medians for all tasks for consistency.

Results In Table 1, we see that 8-bit optimizers match replicated 32-bit performance for all tasks. While Adafactor is competitive with 8-bit Adam, 8-bit Adam uses less memory and provides faster optimization. Our 8-bit optimizers save up to 8.5 GB of GPU memory for our largest 1.5B parameter language model and 2.0 GB for RoBERTa. Thus, 8-bit optimizers maintain performance and improve accessibility to the finetuning of large models for those who cannot afford GPUs with large memory buffers. We show models that are now accessible with smaller GPUs in Table 2. A breakdown of individual dataset results on GLUE can be found in Appendix A.

Table 1: Median performance on diverse NLP and computer vision tasks: GLUE, object classification with (MoCo v2) and without pretraining (CLS), machine translation (MT), and large-scale language modeling (LM). While 32-bit Adafactor is competitive with 8-bit Adam, it uses almost twice as much memory and trains slower. 8-bit optimizers match or exceed replicated 32-bit performance on all tasks. We observe no instabilities for 8-bit optimizers. Time is total GPU time on V100 GPUs, except for RoBERTa and GPT3 pretraining, which were done on A100 GPUs.
<table><tr><td>Optimizer</td><td>Task</td><td>Data</td><td>Model</td><td>\(Metric^†\)</td><td>Time</td><td>Mem saved</td></tr><tr><td>32-bit AdamW</td><td>GLUE</td><td>Multiple</td><td>RoBERTa-Large</td><td>88.9</td><td>-</td><td>Reference</td></tr><tr><td>32-bit AdamW</td><td>GLUE</td><td>Multiple</td><td>RoBERTa-Large</td><td>88.6</td><td>17h</td><td>0.0 GB</td></tr><tr><td>32-bit Adafactor</td><td>GLUE</td><td>Multiple</td><td>RoBERTa-Large</td><td>88.7</td><td>24h</td><td>1.3 GB</td></tr><tr><td>8-bit AdamW</td><td>GLUE</td><td>Multiple</td><td>RoBERTa-Large</td><td>88.7</td><td>15h</td><td>2.0 GB</td></tr><tr><td>32-bit Momentum</td><td>CLS</td><td>ImageNet-1k</td><td>ResNet-50</td><td>77.1</td><td>-</td><td>Reference</td></tr><tr><td>32-bit Momentum</td><td>CLS</td><td>ImageNet-1k</td><td>ResNet-50</td><td>77.1</td><td>118h</td><td>0.0 GB</td></tr><tr><td>8-bit Momentum</td><td>CLS</td><td>ImageNet-1k</td><td>ResNet-50</td><td>77.2</td><td>116 h</td><td>0.1 GB</td></tr><tr><td>32-bit Adam</td><td>MT</td><td>WMT'14+16</td><td>Transformer</td><td>29.3</td><td>-</td><td>Reference</td></tr><tr><td>32-bit Adam</td><td>MT</td><td>WMT'14+16</td><td>Transformer</td><td>29.0</td><td>126h</td><td>0.0 GB</td></tr><tr><td>32-bit Adafactor</td><td>MT</td><td>WMT'14+16</td><td>Transformer</td><td>29.0</td><td>127h</td><td>0.3 GB</td></tr><tr><td>8-bit Adam</td><td>MT</td><td>WMT'14+16</td><td>Transformer</td><td>29.1</td><td>115h</td><td>1.1 GB</td></tr><tr><td>32-bit Momentum</td><td>MoCo v2</td><td>ImageNet-1k</td><td>ResNet-50</td><td>67.5</td><td>-</td><td>Reference</td></tr><tr><td>32-bit Momentum</td><td>MoCo v2</td><td>ImageNet-1k</td><td>ResNet-50</td><td>67.3</td><td>30 days</td><td>0.0 GB</td></tr><tr><td>8-bit Momentum</td><td>MoCo v2</td><td>ImageNet-1k</td><td>ResNet-50</td><td>67.4</td><td>28 days</td><td>0.1 GB</td></tr><tr><td>32-bit Adam</td><td>LM</td><td>Multiple</td><td>Transformer-1.5B</td><td>9.0</td><td>308 days</td><td>0.0 GB</td></tr><tr><td>32-bit Adafactor</td><td>LM</td><td>Multiple</td><td>Transformer-1.5B</td><td>8.9</td><td>316 days</td><td>5.6 GB</td></tr><tr><td>8-bit Adam</td><td>LM</td><td>Multiple</td><td>Transformer-1.5B</td><td>9.0</td><td>297 days</td><td>8.5 GB</td></tr><tr><td>32-bit Adam</td><td>LM</td><td>Multiple</td><td>GPT3-Medium</td><td>10.62</td><td>795 days</td><td>0.0 GB</td></tr><tr><td>32-bit Adafactor</td><td>LM</td><td>Multiple</td><td>GPT3-Medium</td><td>10.68</td><td>816 days</td><td>1.5 GB</td></tr><tr><td>8-bit Adam</td><td>LM</td><td>Multiple</td><td>GPT3-Medium</td><td>10.62</td><td>761 days</td><td>1.7 GB</td></tr><tr><td>32-bit Adam</td><td>Masked-LM</td><td>Multiple</td><td>RoBERTa-Base</td><td>3.49</td><td>101 days</td><td>0.0 GB</td></tr><tr><td>32-bit Adafactor</td><td>Masked-LM</td><td>Multiple</td><td>RoBERTa-Base</td><td>3.59</td><td>112 days</td><td>0.7 GB</td></tr><tr><td>8-bit Adam</td><td>Masked-LM</td><td>Multiple</td><td>RoBERTa-Base</td><td>3.48</td><td>94 days</td><td>1.1 GB</td></tr></table>
†Metric: GLUE=Mean Accuracy/Correlation. CLS/MoCo = Accuracy. MT=BLEU. LM=Perplexity.
The broad range of tasks and competitive results demonstrate that 8-bit optimizers are a robust and effective replacement for 32-bit optimizers, do not require any additional changes in hyperparameters, and save a significant amount of memory while speeding up training slightly.

Table 2: With 8-bit optimizers, larger models can be finetuned with the same GPU memory compared to standard 32-bit optimizer training. We use a batch size of one for this comparison.
<table><tr><td rowspan="2">GPU size in GB</td><td colspan="2">Largest finetunable Model (parameters)</td></tr><tr><td>32-bit Adam</td><td>8-bit Adam</td></tr><tr><td>6</td><td>RoBERTa-base (110M)</td><td>RoBERTa-large (355M)</td></tr><tr><td>11</td><td>MT5-small (300M)</td><td>MT5-base (580M)</td></tr><tr><td>24</td><td>MT5-base (580M)</td><td>MT5-large (1.2B)</td></tr><tr><td>24</td><td>GPT-2-medium (762M)</td><td>GPT-2-large (1.5B)</td></tr></table>
# 4 ANALYSIS

We analyze our method in two ways. First, we ablate all 8-bit optimizer components and show that they are necessary for good performance. Second, we look at the sensitivity to hyperparameters compared to 32-bit Adam and show that 8-bit Adam with block-wise dynamic quantization is a reliable replacement that does not require further hyperparameter tuning.

Experimental Setup We perform our analysis on a strong 32-bit Adam baseline for language modeling with transformers (Vaswani et al., 2017). We subsample from the RoBERTa corpus (Liu et al., 2019), which consists of the English sub-datasets: Books (Zhu et al., 2015), Stories (Trinh and Le, 2018), OpenWebText-1 (Gokaslan and Cohen, 2019), Wikipedia, and CC-News (Nagel, 2016). We use a 50k token BPE encoded vocabulary (Sennrich et al., 2015). We find the best 2-GPU-day transformer baseline for 32-bit Adam with multiple hyperparameter searches that take a total of 440 GPU days. Key hyperparameters include 10 layers with a model dimension of 1024, a fully connected hidden dimension of 8192, 16 heads, and input sub-sequences with a length of 512 tokens each. The final model has 209M parameters.

Table 3: Ablation analysis of 8-bit Adam for small (2 GPU days) and large-scale ($\approx$ 1 GPU year) transformer language models on the RoBERTa corpus. The runs without dynamic quantization use linear quantization. The percentage of unstable runs indicates either divergence or crashed training due to exploding gradients. We report median perplexity for successful runs. We can see that dynamic quantization is critical for general stability and block-wise quantization is critical for large-scale stability. The stable embedding layer is useful for both 8-bit and 32-bit Adam and enhances stability to some degree.
<table><tr><td>Parameters</td><td>Optimizer</td><td>Dynamic</td><td>Block-wise</td><td>Stable Emb</td><td>Unstable (%)</td><td>Perplexity</td></tr><tr><td rowspan="8">209M</td><td>32-bit Adam</td><td></td><td></td><td></td><td>0</td><td>16.7</td></tr><tr><td>32-bit Adam</td><td></td><td></td><td>✓</td><td>0</td><td>16.3</td></tr><tr><td>8-bit Adam</td><td></td><td></td><td></td><td>90</td><td>253.0</td></tr><tr><td>8-bit Adam</td><td></td><td></td><td>✓</td><td>50</td><td>194.4</td></tr><tr><td>8-bit Adam</td><td>✓</td><td></td><td></td><td>10</td><td>18.6</td></tr><tr><td>8-bit Adam</td><td>✓</td><td></td><td>✓</td><td>0</td><td>17.7</td></tr><tr><td>8-bit Adam</td><td>✓</td><td>✓</td><td></td><td>0</td><td>16.8</td></tr><tr><td>8-bit Adam</td><td>✓</td><td>✓</td><td>✓</td><td>0</td><td>16.4</td></tr><tr><td>1.3B</td><td>32-bit Adam</td><td></td><td></td><td></td><td>0</td><td>10.4</td></tr><tr><td>1.3B</td><td>8-bit Adam</td><td>✓</td><td></td><td></td><td>100</td><td>N/A</td></tr><tr><td>1.3B</td><td>8-bit Adam</td><td>✓</td><td></td><td>✓</td><td>80</td><td>10.9</td></tr><tr><td>1.5B</td><td>32-bit Adam</td><td></td><td></td><td></td><td>0</td><td>9.0</td></tr><tr><td>1.5B</td><td>8-bit Adam</td><td>✓</td><td>✓</td><td>✓</td><td>0</td><td>9.0</td></tr></table>
Ablation Analysis For the ablation analysis, we compare small and large-scale language modeling perplexity and training stability against a 32-bit Adam baseline. We ablate components individually and include combinations of methods that highlight their interactions. The baseline method uses linear quantization, and we add dynamic quantization, block-wise quantization, and the stable embedding layer to demonstrate their effect. To test optimization stability for small-scale language modeling, we run each setting with different hyperparameters and report median performance across all successful runs. A successful run is a run that does not crash due to exploding gradients or diverge in the loss. We use the hyperparameters $\epsilon \in$ {1e-8, 1e-7, 1e-6}, $\beta_{1} \in$ {0.90, 0.87, 0.93}, $\beta_{2} \in$ {0.999, 0.99, 0.98}, and small changes in learning rates. We also include some partial ablations for large-scale models beyond 1B parameters. In the large-scale setting, we run several seeds with the same hyperparameters. We use a single seed for 32-bit Adam, five seeds for 8-bit Adam at 1.3B parameters, and a single seed for 8-bit Adam at 1.5B parameters.$^{6}$ Results are shown in Table 3.

The ablations show that dynamic quantization, block-wise quantization, and the stable embedding layer are critical for either performance or stability. In addition, block-wise quantization is critical for large-scale language model stability.

Sensitivity Analysis We compare the perplexity of 32-bit Adam vs 8-bit Adam + Stable Embedding as we change the optimizer hyperparameters: learning rate, betas, and $\epsilon$. We change each hyperparameter individually from the baseline hyperparameters $\beta_{1} = 0.9$, $\beta_{2} = 0.995$, $\epsilon =$ 1e-7, and lr $= 0.0163$, and run two random seeds for both 8-bit and 32-bit Adam for each setting. If 8-bit Adam were perfectly insensitive to hyperparameters compared to 32-bit Adam, we would expect the same constant offset in performance for any hyperparameter combination. The results can be seen in Figure 3. The results show a relatively steady gap between 8-bit and 32-bit Adam, suggesting that 8-bit Adam does not require any further hyperparameter tuning compared to 32-bit Adam.

Figure 3: Sensitivity analysis of 8-bit vs 32-bit Adam hyperparameters. We can see that there is little variance between 8- and 32-bit performance, which suggests that 8-bit Adam can be used as a drop-in replacement for 32-bit Adam without any further hyperparameter tuning.
# 5 RELATED WORK

Compressing & Distributing Optimizer States While 16-bit Adam has been used in several publications, the stability of 16-bit Adam was first explicitly studied for a text-to-image generation model, DALL-E (Ramesh et al., 2021). They show that a stable embedding layer, tensor-wise scaling constants for both Adam states, and multiple loss scaling blocks are critical to achieving stability during training. Our work reduces the memory footprint of Adam further, from 16-bit to 8-bit. In addition, we achieve stability by developing new training procedures and non-linear quantization, both of which complement previous developments.

Adafactor (Shazeer and Stern, 2018) uses a different strategy to save memory. All optimizer states are still 32-bit, but the second Adam state is factorized by a row-column outer product, resulting in a memory footprint comparable to 16-bit Adam. Alternatively, Adafactor can also be used without the first moment ($\beta_{1} = 0.0$) (Lepikhin et al., 2020). This version is as memory efficient as 8-bit Adam, but unlike 8-bit Adam, hyperparameters for this Adafactor variant need to be re-tuned to achieve good performance. We compare 8-bit Adam with Adafactor with $\beta_{1} > 0.0$ in our experiments.

AdaGrad (Duchi et al., 2011) adapts the gradient with aggregate training statistics over the entire training run. AdaGrad that uses only the main diagonal as optimizer state, and extensions of AdaGrad such as SM3 (Anil et al., 2019) and extreme tensoring (Chen et al., 2020a), can be more memory efficient than 8-bit Adam. We include some initial comparisons with AdaGrad in Appendix G.

Optimizer sharding (Rajbhandari et al., 2020) splits optimizer states across multiple accelerators such as GPUs/TPUs. While very effective, it can only be used if multiple accelerators are available and data parallelism is used. Optimizer sharding can also have significant communication overhead (Rajbhandari et al., 2021). Our 8-bit optimizers work with all kinds of parallelism. They can also complement optimizer sharding, as they reduce communication overhead by $75\%$.

General Memory Reduction Techniques Other complementary methods for efficient training can be either distributed or local. Distributed approaches spread out the memory of a model across several accelerators such as GPUs/TPUs. Such approaches are model parallelism (Krizhevsky et al., 2009), pipeline parallelism (Krizhevsky et al., 2009; Huang et al., 2018; Harlap et al., 2018), and operator parallelism (Lepikhin et al., 2020). These approaches are useful if one has multiple accelerators available. Our 8-bit optimizers are useful for both single and multiple devices.

Local approaches work for a single accelerator. They include gradient checkpointing (Chen et al., 2016), reversible residual connections (Gomez et al., 2017), and offloading (Pudipeddi et al., 2020; Rajbhandari et al., 2021). All these methods save memory at the cost of increased computational or communication costs. Our 8-bit optimizers reduce the memory footprint of the model while maintaining 32-bit training speed.

Quantization Methods and Data Types While our work is the first to apply 8-bit quantization to optimizer statistics, quantization for neural network model compression, training, and inference are well-studied problems. One of the most common formats of 8-bit quantization is to use data types composed of static sign, exponent, and fraction bits. The most common combination is 5 bits for the exponent and 2 bits for the fraction (Wang et al., 2018b; Sun et al., 2019; Cambier et al., 2020; Mellempudi et al., 2019) with either no normalization or min-max normalization. These data types offer high precision for small magnitude values but have large errors for large magnitude values since only 2 bits are assigned to the fraction. Other methods improve quantization through soft constraints (Li et al., 2021) or more general uniform affine quantizations (Pappalardo, 2021).

Data types lower than 8-bit are usually used to prepare a model for deployment, and the main focus is on improving network inference speed and memory footprint rather than maintaining accuracy. There are methods that use 1 bit (Courbariaux and Bengio, 2016; Rastegari et al., 2016; Courbariaux et al., 2015), 2 bits/3 values (Zhu et al., 2017; Choi et al., 2019), 4 bits (Li et al., 2019), more bits (Courbariaux et al., 2014), or a variable amount of bits (Gong et al., 2019). See also Qin et al. (2020) for a survey on binary neural networks. While these low-bit quantization techniques allow for efficient storage, they likely lead to instability when used for optimizer states.

The work most similar to our block-wise quantization is work on Hybrid Block Floating Point (HBFP) (Drumond et al., 2018), which uses a 24-bit fraction data type with a separate exponent for each tile in matrix multiplication to perform 24-bit matrix multiplication. However, unlike HBFP, block-wise dynamic quantization has the advantage of having both block-wise normalization and a dynamic exponent for each number. This allows for a much broader range of important values since optimizer state values vary by about 5 orders of magnitude. Furthermore, unlike HBFP, block-wise quantization approximates the maximum magnitude values within each block without any quantization error, which is critical for optimization stability, particularly for large networks.

# 6 DISCUSSION & LIMITATIONS

Here we have shown that high-precision quantization can yield 8-bit optimizers that maintain 32-bit optimizer performance without requiring any change in hyperparameters. One of the main limitations of our work is that 8-bit optimizers for natural language tasks require a stable embedding layer to be trained to 32-bit performance. On the other hand, we show that 32-bit optimizers also benefit from a stable embedding layer. As such, the stable embedding layer could be seen as a general replacement for other embedding layers.

We show that 8-bit optimizers reduce the memory footprint and accelerate optimization on a wide range of tasks. However, since 8-bit optimizers reduce only the memory footprint proportional to the number of parameters, models that use large amounts of activation memory and little memory for parameters, such as convolutional networks, have few benefits from using 8-bit optimizers. Thus, 8-bit optimizers are most beneficial for training or finetuning models with many parameters on highly memory-constrained GPUs.

Furthermore, there remain sources of instability that, to our knowledge, are not well understood. For example, we observed that models with over 1B parameters often have hard systemic divergence, where many parameters simultaneously cause exploding gradients. In other cases, a single parameter among those 1B parameters assumed a value too large, caused an exploding gradient, and led to a cascade of instability. It might be that this rare, soft cascading instability is related to the phenomenon where instability disappears after reloading a model checkpoint and rolling a new random seed – a method standard for training huge models. This cascading instability might also be related to the observation that the larger a model is, the more unstable it becomes. For our 8-bit optimizers, we primarily needed the stable embedding layer to avoid cascading instability. Thus the stable embedding layer could potentially be viewed as decreasing the probability of extreme outlier gradients. If such phenomena were better understood, it could lead to better 8-bit optimizers and more stable training in general.

# REFERENCES
Anil, R., Gupta, V., Koren, T., and Singer, Y. (2019). Memory efficient adaptive optimization. In Wallach, H. M., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E. B., and Garnett, R., editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 9746-9755.
Ba, J. L., Kiros, J. R., and Hinton, G. E. (2016). Layer normalization. arXiv preprint arXiv:1607.06450.
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
Cambier, L., Bhiwandiwalla, A., Gong, T., Elibol, O. H., Nekuii, M., and Tang, H. (2020). Shifted and squeezed 8-bit floating point format for low-precision training of deep neural networks. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Chen, E. J. and Kelton, W. D. (2001). Quantile and histogram estimation. In Proceedings of the 2001 Winter Simulation Conference (Cat. No. 01CH37304), volume 1, pages 451-459. IEEE.
Chen, T., Xu, B., Zhang, C., and Guestrin, C. (2016). Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174.
Chen, X., Agarwal, N., Hazan, E., Zhang, C., and Zhang, Y. (2020a). Extreme tensoring for low-memory preconditioning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Chen, X., Fan, H., Girshick, R., and He, K. (2020b). Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297.
Choi, J., Venkataramani, S., Srinivasan, V., Gopalakrishnan, K., Wang, Z., and Chuang, P. (2019). Accurate and efficient 2-bit quantized neural networks. In Talwalkar, A., Smith, V., and Zaharia, M., editors, Proceedings of Machine Learning and Systems 2019, MLSys 2019, Stanford, CA, USA, March 31 - April 2, 2019. mlsys.org.
Courbariaux, M. and Bengio, Y. (2016). Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1. CoRR, abs/1602.02830.
Courbariaux, M., Bengio, Y., and David, J. (2015). Binaryconnect: Training deep neural networks with binary weights during propagations. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 3123-3131.
Courbariaux, M., Bengio, Y., and David, J.-P. (2014). Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE.
Dettmers, T. (2016). 8-bit approximations for parallelism in deep learning. International Conference on Learning Representations (ICLR).
Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2019). BERT: pre-training of deep bidirectional transformers for language understanding. In Burstein, J., Doran, C., and Solorio, T., editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.
Drumond, M., Lin, T., Jaggi, M., and Falsafi, B. (2018). Training dnns with hybrid block floating point. In Bengio, S., Wallach, H. M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R., editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 451-461.
Duchi, J., Hazan, E., and Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research, 12(7).
Dunning, T. and Ertl, O. (2019). Computing extremely accurate quantiles using t-digests. arXiv preprint arXiv:1902.04023.
Fedus, W., Zoph, B., and Shazeer, N. (2021). Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. arXiv preprint arXiv:2101.03961.
Glorot, X. and Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249-256. JMLR Workshop and Conference Proceedings.
Gokaslan, A. and Cohen, V. (2019). Openwebtext corpus.
Gomez, A. N., Ren, M., Urtasun, R., and Grosse, R. B. (2017). The reversible residual network: Backpropagation without storing activations. arXiv preprint arXiv:1707.04585.
Gong, R., Liu, X., Jiang, S., Li, T., Hu, P., Lin, J., Yu, F., and Yan, J. (2019). Differentiable soft quantization: Bridging full-precision and low-bit neural networks. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 4851-4860. IEEE.
Govindaraju, N. K., Raghuvanshi, N., and Manocha, D. (2005). Fast and approximate stream mining of quantiles and frequencies using graphics processors. In Proceedings of the 2005 ACM SIGMOD international conference on Management of data, pages 611-622.
Greenwald, M. and Khanna, S. (2001). Space-efficient online computation of quantile summaries. ACM SIGMOD Record, 30(2):58-66.
Harlap, A., Narayanan, D., Phanishayee, A., Seshadri, V., Devanur, N., Ganger, G., and Gibbons, P. (2018). Pipedream: Fast and efficient pipeline parallel dnn training. arXiv preprint arXiv:1806.03377.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778.
Henighan, T., Kaplan, J., Katz, M., Chen, M., Hesse, C., Jackson, J., Jun, H., Brown, T. B., Dhariwal, P., Gray, S., et al. (2020). Scaling laws for autoregressive generative modeling. arXiv preprint arXiv:2010.14701.
Huang, Y., Cheng, Y., Bapna, A., Firat, O., Chen, M. X., Chen, D., Lee, H., Ngiam, J., Le, Q. V., Wu, Y., et al. (2018). Gpipe: Efficient training of giant neural networks using pipeline parallelism. arXiv preprint arXiv:1811.06965.
Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., and Amodei, D. (2020). Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.
Keskar, N. S., McCann, B., Varshney, L. R., Xiong, C., and Socher, R. (2019). CTRL: A conditional transformer language model for controllable generation. CoRR, abs/1909.05858.
Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Krizhevsky, A., Hinton, G., et al. (2009). Learning multiple layers of features from tiny images.
Lepikhin, D., Lee, H., Xu, Y., Chen, D., Firat, O., Huang, Y., Krikun, M., Shazeer, N., and Chen, Z. (2020). Gshard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668.
Lewis, M., Bhosale, S., Dettmers, T., Goyal, N., and Zettlemoyer, L. (2021). Base layers: Simplifying training of large, sparse models. arXiv preprint arXiv:2103.16716.
Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., and Zettlemoyer, L. (2020). BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Jurafsky, D., Chai, J., Schluter, N., and Tetrault, J. R., editors, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871-7880. Association for Computational Linguistics.
|
| 226 |
+
Li, J. B., Qu, S., Li, X., Strubell, E., and Metze, F. (2021). End-to-end quantized training via log-barrier extensions.
|
| 227 |
+
Li, R., Wang, Y., Liang, F., Qin, H., Yan, J., and Fan, R. (2019). Fully quantized network for object detection. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 2810-2819. Computer Vision Foundation / IEEE.
|
| 228 |
+
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
|
| 229 |
+
Loshchilov, I. and Hutter, F. (2018). Fixing weight decay regularization in adam.
|
| 230 |
+
Macháček, M. and Bojar, O. (2014). Results of the wmt14 metrics shared task. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 293-301.
|
| 231 |
+
Mellempudi, N., Srinivasan, S., Das, D., and Kaul, B. (2019). Mixed precision training with 8-bit floating point. CoRR, abs/1905.12334.
|
| 232 |
+
Micikevicius, P., Narang, S., Alben, J., Diamos, G., Elsen, E., Garcia, D., Ginsburg, B., Houston, M., Kuchaiev, O., Venkatesh, G., et al. (2017). Mixed precision training. arXiv preprint arXiv:1710.03740.
|
| 233 |
+
Nagel, S. (2016). Cc-news.
|
| 234 |
+
Ott, M., Edunov, S., Baevski, A., Fan, A., Gross, S., Ng, N., Grangier, D., and Auli, M. (2019). fairseq: A fast, extensible toolkit for sequence modeling. arXiv preprint arXiv:1904.01038.
|
| 235 |
+
Ott, M., Edunov, S., Grangier, D., and Auli, M. (2018). Scaling neural machine translation. arXiv preprint arXiv:1806.00187.
|
| 236 |
+
Pappalardo, A. (2021). Xilinx/brevitas.
|
| 237 |
+
Pudipeddi, B., Mesmakhosroshahi, M., Xi, J., and Bharadwaj, S. (2020). Training large neural networks with constant memory using a new execution algorithm. arXiv preprint arXiv:2002.05645.
|
| 238 |
+
Qian, N. (1999). On the momentum term in gradient descent learning algorithms. Neural networks : the official journal of the International Neural Network Society, 12 1:145-151.
|
| 239 |
+
Qin, H., Gong, R., Liu, X., Bai, X., Song, J., and Sebe, N. (2020). Binary neural networks: A survey. CoRR, abs/2004.03333.
|
| 240 |
+
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
|
| 241 |
+
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. (2019). Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
|
| 242 |
+
|
| 243 |
+
Rajbhandari, S., Rasley, J., Ruwase, O., and He, Y. (2020). Zero: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1-16. IEEE.
|
| 244 |
+
Rajbhandari, S., Ruwase, O., Rasley, J., Smith, S., and He, Y. (2021). Zero-infinity: Breaking the GPU memory wall for extreme scale deep learning. arXiv preprint arXiv:2104.07857.
|
| 245 |
+
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. (2021). Zero-shot text-to-image generation. arXiv preprint arXiv:2102.12092.
|
| 246 |
+
Rastegari, M., Ordonez, V., Redmon, J., and Farhadi, A. (2016). Xnor-net: Imagenet classification using binary convolutional neural networks. In Leibe, B., Matas, J., Sebe, N., and Welling, M., editors, Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV, volume 9908 of Lecture Notes in Computer Science, pages 525-542. Springer.
|
| 247 |
+
Sennrich, R., Haddow, B., and Birch, A. (2015). Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
|
| 248 |
+
Sennrich, R., Haddow, B., and Birch, A. (2016). Edinburgh neural machine translation systems for wmt 16. arXiv preprint arXiv:1606.02891.
|
| 249 |
+
Shazeer, N. and Stern, M. (2018). Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pages 4596-4604. PMLR.
|
| 250 |
+
Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. (2019). Megatron-LM: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053.
|
| 251 |
+
Sun, X., Choi, J., Chen, C., Wang, N., Venkataramani, S., Srinivasan, V., Cui, X., Zhang, W., and Gopalakrishnan, K. (2019). Hybrid 8-bit floating point (HFP8) training and inference for deep neural networks. In Wallach, H. M., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E. B., and Garnett, R., editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 4901-4910.
|
| 252 |
+
Trinh, T. H. and Le, Q. V. (2018). A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847.
|
| 253 |
+
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need. arXiv preprint arXiv:1706.03762.
|
| 254 |
+
Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. (2018a). Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
|
| 255 |
+
Wang, N., Choi, J., Brand, D., Chen, C., and Gopalakrishnan, K. (2018b). Training deep neural networks with 8-bit floating point numbers. In Bengio, S., Wallach, H. M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R., editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 7686-7695.
|
| 256 |
+
Wenzek, G., Lachaux, M.-A., Conneau, A., Chaudhary, V., Guzmán, F., Joulin, A., and Grave, E. (2020). CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4003-4012, Marseille, France. European Language Resources Association.
|
| 257 |
+
Zhu, C., Han, S., Mao, H., and Dally, W. J. (2017). Trained ternary quantization. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
|
| 258 |
+
Zhu, Y., Kiros, R., Zemel, R., Salakhutdinov, R., Urtasun, R., Torralba, A., and Fidler, S. (2015). Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pages 19-27.
|
| 259 |
+
|
| 260 |
+
# A GLUE SCORE BREAKDOWN
|
| 261 |
+
|
| 262 |
+
Table 4 contains the breakdown of individual scores on the GLUE datasets.
|
| 263 |
+
|
| 264 |
+
Table 4: Breakdown of GLUE scores. Each column is the median of 10 random seeds. The mean is the mean over medians.
|
| 265 |
+
|
| 266 |
+
<table><tr><td>Model</td><td>MNLI</td><td>QNLI</td><td>QQP</td><td>RTE</td><td>SST-2</td><td>MRPC</td><td>CoLA</td><td>STS-B</td><td>Mean</td></tr><tr><td>32-bit Adam</td><td>90.40</td><td>94.85</td><td>92.2</td><td>84.5</td><td>96.40</td><td>90.1</td><td>67.41</td><td>93.03</td><td>88.61</td></tr><tr><td>32-bit Adafactor</td><td>90.35</td><td>94.70</td><td>92.2</td><td>85.4</td><td>96.45</td><td>90.0</td><td>67.63</td><td>92.91</td><td>88.71</td></tr><tr><td>8-bit Adam</td><td>90.30</td><td>94.70</td><td>92.2</td><td>85.9</td><td>96.40</td><td>90.3</td><td>67.20</td><td>92.87</td><td>88.73</td></tr></table>
|
| 267 |
+
|
| 268 |
+
# B STABILITY OF EMBEDDING LAYERS
|
| 269 |
+
|
| 270 |
+
Highly variable gradients can lead to unpredictable optimization behavior and instability that manifests as divergence or exploding gradients. Low precision optimizers can amplify variance of gradient updates due to the noise introduced during quantization. While our 8-bit optimizers appear to be stable for convolutional networks, similar to Ramesh et al. (2021), we find that word embedding layers are a major source of instability.
|
| 271 |
+
|
| 272 |
+
The main instability of the word embedding layer comes from the fact that it is a sparse layer with a non-uniform distribution of inputs, which can produce maximum gradient magnitudes $100\mathrm{x}$ larger than those of other layers. For dense layers, given $n$ samples arranged into $k$ mini-batches, the sum of gradients over all mini-batches is always the same, independent of how the $n$ samples are arranged into the $k$ mini-batches. For embedding gradients, this sum depends on the arrangement of samples into mini-batches. The reason is that most deep learning frameworks normalize the gradient by the total number of tokens in the mini-batch rather than by the frequency of each individual token. This approximation allows stable learning with a single learning rate rather than variable learning rates that depend on the token frequency in each individual mini-batch. However, a side effect of this method is that the gradient magnitude for a particular token can vary widely with batch size and between different mini-batches.
|
| 273 |
+
|
| 274 |
+
There are multiple recipes for initializing word embedding layers. One of the most common, used in all models trained with fairseq (Ott et al., 2019) such as RoBERTa (Liu et al., 2019), BART (Lewis et al., 2020), large NMT models (Ott et al., 2018), and sparse expert models (Lewis et al., 2021), is the following: initialize the word embedding layer with $N(0,1/\sqrt{k})$, where $k$ is the embedding size, and scale the outputs by $\sqrt{k}$. This scheme yields an output distribution with unit variance at the start of training, which ensures good gradient flow.
|
| 275 |
+
|
| 276 |
+
We find this approach to induce some instability for 8-bit optimizers. We develop the stable embedding layer to solve this instability problem.
|
| 277 |
+
|
| 278 |
+
While the full recipe for our stable embedding layer is new, its components have been used before. A layer norm after the embedding appears in work such as Devlin et al. (2019) and Radford et al. (2019), and enhanced precision for this particular layer was used in Ramesh et al. (2021). As pointed out above, these elements are not standard, and the stable embedding layer combines three aspects that are all important: (1) enhanced precision, (2) layer norm, and (3) Xavier initialization.
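To make these three ingredients concrete, the following is a minimal PyTorch sketch of such a layer. The class and argument names are illustrative and not taken from the paper's released code; in particular, keeping the lookup in 32-bit here stands in for the broader "enhanced precision" recipe (fp32 weights and 32-bit optimizer states for this layer).

```python
import torch
import torch.nn as nn

class StableEmbeddingSketch(nn.Module):
    """Illustrative sketch of the three ingredients described above:
    Xavier-uniform initialization, a layer norm after the lookup, and
    keeping this layer in full 32-bit precision. Hypothetical names,
    not the authors' reference implementation."""

    def __init__(self, num_embeddings: int, embedding_dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_embeddings, embedding_dim)
        nn.init.xavier_uniform_(self.embed.weight)   # (3) Xavier initialization
        self.norm = nn.LayerNorm(embedding_dim)      # (2) layer norm after the lookup

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # (1) enhanced precision: run the lookup and normalization in fp32,
        # even if the rest of the model runs in fp16/bf16.
        out = self.embed(token_ids).float()
        return self.norm(out)

# usage sketch
emb = StableEmbeddingSketch(num_embeddings=50_000, embedding_dim=1024)
x = torch.randint(0, 50_000, (8, 128))
print(emb(x).shape)  # torch.Size([8, 128, 1024])
```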
|
| 279 |
+
|
| 280 |
+
# C QUANTIZATION ERROR ANALYSIS
|
| 281 |
+
|
| 282 |
+
To gain more insights into why block-wise dynamic quantization works so well and how it could be improved, we performed an analysis of Adam quantization errors during language model training. Adam quantization errors are the deviations between the quantized 8-bit Adam update and the 32-bit Adam update: $|\mathbf{u_8} - \mathbf{u_{32}}|$, where $\mathbf{u_k} = \mathbf{s_1^k} / (\sqrt{\mathbf{s_2^k}} + \epsilon)$ for the $k$-bit optimizer states. See Background Section 1.1 for details on Adam.
|
| 283 |
+
|
| 284 |
+
A good 8-bit quantization has the property that, for a given input distribution, the inputs are only rarely quantized into intervals with high quantization error and most often quantized into intervals with low error.
|
| 285 |
+
|
| 286 |
+
In 8-bit, there are $256 \times 256$ possible 8-bit Adam updates: 256 possible values for the first Adam state and 256 for the second. We look at the average quantization error of each of these possible updates to see where the largest errors are, and we plot histograms to see how often values with high error occur. Taken together, these two perspectives give a detailed view of the magnitude of deviations and of how often large deviations occur.
|
| 287 |
+
|
| 288 |
+
We study these questions by looking at how often each of the 256 values for both Adam states are used during language model training. We also analyze the average error for each of the inputs quantized to each of the 256 values. With this analysis it is easy to find regions of high use and high error, and visualize their overlap. An overlap of these regions is associated with large frequent errors that cause unstable training. The quantization error analysis is shown in Figure 4.
|
| 289 |
+
|
| 290 |
+
The plots show two things: (1) The region of high usage (histogram) shows how often each combination of $256 \times 256$ bit values is used for the first Adam state $\mathbf{s}_1$ (exponentially smoothed running sum) and the second Adam state $\mathbf{s}_2$ (exponentially smoothed running squared sum). (2) The error plots show for $k$ -bit Adam updates $\mathbf{u_k} = \mathbf{s_1} / (\sqrt{\mathbf{s_2}} + \epsilon)$ the mean absolute Adam error $|u_{32} - u_{8}|$ and the relative Adam error $|u_{32} - u_{8}| / |u_{32}|$ averaged over each bit combination. In conjunction these plots show which bits have the highest error per use and how often each bit is used. The x-axis/y-axis represents the quantization type range which means the largest positive/negative Adam states per block/tensor take the values 1.0/-1.0.
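The following numpy sketch shows how such an analysis can be assembled: quantize both Adam states with a given 256-value quantization map, compare 8-bit against 32-bit updates, and accumulate a usage histogram plus a mean error per bit combination. The map `qmap` is a placeholder (for example `np.linspace(-1, 1, 256)` for linear quantization), and the function is intended to be run on a (sub)sample of the states; the exact normalization and plotting code of the paper are not reproduced here.

```python
import numpy as np

def quantize(x, qmap):
    """Round each (normalized) value to the nearest code in qmap."""
    idx = np.argmin(np.abs(x[:, None] - qmap[None, :]), axis=1)
    return idx, qmap[idx]

def adam_error_analysis(s1, s2, qmap, eps=1e-8):
    """Usage histogram over the 256x256 bit combinations and mean absolute
    Adam error per combination, plus the overall relative error."""
    c1, c2 = np.abs(s1).max(), np.abs(s2).max()       # per-tensor normalization constants
    i1, q1 = quantize(s1 / c1, qmap)
    i2, q2 = quantize(s2 / c2, qmap)
    u32 = s1 / (np.sqrt(s2) + eps)
    u8 = (q1 * c1) / (np.sqrt(np.maximum(q2 * c2, 0.0)) + eps)
    abs_err = np.abs(u32 - u8)
    hist = np.zeros((len(qmap), len(qmap)))
    err_sum = np.zeros_like(hist)
    np.add.at(hist, (i1, i2), 1.0)                    # how often each bit combination is used
    np.add.at(err_sum, (i1, i2), abs_err)
    mean_abs_err = err_sum / np.maximum(hist, 1.0)    # mean absolute Adam error per combination
    rel_err = (abs_err / (np.abs(u32) + eps)).mean()
    return hist, mean_abs_err, rel_err
```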
|
| 291 |
+
|
| 292 |
+
We can see that block-wise dynamic quantization has the smallest overlap between regions of high use and high error. While its absolute Adam quantization error of 0.0061 is not much lower than the 0.0067 of dynamic quantization, the plots also show that block-wise dynamic quantization produces large errors more rarely, which likely contributes to improved stability during optimization.
|
| 293 |
+
|
| 294 |
+
# D FINE-GRAINED OPTIMIZER RUNTIME PERFORMANCE
|
| 295 |
+
|
| 296 |
+
Table 5 shows optimizer performance benchmarked in isolation, without any training. We use a large sample of a normal distribution and benchmark 100 optimizer updates, reporting the average time per update per billion parameters in milliseconds.
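A rough benchmark in this spirit can be sketched as follows. This is an assumption about the setup rather than the paper's exact harness: it times `torch.optim.Adam` on a synthetic normally distributed parameter tensor and scales the result to one billion parameters; an 8-bit optimizer implementation could be swapped in for the constructor.

```python
import time
import torch

def benchmark_optimizer(opt_ctor, n_params=100_000_000, steps=100, device="cuda"):
    """Time `steps` optimizer updates on synthetic parameters/gradients and
    report milliseconds per update, scaled to 1B parameters (sketch only)."""
    p = torch.nn.Parameter(torch.randn(n_params, device=device))
    p.grad = torch.randn_like(p)
    opt = opt_ctor([p])
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(steps):
        opt.step()
    torch.cuda.synchronize()
    ms_per_update = (time.time() - start) / steps * 1000
    return ms_per_update * (1e9 / n_params)  # scale to per-1B-parameters

# e.g. 32-bit PyTorch Adam; replace the constructor to benchmark other optimizers
print(benchmark_optimizer(lambda ps: torch.optim.Adam(ps, lr=1e-3)))
```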
|
| 297 |
+
|
| 298 |
+
Table 5: Runtime performance of 8-bit optimizers vs commonly used 32-bit optimizers in milliseconds per update per 1B parameters for 32-bit gradients. This comparison was run on a V100 GPU.
|
| 299 |
+
|
| 300 |
+
<table><tr><td rowspan="2">Optimizer</td><td colspan="3">Millisecond per update per 1B param</td></tr><tr><td>32-bit PyTorch</td><td>32-bit Apex</td><td>8-bit (Ours)</td></tr><tr><td>Adam</td><td>145</td><td>63</td><td>47</td></tr><tr><td>Momentum</td><td>58</td><td>46</td><td>34</td></tr><tr><td>LAMB</td><td>-</td><td>91</td><td>65</td></tr><tr><td>LARS</td><td>-</td><td>119</td><td>43</td></tr></table>
|
| 301 |
+
|
| 302 |
+
# E ADDITIONAL QUANTIZATION DATA TYPES
|
| 303 |
+
|
| 304 |
+
This section describes additional quantization data types that we tried but found to perform poorly in terms of quantization quality or stability. While quantile quantization has an average quantization error half as large as that of dynamic quantization for any normal distribution, it produces sporadic large errors that lead to large Adam errors and poor model performance (see Figure 5). Moreover, even with state-of-the-art quantile estimation algorithms (see Section F), quantile quantization is too slow to be practical. An overview of the quantization performance of these additional data types compared to dynamic quantization (without block-wise quantization) can be found in Table 6.
|
| 305 |
+
|
| 306 |
+

|
| 307 |
+
|
| 308 |
+

|
| 309 |
+
|
| 310 |
+

|
| 311 |
+
|
| 312 |
+

|
| 313 |
+
|
| 314 |
+

|
| 315 |
+
|
| 316 |
+

|
| 317 |
+
|
| 318 |
+

|
| 319 |
+
Figure 4: Good quantization methods do not have overlaps between regions of high use and high error. The plot shows that for linear quantization regions of high usage and high error overlap. For dynamic quantization regions with high relative error are used infrequently while only small regions have high usage and high absolute error. Block-wise dynamic quantization spreads out the usage over a large space and has the lowest overlap between regions of high use and errors. This means that not only is the overall error of block-wise dynamic quantization lower, but also that large errors for individual parameter updates are rarer compared to other methods, thus improving stability. See the main text for more details.
|
| 320 |
+
|
| 321 |
+

|
| 322 |
+
|
| 323 |
+

|
| 324 |
+
|
| 325 |
+
Table 6: Mean relative Adam error and absolute quantization error for the first Adam state for different quantization methods. Results show mean±standard error. We can see that dynamic quantization has the best relative error and that both dynamic methods have the best absolute error.
|
| 326 |
+
|
| 327 |
+
<table><tr><td>Method</td><td>Relative Adam Error</td><td>Absolute Quantization Error</td></tr><tr><td>Linear</td><td>201% ±17%</td><td>41.2e-10±3.1e-10</td></tr><tr><td>Quantile</td><td>11.9% ± 0.3%</td><td>8.8e-10±0.9e-10</td></tr><tr><td>Inverse Dynamic</td><td>6.5%± 0.1%</td><td>4.6e-10±0.4e-10</td></tr><tr><td>Dynamic</td><td>4.8%± 0.4%</td><td>3.5e-10±1.1e-10</td></tr></table>
|
| 328 |
+
|
| 329 |
+

|
| 330 |
+
Figure 5: Distribution of Adam error among each of the 256 8-bit values of the first Adam state. We normalize the values into the range [-1, 1]; with this, -1 indicates the largest negative value, 0 the value closest to 0, and so forth. See Figure 6 for a visualization of this normalization. Quantile quantization has large errors for large values, whereas dynamic quantization has small errors for both small and large values, with the bulk of its error concentrated in intermediate values.
|
| 331 |
+
|
| 332 |
+
# E.1 INVERSE DYNAMIC QUANTIZATION
|
| 333 |
+
|
| 334 |
+
Inverse dynamic quantization is motivated by the hypothesis that large Adam updates are more important than small updates. Since Adam is composed of a ratio of optimizer states $\mathbf{m}_t / (\sqrt{\mathbf{r}_t} +\epsilon)$, we expect small values in the second state $\mathbf{r}_t$ to produce large Adam updates. To obtain a better quantization error for small values, we can swap the dynamic exponent and the base exponent. For regular dynamic quantization, the base exponent is $10^{0} = 1$ and each zero bit decreases the exponent by a factor of 10, down to a minimum value of $10^{-7}$. We invert this: starting from base $10^{-7}$, each zero bit increases the exponent by a factor of 10, up to a maximum value of 1. We denote this quantization as inverse dynamic quantization.
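The sketch below builds both variants of the quantization map under a simplified reading of this description: one sign bit, a run of leading zero bits that selects a power-of-10 exponent, an indicator bit, and the remaining bits as a linear fraction. It illustrates the exponent-swapping idea only and is not the exact bit layout of the paper's implementation.

```python
import numpy as np

def dynamic_map(inverse=False, total_bits=8):
    """Simplified dynamic / inverse dynamic quantization map (illustrative)."""
    values = [0.0]
    frac_bits_total = total_bits - 1            # bits left after the sign bit
    for zeros in range(frac_bits_total):        # number of leading zero bits
        # dynamic: exponent shrinks with more zeros; inverse: exponent grows
        exp = 10.0 ** (zeros - 7) if inverse else 10.0 ** (-zeros)
        n_frac = frac_bits_total - zeros - 1    # one bit acts as the indicator bit
        fracs = np.linspace(0.1, 1.0, max(2 ** n_frac, 1))
        for f in fracs:
            values.extend([f * exp, -f * exp])  # apply the sign bit
    return np.sort(np.unique(np.asarray(values)))

print(len(dynamic_map()), len(dynamic_map(inverse=True)))
```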
|
| 335 |
+
|
| 336 |
+
# E.2 QUANTILE QUANTIZATION: A LOSSY MINIMUM ENTROPY ENCODING
|
| 337 |
+
|
| 338 |
+
A lossy minimum entropy encoding with $k$ bits has the property that for any input data, the quantized outputs take the value of each of the $2^k$ different bit representations equally often.
|
| 339 |
+
|
| 340 |
+
More formally, a lossy minimum entropy encoding can be described in the following way. Given an infinite stream of sampled real numbers $x_{i}$ where $x_{i}$ is distributed as $X$ , an arbitrary probability distribution, a lossy minimum entropy encoding is given by the $k$ -bit quantization map $\mathbf{Q}^{\mathrm{map}} \in \mathbb{R}^{2^k}$ which maps values $q \in \mathbb{R}^{2^k}$ to indices $0,1,\ldots,2^k$ which has the property that if any number of elements $x_{i}$ from the stream are quantized to $x_{i}^{q}$ we do not gain any information which is predictive of future $x_{j > i}^{q}$ .
|
| 341 |
+
|
| 342 |
+
One way to fulfill this property for arbitrary probability distributions $X$ , is to divide the probability distribution function $f_{X}$ into $2^{k}$ bins where each bin has equal area and the mid-points of these bins are values $q$ of the quantization map $\mathbf{Q}^{\mathrm{map}}$ . Empirically, this is equivalent to a histogram with $2^{k}$ bins where each bin contains equal number of values.
|
| 343 |
+
|
| 344 |
+
How do we find the mid-points for each histogram bin? This is equivalent to finding the $2^{k}$ nonoverlapping values $x$ for the cumulative distribution function $F_{X}$ with equal probability mass. These values can most easily be found by using its inverse function, the quantile function $Q_{X} = F_{X}^{-1}$ . We can find the mid-points of each of the histogram bins by using the mid-points between $2^{k} + 1$ equally
|
| 345 |
+
|
| 346 |
+
spaced quantiles over the range of probabilities [0, 1]:
|
| 347 |
+
|
| 348 |
+
$$
|
| 349 |
+
q_i = \frac{Q_X\left(\frac{i}{2^k + 1}\right) + Q_X\left(\frac{i + 1}{2^k + 1}\right)}{2}, \tag{5}
|
| 350 |
+
$$
|
| 351 |
+
|
| 352 |
+
To find $q$ empirically, we can estimate sample quantiles for a tensor $\mathbf{T}$ with unknown distribution $X$ by finding the $2^{k}$ equally spaced sample quantiles via $\mathbf{T}$ 's empirical cumulative distribution function. We refer to this quantization as quantile quantization.
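As a concrete illustration, a minimal numpy version of this empirical procedure might look as follows, using `np.quantile` as the eCDF lookup on a (sample of a) tensor. The final normalization into $[-1, 1]$ is an assumption, added to match how the maps are visualized in Figure 6.

```python
import numpy as np

def quantile_qmap(sample, k=8):
    """Empirical sketch of Eq. (5): the 2^k code values are mid-points between
    neighbouring sample quantiles taken at 2^k + 1 equally spaced probabilities."""
    edges = np.quantile(sample, np.linspace(0.0, 1.0, 2 ** k + 1))
    qmap = 0.5 * (edges[:-1] + edges[1:])        # mid-points -> 2^k values
    return qmap / np.max(np.abs(qmap))           # normalize into [-1, 1]

# usage: build an 8-bit map from a sample of a tensor's values
rng = np.random.default_rng(0)
print(quantile_qmap(rng.normal(size=100_000))[:4])
```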
|
| 353 |
+
|
| 354 |
+
To estimate sample quantiles efficiently, we devise a specialized approximate quantile estimation algorithm, SRAM-Quantiles, which is more than $75\mathrm{x}$ faster than other approximate quantile estimation approaches (Govindaraju et al., 2005; Dunning and Ertl, 2019). SRAM-Quantiles uses a divide-and-conquer strategy to perform sorting solely in fast SRAM. More details on this algorithm can be found in the Appendix Section F.
|
| 355 |
+
|
| 356 |
+
# E.3 VISUALIZATION: DYNAMIC VS LINEAR QUANTIZATION VS QUANTILE QUANTIZATION
|
| 357 |
+
|
| 358 |
+
Figure 6 shows the mapping from each of the 255 values of the 8-bit data types to their value normalized into the range [-1, 1]. We can see that most bits in dynamic quantization are allocated to large and small values. Quantile quantization is introduced in Appendix E.2.
|
| 359 |
+
|
| 360 |
+

|
| 361 |
+
Figure 6: Visualization of the quantization maps for the linear, dynamic and quantile quantization. For quantile quantization we use values from the standard normal distribution and normalize them into the range [-1, 1].
|
| 362 |
+
|
| 363 |
+
# F SRAM-QUANTILES: A FAST QUANTILE ESTIMATION ALGORITHM
|
| 364 |
+
|
| 365 |
+
To estimate sample quantiles of a tensor one needs to determine the empirical cumulative distribution function (eCDF) of that tensor. The easiest way to find the eCDF is to sort a given tensor. Once sorted, the quantiles can be found by using the value at index $i = q \times n$ where $i$ is the index into the sorted array, $q$ is the desired quantile and $n$ is the total elements in the tensor. While simple, this process of estimating quantiles is computationally expensive and would render training with quantile quantization too slow to be useful.
|
| 366 |
+
|
| 367 |
+
Similar to other quantile estimation approaches, our GPU algorithm, SRAM-Quantiles, uses a sliding window over the data for fast, approximate quantile estimation with minimal resources. Greenwald and Khanna (2001)'s quantile estimation algorithm uses dynamic bin histograms over sliding windows to estimate quantiles. Extensions of this algorithm accelerate estimation by using more efficient data structures and estimation algorithms (Dunning and Ertl, 2019) or by using GPUs (Govindaraju et al., 2005). The main difference from this prior work is that we only compute a limited
|
| 368 |
+
|
| 369 |
+
set of quantiles known a priori (256, to be exact), while previous work focuses on general statistics that can produce any quantile a posteriori. Thus, we can devise a highly specialized algorithm that offers faster estimation.
|
| 370 |
+
|
| 371 |
+
The idea behind our algorithm comes from the fact that sorting is slow because it involves repeated loads and stores from main memory (DRAM) when executing divide-and-conquer sorting algorithms. We can significantly improve performance of quantile estimation if we restructure quantile estimation to respect memory hierarchies of the device on which the algorithm is executed.
|
| 372 |
+
|
| 373 |
+
On a GPU, programmable SRAM – known as shared memory – is 15x faster than DRAM but has a limited size of around 64 KB per core. The SRAM-Quantiles algorithm is simple: instead of finding the full eCDF, we find the eCDF for each subset of the tensor's values that fits into SRAM (about 4096 32-bit values). Once we have found the quantiles for each subset, we average the quantiles atomically in DRAM.
|
| 374 |
+
|
| 375 |
+
This algorithm works because the arithmetic mean is an unbiased estimator of the population mean and sample quantiles estimated via eCDFs are asymptotically unbiased estimators of the population quantiles (Chen and Kelton, 2001). Thus, the more subset quantiles we average, the better the estimate of the tensor-wide quantiles.
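A CPU/numpy sketch of this chunk-and-average idea is shown below. It only conveys the structure: the actual algorithm sorts each chunk in CUDA shared memory and performs the averaging with atomic operations in DRAM; the chunk size of 4096 follows the description above.

```python
import numpy as np

def sram_quantiles_sketch(tensor, k=8, chunk=4096):
    """Approximate 2^k quantiles: sort fixed-size chunks, read off each chunk's
    eCDF quantiles, and average the per-chunk estimates."""
    x = np.asarray(tensor).ravel()
    n = (len(x) // chunk) * chunk
    chunks = np.sort(x[:n].reshape(-1, chunk), axis=1)       # sort each chunk separately
    probs = np.linspace(0.0, 1.0, 2 ** k)
    idx = np.clip((probs * (chunk - 1)).astype(int), 0, chunk - 1)
    per_chunk = chunks[:, idx]                                # value at index i = q * n per chunk
    return per_chunk.mean(axis=0)                             # average the subset quantiles

rng = np.random.default_rng(0)
print(sram_quantiles_sketch(rng.normal(size=2 ** 20))[:4])
```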
|
| 376 |
+
|
| 377 |
+
For estimating 256 quantiles on a large stream of numbers, our algorithm takes on average 0.064 ns to process one element in the stream, whereas the fastest general algorithms take 300 ns (Govindaraju et al., 2005) and 5 ns (Dunning and Ertl, 2019).
|
| 378 |
+
|
| 379 |
+
# G ADAGRAD COMPARISONS
|
| 380 |
+
|
| 381 |
+
While the main aim in this work is to investigate how the most commonly used optimizers, such as Adam (Kingma and Ba, 2014) and Momentum (Qian, 1999), can be used as 8-bit variants without any further hyperparameter tuning, it can be of interest to consider the behavior of our 8-bit methods under different scenarios. For example, one difference between Adam/Momentum and AdaGrad (Duchi et al., 2011) is that AdaGrad accumulates gradients statistics over the entire course of training while Adam/Momentum use a smoothed exponential decay over time. As such, this could lead to very different 8-bit quantization behavior where there are large differences between the magnitude of different optimizer states. Such large differences could induce a large quantization error and degrade performance of 8-bit optimizers.
|
| 382 |
+
|
| 383 |
+
To investigate this, we train small 209M parameter language models on the RoBERTa corpus (Liu et al., 2019). We use the AdaGrad hyperparameters introduced by Keskar et al. (2019). Results are shown in Table 7. From the results we can see that our 8-bit methods do not work as well for AdaGrad. One hypothesis is that this is due to the wide range of gradient statistics of AdaGrad, which comes from averaging the gradient over the entire course of training. To prevent poor quantization in such scenarios, stochastic rounding proved very effective in our initial experiments with other 8-bit optimizers. While we abandoned stochastic rounding because we did not see any benefits for Adam and Momentum, it could be an effective solution for AdaGrad. We leave such improved 8-bit quantization methods for AdaGrad to future work.
|
| 384 |
+
|
| 385 |
+
Table 7: AdaGrad compared to Adam performance for a 209M parameter language model on the RoBERTa corpus. The 8-bit methods use the stable embedding layer. AdaGrad hyperparameters are taken from Keskar et al. (2019).
|
| 386 |
+
|
| 387 |
+
<table><tr><td>Optimizer</td><td>Valid Perplexity</td></tr><tr><td>32-bit Adam</td><td>16.7</td></tr><tr><td>8-bit Adam</td><td>16.4</td></tr><tr><td>32-bit AdaGrad</td><td>19.4</td></tr><tr><td>8-bit AdaGrad</td><td>19.7</td></tr></table>
|
8bitoptimizersviablockwisequantization/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2120a62f33230a5e0bdd72c8046bee7dabfe8ba7a4a38b013c58e67dbacb4102
|
| 3 |
+
size 639027
|
8bitoptimizersviablockwisequantization/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f88aafcb8628d3882ce53416838ea29969bff4f8072aba4bcd9207825d60e4f6
|
| 3 |
+
size 525362
|
abinitiopotentialenergysurfacesbypairinggnnswithneuralwavefunctions/f2f4139b-878c-4f54-9232-83badb2fceae_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4a78367467c9a7014c8eea948b6d0a23f84db670a5da0c5833fb1bb46a558e40
|
| 3 |
+
size 113985
|
abinitiopotentialenergysurfacesbypairinggnnswithneuralwavefunctions/f2f4139b-878c-4f54-9232-83badb2fceae_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:96f0c33edf0ba4f24ee36a328da4d794d35b49ad7088cba18ed4c06e187d37a1
|
| 3 |
+
size 134429
|
abinitiopotentialenergysurfacesbypairinggnnswithneuralwavefunctions/f2f4139b-878c-4f54-9232-83badb2fceae_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b623d9beefac5b01297dc40c68f5174e0c3e95d1c1362fa178f2f674f1a9f03b
|
| 3 |
+
size 1402609
|
abinitiopotentialenergysurfacesbypairinggnnswithneuralwavefunctions/full.md
ADDED
|
@@ -0,0 +1,449 @@
| 1 |
+
# AB-INITIO POTENTIAL ENERGY SURFACES BY PAIRING GNNS WITH NEURAL WAVE FUNCTIONS
|
| 2 |
+
|
| 3 |
+
Nicholas Gao & Stephan Günnemann
|
| 4 |
+
|
| 5 |
+
Department of Informatics & Munich Data Science Institute
|
| 6 |
+
|
| 7 |
+
Technical University of Munich, Germany
|
| 8 |
+
|
| 9 |
+
{gaoni,guennemann}@in.tum.de
|
| 10 |
+
|
| 11 |
+
# ABSTRACT
|
| 12 |
+
|
| 13 |
+
Solving the Schrödinger equation is key to many quantum mechanical properties. However, an analytical solution is only tractable for single-electron systems. Recently, neural networks succeeded at modeling wave functions of many-electron systems. Together with the variational Monte-Carlo (VMC) framework, this led to solutions on par with the best known classical methods. Still, these neural methods require tremendous amounts of computational resources as one has to train a separate model for each molecular geometry. In this work, we combine a Graph Neural Network (GNN) with a neural wave function to simultaneously solve the Schrödinger equation for multiple geometries via VMC. This enables us to model continuous subsets of the potential energy surface with a single training pass. Compared to existing state-of-the-art networks, our Potential Energy Surface Network (PESNet) speeds up training for multiple geometries by up to 40 times while matching or surpassing their accuracy. This may open the path to accurate and orders of magnitude cheaper quantum mechanical calculations.
|
| 14 |
+
|
| 15 |
+
# 1 INTRODUCTION
|
| 16 |
+
|
| 17 |
+
In recent years, machine learning gained importance in computational quantum physics and chemistry to accelerate material discovery by approximating quantum mechanical (QM) calculations (Huang & von Lilienfeld, 2021). In particular, a lot of work has gone into building surrogate models to reproduce QM properties, e.g., energies. These models learn from datasets created using classical techniques such as density functional theory (DFT) (Ramakrishnan et al., 2014; Klicpera et al., 2019) or coupled clusters (CCSD) (Chmiela et al., 2018). While this approach has shown great success in recovering the baseline calculations, it suffers from several disadvantages. Firstly, due to the tremendous success of graph neural networks (GNNs) in this area, the regression target quality became the limiting factor for accuracy (Klicpera et al., 2019; Qiao et al., 2021; Batzner et al., 2021), i.e., the network's prediction is closer to the data label than the data label is to the actual QM
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
Figure 1: Schematic of PESNet. For each molecular structure (top row), the MetaGNN takes the nuclei graph and parametrizes the WFModel via $\omega$ and $\omega_{m}$ . Given these, the WFModel evaluates the electronic wave function $\psi (\vec{r})$ .
|
| 21 |
+
|
| 22 |
+
property. Secondly, these surrogate models are subject to the usual difficulties of neural networks such as overconfidence outside the training domain (Pappu & Paige, 2020; Guo et al., 2017).
|
| 23 |
+
|
| 24 |
+
In orthogonal research, neural networks have been used as wave function Ansätze to solve the stationary Schrödinger equation (Kessler et al., 2021; Han et al., 2019). These methods use the variational Monte Carlo (VMC) (McMillan, 1965) framework to iteratively optimize a neural wave function to obtain the ground-state electronic wave function of a given system. Chemists refer to such methods as ab-initio, whereas the machine learning community may refer to this as a form of self-generative learning as no dataset is required. The data (electron positions) are sampled from the
|
| 25 |
+
|
| 26 |
+
wave function itself, and the loss is derived from the Schrödinger equation (Ceperley et al., 1977). This approach has shown great success as multiple authors report results outperforming the traditional 'gold-standard' CCSD on various systems (Pfau et al., 2020; Hermann et al., 2020). However, these techniques require expensive training for each geometry, resulting in high computational requirements and, thus, limiting their application to small sets of configurations.
|
| 27 |
+
|
| 28 |
+
In this work, we accelerate VMC with neural wave functions by proposing an architecture that solves the Schrödinger equation for multiple systems simultaneously. The core idea is to predict a set of parameters such that a given wave function, e.g., FermiNet (Pfau et al., 2020), solves the Schrödinger equation for a specific geometry. Previously, these parameters were obtained by optimizing a separate wave function for each geometry. We improve this procedure by generating the parameters with a GNN, as illustrated in Figure 1. This enables us to capture continuous subsets of the potential energy surface in one training pass, removing the need for costly retraining. Additionally, we take inspiration from supervised surrogate networks and enforce the invariances of the energy to physical symmetries such as translation, rotation, and reflection (Schütt et al., 2018). While these symmetries hold for observable metrics such as energies, the wave function itself may not have these symmetries. We solve this issue by defining a coordinate system that is equivariant to the symmetries of the energy. In our experiments, our Potential Energy Surface Network (PESNet) consistently matches or surpasses the results of the previous best neural wave functions while training less than $\frac{1}{40}$ of the time for high-resolution potential energy surface scans.
|
| 29 |
+
|
| 30 |
+
# 2 RELATED WORK
|
| 31 |
+
|
| 32 |
+
Molecular property prediction has seen a surge in publications in recent years with the goal of predicting QM properties such as the energy of a system. Classically, features were constructed by hand and fed into a machine learning model to predict target properties (Christensen et al., 2020; Behler, 2011; Bartók et al., 2013). Lately, GNNs have proven to be more accurate and took over the field (Yang et al., 2019; Klicpera et al., 2019; Schütt et al., 2018). As GNNs approach the accuracy limit, recent work focuses on improving generalization by integrating calculations from computational chemistry. For instance, QDF (Tsubaki & Mizoguchi, 2020) and EANN (Zhang et al., 2019) approximate the electron density while OrbNet (Qiao et al., 2020) and UNiTE (Qiao et al., 2021) include features taken from QM calculations. Another promising direction is $\Delta$ -ML models, which only predict the delta between a high-accuracy QM calculation and a faster low-accuracy one (Wengert et al., 2021). Despite their success, surrogate models lack reliability. Even if uncertainty estimates are available (Lamb & Paige, 2020; Hirschfeld et al., 2020), generalization outside of the training regime is unpredictable (Guo et al., 2017).
|
| 33 |
+
|
| 34 |
+
While such supervised models are architecturally related, they pursue a fundamentally different objective than PESNet. Where surrogate models approximate QM calculations from data, this work focuses on performing the exact QM calculations from first principles.
|
| 35 |
+
|
| 36 |
+
Neural wave function Ansätze in combination with the VMC framework have recently been proposed as an alternative (Carleo & Troyer, 2017) to classical self-consistent field (SCF) methods such as Hartree-Fock, DFT, or CCSD to solve the Schrödinger equation (Szabo & Ostlund, 2012). However, early works were limited to small systems and low accuracy (Kessler et al., 2021; Han et al., 2019; Choo et al., 2020). Recently, FermiNet (Pfau et al., 2020) and PauliNet(Hermann et al., 2020) presented more scalable approaches and accuracy on par with the best traditional QM computations. To further improve accuracy, Wilson et al. (2021) coupled FermiNet with diffusion Monte-Carlo (DMC). But, all these methods need to be trained for each configuration individually. To address this issue, weight-sharing has been proposed to reduce the time per training, but this was initially limited to non-fermionic systems (Yang et al., 2020). In a concurrent work, Scherbela et al. (2021) extend this idea to electronic wave functions. However, their DeepErwin model still requires separate models for each geometry, does not account for symmetries and achieves lower accuracy, as we show in Section 4.
|
| 37 |
+
|
| 38 |
+
# 3 METHOD
|
| 39 |
+
|
| 40 |
+
To build a model that solves the Schrödinger equation for many geometries simultaneously and accounts for the symmetries of the energy, we use three key ingredients.
|
| 41 |
+
|
| 42 |
+

|
| 43 |
+
Figure 2: PESNet's architecture is split into two main components, the MetaGNN and the WFModel. Circles indicate parameter-free and rectangles parametrized functions, $\circ \parallel \circ$ denotes the vector concatenation, $\mathbb{A}^{\uparrow}$ and $\mathbb{A}^{\downarrow}$ denote the index sets of the spin-up and spin-down electrons, respectively. To avoid clutter, we left out residual connections.
|
| 44 |
+
|
| 45 |
+
Firstly, to solve the Schrödinger equation, we leverage the VMC framework, i.e., we iteratively update our wave function model (WFModel) until it converges to the ground-state electronic wave function. The WFModel $\psi_{\theta}(\vec{r}): \mathbb{R}^{N \times 3} \mapsto \mathbb{R}$ is a function parametrized by $\theta$ that maps electron configurations to amplitudes. It must obey the Fermi-Dirac statistics, i.e., the sign of the output must flip under the exchange of two electrons of the same spin. As we cover in Section 3.4, the WFModel is essential for sampling electron configurations and computing energies.
|
| 46 |
+
|
| 47 |
+
Secondly, we extend this to multiple geometries by introducing a GNN that reparametrizes the WFModel. In reference to meta-learning, we call this the MetaGNN. It takes the nuclei coordinates $\overrightarrow{R}_m$ and charges $Z_{m}$ and outputs subsets $\omega ,\omega_{m}\subset \theta ,m\in \{1,\dots ,M\}$ of WFModel's parameters. Thanks to message passing, the MetaGNN can capture the full 3D geometry of the nuclei graph.
|
| 48 |
+
|
| 49 |
+
Lastly, as we prove in Appendix A, to predict energies invariant to rotations and reflections the wave function needs to be equivariant. We accomplish this by constructing an equivariant coordinate system $\pmb{E} = [\vec{e}_1, \vec{e}_2, \vec{e}_3]$ based on principal component analysis (PCA).
|
| 50 |
+
|
| 51 |
+
Together, these components form PESNet, whose architecture is shown in Figure 2. Since sampling and energy computations only need the WFModel, a single forward pass of the MetaGNN is sufficient for each geometry during evaluation. Furthermore, its end-to-end differentiability facilitates optimization, see Section 3.4, and we may benefit from better generalization thanks to our equivariant wave function (Elesedy & Zaidi, 2021; Kondor & Trivedi, 2018).
|
| 52 |
+
|
| 53 |
+
Notation. We use bold lower-case letters $\pmb{h}$ for vectors, bold upper-case $\pmb{W}$ letters for matrices, arrows to indicate vectors in 3D, $\overrightarrow{\pmb{r}}_i$ to denote electron coordinates, $\overrightarrow{R}_m, Z_m$ for nuclei coordinates and charge, respectively. $[\circ, \circ]$ and $[\circ]_{i=1}^N$ denote vector concatenations.
|
| 54 |
+
|
| 55 |
+
# 3.1 WAVE FUNCTION MODEL
|
| 56 |
+
|
| 57 |
+
We use the FermiNet (Pfau et al., 2020) architecture and augment it with a new feature construction that is invariant to reindexing nuclei. In the original FermiNet, the inputs to the first layer are simply concatenations of the electron-nuclei distances. This causes the features to permute if nuclei indexing changes. To circumvent this issue, we propose a new feature construction as follows:
|
| 58 |
+
|
| 59 |
+
$$
|
| 60 |
+
\boldsymbol{h}_i^{1} = \sum_{m=1}^{M} \operatorname{MLP}\left(\boldsymbol{W}\left[ (\overrightarrow{\boldsymbol{r}}_i - \overrightarrow{\boldsymbol{R}}_m) \boldsymbol{E}, \|\overrightarrow{\boldsymbol{r}}_i - \overrightarrow{\boldsymbol{R}}_m\| \right] + \boldsymbol{z}_m\right), \tag{1}
|
| 61 |
+
$$
|
| 62 |
+
|
| 63 |
+
$$
|
| 64 |
+
\boldsymbol {g} _ {i j} ^ {1} = \left(\left(\overrightarrow {\boldsymbol {r}} _ {i} - \overrightarrow {\boldsymbol {r}} _ {j}\right) \boldsymbol {E}, \| \overrightarrow {\boldsymbol {r}} _ {i} - \overrightarrow {\boldsymbol {r}} _ {j} \|\right) \tag {2}
|
| 65 |
+
$$
|
| 66 |
+
|
| 67 |
+
where $z_{m}$ is an embedding of the $m$ -th nuclei and $\pmb{E} \in \mathbb{R}^{3 \times 3}$ is our equivariant coordinate system, see Section 3.3. By summing over all nuclei instead of concatenating we obtain the desired invariance. The features are then iteratively updated using the update rule from Wilson et al. (2021)
|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
\boldsymbol{h}_i^{t+1} = \sigma\left(\boldsymbol{W}_{\text{single}}^{t}\left[ \boldsymbol{h}_i^{t}, \sum_{j \in \mathbb{A}^{\uparrow}} \boldsymbol{g}_{ij}^{t}, \sum_{j \in \mathbb{A}^{\downarrow}} \boldsymbol{g}_{ij}^{t} \right] + \boldsymbol{b}_{\text{single}}^{t} + \boldsymbol{W}_{\text{global}}^{t}\left[ \sum_{j \in \mathbb{A}^{\uparrow}} \boldsymbol{h}_j^{t}, \sum_{j \in \mathbb{A}^{\downarrow}} \boldsymbol{h}_j^{t} \right]\right), \tag{3}
|
| 71 |
+
$$
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
\boldsymbol{g}_{ij}^{t+1} = \sigma\left(\boldsymbol{W}_{\text{double}}^{t} \boldsymbol{g}_{ij}^{t} + \boldsymbol{b}_{\text{double}}^{t}\right) \tag{4}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
where $\sigma$ is an activation function, $\mathbb{A}^{\uparrow}$ and $\mathbb{A}^{\downarrow}$ are the index sets of the spin-up and spin-down electrons, respectively. We also add skip connections where possible. We chose $\sigma := \tanh$ since it must be at least twice differentiable to compute the energy, see Section 3.4. After $L_{\mathrm{WF}}$ many updates, we take the electron embeddings $h_{i}^{L_{\mathrm{WF}}}$ and construct $K$ orbitals:
|
| 78 |
+
|
| 79 |
+
$$
|
| 80 |
+
\phi_{ij}^{k\alpha} = \left(\boldsymbol{w}_i^{k\alpha} \boldsymbol{h}_j^{L_{\mathrm{WF}}} + b_{\text{orbital},i}^{k\alpha}\right) \sum_{m=1}^{M} \pi_{im}^{k\alpha} \exp\left(-\sigma_{im}^{k\alpha} \|\vec{\boldsymbol{r}}_j - \vec{\boldsymbol{R}}_m\|\right), \tag{5}
|
| 81 |
+
$$
|
| 82 |
+
|
| 83 |
+
$$
|
| 84 |
+
\pi_{im}^{k\alpha} = \operatorname{Sigmoid}\left(p_{im}^{k\alpha}\right), \quad \sigma_{im}^{k\alpha} = \operatorname{Softplus}\left(s_{im}^{k\alpha}\right)
|
| 85 |
+
$$
|
| 86 |
+
|
| 87 |
+
where $k \in \{1, \dots, K\}$ , $\alpha \in \{\uparrow, \downarrow\}$ , $i, j \in \mathbb{A}^{\alpha}$ , and $\pmb{p}_i, \pmb{s}_i$ are free parameters. Here, we use the sigmoid and softplus functions to ensure that the wave function decays to 0 infinitely far away from any nuclei. To satisfy the antisymmetry to the exchange of same-spin electrons, the output is a weighted sum of determinants (Hutter, 2020)
|
| 88 |
+
|
| 89 |
+
$$
|
| 90 |
+
\psi (\vec {r}) = \sum_ {k = 1} ^ {K} w _ {k} \det \phi^ {k \uparrow} \det \phi^ {k \downarrow}. \tag {6}
|
| 91 |
+
$$
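A small numpy sketch of Equations (5) and (6), for one geometry and a single spin channel per determinant, is given below. All shapes and variable names are illustrative; the actual model evaluates these quantities inside the full network with learned parameters.

```python
import numpy as np

def orbitals(h, r, R, W, b, pi, sigma):
    """Eq. (5) for one spin channel and one determinant k.
    h: (n, d) electron embeddings, r: (n, 3) electron positions,
    R: (M, 3) nuclei positions, W: (n, d), b: (n,), pi, sigma: (n, M).
    Returns the (n, n) matrix phi with phi[i, j] = orbital i on electron j."""
    lin = W @ h.T + b[:, None]                                     # w_i . h_j + b_i
    dist = np.linalg.norm(R[:, None, :] - r[None, :, :], axis=-1)  # (M, n): ||r_j - R_m||
    env = (pi[:, :, None] * np.exp(-sigma[:, :, None] * dist[None])).sum(1)
    return lin * env                                               # envelope enforces decay

def psi(phis_up, phis_dn, w):
    """Eq. (6): weighted sum of products of spin-up and spin-down determinants."""
    return sum(wk * np.linalg.det(pu) * np.linalg.det(pd)
               for wk, pu, pd in zip(w, phis_up, phis_dn))
```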
|
| 92 |
+
|
| 93 |
+
# 3.2 METAGNN
|
| 94 |
+
|
| 95 |
+
The MetaGNN's task is to adapt the WFModel to the geometry at hand. It does so by substituting subsets, $\omega$ and $\omega_{m}$ , of WFModel's parameters. While $\omega_{m}$ contains parameters specific to nuclei $m$ , $\omega$ is a set of nuclei-independent parameters such as biases. To capture the geometry of the nuclei, the GNN embeds the nuclei in a vector space and updates the embeddings via learning message passing. Contrary to surrogate GNNs, we also account for the position in our equivariant coordinate system when initializing the node embeddings to avoid identical embeddings in symmetric structures. Hence, our node embeddings are initialized by
|
| 96 |
+
|
| 97 |
+
$$
|
| 98 |
+
\boldsymbol{l}_m^{1} = \left[ \boldsymbol{G}_{Z_m}, f_{\text{pos}}\left(\overrightarrow{\boldsymbol{R}}_m^{\prime} \boldsymbol{E}\right) \right] \tag{7}
|
| 99 |
+
$$
|
| 100 |
+
|
| 101 |
+
where $G$ is a matrix of charge embeddings, $Z_{m}\in \mathbb{N}_{+}$ is the charge of nucleus $m$ , $f_{\mathrm{pos}}:\mathbb{R}^3\mapsto \mathbb{R}^{N_{\mathrm{SBF}}\cdot N_{\mathrm{RBF}}}$ is our positional encoding function, and $\vec{\pmb{R}}_m'\pmb {E}$ is the relative position of the $m$ th nucleus in our equivariant coordinate system $\pmb{E}$ (see Section 3.3). As positional encoding function, we use the spherical Fourier-Bessel basis functions $\tilde{a}_{\mathrm{SBF},ln}$ from Klicpera et al. (2019)
|
| 102 |
+
|
| 103 |
+
$$
|
| 104 |
+
f_{\text{pos}}(\vec{\boldsymbol{x}}) = \sum_{i=1}^{3} \left[ \tilde{a}_{\mathrm{SBF},ln}\left(\|\vec{\boldsymbol{x}}\|, \angle(\vec{\boldsymbol{x}}, \vec{\boldsymbol{e}}_i)\right) \right]_{l \in \{0, \dots, N_{\mathrm{SBF}}-1\},\, n \in \{1, \dots, N_{\mathrm{RBF}}\}} \tag{8}
|
| 105 |
+
$$
|
| 106 |
+
|
| 107 |
+
with $\vec{e}_i$ being the $i$ th axis of our equivariant coordinate system $\pmb{E}$ . Unlike Klicpera et al. (2019), we are working on the fully connected graph and, thus, neither include a cutoff nor the envelope function that decays to 0 at the cutoff.
|
| 108 |
+
|
| 109 |
+
A message passing layer consists of a message function $f_{\mathrm{msg}}$ and an update function $f_{\mathrm{update}}$ . Together, one can compute an update to the embeddings as
|
| 110 |
+
|
| 111 |
+
$$
|
| 112 |
+
\boldsymbol{l}_m^{t+1} = f_{\text{update}}^{t}\left(\boldsymbol{l}_m^{t}, \sum_{n} f_{\text{msg}}^{t}\left(\boldsymbol{l}_m^{t}, \boldsymbol{l}_n^{t}, \boldsymbol{e}_{mn}\right)\right) \tag{9}
|
| 113 |
+
$$
|
| 114 |
+
|
| 115 |
+
where $e_{mn}$ is an embedding of the edge between nucleus $m$ and nucleus $n$ . We use Bessel radial basis functions to encode the distances between nuclei (Klicpera et al., 2019). Both $f_{\mathrm{msg}}$ and $f_{\mathrm{update}}$ are realized by simple feed-forward neural networks with residual connections.
|
| 116 |
+
|
| 117 |
+
After $L_{\mathrm{GNN}}$ many message passing steps, we compute WFModel's parameters on two levels. On the global level, $f_{\mathrm{global}}^{\mathrm{out}}$ outputs the biases of the network and, on the node level, $f_{\mathrm{node}}^{\mathrm{out}}$ outputs nuclei specific parameters:
|
| 118 |
+
|
| 119 |
+
$$
|
| 120 |
+
\boldsymbol{\omega} = \left[ \boldsymbol{b}_{\text{single/double}}^{1}, \dots, \boldsymbol{b}_{1}^{\uparrow/\downarrow}, \dots, \boldsymbol{w} \right] := f_{\text{global}}^{\text{out}}\left(\left[ \sum_{m} \boldsymbol{l}_m^{t} \right]_{t=1}^{L_{\mathrm{GNN}}}\right), \tag{10}
|
| 121 |
+
$$
|
| 122 |
+
|
| 123 |
+
$$
|
| 124 |
+
\boldsymbol{\omega}_m = \left[ \boldsymbol{z}_m, \boldsymbol{s}_m^{1,\uparrow/\downarrow}, \dots, \boldsymbol{p}_m^{1,\uparrow/\downarrow}, \dots \right] := f_{\text{node}}^{\text{out}}\left(\left[ \boldsymbol{l}_m^{t} \right]_{t=1}^{L_{\mathrm{GNN}}}\right).
|
| 125 |
+
$$
|
| 126 |
+
|
| 127 |
+
We use distinct feed-forward neural networks with multiple heads for the specific types of parameters estimated to implement $f_{\mathrm{node}}^{\mathrm{out}}$ and $f_{\mathrm{global}}^{\mathrm{out}}$ .
|
| 128 |
+
|
| 129 |
+
# 3.3 EQUIVARIANT COORDINATE SYSTEMS
|
| 130 |
+
|
| 131 |
+
Incorporating symmetries helps to reduce the training space significantly. In GNNs this is done by only operating on inter-nuclei distances without a clear directionality in space, i.e., without $x, y, z$ coordinates. While this works for predicting observable metrics such as energies, it does not work for wave functions. For instance, any such GNN could only describe spherically symmetric wave functions for the hydrogen atom despite all excited states (the real spherical harmonics) not having such symmetries. Unfortunately, as we show in Appendix B, recently proposed equivariant GNNs (Thomas et al., 2018; Batzner et al., 2021) also suffer from the same limitation.
|
| 132 |
+
|
| 133 |
+
To solve this issue, we introduce directionality in the form of a coordinate system that is equivariant to rotations and reflections. The axes of our coordinate system $\pmb{E} = [\vec{e}_1, \vec{e}_2, \vec{e}_3]$ are defined by the principal components of the nuclei coordinates, $\vec{e}_1^{\mathrm{PCA}}$ , $\vec{e}_2^{\mathrm{PCA}}$ , $\vec{e}_3^{\mathrm{PCA}}$ . Using PCA is robust to reindexing nuclei and ensures that the axes rotate with the system and form an orthonormal basis. However, as PCA only returns directions up to a sign, we have to resolve the sign ambiguity. We do this by computing an equivariant vector $\vec{v}$ , i.e., a vector that rotates and reflects with the system, and defining the direction of the axes as
|
| 134 |
+
|
| 135 |
+
$$
|
| 136 |
+
\vec{e}_i = \left\{ \begin{array}{ll} \vec{e}_i^{\mathrm{PCA}} &, \text{if } \overrightarrow{\boldsymbol{v}}^{T} \vec{e}_i^{\mathrm{PCA}} \geq 0, \\ -\vec{e}_i^{\mathrm{PCA}} &, \text{else.} \end{array} \right. \tag{11}
|
| 137 |
+
$$
|
| 138 |
+
|
| 139 |
+
As equivariant vector we use the difference between a weighted and the regular center of mass
|
| 140 |
+
|
| 141 |
+
$$
|
| 142 |
+
\overrightarrow {\boldsymbol {v}} := \frac {1}{M} \sum_ {m = 1} ^ {M} \left(\sum_ {n = 1} ^ {M} \left\| \overrightarrow {\boldsymbol {R}} _ {m} - \overrightarrow {\boldsymbol {R}} _ {n} \right\| ^ {2}\right) Z _ {m} \overrightarrow {\boldsymbol {R}} _ {m} ^ {\prime}, \tag {12}
|
| 143 |
+
$$
|
| 144 |
+
|
| 145 |
+
$$
|
| 146 |
+
\overrightarrow {\boldsymbol {R}} _ {m} ^ {\prime} = \overrightarrow {\boldsymbol {R}} _ {m} - \frac {1}{M} \sum_ {m = 1} ^ {M} \overrightarrow {\boldsymbol {R}} _ {m}. \tag {13}
|
| 147 |
+
$$
|
| 148 |
+
|
| 149 |
+
With this construction, we obtain an equivariant coordinate system that defines directionality in space. However, we are aware that PCA may not be robust, e.g., if eigenvalues of the covariance matrix are identical. A detailed discussion on such edge cases can be found in Appendix C.
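The following numpy sketch illustrates Equations (11)-(13): the PCA axes of the centered nuclei coordinates, with the sign of each axis fixed by the equivariant vector $\vec{v}$. It is a simplified illustration and does not handle the degenerate-eigenvalue edge cases discussed in Appendix C.

```python
import numpy as np

def equivariant_frame(R, Z):
    """Equivariant coordinate system E = [e1, e2, e3] (axes as columns).
    R: (M, 3) nuclei coordinates, Z: (M,) nuclear charges."""
    R_c = R - R.mean(axis=0)                                    # Eq. (13): centered coordinates
    _, _, Vt = np.linalg.svd(R_c, full_matrices=True)           # rows of Vt are the PCA axes
    E = Vt.T
    # equivariant vector, Eq. (12)
    d2 = ((R[:, None, :] - R[None, :, :]) ** 2).sum(-1).sum(1)  # sum_n ||R_m - R_n||^2
    v = ((d2 * Z)[:, None] * R_c).mean(axis=0)
    # resolve the sign ambiguity of each axis, Eq. (11)
    signs = np.where(v @ E >= 0, 1.0, -1.0)
    return E * signs[None, :]

R = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.1], [1.0, 0.2, 0.3]])
Z = np.array([8.0, 1.0, 1.0])
print(equivariant_frame(R, Z))
```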
|
| 150 |
+
|
| 151 |
+
# 3.4 OPTIMIZATION
|
| 152 |
+
|
| 153 |
+
We use the standard VMC optimization procedure (Ceperley et al., 1977) where we seek to minimize the expected energy of a wave function $\psi_{\theta}$ parametrized by $\pmb{\theta}$ :
|
| 154 |
+
|
| 155 |
+
$$
|
| 156 |
+
\mathcal {L} = \frac {\left\langle \psi_ {\boldsymbol {\theta}} \right| \boldsymbol {H} \left| \psi_ {\boldsymbol {\theta}} \right\rangle}{\left\langle \psi_ {\boldsymbol {\theta}} \mid \psi_ {\boldsymbol {\theta}} \right\rangle} \tag {14}
|
| 157 |
+
$$
|
| 158 |
+
|
| 159 |
+
where $H$ is the Hamiltonian of the Schrödinger equation
|
| 160 |
+
|
| 161 |
+
$$
|
| 162 |
+
\boldsymbol{H} = -\frac{1}{2} \sum_{i=1}^{N} \nabla_i^2 + \underbrace{\sum_{i=1}^{N} \sum_{j > i}^{N} \frac{1}{\left\| \vec{r}_i - \vec{r}_j \right\|}}_{V(\vec{r})} - \sum_{i=1}^{N} \sum_{m=1}^{M} \frac{1}{\left\| \vec{r}_i - \vec{R}_m \right\|} + \sum_{m=1}^{M} \sum_{n > m}^{M} \frac{Z_m Z_n}{\left\| \vec{R}_m - \vec{R}_n \right\|} \tag{15}
|
| 163 |
+
$$
|
| 164 |
+
|
| 165 |
+
with $\nabla^2$ being the Laplacian operator and $V(\overrightarrow{r})$ describing the potential energy. Given samples from the probability distribution $\sim \psi_{\theta}^{2}(\overrightarrow{r})$ , one can obtain an unbiased estimate of the gradient
|
| 166 |
+
|
| 167 |
+
$$
|
| 168 |
+
\begin{array}{rl} E_{\boldsymbol{\theta}}(\vec{\boldsymbol{r}}) &= \psi_{\boldsymbol{\theta}}^{-1}(\vec{\boldsymbol{r}})\, \boldsymbol{H} \psi_{\boldsymbol{\theta}}(\vec{\boldsymbol{r}}) \\ &= -\frac{1}{2} \sum_{i=1}^{N} \sum_{k=1}^{3} \left[ \frac{\partial^2 \log |\psi_{\boldsymbol{\theta}}(\vec{\boldsymbol{r}})|}{\partial \vec{\boldsymbol{r}}_{ik}^2} + \left( \frac{\partial \log |\psi_{\boldsymbol{\theta}}(\vec{\boldsymbol{r}})|}{\partial \vec{\boldsymbol{r}}_{ik}} \right)^2 \right] + V(\vec{\boldsymbol{r}}), \end{array} \tag{16}
|
| 169 |
+
$$
|
| 170 |
+
|
| 171 |
+
$$
|
| 172 |
+
\nabla_ {\boldsymbol {\theta}} \mathcal {L} = \mathbb {E} _ {\vec {\boldsymbol {r}} \sim \psi_ {\boldsymbol {\theta}} ^ {2}} \left[ \left(E _ {\boldsymbol {\theta}} (\vec {\boldsymbol {r}}) - \mathbb {E} _ {\vec {\boldsymbol {r}} \sim \psi_ {\boldsymbol {\theta}} ^ {2}} [ E _ {\boldsymbol {\theta}} (\vec {\boldsymbol {r}}) ]\right) \nabla_ {\boldsymbol {\theta}} \log | \psi_ {\boldsymbol {\theta}} (\vec {\boldsymbol {r}}) | \right] \tag {17}
|
| 173 |
+
$$
|
| 174 |
+
|
| 175 |
+
where $E_{\theta}(\vec{r})$ denotes the local energy of the WFModel with parameters $\theta$ for the electron configuration $\vec{r}$. One can see that for the energy computation, we only need the derivative of the wave function w.r.t. the electron coordinates. As these are not inputs to the MetaGNN, we do not have to differentiate through the MetaGNN to obtain the local energies. We clip the local energy as in Pfau et al. (2020) and obtain samples from $\sim \psi_{\theta}^{2}(\vec{r})$ via Metropolis-Hastings. The gradients for the MetaGNN are computed jointly with those of the WFModel by altering Equation 17:
|
| 176 |
+
|
| 177 |
+
$$
|
| 178 |
+
\nabla_ {\Theta} \mathcal {L} = \mathbb {E} _ {\vec {\boldsymbol {r}} \sim \psi_ {\boldsymbol {\theta}} ^ {2}} \left[ \left(E _ {\boldsymbol {\theta}} (\vec {\boldsymbol {r}}) - \mathbb {E} _ {\vec {\boldsymbol {r}} \sim \psi_ {\boldsymbol {\theta}} ^ {2}} [ E _ {\boldsymbol {\theta}} (\vec {\boldsymbol {r}}) ]\right) \nabla_ {\Theta} \log | \psi_ {\Theta} (\vec {\boldsymbol {r}}) | \right] \tag {18}
|
| 179 |
+
$$
|
| 180 |
+
|
| 181 |
+
where $\Theta$ is the joint set of WFModel and MetaGNN parameters. To obtain the gradient for multiple geometries, we compute the gradient as in Equation 18 multiple times and average. This joint gradient of the WFModel and the MetaGNN enables us to use a single training pass to simultaneously solve multiple Schrödinger equations.
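A minimal sketch of this estimator for a single geometry is shown below: the centered local energies act as weights on the per-sample score $\nabla_\Theta \log|\psi_\Theta|$, and the resulting per-geometry gradients are then averaged. The function names are illustrative assumptions, not the actual PESNet code.

```python
import jax
import jax.numpy as jnp

def vmc_gradient(params, log_psi, samples, local_energies):
    """Sketch of Equation 18 for one geometry.

    log_psi(params, r) -> scalar log|psi_Theta(r)|; samples: (B, 3N);
    local_energies: (B,) values of the local energy at the samples.
    """
    centered = local_energies - local_energies.mean()

    def surrogate(p):
        log_amp = jax.vmap(lambda r: log_psi(p, r))(samples)
        return jnp.mean(jax.lax.stop_gradient(centered) * log_amp)

    # grad of mean(centered * log|psi|) equals the estimator in Equation 18.
    return jax.grad(surrogate)(params)
```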
|
| 182 |
+
|
| 183 |
+
While Equation 18 provides us with a raw estimate of the gradient, different techniques have been used to construct proper updates to the parameters (Hermann et al., 2020; Pfau et al., 2020). Here, we use natural gradient descent to enable the use of larger learning rates. So, instead of doing a regular gradient descent step in the form of $\Theta^{t + 1} = \Theta^t -\eta \nabla_\Theta \mathcal{L}$ , where $\eta$ is the learning rate, we add the inverse of the Fisher information matrix as a preconditioner
|
| 184 |
+
|
| 185 |
+
$$
|
| 186 |
+
\Theta^ {t + 1} = \Theta^ {t} - \eta \boldsymbol {F} ^ {- 1} \nabla_ {\Theta} \mathcal {L}, \tag {19}
|
| 187 |
+
$$
|
| 188 |
+
|
| 189 |
+
$$
|
| 190 |
+
\boldsymbol {F} = \mathbb {E} _ {\vec {\boldsymbol {r}} \sim \psi_ {\boldsymbol {\theta}} ^ {2}} \left[ \nabla_ {\Theta} \log | \psi_ {\Theta} (\overrightarrow {\boldsymbol {r}}) | \nabla_ {\Theta} \log | \psi_ {\Theta} (\overrightarrow {\boldsymbol {r}}) | ^ {T} \right]. \tag {20}
|
| 191 |
+
$$
|
| 192 |
+
|
| 193 |
+
Since the Fisher matrix $\pmb{F}$ scales quadratically with the number of parameters, we approximate $F^{-1}\nabla_{\Theta}\mathcal{L}$ via the conjugate gradient (CG) method (Neuscamman et al., 2012). To determine the convergence of the CG method, we follow Martens (2010) and stop based on the quadratic error. To avoid tuning the learning rate $\eta$, we clip the norm of the preconditioned gradient $F^{-1}\nabla_{\Theta}\mathcal{L}$ (Pascanu et al., 2013) and use a fixed learning rate for all systems.
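The following sketch illustrates how such a natural-gradient direction can be obtained with conjugate gradient and Fisher-vector products only, so that $\boldsymbol{F}$ from Equation 20 is never materialized. The `log_psi` callable, the damping constant, and the stopping rule are simplifications rather than the actual training setup.

```python
import jax
import jax.numpy as jnp

def natural_gradient(params, log_psi, samples, grad, damping=1e-4, maxiter=100):
    """Sketch of the preconditioned direction F^{-1} grad used in Equation 19.

    log_psi(params, r) stands in for log|psi_Theta|; `grad` is the raw
    gradient from Equation 18 with the same pytree structure as `params`.
    """
    batch = samples.shape[0]
    f = lambda p: jax.vmap(lambda r: log_psi(p, r))(samples)  # (batch,) log-amplitudes

    def fisher_vec(v):
        _, jv = jax.jvp(f, (params,), (v,))        # J v, shape (batch,)
        _, vjp_fn = jax.vjp(f, params)
        (fv,) = vjp_fn(jv / batch)                 # J^T (J v) / batch = F v
        return jax.tree_util.tree_map(lambda a, b: a + damping * b, fv, v)

    direction, _ = jax.scipy.sparse.linalg.cg(fisher_vec, grad, maxiter=maxiter)
    return direction
```

The returned direction would additionally be norm-clipped (Pascanu et al., 2013) before the update in Equation 19 is applied.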
|
| 194 |
+
|
| 195 |
+
We pretrain all networks with the Lamb optimizer (You et al., 2020) on Hartree-Fock orbitals, i.e., we match each of the $K$ orbitals to a Hartree-Fock orbital of a different configuration. During pretraining, only the WFModel and the final biases of the MetaGNN are optimized.
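A simplified sketch of this pretraining objective is given below: the network orbitals are regressed onto Hartree-Fock target orbitals over sampled electron configurations, optimized with an optax-style LAMB optimizer. The orbital shapes and the matching scheme are condensed here for brevity and are assumptions, not the exact procedure.

```python
import jax
import jax.numpy as jnp
import optax

def pretraining_loss(params, orbitals, hf_orbitals, samples):
    """Sketch: match the WFModel orbitals to Hartree-Fock orbitals.

    orbitals(params, r) and hf_orbitals(r) both return arrays of the same
    shape (e.g., (K, N, N)); both callables are placeholders.
    """
    def per_sample(r):
        return jnp.mean((orbitals(params, r) - hf_orbitals(r)) ** 2)

    return jnp.mean(jax.vmap(per_sample)(samples))

optimizer = optax.lamb(3e-3)  # pretraining learning rate from Table 2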
|
| 196 |
+
|
| 197 |
+
# 3.5 LIMITATIONS
|
| 198 |
+
|
| 199 |
+
While PESNet is capable of accurately modeling complex potential energy surfaces, we have not focused on architecture search yet. Furthermore, as we discuss in Section 4, PauliNet (Hermann et al., 2020) still offers a better initialization and converges in fewer iterations than our network. Lastly, PESNet is limited to geometries of the same set of nuclei with identical electron spin configurations, i.e., to access properties like the electron affinity one still needs to train two models.
|
| 200 |
+
|
| 201 |
+
# 4 EXPERIMENTS
|
| 202 |
+
|
| 203 |
+
To investigate PESNet's accuracy and training-time benefits, we compare it to FermiNet (Pfau et al., 2020; Spencer et al., 2020), PauliNet (Hermann et al., 2020), and DeepErwin (Scherbela et al., 2021) on diverse systems ranging from 3 to 28 electrons. Note that the concurrently developed DeepErwin was only recently released as a pre-print and still requires separate models and training for each configuration. When viewing the energy results, one should be aware that, except for PESNet, all methods must be trained separately for each configuration, resulting in significantly higher training times, as discussed in Section 4.1.
|
| 204 |
+
|
| 205 |
+

|
| 206 |
+
Figure 3: The energy of $\mathrm{H}_4^+$ along the first reaction path (Alijah & Varandas, 2008). While PESNet and DeepErwin match the barrier height estimate of the MRCI-D-F12 calculation, PESNet estimates $\approx 0.27\mathrm{m}E_{\mathrm{h}}$ lower energies. Reference data is taken from Scherbela et al. (2021).
|
| 207 |
+
|
| 208 |
+

|
| 209 |
+
Figure 4: Potential energy surface scan of the hydrogen rectangle. Similar to FermiNet, PESNet does not produce the spurious minimum at $90^{\circ}$. Since PESNet respects the symmetries of the energy, we only trained on half of the configuration space. Reference data is taken from Pfau et al. (2020).
|
| 210 |
+
|
| 211 |
+
Evaluation of ab-initio methods remains challenging as true energies are rarely known, and experimental data are subject to uncertainties. In addition, many energy differences may seem small relative to the large absolute energies, but chemists set the threshold for chemical accuracy to $1\,\mathrm{kcal\,mol}^{-1}\approx 1.6\,\mathrm{m}E_\mathrm{h}$. Thus, seemingly small differences in energy are significant. Therefore, to put all results into perspective, we always include highly accurate classical reference calculations. When comparing VMC methods such as PESNet, FermiNet, PauliNet, and DeepErwin, interpretation is simpler: lower is always better as VMC energies are upper bounds on the true ground-state energy (Szabo & Ostlund, 2012).
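For reference, this threshold follows from the conversion between Hartrees and kcal/mol (using $1\,E_\mathrm{h} \approx 627.5\,\mathrm{kcal\,mol^{-1}}$):

$$
1\,\mathrm{kcal\,mol^{-1}} = \frac{1}{627.5}\,E_\mathrm{h} \approx 1.59\,\mathrm{m}E_\mathrm{h} \approx 1.6\,\mathrm{m}E_\mathrm{h}.
$$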
|
| 212 |
+
|
| 213 |
+
To analyze PESNet's ability to capture continuous subsets of the potential energy surface, we train on the continuous energy surface rather than on a discrete set of configurations for potential energy surface scans. The exact procedure and the general experimental setup are described in Appendix D. Additional ablation studies are available in Appendix E.
|
| 214 |
+
|
| 215 |
+
Transition path of $\mathbf{H}_4^+$ and weight sharing. Scherbela et al. (2021) use the first transition path of $\mathrm{H}_4^+$ (Alijah & Varandas, 2008) to demonstrate the acceleration gained by weight sharing. However, they found their weight-sharing scheme to be too restrictive and additionally optimized each wave function separately. Unlike DeepErwin, our PESNet is flexible enough that we do not need any extra optimization. Figure 3 shows the DeepErwin results after their multi-step optimization and the energies of a single PESNet. While both methods estimate similar transition barriers, PESNet yields $\approx 0.27\mathrm{m}E_{\mathrm{h}}$ lower energies, matching the highly accurate MRCI-D-F12 results to within $\approx 0.015\mathrm{m}E_{\mathrm{h}}$.
|
| 216 |
+
|
| 217 |
+
Hydrogen rectangle and symmetries. The hydrogen rectangle is a known failure case for CCSD and CCSD(T). While the exact solution, FCI, indicates a local maximum at $\theta = 90^{\circ}$, both CCSD and CCSD(T) predict local minima. Figure 4 shows that VMC methods such as FermiNet and our PESNet do not suffer from the same issue. PESNet's energies agree with FermiNet's to within $\approx 0.014\mathrm{m}E_{\mathrm{h}}$ despite training only a single network on half of the configuration space, thanks to our equivariant coordinate system.
|
| 218 |
+
|
| 219 |
+
The hydrogen chain is a very common benchmark geometry that allows us to compare our method to a range of classical methods (Motta et al., 2017) as well as to FermiNet, PauliNet, and DeepErwin. Figure 5 shows the potential energy surface of the hydrogen chain computed by a range of methods. While our PESNet generally performs on par with FermiNet, we predict on average $0.31\mathrm{m}E_{\mathrm{h}}$ lower energies. Further, our results are consistently better than PauliNet's and DeepErwin's despite only training a single model.
|
| 220 |
+
|
| 221 |
+
The nitrogen molecule poses a challenge as classical methods such as CCSD or CCSD(T) fail to reproduce the experimental results (Lyakh et al., 2012; Le Roy et al., 2006). While the accurate r12-MR-ACPF method more closely matches the experimental results, it scales factorially (Gdanitz, 1998). Since Pfau et al. (2020) have shown that FermiNet is capable of modeling such complex triple bonds, we are interested in PESNet's performance. To better represent both methods, we
|
| 222 |
+
|
| 223 |
+

|
| 224 |
+
Figure 5: Potential energy surface scan of the hydrogen chain with 10 atoms. We find our PESNet to strictly outperform PauliNet and DeepErwin while matching the results of FermiNet across all configurations. Reference data is taken from Hermann et al. (2020); Pfau et al. (2020); Scherbela et al. (2021); Motta et al. (2017).
|
| 225 |
+
|
| 226 |
+

|
| 227 |
+
Figure 6: Potential energy surface scan of the nitrogen molecule. PESNet yields very similar but slightly higher $(\approx 0.37\mathrm{m}E_{\mathrm{h}})$ energies than FermiNet. Without the MetaGNN the accuracy drops significantly by $\approx 4.3\mathrm{m}E_{\mathrm{h}}$ on average. Reference data is taken from Le Roy et al. (2006); Gdanitz (1998); Pfau et al. (2020).
|
| 228 |
+
|
| 229 |
+
decided to compare both FermiNet and PESNet with 32 determinants due to a substantial performance gain for both methods. The results in Figure 6 show that PESNet agrees very well with FermiNet and is on average just $0.37\mathrm{m}E_{\mathrm{h}}$ higher, despite training only a single model for less than $\frac{1}{47}$ of FermiNet's training time (see Section 4.1). In addition, the ablation of PESNet without the MetaGNN shows a significant loss of accuracy of $4.3\mathrm{m}E_{\mathrm{h}}$ on average.
|
| 230 |
+
|
| 231 |
+
Cyclobutadiene and the MetaGNN. The automerization of cyclobutadiene is challenging due to its multi-reference nature, i.e., single-reference methods such as CCSD(T) overestimate the transition barrier (Lyakh et al., 2012). In contrast, PauliNet and FermiNet had success at modeling this challenging system. Naturally, we are interested in how well PESNet can estimate the transition barrier. To be comparable to Spencer et al. (2020), we increased the number of determinants to 32 and the single-stream size to 512. Similar to PauliNet (Hermann et al., 2020), we found PESNet to occasionally converge to a higher energy for the transition state depending on the initialization and pretraining. To avoid this, we pick a well-initialized model by training 5 models for 1000 iterations and then continue the rest of the optimization with the model yielding the lowest energy.
|
| 232 |
+
|
| 233 |
+
As shown in Figure 7, all neural methods converge to the same transition barrier, which aligns with the highest MR-CC results at the upper end of the experimental range. However, they require different numbers of training steps and result in different total energies. PauliNet generally converges fastest but results in the highest energies, whereas FermiNet's transition barrier converges more slowly but its energies are $70\mathrm{m}E_{\mathrm{h}}$ smaller. Lastly, PESNet's transition barrier converges similarly to PauliNet's, but its energies are $54\mathrm{m}E_{\mathrm{h}}$ lower than PauliNet's, placing it closer to FermiNet than to PauliNet in terms of accuracy. Considering that PESNet has only been trained for $\approx \frac{1}{6}$ of FermiNet's time (see Section 4.1), we are confident that additional optimization would further narrow the gap to FermiNet.
|
| 234 |
+
|
| 235 |
+
In an additional ablation study, we compare to PESNet without the MetaGNN, i.e., we still train a single model for both states of cyclobutadiene but without weight adaptation. While the results in Figure 7 show that the truncated network's energies continuously decrease, it fails to reproduce the same transition barrier and its energies are $18\mathrm{m}E_{\mathrm{h}}$ worse compared to the full PESNet.
|
| 236 |
+
|
| 237 |
+
# 4.1 TRAINING TIME
|
| 238 |
+
|
| 239 |
+
While the previous experiments have shown that our model's accuracy is on par with FermiNet, PESNet's main appeal is its capability to fit multiple geometries simultaneously. Here, we study the training times for all systems from the previous section. We compare the official JAX (Bradbury et al., 2018) implementation of FermiNet (Spencer et al., 2020), the official PyTorch implementation of PauliNet (Hermann et al., 2020), the official TensorFlow implementation of DeepErwin (Scherbela et al., 2021), and our JAX implementation of PESNet. We use the same hyperparameters as in the experiments or the defaults from the respective works. All measurements have been conducted on a machine with 16 AMD EPYC 7543 cores and a single Nvidia A100 GPU.
|
| 240 |
+
|
| 241 |
+

|
| 242 |
+
Figure 7: Comparison between the ground and transition states of cyclobutadiene. The top figure shows the total energy plotted in log scale zeroed at $-154.68E_{\mathrm{h}}$ with light colors for the ground state and darker colors for the transition state. The bottom figure shows the estimate of the transition barrier. Both figures use a logarithmic x-axis. All neural methods estimate the same transition barriers in line with the highest MR-CC results at the upper end of the experimental data. Reference energies are taken from Hermann et al. (2020); Spencer et al. (2020); Shen & Piecuch (2012).
|
| 243 |
+
|
| 244 |
+
<table><tr><td></td><td>H4+ (Fig. 3)</td><td>H4 (Fig. 4)</td><td>H10 (Fig. 5)</td><td>N2 (Fig. 6)</td><td>Cyclobutadiene (Fig. 7)</td></tr><tr><td>PauliNet</td><td>43h*</td><td>34h*</td><td>153h</td><td>854h*</td><td>437h</td></tr><tr><td>DeepErwin</td><td>34h</td><td>27h*</td><td>111h</td><td>—</td><td>—</td></tr><tr><td>FermiNet</td><td>127h*</td><td>118h</td><td>594h</td><td>4196h</td><td>2309h</td></tr><tr><td>PESNet</td><td>20h</td><td>24h</td><td>65h</td><td>89h</td><td>381h</td></tr></table>
|
| 245 |
+
|
| 246 |
+
Table 1: Total GPU (A100) hours to train all models of the respective figures. *Experiments are not included in the original works and timings are measured with the default parameters for the respective models. — Larger molecules did not work with DeepErwin.
|
| 247 |
+
|
| 248 |
+
We only measure the VMC training time and explicitly exclude the time to perform the SCF calculations or any pretraining as these take up less than $1\%$ of the total training time for all methods. Furthermore, note that the timings refer to training all models of the respective experiments.
|
| 249 |
+
|
| 250 |
+
Table 1 shows the GPU hours to train the models of the last section. It is apparent that PESNet used the fewest GPU hours across all systems. Compared to the similarly accurate FermiNet, PESNet is up to 47 times faster to train. This speedup is especially noticeable if many configurations are to be evaluated, e.g., 39 nitrogen geometries. Compared to the less accurate PauliNet and DeepErwin, PESNet's speed gain shrinks, but our training times are still consistently lower while achieving significantly better results. Additionally, for H4, H10, and N2, PESNet is not fitted to the plotted discrete set of configurations but instead to a continuous subset of the energy surface. Thus, if one is interested in additional configurations, PESNet's speedup grows linearly with the number of configurations. Still, the numbers in this section do not tell the whole story; we therefore refer the reader to Appendices F and G for additional discussion on training and convergence.
|
| 251 |
+
|
| 252 |
+
# 5 DISCUSSION
|
| 253 |
+
|
| 254 |
+
We presented a novel architecture that can simultaneously solve the Schrödinger equation for multiple geometries. Compared to the existing state-of-the-art networks, our PESNet accelerates the training for many configurations by up to 40 times while often achieving better accuracy. The integration of physical symmetries enables us to reduce our training space. Finally, our results show that a single model can capture a continuous subset of the potential energy surface. This acceleration of neural wave functions opens access to accurate quantum mechanical calculations to a broader audience. For instance, it may enable significantly higher-resolution analyses of complex potential energy surfaces with foreseeable applications in generating new datasets with unprecedented accuracy as well as possible integration into molecular dynamics simulations.
|
| 255 |
+
|
| 256 |
+
Ethics and reproducibility. Advanced computational chemistry tools may have a positive impact in chemistry research, for instance in material discovery. However, they also pose the risk of misuse, e.g., for the development of chemical weapons. To the best of our knowledge, our work does not promote misuse any more than general computational chemistry research. To reduce the likelihood of such misuse, we publish our source code under the Hippocratic license (Ehmke, 2019)<sup>1</sup>. To facilitate reproducibility, the source code includes simple scripts to reproduce all experiments from Section 4. Furthermore, we provide a detailed schematic of the computational graph in Figure 2 and additional details on the experimental setup including all hyperparameters in Appendix D.
|
| 257 |
+
|
| 258 |
+
Acknowledgements. We thank David Pfau, James Spencer, Jan Hermann, and Rafael Reisenhofer for providing their results and data, Johannes Marggraf, Max Wilson and Christoph Scheurer for helpful discussions, and Johannes Klicpera and Leon Hetzel for their valuable feedback.
|
| 259 |
+
|
| 260 |
+
Funded by the Federal Ministry of Education and Research (BMBF) and the Free State of Bavaria under the Excellence Strategy of the Federal Government and the Länder.
|
| 261 |
+
|
| 262 |
+
# REFERENCES
|
| 263 |
+
|
| 264 |
+
Alexander Alijah and António J. C. Varandas. H4+: What do we know about it? The Journal of Chemical Physics, 129(3):034303, July 2008. ISSN 0021-9606, 1089-7690. doi: 10.1063/1.2953571.
|
| 265 |
+
Albert P. Bartók, Risi Kondor, and Gábor Csányi. On representing chemical environments. *Physical Review B*, 87(18):184115, May 2013. ISSN 1098-0121, 1550-235X. doi: 10.1103/PhysRevB.87.184115.
|
| 266 |
+
Simon Batzner, Tess E. Smidt, Lixin Sun, Jonathan P. Mailoa, Mordechai Kornbluth, Nicola Molinari, and Boris Kozinsky. SE(3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials. arXiv:2101.03164 [cond-mat, physics:physics], January 2021.
|
| 267 |
+
Jörg Behler. Atom-centered symmetry functions for constructing high-dimensional neural network potentials. The Journal of Chemical Physics, 134(7):074106, February 2011. ISSN 0021-9606, 1089-7690. doi: 10.1063/1.3553717.
|
| 268 |
+
James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: Composable transformations of Python+NumPy programs, 2018.
|
| 269 |
+
Giuseppe Carleo and Matthias Troyer. Solving the Quantum Many-Body Problem with Artificial Neural Networks. Science, 355(6325):602-606, February 2017. ISSN 0036-8075, 1095-9203. doi: 10.1126/science.aag2302.
|
| 270 |
+
D. Ceperley, G. V. Chester, and M. H. Kalos. Monte Carlo simulation of a many-fermion study. Physical Review B, 16(7):3081-3099, October 1977. doi: 10.1103/PhysRevB.16.3081.
|
| 271 |
+
Stefan Chmiela, Huziel E. Sauceda, Klaus-Robert Müller, and Alexandre Tkatchenko. Towards exact molecular dynamics simulations with machine-learned force fields. Nature Communications, 9(1):3887, September 2018. ISSN 2041-1723. doi: 10.1038/s41467-018-06169-2.
|
| 272 |
+
Kenny Choo, Antonio Mezzacapo, and Giuseppe Carleo. Fermionic neural-network states for ab-initio electronic structure. Nature Communications, 11(1):2368, December 2020. ISSN 2041-1723. doi: 10.1038/s41467-020-15724-9.
|
| 273 |
+
Anders S. Christensen, Lars A. Bratholm, Felix A. Faber, and O. Anatole von Lilienfeld. FCHL revisited: Faster and more accurate quantum machine learning. The Journal of chemical physics, 152(4):044107, 2020.
|
| 274 |
+
Coraline Ada Ehmke. The Hippocratic License 2.1: An Ethical License for Open Source. https://firstdonoharm.dev, 2019.
|
| 275 |
+
|
| 276 |
+
Bryn Elesedy and Sheheryar Zaidi. Provably Strict Generalisation Benefit for Equivariant Models. In Proceedings of the 38th International Conference on Machine Learning, pp. 2959-2969. PMLR, July 2021.
|
| 277 |
+
Robert J. Gdanitz. Accurately solving the electronic Schrödinger equation of atoms and molecules using explicitly correlated (r12-)MR-CI: The ground state potential energy curve of N2. Chemical Physics Letters, 283(5):253-261, February 1998. ISSN 0009-2614. doi: 10.1016/S0009-2614(97)01392-4.
|
| 278 |
+
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On Calibration of Modern Neural Networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 1321-1330. PMLR, July 2017.
|
| 279 |
+
Jiequn Han, Linfeng Zhang, and Weinan E. Solving many-electron Schrödinger equation using deep neural networks. Journal of Computational Physics, 399:108929, December 2019. ISSN 0021-9991. doi: 10.1016/j.jcp.2019.108929.
|
| 280 |
+
Jan Hermann, Zeno Schätzle, and Frank Noé. Deep-neural-network solution of the electronic Schrödinger equation. Nature Chemistry, 12(10):891-897, October 2020. ISSN 1755-4330, 1755-4349. doi: 10.1038/s41557-020-0544-y.
|
| 281 |
+
Lior Hirschfeld, Kyle Swanson, Kevin Yang, Regina Barzilay, and Connor W. Coley. Uncertainty Quantification Using Neural Networks for Molecular Property Prediction. Journal of Chemical Information and Modeling, 60(8):3770-3780, August 2020. ISSN 1549-9596. doi: 10.1021/acs.jcim.0c00502.
|
| 282 |
+
Bing Huang and O. Anatole von Lilienfeld. Ab Initio Machine Learning in Chemical Compound Space. Chemical Reviews, 121(16):10001-10036, August 2021. ISSN 0009-2665. doi: 10.1021/acs.chemrev.0c01303.
|
| 283 |
+
Marcus Hutter. On Representing (Anti)Symmetric Functions. arXiv:2007.15298 [quant-ph], July 2020.
|
| 284 |
+
Jan Kessler, Francesco Calcavecchia, and Thomas D. Kühne. Artificial Neural Networks as Trial Wave Functions for Quantum Monte Carlo. Advanced Theory and Simulations, 4(4):2000269, 2021. ISSN 2513-0390. doi: 10.1002/adts.202000269.
|
| 285 |
+
Armagan Kinal and Piotr Piecuch. Computational Investigation of the Conrotatory and Disrotatory Isomerization Channels of Bicyclo[1.1.0]butane to Buta-1,3-diene: A Completely Renormalized Coupled-Cluster Study. The Journal of Physical Chemistry A, 111(4):734-742, February 2007. ISSN 1089-5639, 1520-5215. doi: 10.1021/jp065721k.
|
| 286 |
+
Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In 3rd International Conference for Learning Representations, December 2014.
|
| 287 |
+
Johannes Klicpera, Janek Groß, and Stephan Günnemann. Directional Message Passing for Molecular Graphs. In International Conference on Learning Representations, September 2019.
|
| 288 |
+
Risi Kondor and Shubhendu Trivedi. On the Generalization of Equivariance and Convolution in Neural Networks to the Action of Compact Groups. In International Conference on Machine Learning, pp. 2747-2755. PMLR, July 2018.
|
| 289 |
+
George Lamb and Brooks Paige. Bayesian Graph Neural Networks for Molecular Property Prediction. arXiv:2012.02089 [cs, q-bio], November 2020.
|
| 290 |
+
Robert J. Le Roy, Yiye Huang, and Calvin Jary. An accurate analytic potential function for ground-state N2 from a direct-potential-fit analysis of spectroscopic data. The Journal of Chemical Physics, 125(16):164310, October 2006. ISSN 0021-9606, 1089-7690. doi: 10.1063/1.2354502.
|
| 291 |
+
Dmitry I. Lyakh, Monika Musial, Victor F. Lotrich, and Rodney J. Bartlett. Multireference Nature of Chemistry: The Coupled-Cluster View. Chemical Reviews, 112(1):182-243, January 2012. ISSN 0009-2665, 1520-6890. doi: 10.1021/cr2001417.
|
| 292 |
+
|
| 293 |
+
James Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML'10, pp. 735-742, Madison, WI, USA, June 2010. Omnipress. ISBN 978-1-60558-907-7.
|
| 294 |
+
James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In Proceedings of the 32nd International Conference on International Conference on Machine Learning-Volume 37, pp. 2408-2417, 2015.
|
| 295 |
+
W. L. McMillan. Ground State of Liquid He4. Physical Review, 138(2A):A442-A451, April 1965. doi: 10.1103/PhysRev.138.A442.
|
| 296 |
+
Mario Motta, David M. Ceperley, Garnet Kin-Lic Chan, John A. Gomez, Emanuel Gull, Sheng Guo, Carlos A. Jimenez-Hoyos, Tran Nguyen Lan, Jia Li, Fengjie Ma, Andrew J. Millis, Nikolay V. Prokof'ev, Ushnish Ray, Gustavo E. Scuseria, Sandro Sorella, Edwin M. Stoudenmire, Qiming Sun, Igor S. Tupitsyn, Steven R. White, Dominika Zgid, Shiwei Zhang, and Simons Collaboration on the Many-Electron Problem. Towards the Solution of the Many-Electron Problem in Real Materials: Equation of State of the Hydrogen Chain with State-of-the-Art Many-Body Methods. Physical Review X, 7(3):031059, September 2017. ISSN 2160-3308. doi: 10.1103/PhysRevX.7.031059.
|
| 297 |
+
Eric Neuscamman, C. J. Umrigar, and Garnet Kin-Lic Chan. Optimizing large parameter sets in variational quantum Monte Carlo. Physical Review B, 85(4):045103, January 2012. ISSN 1098-0121, 1550-235X. doi: 10.1103/PhysRevB.85.045103.
|
| 298 |
+
Aneesh Pappu and Brooks Paige. Making Graph Neural Networks Worth It for Low-Data Molecular Machine Learning. arXiv:2011.12203 [cs], November 2020.
|
| 299 |
+
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pp. 1310-1318. PMLR, 2013.
|
| 300 |
+
David Pfau, James S. Spencer, Alexander G. D. G. Matthews, and W. M. C. Foulkes. Ab initio solution of the many-electron Schrödinger equation with deep neural networks. Physical Review Research, 2(3):033429, September 2020. doi: 10.1103/PhysRevResearch.2.033429.
|
| 301 |
+
Zhuoran Qiao, Matthew Welborn, Animashree Anandkumar, Frederick R. Manby, and Thomas F. Miller III. OrbNet: Deep Learning for Quantum Chemistry Using Symmetry-Adapted Atomic-Orbital Features. The Journal of Chemical Physics, 153(12):124111, September 2020. ISSN 0021-9606, 1089-7690. doi: 10.1063/5.0021955.
|
| 302 |
+
Zhuoran Qiao, Anders S. Christensen, Frederick R. Manby, Matthew Welborn, Anima Anandkumar, and Thomas F. Miller III. UNiTE: Unitary N-body Tensor Equivariant Network with Applications to Quantum Chemistry. arXiv:2105.14655 [physics], May 2021.
|
| 303 |
+
Raghunathan Ramakrishnan, Pavlo O. Dral, Matthias Rupp, and O. Anatole von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. Scientific Data, 1(1):140022, December 2014. ISSN 2052-4463. doi: 10.1038/sdata.2014.22.
|
| 304 |
+
Michael Scherbela, Rafael Reisenhofer, Leon Gerard, Philipp Marquetand, and Philipp Grohs. Solving the electronic Schrödinger equation for multiple nuclear geometries with weight-sharing deep neural networks. arXiv:2105.08351 [physics], May 2021.
|
| 305 |
+
K. T. Schütt, H. E. Sauceda, P.-J. Kindermans, A. Tkatchenko, and K.-R. Müller. SchNet - A deep learning architecture for molecules and materials. The Journal of Chemical Physics, 148(24): 241722, June 2018. ISSN 0021-9606, 1089-7690. doi: 10.1063/1.5019779.
|
| 306 |
+
Jun Shen and Piotr Piecuch. Combining active-space coupled-cluster methods with moment energy corrections via the CC( $P;Q$ ) methodology, with benchmark calculations for biradical transition states. The Journal of Chemical Physics, 136(14):144104, April 2012. ISSN 0021-9606, 1089-7690. doi: 10.1063/1.3700802.
|
| 307 |
+
James S. Spencer, David Pfau, Aleksandar Botev, and W. M. C. Foulkes. Better, Faster Fermionic Neural Networks. 3rd NeurIPS Workshop on Machine Learning and Physical Science, November 2020.
|
| 308 |
+
|
| 309 |
+
Attila Szabo and Neil S. Ostlund. Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory. Courier Corporation, 2012.
|
| 310 |
+
Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds. arXiv:1802.08219 [cs], May 2018.
|
| 311 |
+
Masashi Tsubaki and Teruyasu Mizoguchi. Quantum Deep Field: Data-Driven Wave Function, Electron Density Generation, and Atomization Energy Prediction and Extrapolation with Machine Learning. Physical Review Letters, pp. 6, 2020.
|
| 312 |
+
Simon Wengert, Gábor Csányi, Karsten Reuter, and Johannes T. Marggraf. Data-efficient machine learning for molecular crystal structure prediction. Chemical Science, pp. 10.1039/D0SC05765G, 2021. ISSN 2041-6520, 2041-6539. doi: 10.1039/D0SC05765G.
|
| 313 |
+
Max Wilson, Nicholas Gao, Filip Wudarski, Eleanor Rieffel, and Norm M. Tubman. Simulations of state-of-the-art fermionic neural network wave functions with diffusion Monte Carlo. arXiv:2103.12570 [physics, physics:quant-ph], March 2021.
|
| 314 |
+
Kevin Yang, Kyle Swanson, Wengong Jin, Connor Coley, Philipp Eiden, Hua Gao, Angel Guzman-Perez, Timothy Hopper, Brian Kelley, Miriam Mathea, Andrew Palmer, Volker Settels, Tommi Jaakkola, Klavs Jensen, and Regina Barzilay. Analyzing Learned Molecular Representations for Property Prediction. Journal of Chemical Information and Modeling, 59(8):3370-3388, August 2019. ISSN 1549-9596, 1549-960X. doi: 10.1021/acs.jcim.9b00237.
|
| 315 |
+
Li Yang, Wenjun Hu, and Li Li. Scalable variational Monte Carlo with graph neural ansatz. In NeurIPS Workshop on Machine Learning and the Physical Sciences, November 2020.
|
| 316 |
+
Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large Batch Optimization for Deep Learning: Training BERT in 76 minutes. In Eighth International Conference on Learning Representations, April 2020.
|
| 317 |
+
Yaolong Zhang, Ce Hu, and Bin Jiang. Embedded Atom Neural Network Potentials: Efficient and Accurate Machine Learning with a Physically Inspired Representation. The Journal of Physical Chemistry Letters, 10(17):4962-4967, September 2019. ISSN 1948-7185, 1948-7185. doi: 10. 1021/acs.jpclett.9b02037.
|
| 318 |
+
|
| 319 |
+
# A INVARIANT ENERGIES REQUIRE EQUIVARIANT WAVE FUNCTIONS
|
| 320 |
+
|
| 321 |
+
Here, we prove that a wave function needs to be equivariant with respect to rotations and reflections if the energy is invariant. Recall, our goal is to solve the stationary Schrödinger equation
|
| 322 |
+
|
| 323 |
+
$$
|
| 324 |
+
\boldsymbol {H} \psi = E \psi \tag {21}
|
| 325 |
+
$$
|
| 326 |
+
|
| 327 |
+
where $\psi : \mathbb{R}^{3N} \mapsto \mathbb{R}$ is the electronic wave function. Since the Hamiltonian $\pmb{H}$ encodes the molecular structure via its potential energy term, see Equation 15, a rotation or reflection $\pmb{U} \in O(3)$ of the system results in a unitary transformation applied to the Hamiltonian $\pmb{H} \rightarrow \pmb{U}\pmb{H}\pmb{U}^{\dagger}$ . Since the energy $E$ is invariant to rotation and reflection, we obtain the transformed equation
|
| 328 |
+
|
| 329 |
+
$$
|
| 330 |
+
\boldsymbol {U} \boldsymbol {H} \boldsymbol {U} ^ {\dagger} \psi^ {\prime} = E \psi^ {\prime}. \tag {22}
|
| 331 |
+
$$
|
| 332 |
+
|
| 333 |
+
One can see that if $\psi$ is a solution to Equation 21, the equivariantly transformed $\psi \rightarrow U\psi$ solves Equation 22
|
| 334 |
+
|
| 335 |
+
$$
|
| 336 |
+
\boldsymbol {U} \boldsymbol {H} \boldsymbol {U} ^ {\dagger} \boldsymbol {U} \psi = E \boldsymbol {U} \psi , \tag {23}
|
| 337 |
+
$$
|
| 338 |
+
|
| 339 |
+
$$
|
| 340 |
+
\boldsymbol {U} \boldsymbol {H} \psi = E \boldsymbol {U} \psi , \tag {24}
|
| 341 |
+
$$
|
| 342 |
+
|
| 343 |
+
$$
|
| 344 |
+
\boldsymbol {H} \psi = E \psi \tag {25}
|
| 345 |
+
$$
|
| 346 |
+
|
| 347 |
+
with the same energy.
|
| 348 |
+
|
| 349 |
+

|
| 350 |
+
|
| 351 |
+
# B EQUIVARIANT NEURAL NETWORKS AS WAVE FUNCTIONS
|
| 352 |
+
|
| 353 |
+
Here, we want to briefly discuss why equivariant neural networks as proposed by Thomas et al. (2018) or Batzner et al. (2021) are not an alternative to our equivariant coordinate system. The issue is the same as for regular GNNs (Klicpera et al., 2019), namely that such networks can only represent spherically symmetric functions for atomic systems, which, as discussed in Section 3.3, is insufficient for wave functions. While this is obvious for regular GNNs, as they operate only on inter-particle distances rather than vectors, equivariant neural networks take advantage of higher SO(3) representations. However, if one constructed the orbitals $\phi (\vec{r}) = [\phi_1(\vec{r}),\dots ,\phi_N(\vec{r})]$ by concatenating $E$ equivariant SO(3) representations with $\sum_{e = 1}^{E}\dim (\phi_{e}(\vec{r}_{i})) = N$, any resulting real-valued wave function $\psi (\vec{r}) = \operatorname *{det}\phi (\vec{r})$ would be spherically symmetric, i.e., $\psi (\vec{r} R) = \psi (\vec{r})$, $\forall R\in SO(3)$.
|
| 354 |
+
|
| 355 |
+
The proof is as follows: If one rotates the electrons $\vec{r} \in \mathbb{R}^{N \times 3}$ by any rotation matrix $R \in SO(3)$ , the orbital matrix changes as
|
| 356 |
+
|
| 357 |
+
$$
|
| 358 |
+
\phi (\overrightarrow {\boldsymbol {r}} R) = \phi (\overrightarrow {\boldsymbol {r}}) \boldsymbol {D} ^ {R}, \tag {26}
|
| 359 |
+
$$
|
| 360 |
+
|
| 361 |
+
$$
|
| 362 |
+
\boldsymbol {D} ^ {R} = \operatorname {d i a g} \left(\boldsymbol {D} _ {1} ^ {R}, \dots , \boldsymbol {D} _ {E} ^ {R}\right) \tag {27}
|
| 363 |
+
$$
|
| 364 |
+
|
| 365 |
+
where $D^{R} \in \mathbb{R}^{N \times N}$ is a block-diagonal matrix and $D_{e}^{R}$ is the Wigner-D matrix induced by the rotation $R$ for the $e$-th SO(3) representation. Since Wigner-D matrices are unitary with unit determinant and we restrict our wave function to be real-valued, we obtain
|
| 366 |
+
|
| 367 |
+
$$
|
| 368 |
+
\begin{aligned} \psi (\vec {\boldsymbol {r}} R) & = \det \phi (\vec {\boldsymbol {r}} R) \\ & = \det \left( \phi (\vec {\boldsymbol {r}}) \boldsymbol {D} ^ {R} \right) \\ & = \det \phi (\vec {\boldsymbol {r}}) \det \boldsymbol {D} ^ {R} \\ & = \det \phi (\vec {\boldsymbol {r}}) \prod_ {e = 1} ^ {E} \det \boldsymbol {D} _ {e} ^ {R} \\ & = \det \phi (\vec {\boldsymbol {r}}) \\ & = \psi (\vec {\boldsymbol {r}}). \end{aligned} \tag {28}
|
| 369 |
+
$$
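As a quick numerical illustration of this argument (using the real $l=1$ representation, for which the Wigner-D matrix is simply a $3\times 3$ rotation matrix), the determinant is indeed unaffected by the block-diagonal transformation. The snippet below is a sketch with an arbitrary stand-in orbital matrix.

```python
import jax
import jax.numpy as jnp

def rot_z(angle):
    """Real l=1 representation: a plain 3x3 rotation about the z-axis."""
    c, s = jnp.cos(angle), jnp.sin(angle)
    return jnp.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

phi = jax.random.normal(jax.random.PRNGKey(0), (6, 6))  # stand-in orbital matrix
# Block-diagonal D^R built from two l=1 blocks; each block has determinant 1.
D = jnp.zeros((6, 6)).at[:3, :3].set(rot_z(0.3)).at[3:, 3:].set(rot_z(1.2))
assert jnp.allclose(jnp.linalg.det(phi @ D), jnp.linalg.det(phi), atol=1e-3)
```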
|
| 370 |
+
|
| 371 |
+

|
| 372 |
+
|
| 373 |
+
# C EDGE CASES OF THE EQUIVARIANT COORDINATE SYSTEM
|
| 374 |
+
|
| 375 |
+
Figure 8: Edge cases in the construction of our equivariant coordinate system. Circles indicate nuclei and the numbers their charges.
|
| 376 |
+

|
| 377 |
+
(a) Example of a regular polygon. For any regular polygon on a plane, two eigenvalues of the covariance matrix are going to be identical.
|
| 378 |
+
|
| 379 |
+

|
| 380 |
+
(b) Example of $\vec{v} = 0$
|
| 381 |
+
|
| 382 |
+

|
| 383 |
+
(c) Example of why one cannot construct a unique equivariant coordinate system that changes smoothly with changes in the geometry.
|
| 384 |
+
|
| 385 |
+
While the definition of the coordinate system in Section 3.3 works in most cases, there still exist edge cases where the coordinate system may not be unique. To maximize transparency, we discuss some of these cases, when they occur, how we handle them, and what their implications are.
|
| 386 |
+
|
| 387 |
+
For certain geometries, two eigenvalues of the nuclei covariance matrix might be identical. If that is the case, the PCA axes are not unique. This occurs for any geometry where the nuclei coordinates are distributed regularly on a sphere around the center of mass. Examples of such geometries are regular polygons such as the pentagram depicted in Figure 8a. In such cases, we compute the PCA on pseudo coordinates which we obtain by stretching the graph in the direction of the largest Coulomb potential $\frac{Z_m Z_n}{\|\vec{R}_m - \vec{R}_n\|_2}$. In the example from Figure 8a, this is equivalent to stretching the graph along one of the outer edges as they are all of the same length. The actual direction does not matter, as it is simply a rotation or reflection of the whole system. While regular spherical patterns are not the only case where this issue arises, they are the most common cause.
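This stretching trick can be sketched as follows; the stretch factor and the argmax-based pair selection are illustrative assumptions, the essential point being that the pseudo coordinates are elongated along the direction of the strongest Coulomb repulsion so that the covariance eigenvalues become distinct.

```python
import jax.numpy as jnp

def pseudo_coordinates(R, Z, factor=1.1):
    """Sketch: stretch the nuclei along the strongest Coulomb repulsion.

    R: (M, 3) nuclear coordinates, Z: (M,) charges; `factor` is illustrative.
    """
    diff = R[:, None, :] - R[None, :, :]
    dist = jnp.linalg.norm(diff, axis=-1)
    safe = jnp.where(dist > 0, dist, jnp.inf)           # ignore self-pairs
    coulomb = Z[:, None] * Z[None, :] / safe
    m, n = jnp.unravel_index(jnp.argmax(coulomb), coulomb.shape)
    direction = (R[m] - R[n]) / dist[m, n]
    R_centered = R - R.mean(axis=0)
    stretch = (factor - 1.0) * (R_centered @ direction)[:, None] * direction
    return R_centered + stretch                         # PCA is then run on these
```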
|
| 388 |
+
|
| 389 |
+
Another potential issue arises if $\vec{v} = 0$, which occurs for any geometry that is point-symmetric about the center of mass, such as the pentagram in Figure 8a. In such cases, the signs of the axes do not matter as reflections result in identical geometries. However, there also exist other geometries for which Equation 12 is 0. An example is depicted in Figure 8b. These cases are rare and can be resolved by applying a nonlinear function to the distances in Equation 12.
|
| 390 |
+
|
| 391 |
+
The occurrence of these edge cases leads us to the question: 'Why can we not design a unique coordinate system for each geometry?' While we would ideally want an equivariant coordinate system that changes smoothly with changes in the geometry, this is impossible. We show a counterexample in Figure 8c, where we see a system of 3 nuclei. In their starting configuration, one can uniquely define two axes as indicated by the colored arrows. But, when moving the leftmost nucleus such that it is in line with the other two nuclei, one is left with only one uniquely defined direction as there is no way to differentiate between any orthogonal vector and the blue one. By moving the center nucleus to the right, we again can define two axes for this system, though one axis is flipped compared to the initial state. So, we can neither define a smoothly changing coordinate system nor a unique one for every system. However, in practice, we do not need a smoothly changing one but only a unique one. While this is already unattainable as shown by the central figure, we also want to stress that any arbitrary orthogonal vector is equivalent due to the symmetries of the system. Considering these aspects, we believe that our coordinate system definition is sufficient in most scenarios.
|
| 392 |
+
|
| 393 |
+
# D EXPERIMENTAL SETUP
|
| 394 |
+
|
| 395 |
+
Hyperparameters. If not otherwise specified, we used the hyperparameters from Table 2. These result in a WFModel of similar size to FermiNet from Pfau et al. (2020). For cyclobutadiene, we did not train for the full 60000 iterations but stopped the training after 2 weeks.
|
| 396 |
+
|
| 397 |
+
Numerical stability. To stabilize the optimization, we initialize the last layers of the MetaGNN $f_{\mathrm{node}}^{\mathrm{out}}$ and $f_{\mathrm{global}}^{\mathrm{out}}$ such that the biases play the dominant role. Furthermore, we compute the final output of the WFModel in the log-domain and use the log-sum-exp trick.
|
| 398 |
+
|
| 399 |
+
Learning continuous subsets. To demonstrate PESNet's ability to capture continuous subsets of the potential energy surface, we train PESNet on a dynamic set of configurations along the energy surface. Specifically, we subdivide the potential energy surface into even-sized bins and place a random walker within each bin. These walkers slightly alter the molecular structure after each step by moving within their bin. This procedure ensures that our model is not restricted to a discrete set of configurations. Therefore, after training, our model can be evaluated at any arbitrary configuration within the training domain without retraining. But, we only evaluate PESNet on configurations where reference calculations are available. This procedure is done for the hydrogen rectangle, the hydrogen chain, and the nitrogen molecule. For $\mathrm{H}_4^+$ and cyclobutadiene, we train on discrete sets of geometries from the literature (Scherbela et al., 2021; Kinal & Piecuch, 2007).
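A minimal sketch of the walker update over a one-dimensional geometry parameter (e.g., a bond length) is shown below; the step size, the bin layout, and the Gaussian proposal are illustrative assumptions rather than the exact procedure.

```python
import jax
import jax.numpy as jnp

def walker_step(key, positions, bin_edges, step_size=0.05):
    """Sketch: one update of the per-bin geometry random walkers.

    positions: (W,) one geometry parameter per walker; bin_edges: (W + 1,).
    """
    proposal = positions + step_size * jax.random.normal(key, positions.shape)
    # Confine each walker to its own bin so the whole surface stays covered.
    return jnp.clip(proposal, bin_edges[:-1], bin_edges[1:])
```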
|
| 400 |
+
|
| 401 |
+
Convergence is easy to detect in cases where we optimize for a fixed set of configurations as the energy slowly converges to an optimal value, specifically the average of the optimal energies. However, in cases where we optimize for a continuous set of configurations, the optimal energy value depends on the current batch of geometries. To still assess convergence, we use the fact that the local energy $E_{L}$ of any eigenfunction (including the ground state) has 0 variance. So, our convergence criterion is the expected variance of the local energy $\mathbb{E}_{\vec{R} \sim \mathrm{PES}}\left[\mathbb{E}_{\vec{r} \sim \psi_{\vec{R}}^2}\left[\left(E_L - \mathbb{E}_{\vec{r} \sim \psi_{\vec{R}}^2}[E_L]\right)^2\right]\right]$ where the optimal value is 0.
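Estimated from samples, this criterion reduces to the mean over sampled geometries of the per-geometry variance of the local energy, for example:

```python
import jax.numpy as jnp

def convergence_criterion(local_energies):
    """local_energies: (G, B) for G geometries with B electron samples each."""
    return jnp.mean(jnp.var(local_energies, axis=1))  # 0 for exact eigenfunctions
```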
|
| 402 |
+
|
| 403 |
+
<table><tr><td></td><td>Parameter</td><td>Value</td></tr><tr><td>Optimization</td><td>Local energy clipping</td><td>5.0</td></tr><tr><td>Optimization</td><td>Batch size</td><td>4096</td></tr><tr><td>Optimization</td><td>Iterations</td><td>60000</td></tr><tr><td>Optimization</td><td>#Geometry random walkers</td><td>16</td></tr><tr><td>Optimization</td><td>Learning rate η</td><td>0.1/(1+t/1000)</td></tr><tr><td>Natural Gradient</td><td>Damping</td><td>10<sup>-4</sup> · Std[E<sub>L</sub>]</td></tr><tr><td>Natural Gradient</td><td>CG max steps</td><td>100</td></tr><tr><td>WFModel</td><td>Nuclei embedding dim</td><td>64</td></tr><tr><td>WFModel</td><td>Single-stream width</td><td>256</td></tr><tr><td>WFModel</td><td>Double-stream width</td><td>32</td></tr><tr><td>WFModel</td><td>#Update layers</td><td>4</td></tr><tr><td>WFModel</td><td>#Determinants</td><td>16</td></tr><tr><td>MetaGNN</td><td>#Message passings</td><td>2</td></tr><tr><td>MetaGNN</td><td>Embedding dim</td><td>64</td></tr><tr><td>MetaGNN</td><td>Message dim</td><td>32</td></tr><tr><td>MetaGNN</td><td>NSBF</td><td>7</td></tr><tr><td>MetaGNN</td><td>NRBF</td><td>6</td></tr><tr><td>MetaGNN</td><td>MLP depth</td><td>2</td></tr><tr><td>MCMC</td><td>Proposal step size</td><td>0.02</td></tr><tr><td>MCMC</td><td>Steps between updates</td><td>40</td></tr><tr><td>Pretraining</td><td>Iterations</td><td>2000</td></tr><tr><td>Pretraining</td><td>Learning rate</td><td>0.003</td></tr><tr><td>Pretraining</td><td>Method</td><td>UHF</td></tr><tr><td>Pretraining</td><td>Basis set</td><td>STO-6G</td></tr><tr><td>Evaluation</td><td>#Samples</td><td>10<sup>6</sup></td></tr><tr><td>Evaluation</td><td>MCMC Steps</td><td>200</td></tr></table>
|
| 404 |
+
|
| 405 |
+
Table 2: Default hyperparameters.
|
| 406 |
+
|
| 407 |
+
<table><tr><td></td><td>H+4</td><td>H4</td><td>H10</td><td>N2</td><td>Cyclobutadiene</td></tr><tr><td>no MetaGNN</td><td>-1.849286(9)</td><td>-2.016199(5)</td><td>-5.328944(14)</td><td>-109.28322(9)</td><td>-154.64419(31)</td></tr><tr><td>MetaGNN</td><td>-1.849363(6)</td><td>-2.016208(5)</td><td>-5.328916(15)</td><td>-109.28570(7)</td><td>-154.65469(27)</td></tr><tr><td>ΔE</td><td>0.000077(11)</td><td>0.000009(7)</td><td>-0.000028(21)</td><td>0.00248(11)</td><td>0.0105(4)</td></tr></table>
|
| 408 |
+
|
| 409 |
+
# E ABLATION STUDIES
|
| 410 |
+
|
| 411 |
+
As hyperparameters often play a significant role in machine learning, we present some ablation studies in this appendix. All the following experiments only alter one variable at a time, while the rest is fixed as in Table 2. The results in the tables are averaged over the same configurations as in the main body.
|
| 412 |
+
|
| 413 |
+
Table 3 presents results with and without the MetaGNN. It is noticeable that the gain of the MetaGNN is little to none for small molecules consisting of simple hydrogen atoms. But, for more
|
| 414 |
+
|
| 415 |
+
Table 3: Energies in $E_{h}$ averaged over the PES for PESNets with and without the MetaGNN. Numbers in brackets indicate the standard error at the last digit(s). In cases without the MetaGNN, we still train a single model for all configurations of a system.
|
| 416 |
+
|
| 417 |
+
<table><tr><td>#Dets</td><td>H4+</td><td>H4</td><td>H10</td><td>N2</td><td>Cyclobutadiene</td></tr><tr><td>16</td><td>-1.849363(6)</td><td>-2.016208(5)</td><td>-5.328916(15)</td><td>-109.28570(7)</td><td>-154.65469(27)</td></tr><tr><td>32</td><td>-1.849342(4)</td><td>-2.016188(6)</td><td>-5.328999(13)</td><td>-109.28706(7)</td><td>-154.65322(27)</td></tr><tr><td>ΔE</td><td>-0.000021(7)</td><td>-0.000019(8)</td><td>0.000083(20)</td><td>0.00136(10)</td><td>-0.0015(4)</td></tr></table>
|
| 418 |
+
|
| 419 |
+
Table 4: Energies in ${E}_{h}$ averaged over the PES for different numbers of determinants in our PESNet model. Numbers in brackets indicate the standard error at the last digit(s).
|
| 420 |
+
|
| 421 |
+
<table><tr><td>dim(hi)</td><td>H4+</td><td>H4</td><td>H10</td><td>N2</td><td>Cyclobutadiene</td></tr><tr><td>256</td><td>-1.849363(6)</td><td>-2.016208(5)</td><td>-5.328916(15)</td><td>-109.28570(7)</td><td>-154.65469(27)</td></tr><tr><td>512</td><td>-1.8493543(28)</td><td>-2.016190(7)</td><td>-5.328794(17)</td><td>-109.28662(6)</td><td>-154.65042(28)</td></tr><tr><td>ΔE</td><td>-0.000009(7)</td><td>-0.000017(8)</td><td>-0.000122(23)</td><td>0.00092(9)</td><td>-0.0043(4)</td></tr></table>
|
| 422 |
+
|
| 423 |
+
Table 5: Energies in $E_{h}$ averaged over the PES for different single-stream sizes in our PESNet model. Numbers in brackets indicate the standard error at the last digit(s).
|
| 424 |
+
|
| 425 |
+

|
| 426 |
+
Figure 9: Comparison of different PESNet configurations on cyclobutadiene. The configurations are named (#determinant/single-stream width) with light colors for the ground state and darker colors for the transition state.
|
| 427 |
+
|
| 428 |
+
complex molecules such as nitrogen and cyclobutadiene, we notice significant improvements of $2.5\mathrm{m}E_{\mathrm{h}}$ and $10.5\mathrm{m}E_{\mathrm{h}}$, respectively. Moreover, the MetaGNN enables us to account for symmetries of the energy while the WFModel itself is only invariant w.r.t. translation but not to rotation, reflection, and reindexing of nuclei.
|
| 429 |
+
|
| 430 |
+
Table 4 shows the impact of the number of determinants on the average energy for the systems from Section 4. For small hydrogen-based systems, the number of determinants is mostly irrelevant, while larger numbers of determinants improve performance for nitrogen. But, this does not seem to carry over to cyclobutadiene. While the total estimated energy is higher for the larger model, we noticed that it is significantly faster at converging the transition barrier. Figure 9 illustrates this by comparing the convergence of the 16 and 32 determinant models.
|
| 431 |
+
|
| 432 |
+
Increasing the single-stream size does not seem to result in any benefit for most models, as Table 5 shows. Again, the hydrogen systems are mostly unaffected by this hyperparameter while nitrogen benefits from larger hidden dimensions but cyclobutadiene converges worse. We suspect that this is due to the optimization problem becoming significantly harder. Firstly, enlarging the WFModel increases the number of parameters the MetaGNN has to predict. Secondly, we estimate the inverse of the Fisher with a finite fixed-sized batch, but the Fisher grows quadratically with the number of parameters, which in turn grow quadratically with the single-stream width.
|
| 433 |
+
|
| 434 |
+
# F TIME PER ITERATION
|
| 435 |
+
|
| 436 |
+
While the main document already covers the time it took to reproduce the results from the figures, we want to use this appendix to provide more details. Table 6 lists the time per iteration for a single model instead of the whole training time to reproduce the potential energy surfaces as in Table 1. While we find these numbers to be misleading, we still want to disclose them to support open research. The main issue with these numbers is that they do not take the quality of the update into account, e.g., PauliNet and DeepErwin are trained with Adam (Kingma & Ba, 2014), FermiNet with K-FAC (Martens & Grosse, 2015), and PESNet with CG-computed natural gradients (Neuscamman et al., 2012). This has implications for the number of iterations one has to train. For Table 1 in the main body, we assumed that PauliNet is trained for 10000 iterations (Hermann et al., 2020), DeepErwin for 7000 iterations (Scherbela et al., 2021), FermiNet for 200000 iterations (Pfau et al., 2020),
|
| 437 |
+
|
| 438 |
+
<table><tr><td></td><td>H4+</td><td>H4</td><td>H10</td><td>N2</td><td>Cyclobutadiene</td></tr><tr><td>PauliNet</td><td>0.83s</td><td>1.13s</td><td>5.51s</td><td>8.09s</td><td>175s</td></tr><tr><td>DeepErwin</td><td>0.92s</td><td>1.28s</td><td>4.88s</td><td>—</td><td>—</td></tr><tr><td>FermiNet</td><td>0.12s</td><td>0.19s</td><td>1.07s</td><td>1.99s</td><td>20.8s</td></tr><tr><td>PESNet</td><td>1.19s</td><td>1.42s</td><td>3.91s</td><td>5.32s</td><td>33.2s</td></tr></table>
|
| 439 |
+
|
| 440 |
+
Table 6: Time per training step.
|
| 441 |
+
|
| 442 |
+

|
| 443 |
+
Figure 10: Convergence behavior of PESNet. Error bars indicate the standard error of the mean.
|
| 444 |
+
|
| 445 |
+
and PESNet for 60000 iterations. However, one can assume that neither PauliNet nor DeepErwin would produce results similar to FermiNet and PESNet on the more complex nitrogen molecule or cyclobutadiene in 10000 or 7000 iterations. This is further discussed in Appendix G. When viewing these results, one also has to keep in mind the different quality of the results, e.g., FermiNet and PESNet strictly outperform PauliNet and DeepErwin on all systems. Another potential issue arises due to the choice of deep learning framework the models are implemented in. It has been shown that JAX works very well for computing the kinetic energy (Spencer et al., 2020), which usually is the largest workload when computing an update.
|
| 446 |
+
|
| 447 |
+
# G CONVERGENCE
|
| 448 |
+
|
| 449 |
+
When choosing a method to solve the Schrödinger equation, one has to trade off accuracy against speed. For classical methods, this might be the difference between choosing DFT, CCSD(T), or FCI. In the neural VMC setting, one way to reflect this is by choosing the number of training steps. To better investigate this, we present convergence graphs for H4, $\mathrm{H}_4^+$, H10, and N2 in Figure 10. For cyclobutadiene, we can see the convergence of different configurations of PESNet in Figure 9. One can see that our network converges quickly on hydrogen-based systems such as the hydrogen rectangle, $\mathrm{H}_4^+$, or the hydrogen chain. For these small systems, PESNet surpasses PauliNet's and DeepErwin's accuracy in less than 8000 training steps, reducing the training times to 2.9h, 3.2h, and 8.7h, respectively. In less than 16000 steps, about 17.4h of training, PESNet surpasses FermiNet on the hydrogen chain. For more complicated molecules such as N2 and cyclobutadiene, the training requires more iterations to converge. So, we may also expect the extrapolated numbers from Table 1 for PauliNet to be optimistic estimates.
|
abinitiopotentialenergysurfacesbypairinggnnswithneuralwavefunctions/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:158f29269c0c060384dd0a6482c0953a018820bc28e8a8cbac757dbdb4c3c3dc
|
| 3 |
+
size 807455
|
abinitiopotentialenergysurfacesbypairinggnnswithneuralwavefunctions/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c9ac4268fb065c0ce0bba363220d77918f31e5e53f34b550c8b09f764f2e094b
|
| 3 |
+
size 538675
|
adarlwhatwhereandhowtoadaptintransferreinforcementlearning/4a7f7977-6d9e-4d6f-8994-382f57f08d13_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:82d2d9c3b61fca06261c7f992bd3fd83b56216597fa10fc23ad9b5ca3aedaaaf
|
| 3 |
+
size 227551
|
adarlwhatwhereandhowtoadaptintransferreinforcementlearning/4a7f7977-6d9e-4d6f-8994-382f57f08d13_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:e01c353218e99d6dc4dac99767dabb9587862c7fae96a15266d1f47a95a6b168
|
| 3 |
+
size 268593
|
adarlwhatwhereandhowtoadaptintransferreinforcementlearning/4a7f7977-6d9e-4d6f-8994-382f57f08d13_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2e0c9b16c99166cfe5b55ae3af87f84014e7eeb10e94040011324289ed5b3691
|
| 3 |
+
size 21331513
|
adarlwhatwhereandhowtoadaptintransferreinforcementlearning/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
adarlwhatwhereandhowtoadaptintransferreinforcementlearning/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:38998fbf8f4b3aef73639b6abe05e7665f2b4d746dd475d54470b06c1fa16e4d
|
| 3 |
+
size 2669040
|
adarlwhatwhereandhowtoadaptintransferreinforcementlearning/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:9d66037bddb57b830ab32987e419c3e62cecf4f8b9bd384bc2c4fef6605ead25
|
| 3 |
+
size 1557614
|
adversarialsupportalignment/bdd1c5f0-c535-4607-b0f0-86237a14a8df_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4fc88ea346a9d3b464af0b41cd30d1013e69d9695870d2604d5aaa0312f0d674
|
| 3 |
+
size 200094
|
adversarialsupportalignment/bdd1c5f0-c535-4607-b0f0-86237a14a8df_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f34698fadf107de43f6ef2296d258760b586f1a8679808ceb084506b66b1e6b4
|
| 3 |
+
size 237579
|
adversarialsupportalignment/bdd1c5f0-c535-4607-b0f0-86237a14a8df_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2b778d7b24c6fcec5c69862078f1e65b592d025615523e7939546cacd3412b54
|
| 3 |
+
size 981555
|
adversarialsupportalignment/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
adversarialsupportalignment/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a665fed76374db4ff31c9578cbc75fd36990b8b376fec4a898e36134caefe29a
|
| 3 |
+
size 1471180
|
adversarialsupportalignment/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4a0c0ac56d8f2e8ce5b0c06e5ea26501ee5ccc7fd417d95c9552a9a92a8dfb28
|
| 3 |
+
size 1317626
|
ageneralanalysisofexampleselectionforstochasticgradientdescent/8e3a5072-3fc7-4170-9d70-10ef51362aa8_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5e5f31ca78b750bc73f272f4ec305b5a2089a54897766067f03181dc27b2b72a
|
| 3 |
+
size 298902
|
ageneralanalysisofexampleselectionforstochasticgradientdescent/8e3a5072-3fc7-4170-9d70-10ef51362aa8_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d03b833100dbe4b31c8f947acd8b30986c4f90979712a59c52ad7a80600d816b
|
| 3 |
+
size 344894
|
ageneralanalysisofexampleselectionforstochasticgradientdescent/8e3a5072-3fc7-4170-9d70-10ef51362aa8_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:e1a8203386e78e2e2435733434e22abd992030191c63dd8ba5d09ff5013fdbaf
|
| 3 |
+
size 2815654
|
ageneralanalysisofexampleselectionforstochasticgradientdescent/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
ageneralanalysisofexampleselectionforstochasticgradientdescent/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:add0939c130dab7b6a8f44309dc7ed4fb3e520a4f273a01f0aa263faa307211d
|
| 3 |
+
size 2739051
|
ageneralanalysisofexampleselectionforstochasticgradientdescent/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:39453644f95e6a6bfcf3ed9eeeeb95b889ee297415a8888e81ca481a6dfe5dc0
|
| 3 |
+
size 1815260
|
amortizedtreegenerationforbottomupsynthesisplanningandsynthesizablemoleculardesign/6804f67d-cd8c-4775-9ac5-7d50279291ac_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ccd735467ad2b8d70700f77688777ead81e36179a75c985b4c6a40900725d89d
+size 118010

amortizedtreegenerationforbottomupsynthesisplanningandsynthesizablemoleculardesign/6804f67d-cd8c-4775-9ac5-7d50279291ac_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9eab86c432f6e5ae49add34d7ca2425f8ac3fed00a7912568eccf39b2cfc60f2
+size 142581

amortizedtreegenerationforbottomupsynthesisplanningandsynthesizablemoleculardesign/6804f67d-cd8c-4775-9ac5-7d50279291ac_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e5b5bd99d13b42a0f82c48c91cd6e5fec3939ac58f9a4f401763be4ec866b5f4
+size 3180997

amortizedtreegenerationforbottomupsynthesisplanningandsynthesizablemoleculardesign/full.md
ADDED
@@ -0,0 +1,471 @@
| 1 |
+
# AMORTIZED TREE GENERATION FOR BOTTOM-UP SYNTHESIS PLANNING AND SYNTHESIZABLE MOLECULAR DESIGN
|
| 2 |
+
|
| 3 |
+
Wenhao Gao<sup>1</sup>, Rocio Mercado<sup>1</sup> & Connor W. Coley<sup>1,2</sup>
|
| 4 |
+
|
| 5 |
+
$^{1}$ Department of Chemical Engineering $^{2}$ Department of Electrical Engineering and Computer Science
|
| 6 |
+
|
| 7 |
+
Massachusetts Institute of Technology
|
| 8 |
+
|
| 9 |
+
Cambridge, MA 02142, USA
|
| 10 |
+
|
| 11 |
+
{whgao, rociomer, ccoley}@mit.edu
|
| 12 |
+
|
| 13 |
+
# ABSTRACT
|
| 14 |
+
|
| 15 |
+
Molecular design and synthesis planning are two critical steps in the process of molecular discovery that we propose to formulate as a single shared task of conditional synthetic pathway generation. We report an amortized approach to generate synthetic pathways as a Markov decision process conditioned on a target molecular embedding. This approach allows us to conduct synthesis planning in a bottom-up manner and design synthesizable molecules by decoding from optimized conditional codes, demonstrating the potential to solve both problems of design and synthesis simultaneously. The approach leverages neural networks to probabilistically model the synthetic trees, one reaction step at a time, according to reactivity rules encoded in a discrete action space of reaction templates. We train these networks on hundreds of thousands of artificial pathways generated from a pool of searchable compounds and a list of expert-curated templates. We validate our method with (a) the recovery of molecules using conditional generation, (b) the identification of synthesizable structural analogs, and (c) the optimization of molecular structures given oracle functions relevant to drug discovery.
|
| 16 |
+
|
| 17 |
+
# 1 INTRODUCTION
|
| 18 |
+
|
| 19 |
+
Designing new functional materials, such as energy storage materials (Hachmann et al., 2011; Janet et al., 2020), therapeutic molecules (Zhavoronkov et al., 2019; Lyu et al., 2019), and environmentally friendly materials (Zimmerman et al., 2020; Yao et al., 2021), is key to many societal and technological challenges and is a central task of chemical science and engineering. However, traditional molecular design processes are not only expensive and time-consuming, but also rely heavily on chance and brute-force trial and error (Sanchez-Lengeling & Aspuru-Guzik, 2018). Thus, a systematic approach to molecular design that can leverage data and minimize the number of costly experiments is of great interest to the field and is a prerequisite for autonomous molecular discovery (Coley et al., 2020a;b).
|
| 20 |
+
|
| 21 |
+
The core of computer-aided molecular discovery is molecular design. The objective of the task is to identify novel molecules with desirable properties through de novo generation or to identify known molecules through virtual screening. There has been a growing interest in applying machine learning methods to tackle this task in recent years (Gómez-Bombarelli et al., 2018; Jin et al., 2018; You et al., 2018; Bradshaw et al., 2019; 2020; Jin et al., 2020; Fu et al., 2021), which has been the subject of many reviews (Elton et al., 2019; Schwalbe-Koda & Gómez-Bombarelli, 2020; Vanhaelen et al., 2020). Despite the large number of models developed, there are few examples that have proceeded to experimental validation or been used in a realistic discovery scenario (Zhavoronkov et al., 2019; Schneider & Clark, 2019). One major barrier to the deployment of these algorithms is that they lack considerations of synthesizability (Gao & Coley, 2020; Huang et al., 2021); Gao & Coley (2020) have demonstrated that when applied to goal-directed optimization tasks, de novo molecular design algorithms can propose a high proportion of molecules for which no synthetic plan can be found algorithmically.
|
| 22 |
+
|
| 23 |
+
Planning and executing a practical synthetic route for a hypothetical molecular structure is a bottleneck that hinders the experimental validation of molecular design algorithms. The goal of computer-assisted synthesis planning (CASP) is to identify a series of chemically plausible reaction steps beginning from available starting materials to synthesize a target chemical compound. Machine learning methods have been applied to improve CASP model performance (Segler et al., 2018; Coley et al., 2018; 2019b; Schwaller et al., 2020; Genheden et al., 2020), and experimental execution has validated significant advances in recent years (Klucznik et al., 2018; Coley et al., 2019c). However, most current algorithms require tens of seconds or minutes to plan a synthetic route for one target compound due to the combinatorial complexity of the tree search. This cost makes a post hoc filtering strategy impractical in molecular design workflows that decouple de novo design and synthesis planning (Gao & Coley, 2020). However, synthesizability-constrained generation has emerged as a promising alternative to this two-step pipeline (Section 2.1).
|
| 24 |
+
|
| 25 |
+
In this paper, we report a strategy to generate synthetic pathways as trees conditioned on a target molecular embedding as a means of simultaneously addressing the problems of design and synthesis. Proposed pathways are guaranteed to make use of purchasable starting materials and are required to follow the "rules of chemistry" as codified by expert-curated reaction templates, which can be made more or less conservative depending on the application. When applied to synthesis planning, we ask the model to generate synthetic trees conditioned on the target molecule. When applied to synthesizable molecular design, we optimize the fixed-length embedding vector using a numerical optimization algorithm; then, we decode the optimized embedding to obtain the corresponding synthetic tree whose root molecule is the output. The idea builds on the work of Bradshaw et al. (2019) and Bradshaw et al. (2020); however, these methods failed to recover multi-step synthetic paths for any target molecules and were thus only applied to the task of synthesizable analog recommendation. In contrast, the method presented here can successfully recover multi-step retrosynthetic pathways in an amortized manner, in addition to being used for synthesizable analog recommendation.
|
| 26 |
+
|
| 27 |
+
The main contributions of this paper can be summarized as:
|
| 28 |
+
|
| 29 |
+
- We formulate a Markov decision process to model the generation of synthetic trees, allowing the generation of multi-step and convergent (i.e., nonlinear) synthetic pathways.
|
| 30 |
+
- We propose a model that is capable of (1) rapid bottom-up synthesis planning and (2) constrained molecular optimization that can explore a chemical space defined by a discrete action space of reaction templates and purchasable starting materials.
|
| 31 |
+
- We show the first successful attempt at amortized multi-step synthesis planning of complex organic molecules, achieving relatively high reconstruction accuracy on test molecules.
|
| 32 |
+
- We demonstrate encouraging results on de novo molecular optimization with multiple objective functions relevant to bioactive molecule design and drug discovery.
|
| 33 |
+
|
| 34 |
+
# 2 RELATED WORK
|
| 35 |
+
|
| 36 |
+
# 2.1 SYNTHESIZABLE MOLECULAR DESIGN
|
| 37 |
+
|
| 38 |
+
While most molecular generative models focus on the generation of valid molecules with desired properties, there is growing interest in the generation of synthesizable molecules, as not all chemically valid molecules are synthetically accessible. MoleculeChef (Bradshaw et al., 2019) was one of the first neural models to cast the problem of molecular generation as the generation of one-step synthetic pathways, thus ensuring synthesizability, by selecting a bag of purchasable reactants and using a data-driven reaction predictor to enumerate possible product molecules. ChemBO (Korovina et al., 2020) extends constrained generation to the multi-step case, but is a stochastic algorithm that generates synthetic pathways iteratively using random selections of reactants as input to another data-driven reaction predictor. While MoleculeChef and ChemBO use neural models for reaction outcome prediction as the ground truth for chemical reactivity (Coley et al., 2019b; Schwaller et al., 2019), reaction templates provide an alternate means of defining allowable chemical steps, algorithmically (Coley et al., 2019a) or by hand-encoding domain expertise (Molga et al., 2019). PGFS (Gottipati et al., 2020) and REACTOR (Horwood & Noutahi, 2020) both use discrete reaction templates, formulate the generation of multi-step synthetic pathways as a Markov decision process, and optimize molecules with reinforcement learning. Both are limited to linear synthetic pathways, where intermediates can only react with purchasable compounds and no reaction can occur between two intermediates. Their inability to design convergent syntheses limits the chemical space accessible to the model. It is worth noting that there also exist previously reported methods for non-neural synthesizability-constrained molecular design, such as SYNOPSIS (Vinkers et al., 2003) and DOGS (Hartenfeller et al., 2012), which pre-date deep molecular generation.
|
| 41 |
+
|
| 42 |
+
Most recently, Bradshaw et al. (2020) introduced the DoG-AE/DoG-Gen model, which treats synthetic pathways as directed acyclic graphs (DAGs). DoG-Gen serializes the construction of the DAGs and uses a recurrent neural network for autoregressive generation. Dai Nguyen & Tsuda (2021) also employ an autoencoder (AE) framework, jointly trained with a junction tree variational autoencoder (JT-VAE) (Jin et al., 2018). However, none of the previous methods for synthesizable molecular generation have succeeded in achieving high reconstruction accuracy.
|
| 43 |
+
|
| 44 |
+
# 2.2 SYNTHESIS PLANNING
|
| 45 |
+
|
| 46 |
+
Algorithms and models for synthesis planning have been in development since the 1960s when retrosynthesis was first formalized (Corey & Wipke, 1969). Various data-driven approaches have been introduced in recent years (Segler et al., 2018; Coley et al., 2018; 2019b; Schwaller et al., 2020; Genheden et al., 2020), although expert methods with human-encoded "rules of chemistry" have arguably achieved greater success in practice (Klucznik et al., 2018; Mikulak-Klucznik et al., 2020). The primary distinction between these methods is how allowable single-step chemical transformations are defined to mimic physical reality as closely as possible; they can all make use of similar tree search algorithms. While these tools can be used to plan routes to target molecules and filter compounds from de novo generation for which no pathway is found, none of them can be directly used for molecular generation. Moreover, they all approach synthesis planning retrosynthetically, working recursively from the target molecule towards purchasable starting materials (i.e. in a top-down manner), whereas we propose a bottom-up approach that has the potential to be more computationally efficient by mitigating the need for a tree search.
|
| 47 |
+
|
| 48 |
+
# 2.3 COMBINING SYNTHESIZABLE DESIGN AND SYNTHESIS PLANNING
|
| 49 |
+
|
| 50 |
+
Our method can be used for synthesizable molecular design and synthesis planning. While our approach is most similar to Bradshaw et al. (2020)'s RetroDoG model, their model was only applied to structural analog generation, and could not demonstrate the successful recovery of target molecules. Bradshaw et al. (2019) showed some examples of successful recovery, but their formulation restricts the search to single-step synthetic pathways, which severely limits its practical utility for both tasks. In contrast, our model can successfully handle multi-step reactions.
|
| 51 |
+
|
| 52 |
+
# 3 METHOD
|
| 53 |
+
|
| 54 |
+
# 3.1 PROBLEM DEFINITION
|
| 55 |
+
|
| 56 |
+
We model synthetic pathways as tree structures called synthetic trees (Figure 6A in Appendix A). A valid synthetic tree has one root node (the final product molecule) linked to purchasable building blocks via feasible reactions according to a list of discrete reaction templates. A reaction template is a pattern defining a structural transformation on molecules that is intended to represent a valid chemical reaction, usually encoded as a SMARTS string (Figure 6B&C). We use a list of reaction templates to define feasible chemical reactions instead of a data-driven reaction predictor so that the practical utility of the model can be improved by refining or expanding this set without changing its architecture. Given a list of reaction templates, $\mathcal{R}$ , and a list of purchasable compounds, $\mathcal{C}$ , our goal is to generate a valid synthetic tree, $T$ , that produces a root molecule with a desired structure or function. The product molecule and intermediate molecules in the tree are not themselves generated by the model, but are implicitly defined by the application of reaction templates to reactant molecules. Compared to the generation of molecular graphs, the generation of synthetic trees is more difficult because of the additional constraints of enforcing chemical reaction rules and the commercial availability of starting materials.
|
| 57 |
+
|
| 58 |
+
Synthesis Planning This task is to infer the synthetic pathway to a given target molecule. We formulate this problem as generating a synthetic tree, $T$ , such that the product molecule it produces (molecule at the root node), $M_{\text{product}}$ , matches the desired target molecule, $M_{\text{target}}$ .
|
| 59 |
+
|
| 60 |
+

|
| 61 |
+
Figure 1: An illustration of the iterative generation procedure. Our model constructs the synthetic tree in a bottom-up manner, starting from the available building blocks and building up to progressively more complex molecules. Generation is conditioned on an embedding for a target molecule. If the target molecule is in the chemical space reachable by our template set and building blocks, the final root molecule should match or at least be similar to the input target molecule.
|
| 62 |
+
|
| 63 |
+
Synthesizable Molecular Design This task is to optimize a molecular structure with respect to an oracle function, while ensuring the synthetic accessibility of the molecules. We formulate this problem as optimizing the structure of a synthetic tree, $T$ , with respect to the desired properties of the product molecule it produces, $M_{\text{product}}$ .
|
| 64 |
+
|
| 65 |
+
# 3.2 SYNTHETIC TREE GENERATION AS A MARKOV DECISION PROCESS
|
| 66 |
+
|
| 67 |
+
We propose an amortized approach to tackle the probabilistic modeling of synthetic trees. In this approach, we model the construction of a synthetic tree as a Markov decision process (MDP), which requires that the state transition satisfies the Markov property: $p(S^{(t + 1)}|S^{(t)},\dots ,S^{(0)}) = p(S^{(t + 1)}|S^{(t)})$ . This property is naturally satisfied by synthetic trees: upon obtaining a specific compound (an intermediate in a synthetic route), subsequent reaction steps can be inferred entirely from the intermediate compound, and do not depend on the pathway used to get to said compound when conditioned on the target molecule. Below, we first introduce an MDP framework for synthetic tree generation, and then introduce the model to solve it. In our framework, we only allow uni- and bi-molecular reactions.
|
| 68 |
+
|
| 69 |
+
At a high level, we construct a synthetic tree one reaction step at a time in a bottom-up manner. Figure 1 illustrates a generation process for synthesis planning purposes. We enforce that the generation process happens in a reverse depth-first order, and that no more than two disconnected sub-trees are generated simultaneously.
|
| 70 |
+
|
| 71 |
+
State Space We define the state, $S^{(t)}$ , as the root molecule(s) of an intermediate synthetic tree, $T^{(t)}$ at step $t$ . Because we enforce that at most two sub-trees can occur simultaneously, there can be at most two root molecules. All root nodes are generated in a reverse depth-first fashion; additionally, we enforce that the synthetic tree always expands from the most recently added node ( $M_{\mathrm{most\_recent}}$ ), and that any merging always happens between two root nodes. The state embedding of a synthetic tree is thus computed by concatenating the embeddings for the two root molecules.
|
| 72 |
+
|
| 73 |
+
Action Space We decompose the action taken at each iteration into four components: (1) the action type, $a_{\mathrm{act}}$ , which samples from possible actions "Add", "Expand", "Merge", and "End"; (2) the first reactant, $a_{\mathrm{rt1}}$ , which samples from either $\mathcal{C}$ or $M_{\mathrm{most\_recent}}$ ; (3) the reaction template, $a_{\mathrm{rxn}}$ , which samples from $\mathcal{R}$ ; and (4) the second reactant, $a_{\mathrm{rt2}}$ , which samples from $\mathcal{C}$ .
|
| 74 |
+
|
| 75 |
+
(a) If $a_{\mathrm{act}} =$ "Add", one or two new reactant nodes will be added to $T^{(t)}$ , as well as a new node corresponding to their product given a specific reaction template. This is always the first action used in building a synthetic tree and leads to an additional sub-tree.
|
| 76 |
+
(b) If $a_{\mathrm{act}} =$ "Expand", the most recent molecule, $M_{\mathrm{most\_recent}}$ , is used as the first reactant, and a second reactant is selected if $a_{\mathrm{rxn}}$ is a bi-molecular reaction template. If $a_{\mathrm{rxn}}$ is a uni-molecular reaction template, only a new product node is added to $T^{(t)}$ . If $a_{\mathrm{rxn}}$ is a bi-molecular reaction template, both a new product node and a new reactant node are added.
|
| 77 |
+
(c) If $a_{\mathrm{act}} =$ "Merge", the two root nodes are used as the reactants in a bi-molecular reaction. In this case, a new product node is added to $T^{(t)}$ and the two sub-trees are joined to form one sub-tree.
|
| 78 |
+
(d) If $a_{\mathrm{act}} = \text{"End" }$ , $T = T^{(t)}$ and the synthetic tree is complete. The last product node is $M_{\mathrm{product}}$ .
|
| 79 |
+
|
| 80 |
+
State Transition Dynamics Each reaction represents one transition step. To ensure that each reaction step is chemically plausible and has a high likelihood of experimental success, we incorporate domain-specific reaction rules encoded as reaction templates in $\mathcal{R}$ . Once a valid action is selected, the transition is deterministic; infeasible actions that do not follow a known template are rejected. Importantly, the structure generated by template application is explicitly incorporated into the new state, whereas the RNN model in Bradshaw et al. (2020) had to implicitly learn the dynamics of the environment (i.e., the outcome of the reaction predictor).
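To make the deterministic, template-driven transition concrete, the following is a minimal sketch using RDKit's SMARTS-based reaction machinery. The amide-coupling template and the example reactants are illustrative assumptions for this sketch; they are not taken from the actual template set $\mathcal{R}$.

```python
# Minimal sketch of one template-based transition (RDKit assumed).
from rdkit import Chem
from rdkit.Chem import AllChem

# Hypothetical bi-molecular template: carboxylic acid + primary amine -> amide.
amide_coupling = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])[OH].[N;H2:3]>>[C:1](=[O:2])[N:3]"
)

def apply_template(rxn, reactant_smiles):
    """Return the product SMILES, or None if the template does not apply (action rejected)."""
    reactants = tuple(Chem.MolFromSmiles(s) for s in reactant_smiles)
    outcomes = rxn.RunReactants(reactants)
    if not outcomes:
        return None  # infeasible action under the chosen template
    product = outcomes[0][0]
    Chem.SanitizeMol(product)
    return Chem.MolToSmiles(product)  # becomes the new product node added to the tree

print(apply_template(amide_coupling, ["CC(=O)O", "NCc1ccccc1"]))  # CC(=O)NCc1ccccc1
```

Because the product structure is computed directly by template application, the new state is fully determined once a feasible action is chosen, which is the deterministic transition described above.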
|
| 81 |
+
|
| 82 |
+
Reward For synthesis planning, the reward is the similarity of the product to the target molecule, with a similarity of 1.0 being the highest reward and indicating a perfect match. For molecular design, the reward is determined by how well the product properties match the desired criteria.
|
| 83 |
+
|
| 84 |
+
# 3.3 CONDITIONAL GENERATION FOR SYNTHESIS PLANNING
|
| 85 |
+
|
| 86 |
+
We model synthesis planning as a conditional synthetic tree generation problem. To solve this MDP, we train a model to predict the action, $a^{(t)}$ , based on the state embedding, $z_{\mathrm{state}}^{(t)}$ , at step $t$ , conditioned on $M_{\mathrm{target}}$ . Concretely, at each step, our model, $f$ , estimates $a^{(t)} = (a_{\mathrm{act}}^{(t)}, a_{\mathrm{rt1}}^{(t)}, a_{\mathrm{rxn}}^{(t)}, a_{\mathrm{rt2}}^{(t)}) \sim p(a^{(t)}|S^{(t)}, M_{\mathrm{target}})$ .
|
| 87 |
+
|
| 88 |
+
As summarized in Figure 2, our model consists of four modules: (1) an Action Type selection function, $f_{\mathrm{act}}$ , that classifies action types among the four possible actions ("Add", "Expand", "Merge", and "End"); (2) a First Reactant selection function, $f_{\mathrm{rt1}}$ , that predicts an embedding for the first reactant. A candidate molecule is identified for the first reactant through a k-nearest neighbors (k-NN) search from the potential building blocks, $\mathcal{C}$ (Cover & Hart, 1967). We use the predicted embedding as a query to pick the nearest neighbor among the building blocks; (3) a Reaction selection function, $f_{\mathrm{rxn}}$ , whose output is a probability distribution over available reaction templates, from which inapplicable reactions are masked (based on reactant 1) and a suitable template is then sampled using a greedy search; (4) a Second Reactant selection function, $f_{\mathrm{rt2}}$ , that identifies the second reactant if the sampled template is bi-molecular. The model predicts an embedding for the second reactant, and a candidate is then sampled via a k-NN search from the masked building blocks, $\mathcal{C}'$ .
|
| 89 |
+
|
| 90 |
+
Formally, these four modules predict the probability distributions of actions within one reaction step:
|
| 91 |
+
|
| 92 |
+
$$
a_{\mathrm{act}}^{(t)} \sim f_{\mathrm{act}}(S^{(t)}, M_{\mathrm{target}}) = \sigma\left(\mathrm{MLP}_{\mathrm{act}}\left(z_{\mathrm{state}}^{(t)} \oplus z_{\mathrm{target}}\right)\right)
$$

$$
a_{\mathrm{rt1}}^{(t)} \sim f_{\mathrm{rt1}}(S^{(t)}, M_{\mathrm{target}}) = \mathrm{k\text{-}NN}_{\mathcal{C}}\left(\mathrm{MLP}_{\mathrm{rt1}}\left(z_{\mathrm{state}}^{(t)} \oplus z_{\mathrm{target}}\right)\right) \tag{1}
$$

$$
a_{\mathrm{rxn}}^{(t)} \sim f_{\mathrm{rxn}}(S^{(t)}, a_{\mathrm{rt1}}^{(t)}, M_{\mathrm{target}}) = \sigma\left(\mathrm{MLP}_{\mathrm{rxn}}\left(z_{\mathrm{state}}^{(t)} \oplus z_{\mathrm{target}} \oplus z_{\mathrm{rt1}}^{(t)}\right)\right)
$$

$$
a_{\mathrm{rt2}}^{(t)} \sim f_{\mathrm{rt2}}(S^{(t)}, a_{\mathrm{rt1}}^{(t)}, a_{\mathrm{rxn}}^{(t)}, M_{\mathrm{target}}) = \mathrm{k\text{-}NN}_{\mathcal{C}'}\left(\mathrm{MLP}_{\mathrm{rt2}}\left(z_{\mathrm{state}}^{(t)} \oplus z_{\mathrm{target}} \oplus z_{\mathrm{rt1}}^{(t)} \oplus z_{\mathrm{rxn}}^{(t)}\right)\right)
$$
|
| 107 |
+
|
| 108 |
+
where $\oplus$ denotes concatenation, $\mathrm{MLP}_{*}$ denotes a multilayer perceptron (MLP), $z_{*}$ denotes the embedding of the corresponding entity, $\mathcal{C}'$ denotes a subset of $\mathcal{C}$ that masks out reactants that do not match the selected template $a_{\mathrm{rxn}}^{(t)}$ , and $\mathrm{k - NN}_{\mathcal{X}}$ is a k-NN from set $\mathcal{X}$ . The k-NN search is based on the cosine similarity between the query vector and the embeddings of all molecules in $\mathcal{X}$ . Whereas all molecular embeddings ( $z_{\mathrm{target}}$ , $z_{\mathrm{rt1}}$ , and $z_{\mathrm{rt2}}$ ) are molecular fingerprints (see Representations in Section 4.1), $z_{\mathrm{rxn}}^{(t)}$ is a one-hot encoding of $a_{\mathrm{rxn}}^{(t)}$ and $z_{\mathrm{state}}^{(t)}$ is a concatenation of molecular fingerprints for the root molecules in $T^{(t)}$ . The $\sigma$ is a softmax masking known invalid actions at each step, such as inapplicable action types (based on the topology of the intermediate tree $T^{(t)}$ ) and reaction templates (based on the requirement for a subgraph match). Each MLP is trained as a separate supervised learning problem using a subset of information from the known synthetic routes. For further details on the model and algorithm, see Appendices D & E.
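As a rough sketch of how these four modules could be wired together for a single decoding step (PyTorch assumed), the layer widths, hidden size, and variable names below are illustrative assumptions rather than the exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D_FP, D_KNN, N_RXN, N_ACT = 4096, 256, 91, 4  # fingerprint sizes, template count, action types

def mlp(d_in, d_out, hidden=1000):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(), nn.Linear(hidden, d_out))

# z_state is the concatenation of (up to) two root-molecule fingerprints.
f_act = mlp(2 * D_FP + D_FP, N_ACT)                  # z_state + z_target -> action-type logits
f_rt1 = mlp(2 * D_FP + D_FP, D_KNN)                  # -> query embedding for first-reactant k-NN
f_rxn = mlp(2 * D_FP + D_FP + D_FP, N_RXN)           # ... + z_rt1 -> template logits (masked)
f_rt2 = mlp(2 * D_FP + D_FP + D_FP + N_RXN, D_KNN)   # ... + one-hot(a_rxn) -> second-reactant query

def knn_retrieve(query, building_block_embs, k=1):
    # Cosine-similarity retrieval over the (possibly masked) building-block set C or C'.
    sims = F.cosine_similarity(query.unsqueeze(0), building_block_embs, dim=-1)
    return sims.topk(k).indices
```

Retrieving reactants by nearest-neighbor lookup in fingerprint space keeps the discrete choice over roughly 150k building blocks tractable while letting the networks operate in a continuous embedding space.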
|
| 109 |
+
|
| 110 |
+
# 3.4 GENETIC ALGORITHM FOR MOLECULAR OPTIMIZATION
|
| 111 |
+
|
| 112 |
+
We approach the problem of synthesizable molecular design by optimizing the molecular embedding, $z_{\mathrm{target}}$ , on which tree generation is conditioned, with respect to the desired properties of $M_{\mathrm{product}}$ . We adopt a genetic algorithm (GA) to perform the numerical optimization on $z_{\mathrm{target}}$ . This approach is procedurally simpler than using reinforcement learning to solve the MDP and biases generation toward high-performing molecules, enabled by our conditional generation model. The mating pool is defined as a list of molecular embedding vectors and the fitness function is the desired properties of the produced product molecules. Within each generation, an offspring pool is constructed by crossover of two vectors randomly sampled from the mating pool as parents. A crossover is defined as inheriting roughly half of the bits from one parent and the remaining bits from the other. Mutation can happen to each offspring vector with a small probability, and we decode the offspring into synthetic trees to evaluate the fitness function. The top-performing vectors are selected to form the mating pool for the next generation, and we repeat this process until the stopping criteria are met. See Figure 7 for an illustration.


Figure 2: Overview of our model. Within each step, at most two root molecules (the most recent and one other) and the target molecule as a conditional code fully describe the state. The networks take the embedding of the state and predict the action type, first reactant, reaction template, and second reactant, successively. These predictions are used to update the synthetic tree by one reaction step.
|
| 118 |
+
|
| 119 |
+
# 4 EXPERIMENTS
|
| 120 |
+
|
| 121 |
+
# 4.1 EXPERIMENT SETUP
|
| 122 |
+
|
| 123 |
+
Reaction Templates We use a set of reaction templates based on two publicly available template sets from Hartenfeller et al. (2011) and Button et al. (2019). We combine the two sets, removing duplicate and rare reactions, and obtain a final set of 91 reaction templates. This set contains 13 uni-molecular and 78 bi-molecular reactions. 63 are mainly used for skeleton formation, 23 are used for peripheral modifications, and the remaining 5 can be used for either. Within the skeleton formation reactions, we include 45 ring formation reactions, comprising 37 heterocycle formations and 8 carbocycle formations.
|
| 124 |
+
|
| 125 |
+
Purchasable Building Blocks The set of purchased compounds comprises 147,505 molecules from Enamine Building Blocks (US stock; accessed on May 12, 2021) that match at least one reaction template in our set.
|
| 126 |
+
|
| 127 |
+
Dataset Preparation To prepare a dataset of synthetic pathways obeying these constraints, we applied a random policy to the MDP described in Section 3.2. After filtering by the QED of the product molecules, we obtain 208,644 synthetic trees for training, 69,548 trees each for validation and testing after a random split. We refer to Appendix E for further detail.
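A hedged sketch of the drug-likeness filter mentioned above is shown below; the QED threshold is an assumption made for illustration, since the cutoff value is not stated here.

```python
from rdkit import Chem
from rdkit.Chem import QED

def passes_qed_filter(product_smiles: str, threshold: float = 0.5) -> bool:
    """Keep a randomly generated synthetic tree only if its root molecule is drug-like enough."""
    mol = Chem.MolFromSmiles(product_smiles)
    return mol is not None and QED.qed(mol) >= threshold
```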
|
| 128 |
+
|
| 129 |
+
Representations We use Morgan circular molecular fingerprints of length 4096 and radius 2 to represent molecules as inputs to the MLPs, and Morgan fingerprints of length 256 and radius 2 as inputs to the k-NN module. Additional experiments with other molecular representations showed worse empirical results; further analysis is provided in Appendix I.
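For reference, the two fingerprint representations described above can be computed with RDKit roughly as follows (a sketch, not the project's exact featurization code; the example SMILES is arbitrary):

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def morgan_fp(smiles: str, n_bits: int, radius: int = 2) -> np.ndarray:
    mol = Chem.MolFromSmiles(smiles)
    bv = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.float32)
    DataStructs.ConvertToNumpyArray(bv, arr)
    return arr

z_mlp_input = morgan_fp("CCOC(=O)c1ccc(N)cc1", 4096)  # input representation for the MLP modules
z_knn_query = morgan_fp("CCOC(=O)c1ccc(N)cc1", 256)   # embedding used by the k-NN module
```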
|
| 130 |
+
|
| 131 |
+
Genetic Algorithm The GA operates on Morgan fingerprints of 4096 bits and radius 2. The number of bits to inherit from one parent is sampled from $\mathcal{N}(2048, 410)$ . Mutation is defined as flipping 24 bits in the fingerprint and occurs with probability 0.5. The population size is 128 and the offspring size is 512. The stopping criterion is met when either (a) the model reaches 200 generations, or (b) the increase in the population mean value is $< 0.01$ across 10 generations, indicating some degree of convergence.
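The genetic operators described in Section 3.4, instantiated with the hyperparameters above, could be sketched as follows (NumPy assumed; this is an illustration under the stated settings, not the released implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
N_BITS = 4096

def crossover(parent_a: np.ndarray, parent_b: np.ndarray) -> np.ndarray:
    # Inherit roughly half of the bits from one parent (count ~ N(2048, 410)), the rest from the other.
    n_from_a = int(np.clip(rng.normal(2048, 410), 0, N_BITS))
    from_a = np.zeros(N_BITS, dtype=bool)
    from_a[rng.choice(N_BITS, size=n_from_a, replace=False)] = True
    return np.where(from_a, parent_a, parent_b)

def mutate(fp: np.ndarray, n_flips: int = 24, p_mutate: float = 0.5) -> np.ndarray:
    child = fp.copy()
    if rng.random() < p_mutate:
        idx = rng.choice(N_BITS, size=n_flips, replace=False)
        child[idx] = 1 - child[idx]  # flip the selected bits
    return child
```

Each offspring fingerprint is then decoded by the conditional tree generator and the resulting product molecule is scored by the oracle, as described in Section 3.4.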
|
| 132 |
+
|
| 133 |
+
Optimization Oracle To validate our model, we select several common oracle functions relevant to bioactivity and drug discovery, including a docking oracle. The heuristic oracle functions include: quantitative estimate of drug-likeness (QED); octanol-water partition coefficient (LogP); and JNK3, GSK3 $\beta$ , and DRD2 surrogate models, which estimate the response against c-Jun N-terminal kinase-3, glycogen synthase kinase $3\beta$ , and dopamine receptor type 2, respectively. For the docking simulations, we use AutoDock Vina (Trott & Olson, 2010) to dock against the human dopamine receptor $\mathrm{D}_3$ (DRD3, PDB ID: 3PBL), and the main protease, $\mathbf{M}^{\mathrm{pro}}$ , of the SARS-CoV-2 virus (PDB ID: 7L11). We access all oracle functions through the Therapeutics Data Commons (TDC) interface (Huang et al., 2021).
|
| 134 |
+
|
| 135 |
+

|
| 136 |
+
|
| 137 |
+

|
| 138 |
+
|
| 139 |
+

|
| 140 |
+
Figure 3: Examples from ChEMBL used as the target molecule for conditioned generation. (A) A successfully recovered molecule where the low similarity to any training examples indicates the generalizability of the model. (B) A molecule that is not recovered as an example of synthesizable analog recommendation (i.e., a similar product). (C) A molecule that is not recovered but may inspire route development to the true target. Matched substructures are highlighted.
|
| 141 |
+
|
| 142 |
+
# 4.2 SYNTHESIS PLANNING RESULTS
|
| 143 |
+
|
| 144 |
+
We evaluate the model's ability to reconstruct target molecules that are reachable and unreachable under our specific choice of reaction templates and available building blocks. We use the testing data as "reachable" targets (69,548), and a random sample from ChEMBL as predominantly "unreachable" molecules (20,000). None of the products in the test set are seen in the training and validation stages. We use $k = 3$ in the first reactant k-NN search and expand the trees in a greedy manner ( $k = 1$ ) for each choice. From the obtained product molecules, we choose the one that is the most similar to the target molecule as the output as reflected by the Tanimoto similarity using Morgan fingerprints.
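A minimal sketch of this selection step, assuming RDKit Morgan fingerprints as described in Section 4.1; a similarity of 1.0 is counted as a successful recovery:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def best_product(target_smiles, candidate_smiles, radius=2, n_bits=4096):
    """Return the candidate product most similar to the target, plus its Tanimoto similarity."""
    def fp(smiles):
        return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), radius, nBits=n_bits)
    target_fp = fp(target_smiles)
    scored = [(DataStructs.TanimotoSimilarity(target_fp, fp(s)), s) for s in candidate_smiles]
    similarity, smiles = max(scored)
    return smiles, similarity  # similarity == 1.0 corresponds to exact recovery of the target
```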
|
| 145 |
+
|
| 146 |
+
Table 1: Results of synthetic tree construction for "reachable" and "unreachable" target molecules.
|
| 147 |
+
|
| 148 |
+
<table><tr><td>Dataset</td><td>N</td><td>Recovery Rate↑</td><td>Average Similarity↑</td><td>KL Divergence↑</td><td>FC Distance↓</td></tr><tr><td>Reachable (test set)</td><td>69,548</td><td>51.0%</td><td>0.759</td><td>0.995</td><td>0.067</td></tr><tr><td>Unreachable (ChEMBL)</td><td>20,000</td><td>4.5%</td><td>0.423</td><td>0.966</td><td>1.944</td></tr></table>
|
| 149 |
+
|
| 150 |
+
Results are summarized in Table 1. The recovery rate measures the fraction of molecules that our model succeeds in reconstructing. Our model can reconstruct $51\%$ of reachable molecules from the held-out test set. As opposed to the typical top-down approach to synthesis planning (i.e., retrosynthesis) that requires tens of seconds or even minutes, our bottom-up approach takes only $\sim 1$ second to greedily construct a tree with $k = 1$ . Our model only recovers $4.5\%$ of ChEMBL molecules, but we note that a different choice of templates and starting materials can lead to a much higher ChEMBL recovery rate (Gao & Coley, 2020) without changing the model architecture. Figure 3A shows an example where our model successfully reconstructs a molecule dissimilar to all molecules in the training set. We also assessed the average similarity between target and product molecule pairs, KL divergence (Brown et al., 2019), and Fréchet ChemNet (FC) distance (Preuer et al., 2018) between the targets and recovered products. See Appendix F for additional synthesis planning results.


Figure 4: Correlation between properties of $M_{\mathrm{target}}$ and $M_{\mathrm{product}}$ molecules.
|
| 170 |
+
|
| 171 |
+
Among the four networks, the most significant error comes from the first reactant network, $f_{\mathrm{rt1}}$ , with a validation accuracy of only $30.8\%$ after k-NN retrieval. To compare, $f_{\mathrm{rt2}}$ reaches a validation accuracy of $70.0\%$ without masking out invalid actions. During sampling, we mask out candidate second reactants that are incompatible with the selected reaction template to achieve a much higher accuracy. The action and reaction networks reach $>99\%$ and $85.8\%$ validation accuracy, respectively.
|
| 172 |
+
|
| 173 |
+
# 4.3 SYNTHESIZABLE ANALOG RECOMMENDATION RESULTS
|
| 174 |
+
|
| 175 |
+
We observed that in the cases of unrecoverable molecules, the final products could serve as synthesizable structural analogs under the constraints of $\mathcal{R}$ and $\mathcal{C}$ (see Figure 3B for an example). The metrics in Table 1 show how the product molecules in the unrecovered cases are still structurally similar to the input molecules. We illustrate the correlation between the properties of target and product molecules in Figure 4. We investigated the SA Score (Ertl & Schuffenhauer, 2009b), QED, CLogP, and molecular weight, and observed that most product properties have a positive correlation with the corresponding input properties. The least successful case is QED due to its high sensitivity to structural changes as quantified by the structure-activity landscape index (SALI) (Guha, 2012) (Table 6 in Appendix G). Overall, our model can suggest reasonable synthesizable analogs for target molecules, especially when the desired property is highly correlated with molecular structure.
|
| 176 |
+
|
| 177 |
+
In other cases where the target product is not recovered by the generated pathway, the output synthetic tree may still provide inspiration for the synthesis of target molecules. Figure 3C highlights an example where the failure of reconstruction is due to the binary Morgan fingerprint's inability to distinguish repeating units. Our model successfully constructed one side of this symmetric molecule. In this case, a synthetic chemist would likely recognize that replacing 1-isopropylpiperazine (the reactant added in the second step) with piperazine may lead to synthesis of the target molecule.
|
| 178 |
+
|
| 179 |
+
# 4.4 SYNTHESIZABLE MOLECULAR OPTIMIZATION RESULTS
|
| 180 |
+
|
| 181 |
+
To assess the optimization ability of our algorithm, we first consider common heuristic oracle functions relevant to bioactivity and drug discovery (Tables 2 and 7). Note that the baseline methods we compare to do not constrain synthesizability, which means they explore a larger chemical space and are able to obtain molecules that score higher but are not synthesizable. The results show that our model consistently outperforms GCPN (You et al., 2018) and MolDQN (Zhou et al., 2019), and is comparable to $\mathrm{GA + D}$ (Nigam et al., 2019) and MARS (Xie et al., 2021) across different tasks. We highlight the case of $\mathrm{GSK3}\beta$ inhibitor optimization in Figure 5. In this task, our model proposes a molecule scored marginally worse than DST (Fu et al., 2021) and MARS, but much simpler in structure. Indeed, this molecule can be accessed within one reaction step from purchasable compounds in $\mathcal{C}$ through a simple Suzuki reaction (see Figure 12 for the synthetic pathway). This makes our model's recommendation more immediately actionable, i.e., ready for experimental validation. As an additional quasi-realistic application to structure-based drug design, we optimized the binding affinity to DRD3 and $\mathbf{M}^{\mathrm{pro}}$ of SARS-CoV-2 as example target proteins and successfully generated multiple molecules with improved docking scores relative to a known inhibitor (see Figure 5). We used the GuacaMol filter (Brown et al., 2019) and SA_Score (Ertl & Schuffenhauer, 2009a) to quantitatively evaluate the top-100 generated molecules against DRD3 and compared them to the TDC generative benchmark (Huang et al., 2021). Our method is the only one that achieved a high passing rate and a low SA_Score, indicating better quality of the generated structures (see Table 8). Additional molecular optimization results are available in Appendix H.


Figure 5: Results of synthesizable molecular design. (A) A comparison between the results of our model, DST, MARS, and $\mathrm{GA + D}$ on GSK3 $\beta$ bioactivity optimization (top-1 scores: ours 0.94, DST 0.97, MARS 0.95, GA+D 0.79). Our model proposes a highly scored molecule with a much simpler structure than the other baselines. (B) The results of docking score optimization for $\mathbf{M}^{\mathrm{pro}}$ of SARS-CoV-2 (known inhibitor: Vina score $-8.96$ kcal/mol; top three molecules from our model: $-10.50$ , $-9.31$ , and $-9.25$ kcal/mol). Our model successfully proposes multiple molecules with stronger predicted binding affinity than a known inhibitor.
|
| 215 |
+
|
| 216 |
+
Table 2: Highest scores of generated molecules for various de novo molecular design tasks. Note that all baselines other than our method do not place constraints on synthetic accessibility.
|
| 217 |
+
|
| 218 |
+
<table><tr><td rowspan="2">Method</td><td colspan="3">JNK3</td><td colspan="3">GSK3β</td><td colspan="3">QED</td></tr><tr><td>1st</td><td>2nd</td><td>3rd</td><td>1st</td><td>2nd</td><td>3rd</td><td>1st</td><td>2nd</td><td>3rd</td></tr><tr><td>GCPN</td><td>0.57</td><td>0.56</td><td>0.54</td><td>0.57</td><td>0.56</td><td>0.56</td><td>0.948</td><td>0.947</td><td>0.946</td></tr><tr><td>MolDQN</td><td>0.64</td><td>0.63</td><td>0.63</td><td>0.54</td><td>0.53</td><td>0.53</td><td>0.948</td><td>0.948</td><td>0.948</td></tr><tr><td>GA+D</td><td>0.81</td><td>0.80</td><td>0.80</td><td>0.79</td><td>0.79</td><td>0.78</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MARS</td><td>0.92</td><td>0.91</td><td>0.90</td><td>0.95</td><td>0.93</td><td>0.92</td><td>0.948</td><td>0.948</td><td>0.948</td></tr><tr><td>DST</td><td>0.97</td><td>0.97</td><td>0.97</td><td>0.95</td><td>0.95</td><td>0.95</td><td>0.947</td><td>0.946</td><td>0.946</td></tr><tr><td>Our Method</td><td>0.80</td><td>0.78</td><td>0.77</td><td>0.94</td><td>0.93</td><td>0.92</td><td>0.948</td><td>0.948</td><td>0.948</td></tr></table>
|
| 219 |
+
|
| 220 |
+
# 5 CONCLUSION
|
| 221 |
+
|
| 222 |
+
In this work, we have introduced an amortized approach to conditional synthetic tree generation which can be used for synthesis planning, synthesizable analog recommendation, and molecular design and optimization. Our model bridges molecular generation and synthesis planning by coupling them together into one rapid step, eliminating the need for two-stage pipelines of generation and filtering. We have demonstrated promising results for a variety of molecular optimization tasks with relevance to drug discovery, illustrating how this approach can be used to effectively explore synthesizable chemical space in pursuit of new functional molecules. Additional outlook is in Appendix C.
|
| 223 |
+
|
| 224 |
+
# REPRODUCIBILITY STATEMENT
|
| 225 |
+
|
| 226 |
+
The code repository is given in the supplementary material, including instructions in a README file, all code used for data preprocessing, and all code used to train and evaluate the model. All the data we use are either publicly available or can be calculated by open-sourced software. Section 4.1 and Appendix E describe the experimental setup, implementation details, datasets used, and hardware configuration.
|
| 227 |
+
|
| 228 |
+
# ACKNOWLEDGMENTS
|
| 229 |
+
|
| 230 |
+
This research was supported by the Office of Naval Research under grant number N00014-21-1-2195. RM received additional funding support from the Machine Learning for Pharmaceutical Discovery and Synthesis consortium. We thank John Bradshaw for helpful discussions. We also thank Tianfan Fu, Samuel Goldman and Itai Levin for commenting on the manuscript.
|
| 231 |
+
|
| 232 |
+
# CODE AND DATA AVAILABILITY
|
| 233 |
+
|
| 234 |
+
All code and releasable data can be found at https://github.com/wenhao-gao/SynNet. Additional results can be found in the supporting information.
|
| 235 |
+
|
| 236 |
+
# REFERENCES
|
| 237 |
+
|
| 238 |
+
John Bradshaw, Brooks Paige, Matt J Kusner, Marwin HS Segler, and José Miguel Hernández-Lobato. A model to search for synthesizable molecules. arXiv preprint arXiv:1906.05221, 2019.
|
| 239 |
+
John Bradshaw, Brooks Paige, Matt J Kusner, Marwin HS Segler, and José Miguel Hernández-Lobato. Barking up the right tree: an approach to search over molecule synthesis dags. arXiv preprint arXiv:2012.11522, 2020.
|
| 240 |
+
Nathan Brown, Marco Fiscato, Marwin HS Segler, and Alain C Vaucher. Guacamol: benchmarking models for de novo molecular design. Journal of Chemical Information and Modeling, 59(3): 1096-1108, 2019.
|
| 241 |
+
Alexander Button, Daniel Merk, Jan A Hiss, and Gisbert Schneider. Automated de novo molecular design by hybrid machine intelligence and rule-driven chemical synthesis. Nature Machine Intelligence, 1(7):307-315, 2019.
|
| 242 |
+
Connor W Coley, William H Green, and Klavs F Jensen. Machine learning in computer-aided synthesis planning. Accounts of Chemical Research, 51(5):1281-1289, 2018.
|
| 243 |
+
Connor W Coley, William H Green, and Klavs F Jensen. RDChiral: An rdkit wrapper for handling stereochemistry in retrosynthetic template extraction and application. Journal of Chemical Information and Modeling, 59(6):2529-2537, 2019a.
|
| 244 |
+
Connor W Coley, Wengong Jin, Luke Rogers, Timothy F Jamison, Tommi S Jaakkola, William H Green, Regina Barzilay, and Klavs F Jensen. A graph-convolutional neural network model for the prediction of chemical reactivity. Chemical Science, 10(2):370-377, 2019b.
|
| 245 |
+
Connor W Coley, Dale A Thomas, Justin AM Lummiss, Jonathan N Jaworski, Christopher P Breen, Victor Schultz, Travis Hart, Joshua S Fishman, Luke Rogers, Hanyu Gao, et al. A robotic platform for flow synthesis of organic compounds informed by ai planning. Science, 365(6453), 2019c.
|
| 246 |
+
Connor W Coley, Natalie S Eyke, and Klavs F Jensen. Autonomous discovery in the chemical sciences part i: Progress. Angewandte Chemie International Edition, 59(51):22858-22893, 2020a.
|
| 247 |
+
Connor W Coley, Natalie S Eyke, and Klavs F Jensen. Autonomous discovery in the chemical sciences part ii: Outlook. Angewandte Chemie International Edition, 59(52):23414-23436, 2020b.
|
| 248 |
+
Elias James Corey and W Todd Wipke. Computer-assisted design of complex organic syntheses. Science, 166(3902):178-192, 1969.
|
| 249 |
+
|
| 250 |
+
T. Cover and P. Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1):21-27, 1967. doi: 10.1109/TIT.1967.1053964.
|
| 251 |
+
Hai Dai Nguyen and Koji Tsuda. A generative model for molecule generation based on chemical reaction trees. arXiv preprint arXiv:2106.03394, 2021.
|
| 252 |
+
Daniel C Elton, Zois Boukouvalas, Mark D Fuge, and Peter W Chung. Deep learning for molecular design—a review of the state of the art. Molecular Systems Design & Engineering, 4(4):828-849, 2019.
|
| 253 |
+
Peter Ertl and Ansgar Schuffenhauer. Estimation of synthetic accessibility score of drug-like molecules based on molecular complexity and fragment contributions. Journal of cheminformatics, 1(1):1-11, 2009a.
|
| 254 |
+
Peter Ertl and Ansgar Schuffenhauer. Estimation of synthetic accessibility score of drug-like molecules based on molecular complexity and fragment contributions. Journal of Cheminformatics, 1(1):1-11, 2009b.
|
| 255 |
+
Tianfan Fu, Wenhao Gao, Cao Xiao, Jacob Yasonik, Connor W Coley, and Jimeng Sun. Differentiable scaffolding tree for molecular optimization. arXiv preprint arXiv:2109.10469, 2021.
|
| 256 |
+
Wenhao Gao and Connor W Coley. The synthesizeability of molecules proposed by generative models. Journal of Chemical Information and Modeling, 60(12):5714-5723, 2020.
|
| 257 |
+
Samuel Genheden, Amol Thakkar, Veronika Chadimova, Jean-Louis Reymond, Ola Engkvist, and Esben Bjerrum. Aizynthfinder: a fast, robust and flexible open-source software for retrosynthetic planning. Journal of Cheminformatics, 12(1):1-9, 2020.
|
| 258 |
+
Rafael Gómez-Bombarelli, Jennifer N Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, Ryan P Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Science, 4(2):268-276, 2018.
|
| 259 |
+
Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Simon Blackburn, Karam Thomas, Connor Coley, Jian Tang, et al. Learning to navigate the synthetically accessible chemical space using reinforcement learning. In International Conference on Machine Learning, pp. 3668-3679. PMLR, 2020.
|
| 260 |
+
Rajarshi Guha. Exploring structure-activity data using the landscape paradigm. Wiley Interdisciplinary Reviews: Computational Molecular Science, 2(6):829-841, 2012.
|
| 261 |
+
Johannes Hachmann, Roberto Olivares-Amaya, Sule Atahan-Evrenk, Carlos Amador-Bedolla, Roel S Sánchez-Carrera, Aryeh Gold-Parker, Leslie Vogt, Anna M Brockway, and Alán Aspuru-Guzik. The Harvard clean energy project: large-scale computational screening and design of organic photovoltaics on the world community grid. The Journal of Physical Chemistry Letters, 2(17): 2241-2251, 2011.
|
| 262 |
+
Markus Hartenfeller, Martin Eberle, Peter Meier, Cristina Nieto-Oberhuber, Karl-Heinz Altmann, Gisbert Schneider, Edgar Jacoby, and Steffen Renner. A collection of robust organic synthesis reactions for in silico molecule design. Journal of Chemical Information and Modeling, 51(12): 3093-3098, 2011.
|
| 263 |
+
Markus Hartenfeller, Heiko Zettl, Miriam Walter, Matthias Rupp, Felix Reisen, Ewgenij Proschak, Sascha Weggen, Holger Stark, and Gisbert Schneider. Dogs: reaction-driven de novo design of bioactive compounds. PLoS computational biology, 8(2):e1002380, 2012.
|
| 264 |
+
Julien Horwood and Emmanuel Noutahi. Molecular design in synthetically accessible chemical space via deep reinforcement learning. ACS Omega, 5(51):32984-32994, 2020.
|
| 265 |
+
Kexin Huang, Tianfan Fu, Wenhao Gao, Yue Zhao, Yusuf Roohani, Jure Leskovec, Connor W Coley, Cao Xiao, Jimeng Sun, and Marinka Zitnik. Therapeutics data commons: machine learning datasets and tasks for therapeutics. arXiv preprint arXiv:2102.09548, 2021.
|
| 266 |
+
|
| 267 |
+
Jon Paul Janet, Sahasrajit Ramesh, Chenru Duan, and Heather J Kulik. Accurate multiobjective design in a space of millions of transition metal complexes with neural-network-driven efficient global optimization. ACS Central Science, 6(4):513-524, 2020.
|
| 268 |
+
Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation. In International Conference on Machine Learning, pp. 2323-2332. PMLR, 2018.
|
| 269 |
+
Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Hierarchical generation of molecular graphs using structural motifs. In International Conference on Machine Learning, pp. 4839-4848. PMLR, 2020.
|
| 270 |
+
Tomasz Klucznik, Barbara Mikulak-Klucznik, Michael P McCormack, Heather Lima, Sara Szymkuć, Manishabrata Bhowmick, Karol Molga, Yubai Zhou, Lindsey Rickershauser, Ewa P Gajewska, et al. Efficient syntheses of diverse, medicinally relevant targets planned by computer and executed in the laboratory. Chem, 4(3):522-532, 2018.
|
| 271 |
+
Ksenia Korovina, Sailun Xu, Kirthevasan Kandasamy, Willie Neiswanger, Barnabas Poczos, Jeff Schneider, and Eric Xing. ChemBO: Bayesian optimization of small organic molecules with synthesizable recommendations. In International Conference on Artificial Intelligence and Statistics, pp. 3393-3403. PMLR, 2020.
|
| 272 |
+
Greg Landrum. RDKit: Open-source cheminformatics. URL http://www.rdkit.org.
|
| 273 |
+
Jiankun Lyu, Sheng Wang, Trent E Balius, Isha Singh, Anat Levit, Yurii S Moroz, Matthew J O'Meara, Tao Che, Enkhjargal Algaa, Kateryna Tolmachova, et al. Ultra-large library docking for discovering new chemotypes. Nature, 566(7743):224-229, 2019.
|
| 274 |
+
Barbara Mikulak-Klucznik, Patrycja Gołbiowska, Alison A Bayly, Oskar Popik, Tomasz Klucznik, Sara Szymkuć, Ewa P Gajewska, Piotr Dittwald, Olga Staszewska-Krajewska, Wiktor Beker, et al. Computational planning of the synthesis of complex natural products. Nature, 588(7836):83-88, 2020.
|
| 275 |
+
Karol Molga, Ewa P Gajewska, Sara Szymkuć, and Bartosz A Grzybowski. The logic of translating chemical knowledge into machine-processable forms: a modern playground for physical-organic chemistry. Reaction Chemistry & Engineering, 4(9):1506-1521, 2019.
|
| 276 |
+
AkshitKumar Nigam, Pascal Friederich, Mario Krenn, and Alán Aspuru-Guzik. Augmenting genetic algorithms with deep neural networks for exploring the chemical space. arXiv preprint arXiv:1909.11655, 2019.
|
| 277 |
+
Kristina Preuer, Philipp Renz, Thomas Unterthiner, Sepp Hochreiter, and Günter Klambauer. Fréchet chemnet distance: a metric for generative models for molecules in drug discovery. Journal of Chemical Information and Modeling, 58(9):1736-1741, 2018.
|
| 278 |
+
Sereina Riniker and Gregory A Landrum. Better informed distance geometry: using what we know to improve conformation generation. Journal of Chemical Information and Modeling, 55(12): 2562-2574, 2015.
|
| 279 |
+
Benjamin Sanchez-Lengeling and Alán Aspuru-Guzik. Inverse molecular design using machine learning: Generative models for matter engineering. Science, 361(6400):360-365, 2018.
|
| 280 |
+
Roger Sayle and Daniel Lowe. Nextmove software. URL http://www.nextmovestsoftware.com/namerxn.html.
|
| 281 |
+
Gisbert Schneider and David E Clark. Automated de novo drug design: are we nearly there yet? Angewandte Chemie International Edition, 58(32):10792-10803, 2019.
|
| 282 |
+
Daniel Schwalbe-Koda and Rafael Gómez-Bombarelli. Generative models for automatic chemical design. In Machine Learning Meets Quantum Physics, pp. 445-467. Springer, 2020.
|
| 283 |
+
Philippe Schwaller, Teodoro Laino, Théophile Gaudin, Peter Bolgar, Christopher A Hunter, Costas Bekas, and Alpha A Lee. Molecular transformer: a model for uncertainty-calibrated chemical reaction prediction. ACS Central Science, 5(9):1572-1583, 2019.
|
| 284 |
+
|
| 285 |
+
Philippe Schwaller, Riccardo Petraglia, Valerio Zullo, Vishnu H Nair, Rico Andreas Haeuselmann, Riccardo Pisoni, Costas Bekas, Anna Iuliano, and Teodoro Laino. Predicting retrosynthetic pathways using transformer-based models and a hyper-graph exploration strategy. Chemical Science, 11(12):3316-3325, 2020.
|
| 286 |
+
Marwin HS Segler, Mike Preuss, and Mark P Waller. Planning chemical syntheses with deep neural networks and symbolic ai. Nature, 555(7698):604-610, 2018.
|
| 287 |
+
Teague Sterling and John J Irwin. Zinc 15-ligand discovery for everyone. Journal of Chemical Information and Modeling, 55(11):2324-2337, 2015.
|
| 288 |
+
Sara Szymkuc, Ewa P Gajewska, Tomasz Klucznik, Karol Molga, Piotr Dittwald, Michal Startek, Michal Bajczyk, and Bartosz A Grzybowski. Computer-assisted synthetic planning: the end of the beginning. Angewandte Chemie International Edition, 55(20):5904-5937, 2016.
|
| 289 |
+
Oleg Trott and Arthur J Olson. Autodock vina: improving the speed and accuracy of docking with a new scoring function, efficient optimization, and multithreading. Journal of Computational Chemistry, 31(2):455-461, 2010.
|
| 290 |
+
Quentin Vanhaelen, Yen-Chu Lin, and Alex Zhavoronkov. The advent of generative chemistry. ACS Medicinal Chemistry Letters, 11(8):1496-1505, 2020.
|
| 291 |
+
H Maarten Vinkers, Marc R de Jonge, Frederik FD Daeyaert, Jan Heeres, Lucien MH Koymans, Joop H van Lenthe, Paul J Lewi, Henk Timmerman, Koen Van Aken, and Paul AJ Janssen. Synopsis: synthesize and optimize system in silico. Journal of medicinal chemistry, 46(13): 2765-2773, 2003.
|
| 292 |
+
Yutong Xie, Chence Shi, Hao Zhou, Yuwei Yang, Weinan Zhang, Yong Yu, and Lei Li. Mars: Markov molecular sampling for multi-objective drug discovery. arXiv preprint arXiv:2103.10432, 2021.
|
| 293 |
+
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
|
| 294 |
+
Zhenpeng Yao, Benjamin Sánchez-Lengeling, N Scott Bobbitt, Benjamin J Bucior, Sai Govind Hari Kumar, Sean P Collins, Thomas Burns, Tom K Woo, Omar K Farha, Randall Q Snurr, et al. Inverse design of nanoporous crystalline reticular materials with deep generative models. Nature Machine Intelligence, 3(1):76-86, 2021.
|
| 295 |
+
Jiaxuan You, Bowen Liu, Rex Ying, Vijay Pande, and Jure Leskovec. Graph convolutional policy network for goal-directed molecular graph generation. arXiv preprint arXiv:1806.02473, 2018.
|
| 296 |
+
Chun-Hui Zhang, Elizabeth A Stone, Maya Deshmukh, Joseph A Ippolito, Mohammad M Ghahre-manpour, Julian Tirado-Rives, Krasimir A Spasov, Shuo Zhang, Yuka Takeo, Shalley N Kudalkar, et al. Potent noncovalent inhibitors of the main protease of sars-cov-2 from molecular sculpting of the drug perampanel guided by free energy perturbation calculations. ACS Central Science, 7(3): 467-475, 2021.
|
| 297 |
+
Alex Zhavoronkov, Yan A Ivanenkov, Alex Aliper, Mark S Veselov, Vladimir A Aladinskiy, Anastasiya V Aladinskaya, Victor A Terentiev, Daniil A Polykovskiy, Maksim D Kuznetsov, Arip Asadulaev, et al. Deep learning enables rapid identification of potent DDR1 kinase inhibitors. Nature Biotechnology, 37(9):1038-1040, 2019.
|
| 298 |
+
Zhenpeng Zhou, Steven Kearnes, Li Li, Richard N Zare, and Patrick Riley. Optimization of molecules via deep reinforcement learning. Scientific Reports, 9(1):1-10, 2019.
|
| 299 |
+
Julie B Zimmerman, Paul T Anastas, Hanno C Erythropel, and Walter Leitner. Designing for a green chemistry future. Science, 367(6476):397-400, 2020.
|
| 300 |
+
|
| 301 |
+
# APPENDIX
|
| 302 |
+
|
| 303 |
+
# A SYNTHETIC TREE AND REACTION TEMPLATE
|
| 304 |
+
|
| 305 |
+

|
| 306 |
+
Figure 6: Illustration of a synthetic pathway as a synthetic tree (A) and reaction templates (B & C). (A) is the synthetic tree of remdesivir, a drug authorized for emergency use to treat COVID-19. Different color box labels indicate different types of chemical nodes. (B) and (C) are examples of reaction templates for uni- and bi-molecular reactions, where SMARTS is a specific syntax for encoding reaction transforms.
|
| 307 |
+
|
| 308 |
+

|
| 309 |
+
|
| 310 |
+
# B ILLUSTRATION OF GENETIC ALGORITHM
|
| 311 |
+
|
| 312 |
+

|
| 313 |
+
Figure 7: Illustration of the genetic algorithm used to optimize the molecules. We use the conditional synthetic tree generator as a decoder to obtain molecules corresponding to input fingerprints. We apply crossover and mutation to the pool of fingerprints to optimize the molecules implicitly.
|
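To make the fingerprint-level operations concrete, below is a minimal sketch of what crossover and mutation on a pool of binary fingerprints can look like. The exact operators used by the genetic algorithm are not specified in this appendix, so uniform crossover, independent bit flips, and the `rate` parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def crossover(fp_a: np.ndarray, fp_b: np.ndarray) -> np.ndarray:
    """Uniform crossover: each bit of the child is taken from one of the two parents.

    fp_a, fp_b: binary fingerprints of equal length (e.g., 4096 bits).
    """
    mask = rng.random(fp_a.shape) < 0.5
    return np.where(mask, fp_a, fp_b)

def mutate(fp: np.ndarray, rate: float = 0.01) -> np.ndarray:
    """Flip each bit independently with probability `rate`."""
    flips = rng.random(fp.shape) < rate
    return np.where(flips, 1 - fp, fp)

# The mutated child fingerprint is then decoded back into a molecule by the
# conditional synthetic tree generator, as described in the caption above.
child = mutate(crossover(rng.integers(0, 2, 4096), rng.integers(0, 2, 4096)))
```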
| 314 |
+
|
| 315 |
+
# C OUTLOOK
|
| 316 |
+
|
| 317 |
+
While our model shows promising results on the recovery of molecules from generated trees and de novo molecular optimization performance, there remain multiple challenges to be addressed:
|
| 318 |
+
|
| 319 |
+
Initial Reactant Selection Our results show that the first reactant selection is the primary bottleneck to target molecule recovery. We consider two major reasons for this: (1) the input to $f_{\mathrm{rt1}}$ is usually the target molecule only, with no action mask applied. The large action space (i.e., all purchased molecules) and the limited input information thus make the problem the most difficult of the four tasks; (2) during the generation of the synthetic trees, we adopt a depth-first approach that implicitly introduces a canonical ordering during reactant selection. The implicit ordering is arbitrary and might harm the training of the network. Allowing the model to predict any valid order or including multiple orders as a form of data augmentation may improve performance. Further, it may be possible to first select the action (template) to constrain the space of compatible first reactants, which may improve performance given that this is a smaller search space.
|
| 320 |
+
|
| 321 |
+
Reaction Templates and Purchasable Compound Selection The definition of synthesizable used in this work is that the molecule can be reached with a synthetic pathway containing only starting materials belonging to our list of purchased compounds and reactions that follow our list of reaction templates. Here, we use only 91 templates, which pales in comparison to the $\sim 1,300$ reaction families defined by NameRxn (Sayle & Lowe), the tens of thousands of templates in the expert CASP program SYNTHIA (Szymkuć et al., 2016), and the hundreds of thousands in the data-driven CASP program ASKCOS (Coley et al., 2019c). Using a more comprehensive set of reaction templates would enlarge the chemical space our model can explore. Additional constraints on template applicability could improve the feasibility of pathways, e.g., to mitigate selectivity concerns (cf. Figure 12).
|
| 322 |
+
|
| 323 |
+
Molecular Representation and Molecular Similarity Certain results (e.g., Figure 3C) reveal a limitation of using boolean Morgan fingerprints. As this type of fingerprint only accounts for the presence or absence of specific substructures, it cannot distinguish molecules with different numbers of repeated units or other symmetries. Applying a count or summation-based representation, coupled with development of a similarity measurement based on that representation, would solve this problem.
|
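As a concrete illustration of this limitation, the following RDKit snippet compares boolean and count-based Morgan fingerprints. The two example SMILES are arbitrary placeholders chosen only to differ in the number of repeated units; they are not taken from the paper.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Two molecules that differ mainly in how often an ether unit is repeated.
mol_a = Chem.MolFromSmiles("CCOCC")
mol_b = Chem.MolFromSmiles("CCOCCOCCOCC")

# Boolean Morgan fingerprints record only the presence/absence of substructures.
bit_a = AllChem.GetMorganFingerprintAsBitVect(mol_a, 2, nBits=256)
bit_b = AllChem.GetMorganFingerprintAsBitVect(mol_b, 2, nBits=256)
print("bit-vector Tanimoto:", DataStructs.TanimotoSimilarity(bit_a, bit_b))

# Count-based (hashed) Morgan fingerprints also keep substructure multiplicities,
# so repeated units change the vector and the resulting similarity.
cnt_a = AllChem.GetHashedMorganFingerprint(mol_a, 2, nBits=256)
cnt_b = AllChem.GetHashedMorganFingerprint(mol_b, 2, nBits=256)
print("count-vector Tanimoto:", DataStructs.TanimotoSimilarity(cnt_a, cnt_b))
```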
| 324 |
+
|
| 325 |
+
Beam Search for Decoding Besides tackling the aforementioned challenges, in future work we also plan to implement beam search with our networks to enhance the model performance for bottom-up synthesis planning. Improving the sample efficiency of the synthesizable molecular design algorithm (the GA module), and applying it to more high-fidelity computational oracles would also be of interest. Those advances could enable faster and more accurate synthesis planning, as well as an even better tool for de novo molecular design.
|
| 326 |
+
|
| 327 |
+
# D ALGORITHM
|
| 328 |
+
|
| 329 |
+
Algorithm 1 Synthetic tree generation
|
| 330 |
+
1: Input: a list of reaction templates $\mathcal{R}$, a list of building blocks $\mathcal{C}$, an input molecule $M_{\mathrm{target}}$, a molecular encoder $z_{M} = E(M)$, an action network $\mathrm{MLP}_{\mathrm{act}}$, a reactant1 selection network $\mathrm{MLP}_{\mathrm{rt1}}$, a reaction network $\mathrm{MLP}_{\mathrm{rxn}}$, and a reactant2 selection network $\mathrm{MLP}_{\mathrm{rt2}}$
|
| 331 |
+
2: Output: A rooted binary tree $T$ , representing a synthesis pathway.
|
| 332 |
+
3: Encode target molecule, $z_{\mathrm{target}}\gets E(M_{\mathrm{target}})$
|
| 333 |
+
4: Initialize state $S\gets \emptyset$, tree $T\gets \emptyset$, $M_{\mathrm{most\_recent}}\gets \mathrm{None}$
|
| 334 |
+
5: for $t = 1,2,\dots ,t_{max}$ do
|
| 335 |
+
6: Predict and sample action type: $a_{\mathrm{act}}\sim p_{\mathrm{act}} = \sigma (\mathrm{MLP}_{\mathrm{act}}(z_{\mathrm{state}}\oplus z_{\mathrm{target}}))$
|
| 336 |
+
7: Predict first reactant, $a_{\mathrm{rt1}} = \mathrm{MLP}_{\mathrm{rt1}}(z_{\mathrm{state}}\oplus z_{\mathrm{target}})$
|
| 337 |
+
8: if $a_{\mathrm{act}} = end$ then
|
| 338 |
+
9: break
|
| 339 |
+
10: else if $a_{\mathrm{act}} = add$ then
|
| 340 |
+
11: $M_{\mathrm{rt1}}\gets k\text{-NN}_{\mathcal{C}}(a_{\mathrm{rt1}})$
|
| 341 |
+
12: $z_{\mathrm{rt1}}\gets E(M_{\mathrm{rt1}})$
|
| 342 |
+
13: else
|
| 343 |
+
14: $M_{\mathrm{rt1}}\gets M_{\mathrm{most\_recent}}$
|
| 344 |
+
15: $z_{\mathrm{rt1}}\gets E(M_{\mathrm{most\_recent}})$
|
| 345 |
+
16: end if
|
| 346 |
+
17: Predict and sample reaction template: $a_{\mathrm{rxn}}\sim p_{\mathrm{rxn}}\gets \sigma (\mathrm{MLP}_{\mathrm{rxn}}(z_{\mathrm{state}}\oplus z_{\mathrm{target}}\oplus z_{\mathrm{rt1}}))$
|
| 347 |
+
18: if $a_{\mathrm{rxn}}$ is bi-molecular then
|
| 348 |
+
19: if $a_{\mathrm{act}} = merge$ then
|
| 349 |
+
20: $M_{\mathrm{rt2}}\gets S \setminus M_{\mathrm{rt1}}$
|
| 350 |
+
21: $z_{\mathrm{rt2}}\gets E(M_{\mathrm{rt2}})$
|
| 351 |
+
22: else
|
| 352 |
+
23: Predict second reactant, $a_{\mathrm{rt2}}\gets \mathrm{MLP}_{\mathrm{rt2}}(z_{\mathrm{state}}\oplus z_{\mathrm{target}}\oplus z_{\mathrm{rt1}}\oplus a_{\mathrm{rxn}})$
|
| 353 |
+
24: $M_{\mathrm{rt2}}\gets k\text{-NN}_{\mathcal{C}^{\prime}}(a_{\mathrm{rt2}})$
|
| 354 |
+
25: $z_{\mathrm{rt2}}\gets E(M_{\mathrm{rt2}})$
|
| 355 |
+
26: end if
|
| 356 |
+
27: end if
|
| 357 |
+
28: $\mathrm{rxn\_tem} \leftarrow \mathcal{R}[a_{\mathrm{rxn}}]$
|
| 358 |
+
29: Run reaction $M_{\mathrm{product}}\gets \mathrm{rxn\_tem}(M_{\mathrm{rt1}},M_{\mathrm{rt2}})$
|
| 359 |
+
30: Update $T,S$ and $M_{\mathrm{most\_recent}}\gets M_{\mathrm{product}}$
|
| 360 |
+
31: end for
|
| 361 |
+
|
| 362 |
+
# E ADDITIONAL EXPERIMENTAL DETAILS
|
| 363 |
+
|
| 364 |
+
Network Setup For all experiments reported in this paper, the four networks, $f_{\mathrm{act}}$ , $f_{\mathrm{rt1}}$ , $f_{\mathrm{rxn}}$ , and $f_{\mathrm{rt2}}$ , use 5 fully connected layers with 1000, 1200, 3000, and 3000 neurons in the hidden layers, respectively. Batch normalization is applied to all hidden layers before ReLU activation. In $f_{\mathrm{act}}$ and $f_{\mathrm{rxn}}$ , a softmax is applied after the last layer and cross entropy loss is used, while $f_{\mathrm{rt1}}$ and $f_{\mathrm{rt2}}$ use a linear activation in the last layer and mean squared error (MSE) loss. We use the Adam optimizer to train all networks with a learning rate of 1e-4 and mini-batch size of 64.
|
| 365 |
+
|
| 366 |
+
Training Each MLP is trained as a separate supervised learning problem using a subset of information from the known synthetic routes. For instance, $f_{rxn}$ is a classification network which learns to select a discrete action given information on the current state of the tree, the target molecule, and the first reactant. Similarly, $f_{act}$ is a classification network which learns to select the correct action type given the state of the tree. On the other hand, $f_{rt1}$ and $f_{rt2}$ learn embeddings (regression) for the first and second reactant candidates, respectively, followed by a nearest-neighbors search from $\mathcal{C}$ and $\mathcal{C}'$ .
|
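A minimal sketch of the nearest-neighbor step that follows the two regression networks is given below. The embedding arrays, the Euclidean metric, and the helper name `knn_lookup` are illustrative assumptions rather than details of the released implementation.

```python
import numpy as np

def knn_lookup(pred: np.ndarray, block_embeddings: np.ndarray, k: int = 1) -> np.ndarray:
    """Return indices of the k building blocks whose embeddings lie closest to the
    embedding regressed by f_rt1 / f_rt2.

    pred:             (D,) predicted reactant embedding.
    block_embeddings: (M, D) precomputed embeddings of candidates from C or C'.
    """
    dists = np.linalg.norm(block_embeddings - pred, axis=1)
    return np.argsort(dists)[:k]

# Toy usage with random data standing in for real fingerprint embeddings.
blocks = np.random.rand(1000, 256)
picked = knn_lookup(np.random.rand(256), blocks, k=1)
```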
| 367 |
+
|
| 368 |
+
Dataset Preparation Following the procedure we described in Section 3.2, we applied a random policy to generate the synthetic trees: we randomly sampled purchasable reactants and randomly applied matching reaction templates to them. In this way, we obtained $550\mathrm{k}$ synthetic trees and filtered them by the QED of the product molecules $(\mathrm{QED} > 0.5)$, as well as randomly with probability $1 - \mathrm{QED} / 0.5$, to increase their drug-likeness in a crude sense. Ultimately, we obtained 208,644 synthetic trees for training, 69,548 trees for validation, and 69,548 trees for testing after a random split.
|
| 369 |
+
|
| 370 |
+
Docking Procedure We downloaded the crystal structure of the $\mathbf{M}^{\mathrm{pro}}$ of the SARS-CoV-2 virus with PDB ID: 7L11. We removed the water molecules and ions from the file and estimated the docking box based on the docked pose of a reported inhibitor, compound 5 in Zhang et al. (2021). For each ligand generated by our method, we used RDKit to generate molecular conformations (Landrum; Riniker & Landrum, 2015) and performed docking simulations using AutoDock Vina (Trott & Olson, 2010). We set the exhaustiveness to 8 during generation and recorded the highest-scoring conformations for rescoring with exhaustiveness 32. The reported values and ranks are based on the rescoring.
|
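For reference, a minimal RDKit conformer-generation sketch of the kind used before docking is shown below. The SMILES string is an arbitrary placeholder, and ETKDGv3 with a fixed seed plus an MMFF relaxation are assumptions; the exact embedding settings and the PDBQT conversion step for Vina are not specified here.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Embed a 3D conformer for one generated ligand prior to docking.
mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Nc1ccc(O)cc1"))  # placeholder ligand
params = AllChem.ETKDGv3()
params.randomSeed = 42                      # reproducible embedding
AllChem.EmbedMolecule(mol, params)
AllChem.MMFFOptimizeMolecule(mol)           # quick force-field relaxation
Chem.MolToMolFile(mol, "ligand.mol")        # converted to PDBQT separately for Vina
```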
| 371 |
+
|
| 372 |
+
Hardware Models were trained on a node with double Intel Xeon Gold 6230 20-core 2.1GHz processors, 512 GB DDR4 RAM, and eight RTX 2080Ti graphics cards with 11GB VRAM. Data preprocessing and predictions were made on a CPU node with 512 GB RAM and two AMD EPYC 7702 64-core 2GHz processors.
|
| 373 |
+
|
| 374 |
+
# F ADDITIONAL RESULTS ON SYNTHESIS PLANNING
|
| 375 |
+
|
| 376 |
+
Table 3: Results of synthesis planning for molecules from training, validation, and test datasets. Recovery rate indicates the fraction of final product molecules which match the input target molecules. Average similarity measures the Tanimoto similarity between the fingerprints of the product molecules and target molecules.
|
| 377 |
+
|
| 378 |
+
<table><tr><td>Dataset</td><td>Recovery Rate↑</td><td>Average Similarity↑</td><td>Average Similarity (Unrecovered)↑</td></tr><tr><td>Train.</td><td>92.1%</td><td>0.975</td><td>0.688</td></tr><tr><td>Valid</td><td>51.4%</td><td>0.761</td><td>0.508</td></tr><tr><td>Test</td><td>51.0%</td><td>0.759</td><td>0.508</td></tr><tr><td>ChEMBL</td><td>4.50%</td><td>0.423</td><td>0.396</td></tr></table>
|
| 379 |
+
|
| 380 |
+
Table 4: Analysis of the generated molecules in unrecovered cases. We compare with baseline generative methods to evaluate the similarity of input and output molecules. Baseline data are from GuacaMol (Brown et al., 2019).
|
| 381 |
+
|
| 382 |
+
<table><tr><td></td><td>Validity↑</td><td>Uniqueness↑</td><td>Novelty↑</td><td>KL Divergence↑</td><td>FCD↑</td></tr><tr><td>Random Sample</td><td>1.000</td><td>0.997</td><td>0.000</td><td>0.998</td><td>0.929</td></tr><tr><td>SMILES LSTM</td><td>0.959</td><td>1.000</td><td>0.912</td><td>0.991</td><td>0.913</td></tr><tr><td>AAE</td><td>0.882</td><td>1.000</td><td>0.998</td><td>0.886</td><td>0.526</td></tr><tr><td>VAE</td><td>0.870</td><td>0.999</td><td>0.974</td><td>0.982</td><td>0.863</td></tr><tr><td>Our Model (Reachable)</td><td>1.000</td><td>0.999</td><td>1.000</td><td>1.000</td><td>0.920</td></tr><tr><td>Our Model (Unreachable)</td><td>1.000</td><td>0.988</td><td>1.000</td><td>1.000</td><td>0.684</td></tr></table>
|
| 383 |
+
|
| 384 |
+
# G ADDITIONAL RESULTS ON ANALOGS RECOMMENDATION
|
| 385 |
+
|
| 386 |
+

|
| 387 |
+
|
| 388 |
+

|
| 389 |
+
|
| 390 |
+

|
| 391 |
+
Figure 8: Correlation between input and output values on ChEMBL molecules.
|
| 392 |
+
|
| 393 |
+

|
| 394 |
+
|
| 395 |
+

|
| 396 |
+
|
| 397 |
+

|
| 398 |
+
|
| 399 |
+

|
| 400 |
+
Figure 9: Correlation between input and output values on test set molecules.
|
| 401 |
+
|
| 402 |
+

|
| 403 |
+
|
| 404 |
+

|
| 405 |
+
|
| 406 |
+

|
| 407 |
+
|
| 408 |
+

|
| 409 |
+
Figure 10: Correlation between input and output values on validation set molecules.
|
| 410 |
+
|
| 411 |
+

|
| 412 |
+
|
| 413 |
+

|
| 414 |
+
|
| 415 |
+

|
| 416 |
+
|
| 417 |
+

|
| 418 |
+
Figure 11: Correlation between input and output values on training set molecules.
|
| 419 |
+
|
| 420 |
+

|
| 421 |
+
|
| 422 |
+
$$
|
| 423 |
+
\mathrm{SALI} = \frac{1}{N} \sum_{(i,j)} \frac{\left| d_{i} - d_{j} \right| / \mathrm{range}}{1 - \mathrm{sim}(i,j)} \tag{2}
|
| 424 |
+
$$
|
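A small sketch of how Equation 2 (used for the SALI values in Table 6) can be evaluated for a set of sampled molecules follows. Which pairs enter the sum and the exact normalization are not fully specified above, so averaging over all unordered pairs is an assumption, and pairwise similarities are assumed to be strictly below 1.

```python
import numpy as np

def sali(values: np.ndarray, sims: np.ndarray) -> float:
    """Structure-activity landscape index of Equation 2 (sketch).

    values: (M,) oracle values d_i for the sampled molecules.
    sims:   (M, M) pairwise Tanimoto similarities sim(i, j), assumed < 1.
    """
    rng = values.max() - values.min()          # the `range` term
    i, j = np.triu_indices(len(values), k=1)   # all unordered pairs (i, j)
    num = np.abs(values[i] - values[j]) / rng
    return float(np.mean(num / (1.0 - sims[i, j])))
```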
| 425 |
+
|
| 426 |
+
Table 5: Results of synthesis planning for unrecovered cases in "reachable" and "unreachable" molecules. Metrics other than recovery rate are measured for unrecovered molecules only.
|
| 427 |
+
|
| 428 |
+
<table><tr><td>Dataset</td><td>N</td><td>Average Similarity↑</td><td>KL Divergence↑</td><td>FC Distance↓</td></tr><tr><td>Reachable (test set)</td><td>69,548</td><td>0.508</td><td>1.000</td><td>0.315</td></tr><tr><td>Unreachable (ChEMBL)</td><td>20,000</td><td>0.396</td><td>1.000</td><td>2.140</td></tr></table>
|
| 429 |
+
|
| 430 |
+
Table 6: SALI of the oracle functions we investigated. A higher SALI value means that a small structural change can lead to a larger property change, i.e., the function is more sensitive to detailed structural changes.
|
| 431 |
+
|
| 432 |
+
<table><tr><td></td><td>SA Score</td><td>QED</td><td>LogP</td><td>Mol. Weight</td></tr><tr><td>Reachable</td><td>0.139</td><td>0.368</td><td>0.053</td><td>0.127</td></tr><tr><td>Unreachable</td><td>0.115</td><td>0.383</td><td>0.013</td><td>0.043</td></tr></table>
|
| 433 |
+
|
| 434 |
+
# H ADDITIONAL RESULTS ON MOLECULAR DESIGN
|
| 435 |
+
|
| 436 |
+
Table 7: Results of synthesizable molecular optimization on common oracle functions. Seeds are randomly sampled from the ZINC database (Sterling & Irwin, 2015). Top-n is the average value for the top n molecules. "Seeds" refers to the mean scores of the initial mating pool we sampled and "Outputs" refers to the mean scores of the 128 generated molecules.
|
| 437 |
+
|
| 438 |
+
<table><tr><td></td><td>Best from seeds</td><td>Top-1</td><td>Top-10</td><td>Top-100</td><td>Seeds</td><td>Outputs</td></tr><tr><td>QED</td><td>0.947</td><td>0.948</td><td>0.948</td><td>0.947</td><td>0.673±0.289</td><td>0.946±0.001</td></tr><tr><td>LogP</td><td>3.81</td><td>25.82</td><td>25.05</td><td>23.96</td><td>1.09±35.18</td><td>23.72±0.69</td></tr><tr><td>JNK3</td><td>0.120</td><td>0.800</td><td>0.758</td><td>0.719</td><td>0.032±0.025</td><td>0.715±0.017</td></tr><tr><td>GSK3β</td><td>0.310</td><td>0.940</td><td>0.907</td><td>0.815</td><td>0.050±0.051</td><td>0.803±0.041</td></tr><tr><td>DRD2</td><td>0.181</td><td>1.000</td><td>1.000</td><td>0.998</td><td>0.007±0.018</td><td>0.996±0.003</td></tr></table>
|
| 439 |
+
|
| 440 |
+
Table 8: Leaderboard on TDC DRD3 docking benchmark using ZINC and Docking. Mean and standard deviation across three runs are reported. Arrows $(\uparrow, \downarrow)$ indicate the direction of better performance. The best method is bolded and the second best is underlined. Note in particular the low SA_Score and high % Pass, which are heuristics for synthetic complexity and drug likeness/quality.
|
| 441 |
+
|
| 442 |
+
<table><tr><td colspan="3">Method Category</td><td colspan="2">Domain-Specific Methods</td><td colspan="4">State-of-the-Art Methods in ML</td><td>Ours</td></tr><tr><td>Metric</td><td>Best-in-data</td><td># Calls</td><td>Screening</td><td>Graph-GA</td><td>LSTM</td><td>GCPN</td><td>MolDQN</td><td>MARS</td><td>SynNet</td></tr><tr><td>Top100 (↓)</td><td>-12.080</td><td></td><td>-10.542±0.035</td><td>-14.811±0.413</td><td>-13.017±0.385</td><td>-10.045±0.226</td><td>-8.236±0.089</td><td>-9.509±0.035</td><td>-11.133</td></tr><tr><td>Top10 (↓)</td><td>-12.590</td><td></td><td>-11.483±0.056</td><td>-15.930±0.336</td><td>-14.030±0.421</td><td>-11.483±0.581</td><td>-9.348±0.188</td><td>-10.693±0.172</td><td>-12.020</td></tr><tr><td>Top1 (↓)</td><td>-12.800</td><td></td><td>-12.100±0.356</td><td>-16.533±0.309</td><td>-14.533±0.525</td><td>-12.300±0.993</td><td>-9.990±0.194</td><td>-11.433±0.450</td><td>-12.300</td></tr><tr><td>Diversity (↑)</td><td>0.864</td><td>5000</td><td>0.872±0.003</td><td>0.626±0.092</td><td>0.740±0.056</td><td>0.922±0.002</td><td>0.893±0.005</td><td>0.873±0.002</td><td>0.821</td></tr><tr><td>Novelty (↑)</td><td>-</td><td></td><td>-</td><td>1.000±0.000</td><td>1.000±0.000</td><td>1.000±0.000</td><td>1.000±0.000</td><td>1.000±0.000</td><td>1.000</td></tr><tr><td>%Pass (↑)</td><td>0.780</td><td></td><td>0.683±0.073</td><td>0.393±0.308</td><td>0.257±0.103</td><td>0.167±0.045</td><td>0.023±0.012</td><td>0.527±0.087</td><td>0.800</td></tr><tr><td>Top1 Pass (↓)</td><td>-11.700</td><td></td><td>-10.100±0.000</td><td>-14.267±0.450</td><td>-12.533±0.403</td><td>-9.367±0.170</td><td>-7.980±0.112</td><td>-9.000±0.082</td><td>-12.300</td></tr><tr><td>SA_Score (↓)</td><td>2.973</td><td></td><td>3.036±0.014</td><td>4.783±1.195</td><td>2.611±0.238</td><td>6.843±0.210</td><td>6.687±0.049</td><td>3.103±0.011</td><td>2.801</td></tr></table>
|
| 443 |
+
|
| 444 |
+

|
| 445 |
+
|
| 446 |
+

|
| 447 |
+
|
| 448 |
+

|
| 449 |
+
Figure 12: The optimized structures and their corresponding synthetic pathways for various inhibitor design tasks. The third row optimizes the docking score against the $\mathbf{M}^{\mathrm{pro}}$ of SARS-CoV-2.
|
| 450 |
+
|
| 451 |
+
# I HYPERPARAMETER TUNING
|
| 452 |
+
|
| 453 |
+

|
| 454 |
+
Action Network
|
| 455 |
+
|
| 456 |
+

|
| 457 |
+
Reactant1 Network
|
| 458 |
+
|
| 459 |
+

|
| 460 |
+
Reaction Network
|
| 461 |
+
Figure 13: The validation loss during training with different radii and numbers of bits as network input. The validation loss is $1 - \text{accuracy}$, where the accuracies for the reactant networks are the accuracies of the k-NN searches ($k = 1$).
|
| 462 |
+
|
| 463 |
+

|
| 464 |
+
Reactant2 Network
|
| 465 |
+
|
| 466 |
+
Besides Morgan fingerprints, other molecular representations explored during hyperparameter tuning included Graph Isomorphism Network (GIN) (Xu et al., 2018) embeddings and RDKit 2D descriptors (Figure 14). Morgan fingerprints of length 256 and radius 2 were found to work best.
|
| 467 |
+
|
| 468 |
+

|
| 469 |
+
Figure 14: The validation loss during training using different action embeddings to conduct the k-NN search. The validation loss is $1 - \text{accuracy}$, where the accuracies for the reactant networks are the accuracies of the k-NN searches ($k = 1$).
|
| 470 |
+
|
| 471 |
+

|
amortizedtreegenerationforbottomupsynthesisplanningandsynthesizablemoleculardesign/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:db022fb63514202d08225c524b1c8aff545d1e32bf83b57c794475a55c086083
|
| 3 |
+
size 1129605
|
amortizedtreegenerationforbottomupsynthesisplanningandsynthesizablemoleculardesign/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:eb62e74aa868cab4b4c2a50bf0540ad57b0a1fc39c02c6e96e88f733ec07d165
|
| 3 |
+
size 640213
|
analyzingandimprovingtheoptimizationlandscapeofnoisecontrastiveestimation/0bcbe61b-1f79-4ef5-bba6-d757f2071a87_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:df8f9b7f2304d58f2093978cef5ac8992562480e4c082e36b179d07603c8fe58
|
| 3 |
+
size 269155
|
analyzingandimprovingtheoptimizationlandscapeofnoisecontrastiveestimation/0bcbe61b-1f79-4ef5-bba6-d757f2071a87_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8c6f7410e9afa0ff8817a66500cce5db72ce40853940bc1bcd67958f6c8da5d0
|
| 3 |
+
size 302987
|
analyzingandimprovingtheoptimizationlandscapeofnoisecontrastiveestimation/0bcbe61b-1f79-4ef5-bba6-d757f2071a87_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:da4ced6537555eb7e603bdecbe44ac824b59daa97a9fcd57eeb7b58450dec08f
|
| 3 |
+
size 10182843
|
analyzingandimprovingtheoptimizationlandscapeofnoisecontrastiveestimation/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
analyzingandimprovingtheoptimizationlandscapeofnoisecontrastiveestimation/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b878f42be30e0fd20a6c4306a8e838548e6a752292642542d4a3e41a59b9f137
|
| 3 |
+
size 2794142
|
analyzingandimprovingtheoptimizationlandscapeofnoisecontrastiveestimation/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:cf39c7ac14a86dd4899145d8a74f5201bb8928c5f15b40e3bbe8f69e0b287e29
|
| 3 |
+
size 1596194
|
anomalytransformertimeseriesanomalydetectionwithassociationdiscrepancy/1a555d73-3897-4439-aa69-6ee02f262397_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:45bd14f542af93a7a595788d6f0af8a4d5602d135412b144e41b8f92da528c43
|
| 3 |
+
size 130887
|
anomalytransformertimeseriesanomalydetectionwithassociationdiscrepancy/1a555d73-3897-4439-aa69-6ee02f262397_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a49da9190a84157c4fb09e86df28a19c09448044e6ec2de9400f8c2c5c234ae0
|
| 3 |
+
size 155980
|
anomalytransformertimeseriesanomalydetectionwithassociationdiscrepancy/1a555d73-3897-4439-aa69-6ee02f262397_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:fc0d288a209571eebc25e32d6ef04d4bfc867f40196ed9747ee3c9362fc61117
|
| 3 |
+
size 10545755
|
anomalytransformertimeseriesanomalydetectionwithassociationdiscrepancy/full.md
ADDED
|
@@ -0,0 +1,481 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# ANOMALY TRANSFORMER: TIME SERIES ANOMALY DETECTION WITH ASSOCIATION DISCREPANCY
|
| 2 |
+
|
| 3 |
+
Jiehui Xu\*, Haixu Wu\*, Jianmin Wang, Mingsheng Long
|
| 4 |
+
|
| 5 |
+
School of Software, BNRist, Tsinghua University, Beijing 100084, China
|
| 6 |
+
|
| 7 |
+
{xjh20,whx20}@mails.tsinghua.edu.cn,{jimwang,mingsheng}@tsinghua.edu.cn
|
| 8 |
+
|
| 9 |
+
# ABSTRACT
|
| 10 |
+
|
| 11 |
+
Unsupervised detection of anomaly points in time series is a challenging problem, which requires the model to derive a distinguishable criterion. Previous methods tackle the problem mainly through learning pointwise representation or pairwise association, however, neither is sufficient to reason about the intricate dynamics. Recently, Transformers have shown great power in unified modeling of pointwise representation and pairwise association, and we find that the self-attention weight distribution of each time point can embody rich association with the whole series. Our key observation is that due to the rarity of anomalies, it is extremely difficult to build nontrivial associations from abnormal points to the whole series, thereby, the anomalies' associations shall mainly concentrate on their adjacent time points. This adjacent-concentration bias implies an association-based criterion inherently distinguishable between normal and abnormal points, which we highlight through the Association Discrepancy. Technically, we propose the Anomaly Transformer with a new Anomaly-Attention mechanism to compute the association discrepancy. A minimax strategy is devised to amplify the normal-abnormal distinguishability of the association discrepancy. The Anomaly Transformer achieves state-of-the-art results on six unsupervised time series anomaly detection benchmarks of three applications: service monitoring, space & earth exploration, and water treatment.
|
| 12 |
+
|
| 13 |
+
# 1 INTRODUCTION
|
| 14 |
+
|
| 15 |
+
Real-world systems, such as industrial equipment and space probes, operate continuously and generate successive measurements monitored by multiple sensors. Discovering malfunctions from large-scale system monitoring data can be reduced to detecting abnormal time points in time series, which is essential for ensuring safety and avoiding financial loss. However, anomalies are usually rare and hidden among vast numbers of normal points, making data labeling hard and expensive. Thus, we focus on time series anomaly detection in the unsupervised setting.
|
| 16 |
+
|
| 17 |
+
Unsupervised time series anomaly detection is extremely challenging in practice. The model should learn informative representations from complex temporal dynamics through unsupervised tasks. Still, it should also derive a distinguishable criterion that can detect the rare anomalies from plenty of normal time points. Various classic anomaly detection methods have provided many unsupervised paradigms, such as the density-estimation methods proposed in local outlier factor (LOF, (Breunig et al., 2000)), clustering-based methods presented in one-class SVM (OC-SVM, (Schölkopf et al., 2001)) and SVDD (Tax & Duin, 2004). These classic methods do not consider the temporal information and are difficult to generalize to unseen real scenarios. Benefiting from the representation learning capability of neural networks, recent deep models (Su et al., 2019; Shen et al., 2020; Li et al., 2021) have achieved superior performance. A major category of methods focus on learning pointwise representations through well-designed recurrent networks and are self-supervised by the reconstruction or autoregressive task. Here, a natural and practical anomaly criterion is the pointwise reconstruction or prediction error. However, due to the rarity of anomalies, the pointwise representation is less informative for complex temporal patterns and can be dominated by normal time points, making anomalies less distinguishable. Also, the reconstruction or prediction error is calculated point by point, which cannot provide a comprehensive description of the temporal context.
|
| 18 |
+
|
| 19 |
+
Another major category of methods detect anomalies based on explicit association modeling. The vector autoregression and state space models fall into this category. The graph was also used to capture the association explicitly, through representing time series with different time points as vertices and detecting anomalies by random walk (Cheng et al., 2008; 2009). In general, it is hard for these classic methods to learn informative representations and model fine-grained associations. Recently, graph neural network (GNN) has been applied to learn the dynamic graph among multiple variables in multivariate time series (Zhao et al., 2020; Deng & Hooi, 2021). While being more expressive, the learned graph is still limited to a single time point, which is insufficient for complex temporal patterns. Besides, subsequence-based methods detect anomalies by calculating the similarity among subsequences (Boniol & Palpanas, 2020). While exploring wider temporal context, these methods cannot capture the fine-grained temporal association between each time point and the whole series.
|
| 20 |
+
|
| 21 |
+
In this paper, we adapt Transformers (Vaswani et al., 2017) to time series anomaly detection in the unsupervised regime. Transformers have achieved great progress in various areas, including natural language processing (Brown et al., 2020), machine vision (Liu et al., 2021) and time series (Zhou et al., 2021). This success is attributed to its great power in unified modeling of global representation and long-range relation. Applying Transformers to time series, we find that the temporal association of each time point can be obtained from the self-attention map, which presents as a distribution of its association weights to all the time points along the temporal dimension. The association distribution of each time point can provide a more informative description for the temporal context, indicating dynamic patterns, such as the period or trend of time series. We name the above association distribution as the series-association, which can be discovered from the raw series by Transformers.
|
| 22 |
+
|
| 23 |
+
Further, we observe that due to the rarity of anomalies and the dominance of normal patterns, it is harder for anomalies to build strong associations with the whole series. The associations of anomalies shall concentrate on the adjacent time points that are more likely to contain similar abnormal patterns due to the continuity. Such an adjacent-concentration inductive bias is referred to as the prior-association. In contrast, the dominating normal time points can discover informative associations with the whole series, not limiting to the adjacent area. Based on this observation, we try to utilize the inherent normal-abnormal distinguishability of the association distribution. This leads to a new anomaly criterion for each time point, quantified by the distance between each time point's prior-association and its series-association, named as Association Discrepancy. As aforementioned, because the associations of anomalies are more likely to be adjacent-concentrating, anomalies will present a smaller association discrepancy than normal time points.
|
| 24 |
+
|
| 25 |
+
Going beyond previous methods, we introduce Transformers to unsupervised time series anomaly detection and propose the Anomaly Transformer for association learning. To compute the Association Discrepancy, we renovate the self-attention mechanism to the Anomaly-Attention, which contains a two-branch structure to model the prior-association and series-association of each time point respectively. The prior-association employs the learnable Gaussian kernel to present the adjacent-concentration inductive bias of each time point, while the series-association corresponds to the self-attention weights learned from raw series. Besides, a minimax strategy is applied between the two branches, which can amplify the normal-abnormal distinguishability of the Association Discrepancy and further derive a new association-based criterion. Anomaly Transformer achieves strong results on six benchmarks, covering three real applications. The contributions are summarized as follows:
|
| 26 |
+
|
| 27 |
+
- Based on the key observation of Association Discrepancy, we propose the Anomaly Transformer with an Anomaly-Attention mechanism, which can model the prior-association and series-association simultaneously to embody the Association Discrepancy.
|
| 28 |
+
- We propose a minimax strategy to amplify the normal-abnormal distinguishability of the Association Discrepancy and further derive a new association-based detection criterion.
|
| 29 |
+
- Anomaly Transformer achieves the state-of-the-art anomaly detection results on six benchmarks for three real applications, justified by extensive ablations and insightful case studies.
|
| 30 |
+
|
| 31 |
+
# 2 RELATED WORK
|
| 32 |
+
|
| 33 |
+
# 2.1 UNSUPERVISED TIME SERIES ANOMALY DETECTION
|
| 34 |
+
|
| 35 |
+
As an important real-world problem, unsupervised time series anomaly detection has been widely explored. Categorizing by the anomaly determination criterion, the paradigms roughly include the density-estimation, clustering-based, reconstruction-based and autoregression-based methods.
|
| 36 |
+
|
| 37 |
+
In density-estimation methods, the classic methods such as local outlier factor (LOF, (Breunig et al., 2000)) and connectivity outlier factor (COF, (Tang et al., 2002)) respectively calculate local density and local connectivity for outlier determination. DAGMM (Zong et al., 2018) and MPPCACD (Yairi et al., 2017) integrate the Gaussian Mixture Model to estimate the density of representations.
|
| 38 |
+
|
| 39 |
+
In clustering-based methods, the anomaly score is always formalized as the distance to cluster center. SVDD (Tax & Duin, 2004) and Deep SVDD (Ruff et al., 2018) gather the representations from normal data to a compact cluster. THOC (Shen et al., 2020) fuses the multi-scale temporal features from intermediate layers by a hierarchical clustering mechanism and detects the anomalies by the multi-layer distances. ITAD (Shin et al., 2020) conducts the clustering on decomposed tensors.
|
| 40 |
+
|
| 41 |
+
The reconstruction-based models attempt to detect the anomalies by the reconstruction error. Park et al. (2018) presented the LSTM-VAE model that employs the LSTM backbone for temporal modeling and the Variational AutoEncoder (VAE) for reconstruction. OmniAnomaly proposed by Su et al. (2019) further extends the LSTM-VAE model with a normalizing flow and uses the reconstruction probabilities for detection. InterFusion from Li et al. (2021) renovates the backbone to a hierarchical VAE to model the inter- and intra-dependency among multiple series simultaneously. GANs (Goodfellow et al., 2014) are also used for reconstruction-based anomaly detection (Schlegl et al., 2019; Li et al., 2019a; Zhou et al., 2019) and perform as an adversarial regularization.
|
| 42 |
+
|
| 43 |
+
The autoregression-based models detect the anomalies by the prediction error. VAR extends ARIMA (Anderson & Kendall, 1976) and predicts the future based on the lag-dependent covariance. The autoregressive model can also be replaced by LSTMs (Hundman et al., 2018; Tariq et al., 2019).
|
| 44 |
+
|
| 45 |
+
This paper is characterized by a new association-based criterion. Different from the random walk and subsequence-based methods (Cheng et al., 2008; Boniol & Palpanas, 2020), our criterion is embodied by a co-design of the temporal models for learning more informative time-point associations.
|
| 46 |
+
|
| 47 |
+
# 2.2 TRANSFORMERS FOR TIME SERIES ANALYSIS
|
| 48 |
+
|
| 49 |
+
Recently, Transformers (Vaswani et al., 2017) have shown great power in sequential data processing, such as natural language processing (Devlin et al., 2019; Brown et al., 2020), audio processing (Huang et al., 2019) and computer vision (Dosovitskiy et al., 2021; Liu et al., 2021). For time series analysis, benefiting from the advantage of the self-attention mechanism, Transformers are used to discover the reliable long-range temporal dependencies (Kitaev et al., 2020; Li et al., 2019b; Zhou et al., 2021; Wu et al., 2021). Especially for time series anomaly detection, GTA proposed by Chen et al. (2021) employs the graph structure to learn the relationship among multiple IoT sensors, as well as the Transformer for temporal modeling and the reconstruction criterion for anomaly detection. Unlike the previous usage of Transformers, Anomaly Transformer renovates the self-attention mechanism to the Anomaly-Attention based on the key observation of association discrepancy.
|
| 50 |
+
|
| 51 |
+
# 3 METHOD
|
| 52 |
+
|
| 53 |
+
Suppose we monitor a system of $d$ measurements and record equally spaced observations over time. The observed time series $\mathcal{X}$ is denoted by a set of time points $\{x_{1}, x_{2}, \dots, x_{N}\}$, where $x_{t} \in \mathbb{R}^{d}$ represents the observation at time $t$. The unsupervised time series anomaly detection problem is to determine whether $x_{t}$ is anomalous or not without labels.
|
| 54 |
+
|
| 55 |
+
As aforementioned, we highlight the key to unsupervised time series anomaly detection as learning informative representations and finding distinguishable criterion. We propose the Anomaly Transformer to discover more informative associations and tackle this problem by learning the Association Discrepancy, which is inherently normal-abnormal distinguishable. Technically, we propose the Anomaly-Attention to embody the prior-association and series-associations, along with a minimax optimization strategy to obtain a more distinguishable association discrepancy. Co-designed with the architecture, we derive an association-based criterion based on the learned association discrepancy.
|
| 56 |
+
|
| 57 |
+
# 3.1 ANOMALY TRANSFORMER
|
| 58 |
+
|
| 59 |
+
Given the limitation of Transformers (Vaswani et al., 2017) for anomaly detection, we renovate the vanilla architecture to the Anomaly Transformer (Figure 1) with an Anomaly-Attention mechanism.
|
| 60 |
+
|
| 61 |
+
Overall Architecture Anomaly Transformer is characterized by stacking the Anomaly-Attention blocks and feed-forward layers alternately. This stacking structure is conducive to learning underlying associations from deep multi-level features.
|
| 62 |
+
|
| 63 |
+

|
| 64 |
+
Figure 1: Anomaly Transformer. Anomaly-Attention (left) models the prior-association and series-association simultaneously. In addition to the reconstruction loss, our model is also optimized by the minimax strategy with a specially-designed stop-gradient mechanism (gray arrows) to constrain the prior- and series-associations for more distinguishable association discrepancy.
|
| 65 |
+
|
| 66 |
+
Suppose the model contains $L$ layers with a length-$N$ input time series $\mathcal{X}\in \mathbb{R}^{N\times d}$. The overall equations of the $l$-th layer are formalized as:
|
| 67 |
+
|
| 68 |
+
$$
|
| 69 |
+
\begin{aligned} \mathcal{Z}^{l} &= \text{Layer-Norm}\left(\text{Anomaly-Attention}\left(\mathcal{X}^{l-1}\right) + \mathcal{X}^{l-1}\right) \\ \mathcal{X}^{l} &= \text{Layer-Norm}\left(\text{Feed-Forward}\left(\mathcal{Z}^{l}\right) + \mathcal{Z}^{l}\right), \end{aligned} \tag{1}
|
| 70 |
+
$$
|
| 71 |
+
|
| 72 |
+
where $\mathcal{X}^l\in \mathbb{R}^{N\times d_{\mathrm{model}}}$ , $l\in \{1,\dots ,L\}$ denotes the output of the $l$ -th layer with $d_{\mathrm{model}}$ channels. The initial input $\mathcal{X}^0 = \operatorname {Embedding}(\mathcal{X})$ represents the embedded raw series. $\mathcal{Z}^l\in \mathbb{R}^{N\times d_{\mathrm{model}}}$ is the $l$ -th layer's hidden representation. Anomaly-Attention $(\cdot)$ is to compute the association discrepancy.
|
| 73 |
+
|
| 74 |
+
Anomaly-Attention Note that the single-branch self-attention mechanism (Vaswani et al., 2017) cannot model the prior-association and series-association simultaneously. We propose the Anomaly-Attention with a two-branch structure (Figure 1). For the prior-association, we adopt a learnable Gaussian kernel to calculate the prior with respect to the relative temporal distance. Benefiting from the unimodal property of the Gaussian kernel, this design can pay more attention to the adjacent horizon constitutionally. We also use a learnable scale parameter $\sigma$ for the Gaussian kernel, making the prior-associations adapt to the various time series patterns, such as different lengths of anomaly segments. The series-association branch is to learn the associations from raw series, which can find the most effective associations adaptively. Note that these two forms maintain the temporal dependencies of each time point, which are more informative than point-wise representation. They also reflect the adjacent-concentration prior and the learned associations respectively, whose discrepancy shall be normal-abnormal distinguishable. The Anomaly-Attention in the $l$ -th layer is:
|
| 75 |
+
|
| 76 |
+
$$
|
| 77 |
+
\begin{aligned} \text{Initialization:}\quad & \mathcal{Q}, \mathcal{K}, \mathcal{V}, \sigma = \mathcal{X}^{l-1} W_{\mathcal{Q}}^{l},\ \mathcal{X}^{l-1} W_{\mathcal{K}}^{l},\ \mathcal{X}^{l-1} W_{\mathcal{V}}^{l},\ \mathcal{X}^{l-1} W_{\sigma}^{l} \\ \text{Prior-Association:}\quad & \mathcal{P}^{l} = \operatorname{Rescale}\left(\left[ \frac{1}{\sqrt{2\pi}\,\sigma_{i}} \exp\left(-\frac{|j-i|^{2}}{2\sigma_{i}^{2}}\right) \right]_{i,j \in \{1,\dots,N\}}\right) \\ \text{Series-Association:}\quad & \mathcal{S}^{l} = \operatorname{Softmax}\left(\frac{\mathcal{Q}\mathcal{K}^{\mathrm{T}}}{\sqrt{d_{\mathrm{model}}}}\right) \\ \text{Reconstruction:}\quad & \widehat{\mathcal{Z}}^{l} = \mathcal{S}^{l}\mathcal{V}, \end{aligned} \tag{2}
|
| 78 |
+
$$
|
| 79 |
+
|
| 80 |
+
where $\mathcal{Q},\mathcal{K},\mathcal{V}\in \mathbb{R}^{N\times d_{\mathrm{model}}}$, $\sigma \in \mathbb{R}^{N\times 1}$ represent the query, key, value of self-attention and the learned scale respectively. $W_{\mathcal{Q}}^{l},W_{\mathcal{K}}^{l},W_{\mathcal{V}}^{l}\in \mathbb{R}^{d_{\mathrm{model}}\times d_{\mathrm{model}}}$, $W_{\sigma}^{l}\in \mathbb{R}^{d_{\mathrm{model}}\times 1}$ represent the parameter matrices for $\mathcal{Q},\mathcal{K},\mathcal{V},\sigma$ in the $l$-th layer respectively. The prior-association $\mathcal{P}^l\in \mathbb{R}^{N\times N}$ is generated based on the learned scale $\sigma \in \mathbb{R}^{N\times 1}$, and the $i$-th element $\sigma_{i}$ corresponds to the $i$-th time point. Concretely, for the $i$-th time point, its association weight to the $j$-th point is calculated by the Gaussian kernel $G(|j - i|;\sigma_i) = \frac{1}{\sqrt{2\pi}\sigma_i}\exp \left(-\frac{|j - i|^2}{2\sigma_i^2}\right)$ w.r.t. the distance $|j - i|$. Further, we use Rescale $(\cdot)$ to transform the association weights into discrete distributions $\mathcal{P}^l$ by dividing by the row sum.
|
| 81 |
+
|
| 82 |
+

|
| 83 |
+
Figure 2: Minimax association learning. At the minimize phase, the prior-association minimizes the Association Discrepancy within the distribution family derived by Gaussian kernel. At the maximize phase, the series-association maximizes the Association Discrepancy under the reconstruction loss.
|
| 84 |
+
|
| 85 |
+
$\mathcal{S}^l\in \mathbb{R}^{N\times N}$ denotes the series-associations. $\mathrm{Softmax}(\cdot)$ normalizes the attention map along the last dimension, and each row of $\mathcal{S}^l$ forms a discrete distribution. $\widehat{\mathcal{Z}}^l\in \mathbb{R}^{N\times d_{\mathrm{model}}}$ is the hidden representation after the Anomaly-Attention in the $l$-th layer. We use Anomaly-Attention $(\cdot)$ to summarize Equation 2. In the multi-head version, the learned scale is $\sigma \in \mathbb{R}^{N\times h}$ for $h$ heads. $\mathcal{Q}_m,\mathcal{K}_m,\mathcal{V}_m\in \mathbb{R}^{N\times \frac{d_{\mathrm{model}}}{h}}$ denote the query, key and value of the $m$-th head respectively. The block concatenates the outputs $\{\widehat{\mathcal{Z}}_m^l\in \mathbb{R}^{N\times \frac{d_{\mathrm{model}}}{h}}\}_{1\leq m\leq h}$ from multiple heads and gets the final result $\widehat{\mathcal{Z}}^l\in \mathbb{R}^{N\times d_{\mathrm{model}}}$.
|
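As a concrete reading of Equation 2, the sketch below computes the two association matrices for a single head and a single series; it assumes the learned scale $\sigma$ is already positive and omits batching, masking, and the multi-head concatenation, so it is an illustration rather than the released implementation.

```python
import math
import torch

def prior_association(sigma: torch.Tensor) -> torch.Tensor:
    """Gaussian prior-association: row i is a Gaussian over |j - i| with scale
    sigma_i, rescaled into a discrete distribution by its row sum.

    sigma: (N, 1) positive scales. Returns an (N, N) row-stochastic matrix.
    """
    n = sigma.shape[0]
    idx = torch.arange(n, dtype=sigma.dtype)
    dist = (idx.unsqueeze(1) - idx.unsqueeze(0)).abs()            # |j - i|
    gauss = torch.exp(-dist ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)
    return gauss / gauss.sum(dim=-1, keepdim=True)                # Rescale(.)

def series_association(q: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """Self-attention weights S = Softmax(Q K^T / sqrt(d_model)), shape (N, N)."""
    return torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1]), dim=-1)
```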
| 86 |
+
|
| 87 |
+
Association Discrepancy We formalize the Association Discrepancy as the symmetrized KL divergence between prior- and series-associations, which represents the information gain between these two distributions (Neal, 2007). We average the association discrepancy from multiple layers to combine the associations from multi-level features into a more informative measure as:
|
| 88 |
+
|
| 89 |
+
$$
|
| 90 |
+
\operatorname{AssDis}(\mathcal{P}, \mathcal{S}; \mathcal{X}) = \left[ \frac{1}{L} \sum_{l=1}^{L} \left( \mathrm{KL}\left(\mathcal{P}_{i,:}^{l} \,\|\, \mathcal{S}_{i,:}^{l}\right) + \mathrm{KL}\left(\mathcal{S}_{i,:}^{l} \,\|\, \mathcal{P}_{i,:}^{l}\right) \right) \right]_{i=1,\dots,N} \tag{3}
|
| 91 |
+
$$
|
| 92 |
+
|
| 93 |
+
where $\mathrm{KL}(\cdot \| \cdot)$ is the KL divergence computed between two discrete distributions corresponding to every row of $\mathcal{P}^l$ and $S^l$ . AssDis $(\mathcal{P}, S; \mathcal{X}) \in \mathbb{R}^{N \times 1}$ is the point-wise association discrepancy of $\mathcal{X}$ with respect to prior-association $\mathcal{P}$ and series-association $S$ from multiple layers. The $i$ -th element of AssDis corresponds to the $i$ -th time point of $\mathcal{X}$ . From previous observation, anomalies will present smaller AssDis $(\mathcal{P}, S; \mathcal{X})$ than normal time points, which makes AssDis inherently distinguishable.
|
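A direct sketch of Equation 3 follows, assuming one prior and one series association matrix per layer for a single series; the small `eps` added for numerical stability is an implementation assumption.

```python
import torch

def association_discrepancy(priors, series, eps: float = 1e-8) -> torch.Tensor:
    """Point-wise AssDis: symmetrized KL divergence between the prior- and
    series-association rows, averaged over the L layers.

    priors, series: lists of (N, N) row-stochastic tensors, one pair per layer.
    Returns an (N,) tensor; anomalies are expected to take smaller values.
    """
    def row_kl(p, q):                      # KL(p || q) for every row, shape (N,)
        return (p * ((p + eps).log() - (q + eps).log())).sum(dim=-1)

    per_layer = [row_kl(p, s) + row_kl(s, p) for p, s in zip(priors, series)]
    return torch.stack(per_layer).mean(dim=0)
```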
| 94 |
+
|
| 95 |
+
# 3.2 MINIMAX ASSOCIATION LEARNING
|
| 96 |
+
|
| 97 |
+
As an unsupervised task, we employ the reconstruction loss for optimizing our model. The reconstruction loss will guide the series-association to find the most informative associations. To further amplify the difference between normal and abnormal time points, we also use an additional loss to enlarge the association discrepancy. Due to the unimodal property of the prior-association, the discrepancy loss will guide the series-association to pay more attention to the non-adjacent area, which makes the reconstruction of anomalies harder and makes anomalies more identifiable. The loss function for input series $\mathcal{X} \in \mathbb{R}^{N \times d}$ is formalized as:
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
\mathcal{L}_{\mathrm{Total}}(\widehat{\mathcal{X}}, \mathcal{P}, \mathcal{S}, \lambda; \mathcal{X}) = \|\mathcal{X} - \widehat{\mathcal{X}}\|_{\mathrm{F}}^{2} - \lambda \times \|\operatorname{AssDis}(\mathcal{P}, \mathcal{S}; \mathcal{X})\|_{1} \tag{4}
|
| 101 |
+
$$
|
| 102 |
+
|
| 103 |
+
where $\widehat{\mathcal{X}}\in \mathbb{R}^{N\times d}$ denotes the reconstruction of $\mathcal{X}$ . $\| \cdot \|_{\mathrm{F}},\| \cdot \| _k$ indicate the Frobenius and $k$ -norm. $\lambda$ is to trade off the loss terms. When $\lambda >0$ , the optimization is to enlarge the association discrepancy. A minimax strategy is proposed to make the association discrepancy more distinguishable.
|
| 104 |
+
|
| 105 |
+
Minimax Strategy Note that directly maximizing the association discrepancy will extremely reduce the scale parameter of the Gaussian kernel (Neal, 2007), making the prior-association meaningless. Towards a better control of association learning, we propose a minimax strategy (Figure 2). Concretely, for the minimize phase, we drive the prior-association $\mathcal{P}^l$ to approximate the series-association $S^l$ that is learned from raw series. This process will make the prior-association adapt to various temporal patterns. For the maximize phase, we optimize the series-association to enlarge the association discrepancy. This process forces the series-association to pay more attention to the
|
| 106 |
+
|
| 107 |
+
non-adjacent horizon. Thus, integrating the reconstruction loss, the loss functions of two phases are:
|
| 108 |
+
|
| 109 |
+
$$
|
| 110 |
+
\text{Minimize Phase:}\quad \mathcal{L}_{\mathrm{Total}}(\widehat{\mathcal{X}}, \mathcal{P}, \mathcal{S}_{\mathrm{detach}}, -\lambda; \mathcal{X})
|
| 111 |
+
$$
|
| 112 |
+
|
| 113 |
+
$$
|
| 114 |
+
\text{Maximize Phase:}\quad \mathcal{L}_{\mathrm{Total}}(\widehat{\mathcal{X}}, \mathcal{P}_{\mathrm{detach}}, \mathcal{S}, \lambda; \mathcal{X}), \tag{5}
|
| 115 |
+
$$
|
| 116 |
+
|
| 117 |
+
where $\lambda > 0$ and $\{\cdot\}_{\mathrm{detach}}$ means stopping the gradient backpropagation of the association (Figure 1). As $\mathcal{P}$ approximates $\mathcal{S}_{\mathrm{detach}}$ in the minimize phase, the maximize phase will impose a stronger constraint on the series-association, forcing the time points to pay more attention to the non-adjacent area. Under the reconstruction loss, this is much harder for anomalies to achieve than for normal time points, thereby amplifying the normal-abnormal distinguishability of the association discrepancy.
|
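Continuing the sketches above (this reuses `association_discrepancy` from the earlier block), the two-phase objective of Equations 4 and 5 can be written as below. The mean-reduced reconstruction term and averaging the discrepancy over time points instead of a raw 1-norm are scale conventions we assume, not details taken from the released code.

```python
import torch.nn.functional as F

def minimax_losses(x, x_hat, priors, series, lam: float = 3.0):
    """Minimize phase: L_Total(X_hat, P, S_detach, -lambda; X) updates the prior branch.
    Maximize phase: L_Total(X_hat, P_detach, S, +lambda; X) updates the series branch."""
    recon = F.mse_loss(x_hat, x)
    s_detached = [s.detach() for s in series]
    p_detached = [p.detach() for p in priors]

    loss_min = recon + lam * association_discrepancy(priors, s_detached).mean()
    loss_max = recon - lam * association_discrepancy(p_detached, series).mean()
    return loss_min, loss_max
```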
| 118 |
+
|
| 119 |
+
Association-based Anomaly Criterion We incorporate the normalized association discrepancy to the reconstruction criterion, which will take the benefits of both temporal representation and the distinguishable association discrepancy. The final anomaly score of $\mathcal{X} \in \mathbb{R}^{N \times d}$ is shown as follows:
|
| 120 |
+
|
| 121 |
+
$$
|
| 122 |
+
\operatorname{AnomalyScore}(\mathcal{X}) = \operatorname{Softmax}\left(-\operatorname{AssDis}(\mathcal{P}, \mathcal{S}; \mathcal{X})\right) \odot \left[ \|\mathcal{X}_{i,:} - \widehat{\mathcal{X}}_{i,:}\|_{2}^{2} \right]_{i=1,\dots,N} \tag{6}
|
| 123 |
+
$$
|
| 124 |
+
|
| 125 |
+
where $\odot$ is the element-wise multiplication. AnomalyScore $(\mathcal{X}) \in \mathbb{R}^{N \times 1}$ denotes the point-wise anomaly criterion of $\mathcal{X}$ . Towards a better reconstruction, anomalies usually decrease the association discrepancy, which will still derive a higher anomaly score. Thus, this design can make the reconstruction error and the association discrepancy collaborate to improve detection performance.
|
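Finally, Equation 6 translates into the following small sketch, again reusing `association_discrepancy` from the earlier block; the softmax is taken over the time dimension of a single series.

```python
import torch

def anomaly_score(x, x_hat, priors, series) -> torch.Tensor:
    """Association-based criterion: Softmax(-AssDis) re-weights the point-wise
    reconstruction error, so adjacent-concentrated points score higher.

    x, x_hat: (N, d) input series and its reconstruction.
    Returns an (N,) point-wise anomaly score.
    """
    disc = association_discrepancy(priors, series)     # (N,)
    recon_err = ((x - x_hat) ** 2).sum(dim=-1)          # squared L2 per time point
    return torch.softmax(-disc, dim=0) * recon_err
```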
| 126 |
+
|
| 127 |
+
# 4 EXPERIMENTS
|
| 128 |
+
|
| 129 |
+
We extensively evaluate Anomaly Transformer on six benchmarks for three practical applications.
|
| 130 |
+
|
| 131 |
+
Datasets Here is a description of the six experiment datasets: (1) SMD (Server Machine Dataset, Su et al. (2019)) is a 5-week-long dataset collected from a large Internet company with 38 dimensions. (2) PSM (Pooled Server Metrics, Abdulaal et al. (2021)) is collected internally from multiple application server nodes at eBay with 26 dimensions. (3) Both MSL (Mars Science Laboratory rover) and SMAP (Soil Moisture Active Passive satellite) are public datasets from NASA (Hundman et al., 2018) with 55 and 25 dimensions respectively, which contain the telemetry anomaly data derived from the Incident Surprise Anomaly (ISA) reports of spacecraft monitoring systems. (4) SWaT (Secure Water Treatment, Mathur & Tippenhauer (2016)) is obtained from 51 sensors of the critical infrastructure system under continuous operations. (5) NeurIPS-TS (NeurIPS 2021 Time Series Benchmark) is a dataset proposed by Lai et al. (2021) and includes five time series anomaly scenarios categorized by behavior-driven taxonomy as point-global, pattern-contextual, pattern-shapelet, pattern-seasonal and pattern-trend. The statistical details are summarized in Table 13 of Appendix.
|
| 132 |
+
|
| 133 |
+
Implementation details Following the well-established protocol in Shen et al. (2020), we adopt a non-overlapped sliding window to obtain a set of sub-series. The sliding window is with a fixed size of 100 for all datasets. We label the time points as anomalies if their anomaly scores (Equation 6) are larger than a certain threshold $\delta$ . The threshold $\delta$ is determined to make a proportion $r$ of time points of the validation dataset labeled as anomalies. For the main results, we set $r = 0.1\%$ for SWaT, $0.5\%$ for SMD and $1\%$ for other datasets. We adopt the widely-used adjustment strategy (Xu et al., 2018; Su et al., 2019; Shen et al., 2020): if a time point in a certain successive abnormal segment is detected, all anomalies in this abnormal segment are viewed to be correctly detected. This strategy is justified from the observation that an abnormal time point will cause an alert and further make the whole segment noticed in real-world applications. Anomaly Transformer contains 3 layers. We set the channel number of hidden states $d_{\mathrm{model}}$ as 512 and the number of heads $h$ as 8. The hyperparameter $\lambda$ (Equation 4) is set as 3 for all datasets to trade-off two parts of the loss function. We use the ADAM (Kingma & Ba, 2015) optimizer with an initial learning rate of $10^{-4}$ . The training process is early stopped within 10 epochs with the batch size of 32. All the experiments are implemented in Pytorch (Paszke et al., 2019) with a single NVIDIA TITAN RTX 24GB GPU.
|
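Two implementation details from this paragraph, the threshold selection and the segment-level adjustment, can be sketched as follows; the function names and the quantile-based formulation are our own shorthand, not the authors' code.

```python
import numpy as np

def pick_threshold(val_scores: np.ndarray, r: float = 0.01) -> float:
    """Choose delta so that a proportion r of validation points score above it."""
    return float(np.quantile(val_scores, 1.0 - r))

def point_adjust(preds: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Widely-used adjustment: if any point inside a labelled anomalous segment
    is detected, the whole segment counts as detected."""
    preds = preds.copy()
    in_seg, start = False, 0
    for i, lab in enumerate(labels):
        if lab == 1 and not in_seg:
            in_seg, start = True, i
        if in_seg and (lab == 0 or i == len(labels) - 1):
            end = i if lab == 0 else i + 1
            if preds[start:end].any():
                preds[start:end] = 1
            in_seg = False
    return preds
```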
| 134 |
+
|
| 135 |
+
Baselines We extensively compare our model with 18 baselines, including the reconstruction-based models: InterFusion (2021), BeatGAN (2019), OmniAnomaly (2019), LSTM-VAE (2018); the density-estimation models: DAGMM (2018), MPPCACD (2017), LOF (2000); the clustering-based methods: ITAD (2020), THOC (2020), Deep-SVDD (2018); the autoregression-based models: CL-MPPCA (2019), LSTM (2018), VAR (1976); the classic methods: OC-SVM (2004), IsolationForest (2008). Another 3 baselines from change point detection and time series segmentation are deferred to Appendix I. InterFusion (2021) and THOC (2020) are the state-of-the-art deep models.
|
| 136 |
+
|
| 137 |
+

|
| 138 |
+
Figure 3: ROC curves (horizontal-axis: false-positive rate; vertical-axis: true-positive rate) for the five datasets. A higher AUC value (area under the ROC curve) indicates a better performance. The predefined threshold proportion $r$ is in $\{0.5\%, 1.0\%, 1.5\%, 2.0\%, 10\%, 20\%, 30\% \}$ .
|
| 139 |
+
|
| 140 |
+
Table 1: Quantitative results for Anomaly Transformer (Ours) in the five datasets. The $P$ , $R$ and $F1$ represent the precision, recall and F1-score (as %) respectively. F1-score is the harmonic mean of precision and recall. For these three metrics, a higher value indicates a better performance.
|
| 141 |
+
|
| 142 |
+
<table><tr><td>Dataset</td><td colspan="3">SMD</td><td colspan="3">MSL</td><td colspan="3">SMAP</td><td colspan="3">SWaT</td><td colspan="3">PSM</td></tr><tr><td>Metric</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>OCSVM</td><td>44.34</td><td>76.72</td><td>56.19</td><td>59.78</td><td>86.87</td><td>70.82</td><td>53.85</td><td>59.07</td><td>56.34</td><td>45.39</td><td>49.22</td><td>47.23</td><td>62.75</td><td>80.89</td><td>70.67</td></tr><tr><td>IsolationForest</td><td>42.31</td><td>73.29</td><td>53.64</td><td>53.94</td><td>86.54</td><td>66.45</td><td>52.39</td><td>59.07</td><td>55.53</td><td>49.29</td><td>44.95</td><td>47.02</td><td>76.09</td><td>92.45</td><td>83.48</td></tr><tr><td>LOF</td><td>56.34</td><td>39.86</td><td>46.68</td><td>47.72</td><td>85.25</td><td>61.18</td><td>58.93</td><td>56.33</td><td>57.60</td><td>72.15</td><td>65.43</td><td>68.62</td><td>57.89</td><td>90.49</td><td>70.61</td></tr><tr><td>Deep-SVDD</td><td>78.54</td><td>79.67</td><td>79.10</td><td>91.92</td><td>76.63</td><td>83.58</td><td>89.93</td><td>56.02</td><td>69.04</td><td>80.42</td><td>84.45</td><td>82.39</td><td>95.41</td><td>86.49</td><td>90.73</td></tr><tr><td>DAGMM</td><td>67.30</td><td>49.89</td><td>57.30</td><td>89.60</td><td>63.93</td><td>74.62</td><td>86.45</td><td>56.73</td><td>68.51</td><td>89.92</td><td>57.84</td><td>70.40</td><td>93.49</td><td>70.03</td><td>80.08</td></tr><tr><td>MMPCACD</td><td>71.20</td><td>79.28</td><td>75.02</td><td>81.42</td><td>61.31</td><td>69.95</td><td>88.61</td><td>75.84</td><td>81.73</td><td>82.52</td><td>68.29</td><td>74.73</td><td>76.26</td><td>78.35</td><td>77.29</td></tr><tr><td>VAR</td><td>78.35</td><td>70.26</td><td>74.08</td><td>74.68</td><td>81.42</td><td>77.90</td><td>81.38</td><td>53.88</td><td>64.83</td><td>81.59</td><td>60.29</td><td>69.34</td><td>90.71</td><td>83.82</td><td>87.13</td></tr><tr><td>LSTM</td><td>78.55</td><td>85.28</td><td>81.78</td><td>85.45</td><td>82.50</td><td>83.95</td><td>89.41</td><td>78.13</td><td>83.39</td><td>86.15</td><td>83.27</td><td>84.69</td><td>76.93</td><td>89.64</td><td>82.80</td></tr><tr><td>CL-MPPCA</td><td>82.36</td><td>76.07</td><td>79.09</td><td>73.71</td><td>88.54</td><td>80.44</td><td>86.13</td><td>63.16</td><td>72.88</td><td>76.78</td><td>81.50</td><td>79.07</td><td>56.02</td><td>99.93</td><td>71.80</td></tr><tr><td>ITAD</td><td>86.22</td><td>73.71</td><td>79.48</td><td>69.44</td><td>84.09</td><td>76.07</td><td>82.42</td><td>66.89</td><td>73.85</td><td>63.13</td><td>52.08</td><td>57.08</td><td>72.80</td><td>64.02</td><td>68.13</td></tr><tr><td>LSTM-VAE</td><td>75.76</td><td>90.08</td><td>82.30</td><td>85.49</td><td>79.94</td><td>82.62</td><td>92.20</td><td>67.75</td><td>78.10</td><td>76.00</td><td>89.50</td><td>82.20</td><td>73.62</td><td>89.92</td><td>80.96</td></tr><tr><td>BeatGAN</td><td>72.90</td><td>84.09</td><td>78.10</td><td>89.75</td><td>85.42</td><td>87.53</td><td>92.38</td><td>55.85</td><td>69.61</td><td>64.01</td><td>87.46</td><td>73.92</td><td>90.30</td><td>93.84</td><td>92.04</td></tr><tr><td>OmniAnomaly</td><td>83.68</td><td>86.82</td><td>85.22</td><td>89.02</td><td>86.37</td><td>87.67</td><td>92.49</td><td>81.99</td><td>86.92</td><td>81.42</td><td>84.30</td><td>82.83</td><td>88.39</td><td>74.46</td><td>80.83</td></tr><tr><td>InterFusion</td><td>87.02</td><td>85.43</td><td>86.22</td><td>81.28</td><td>92.70</td><td>86.62</td><td>89.77</td><td>88.52</td><td>89.14</td><td>80.59</td><td>85.58</td><td>83.01</td><td>83.61</td><td>83.45</td><td>83.52</td></tr><tr><td>THOC</td><td>79.76</td><td>90.95</td><td>84.99</td><td>88.45</td><td>90.97</td><td>89.69</td><td>92.06</td><td>89.34</td><td>90.68</td><td>83.94</td><td>86.36</td><td>85.13</td><td>88.14</td><td>90.99</td><td>89.54</td></tr><tr><td>Ours</td><td>89.40</td><td>95.45</td><td>92.33</td><td>92.09</td><td>95.15</td><td>93.59</td><td>94.13</td><td>99.40</td><td>96.69</td><td>91.55</td><td>96.73</td><td>94.07</td><td>96.91</td><td>98.90</td><td>97.89</td></tr></table>
|
| 143 |
+
|
| 144 |
+
# 4.1 MAIN RESULTS
|
| 145 |
+
|
| 146 |
+
Real-world datasets We extensively evaluate our model on five real-world datasets against ten competitive baselines. As shown in Table 1, Anomaly Transformer achieves consistent state-of-the-art results on all benchmarks. We observe that deep models that consider temporal information outperform general anomaly detection models, such as Deep-SVDD (Ruff et al., 2018) and DAGMM (Zong et al., 2018), which verifies the effectiveness of temporal modeling. Our proposed Anomaly Transformer goes beyond the point-wise representation learned by RNNs and models the more informative associations. The results in Table 1 demonstrate the advantage of association learning for time series anomaly detection. In addition, we plot the ROC curves in Figure 3 for a complete comparison. Anomaly Transformer has the highest AUC values on all five datasets, meaning that our model performs well in the false-positive and true-positive rates under various preset thresholds, which is important for real-world applications.
|
| 147 |
+
|
| 148 |
+
NeurIPS-TS benchmark This benchmark is generated from well-designed rules proposed by Lai et al. (2021), including all types of anomalies and covering both point-wise and pattern-wise anomalies. As shown in Figure 4, Anomaly Transformer still achieves state-of-the-art performance, which verifies the effectiveness of our model on various anomalies.
|
| 149 |
+
|
| 150 |
+
Ablation study As shown in Table 2, we further investigate the effect of each part in our model. Our association-based criterion outperforms the widely-used reconstruction criterion consistently.
|
| 151 |
+
|
| 152 |
+
Specifically, the association-based criterion brings a remarkable $18.76\%$ ( $76.20 \rightarrow 94.96$ ) averaged absolute F1-score improvement. Also, directly taking the association discrepancy as the criterion still achieves good performance (F1-score: $91.55\%$ ) and surpasses the previous state-of-the-art model
|
| 153 |
+
|
| 154 |
+

|
| 155 |
+
Figure 4: Results for NeurIPS-TS.
|
| 156 |
+
|
| 157 |
+
THOC (F1-score: $88.01\%$ calculated from Table 1). Besides, the learnable prior-association (corresponding to $\sigma$ in Equation 2) and the minimax strategy further improve our model, yielding $8.43\%$ $(79.05\rightarrow 87.48)$ and $7.48\%$ $(87.48\rightarrow 94.96)$ averaged absolute improvements respectively. Finally, our proposed Anomaly Transformer surpasses the pure Transformer by an $18.34\%$ $(76.62\rightarrow 94.96)$ absolute improvement. These results verify that each module of our design is effective and necessary. More ablations of the association discrepancy can be found in Appendix D.
|
| 158 |
+
|
| 159 |
+
Table 2: Ablation results (F1-score) for the anomaly criterion, prior-association and optimization strategy. Recon, AssDis and Assoc mean the pure reconstruction performance, the pure association discrepancy and our proposed association-based criterion respectively. Fix means fixing the learnable scale parameter $\sigma$ of the prior-association to 1.0. Max and Minimax refer to optimizing the association discrepancy in the maximization (Equation 4) and minimax (Equation 5) way respectively.
|
| 160 |
+
|
| 161 |
+
<table><tr><td>Architecture</td><td>Anomaly Criterion</td><td>Prior-Association</td><td>Optimization Strategy</td><td>SMD</td><td>MSL</td><td>SMAP</td><td>SWaT</td><td>PSM</td><td>Avg F1 (as %)</td></tr><tr><td>Transformer</td><td>Recon</td><td>×</td><td>×</td><td>79.72</td><td>76.64</td><td>73.74</td><td>74.56</td><td>78.43</td><td>76.62</td></tr><tr><td rowspan="4">Anomaly Transformer</td><td>Recon</td><td>Learnable</td><td>Minimax</td><td>71.35</td><td>78.61</td><td>69.12</td><td>81.53</td><td>80.40</td><td>76.20</td></tr><tr><td>AssDis</td><td>Learnable</td><td>Minimax</td><td>87.57</td><td>90.50</td><td>90.98</td><td>93.21</td><td>95.47</td><td>91.55</td></tr><tr><td>Assoc</td><td>Fix</td><td>Max</td><td>83.95</td><td>82.17</td><td>70.65</td><td>79.46</td><td>79.04</td><td>79.05</td></tr><tr><td>Assoc</td><td>Learnable</td><td>Max</td><td>88.88</td><td>85.20</td><td>87.84</td><td>81.65</td><td>93.83</td><td>87.48</td></tr><tr><td>*final</td><td>Assoc</td><td>Learnable</td><td>Minimax</td><td>92.33</td><td>93.59</td><td>96.90</td><td>94.07</td><td>97.89</td><td>94.96</td></tr></table>
|
| 162 |
+
|
| 163 |
+
# 4.2 MODEL ANALYSIS
|
| 164 |
+
|
| 165 |
+
To explain how our model works intuitively, we provide the visualization and statistical results for our three key designs: anomaly criterion, learnable prior-association and optimization strategy.
|
| 166 |
+
|
| 167 |
+

|
| 168 |
+
Figure 5: Visualization of different anomaly categories (Lai et al., 2021). We plot the raw series (first row) from NeurIPS-TS dataset, as well as their corresponding reconstruction (second row) and association-based criteria (third row). The point-wise anomalies are marked by red circles and the pattern-wise anomalies are in red segments. The wrongly detected cases are bounded by red boxes.
|
| 169 |
+
|
| 170 |
+
Anomaly criterion visualization To get a more intuitive sense of how the association-based criterion works, we provide visualizations in Figure 5 and explore the criterion performance under different types of anomalies, where the taxonomy is from Lai et al. (2021). We find that our proposed association-based criterion is more distinguishable in general. Concretely, the association-based criterion obtains consistently smaller values for the normal parts, and this contrast is especially clear
|
| 171 |
+
|
| 172 |
+
in the point-contextual and pattern-seasonal cases (Figure 5). By contrast, the jittery curves of the reconstruction criterion confuse the detection process and fail in the aforementioned two cases. This verifies that our criterion can highlight the anomalies and provide distinct values for normal and abnormal points, making the detection precise and reducing the false-positive rate.
|
| 173 |
+
|
| 174 |
+

|
| 175 |
+
|
| 176 |
+

|
| 177 |
+
|
| 178 |
+

|
| 179 |
+
|
| 180 |
+

|
| 181 |
+
|
| 182 |
+

|
| 183 |
+
|
| 184 |
+

|
| 185 |
+
(a) Point-Global
|
| 186 |
+
Figure 6: Learned scale parameter $\sigma$ for different types of anomalies (highlight in red).
|
| 187 |
+
|
| 188 |
+

|
| 189 |
+
(b) Point-Contextual
|
| 190 |
+
|
| 191 |
+

|
| 192 |
+
(c) Pattern-Shapelet
|
| 193 |
+
|
| 194 |
+

|
| 195 |
+
(d) Pattern-Seasonal
|
| 196 |
+
|
| 197 |
+

|
| 198 |
+
(e) Pattern-Trend
|
| 199 |
+
|
| 200 |
+
Prior-association visualization During the minimax optimization, the prior-association is learned to stay close to the series-association. Thus, the learned $\sigma$ can reflect the adjacent-concentration degree of the time series. As shown in Figure 6, we find that $\sigma$ changes to adapt to the various data patterns of the time series. In particular, the prior-association of anomalies generally has a smaller $\sigma$ than that of normal time points, which matches our adjacent-concentration inductive bias for anomalies.
|
| 201 |
+
|
| 202 |
+
**Optimization strategy analysis** With only the reconstruction loss, the abnormal and normal time points present similar behavior in their association weights to adjacent time points, corresponding to a contrast value close to 1 (Table 3). Maximizing the association discrepancy forces the series-associations to pay more attention to the non-adjacent area. However, to obtain a better reconstruction, the anomalies must maintain much larger adjacent association weights than normal time points, corresponding to a larger contrast value. But direct maximization causes optimization difficulties for the Gaussian kernel and cannot strongly amplify the difference between normal and abnormal time points as expected (SMD: $1.15 \rightarrow 1.27$ ). The minimax strategy optimizes the prior-association to provide a stronger constraint on the series-association, thereby obtaining more distinguishable contrast values and better performance than direct maximization (SMD: $1.27 \rightarrow 2.39$ ).
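To make the contrast value concrete, the following NumPy sketch computes the adjacent association weight of each time point from a series-association matrix and takes the Abnormal/Normal ratio reported in Table 3. The adjacency radius and the helper names are illustrative assumptions, since the text does not specify the exact neighborhood used.

```python
import numpy as np

def adjacent_weight(S, radius=2):
    """Average association weight each point assigns to its adjacent points.

    S: (N, N) series-association matrix whose rows are attention distributions.
    radius: illustrative adjacency radius (assumption, not given in the paper text).
    """
    N = S.shape[0]
    idx = np.arange(N)
    mask = np.abs(idx[None, :] - idx[:, None]) <= radius
    np.fill_diagonal(mask, False)                      # exclude the point itself (assumption)
    return (S * mask).sum(axis=1) / mask.sum(axis=1)

def contrast_value(S, labels, radius=2):
    """Contrast = mean adjacent weight of abnormal points / that of normal points."""
    w = adjacent_weight(S, radius)
    return w[labels == 1].mean() / w[labels == 0].mean()
```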
|
| 203 |
+
|
| 204 |
+
Table 3: Results of adjacent association weights for Abnormal and Normal time points respectively. Recon, Max and Minimax represent the association learning process that is supervised by reconstruction loss, direct maximization and minimax strategy respectively. A higher contrast value ( $\frac{\text{Abnormal}}{\text{Normal}}$ ) indicates a stronger distinguishability between normal and abnormal time points.
|
| 205 |
+
|
| 206 |
+
<table><tr><td rowspan="2">Dataset Optimization</td><td colspan="3">SMD</td><td colspan="3">MSL</td><td colspan="3">SMAP</td><td colspan="3">SWaT</td><td colspan="3">PSM</td></tr><tr><td>Recon</td><td>Max</td><td>Ours</td><td>Recon</td><td>Max</td><td>Ours</td><td>Recon</td><td>Max</td><td>Ours</td><td>Recon</td><td>Max</td><td>Ours</td><td>Recon</td><td>Max</td><td>Ours</td></tr><tr><td>Abnormal (%)</td><td>1.08</td><td>0.95</td><td>0.86</td><td>1.01</td><td>0.65</td><td>0.35</td><td>1.29</td><td>1.18</td><td>0.70</td><td>1.27</td><td>0.89</td><td>0.37</td><td>1.02</td><td>0.56</td><td>0.29</td></tr><tr><td>Normal (%)</td><td>0.94</td><td>0.75</td><td>0.36</td><td>1.00</td><td>0.59</td><td>0.22</td><td>1.23</td><td>1.09</td><td>0.49</td><td>1.18</td><td>0.78</td><td>0.21</td><td>0.99</td><td>0.54</td><td>0.11</td></tr><tr><td>Contrast ( Abnormal / Normal)</td><td>1.15</td><td>1.27</td><td>2.39</td><td>1.01</td><td>1.10</td><td>1.59</td><td>1.05</td><td>1.08</td><td>1.43</td><td>1.08</td><td>1.14</td><td>1.76</td><td>1.03</td><td>1.04</td><td>2.64</td></tr></table>
|
| 207 |
+
|
| 208 |
+
# 5 CONCLUSION AND FUTURE WORK
|
| 209 |
+
|
| 210 |
+
This paper studies the unsupervised time series anomaly detection problem. Unlike previous works, we learn more informative time-point associations through Transformers. Based on the key observation of the association discrepancy, we propose the Anomaly Transformer, which includes an Anomaly-Attention mechanism with a two-branch structure to embody the association discrepancy. A minimax strategy is adopted to further amplify the difference between normal and abnormal time points. By introducing the association discrepancy, we propose the association-based criterion, which makes the reconstruction performance and the association discrepancy collaborate. Anomaly Transformer achieves state-of-the-art results in an exhaustive set of empirical studies. Future work includes the theoretical study of Anomaly Transformer in light of the classic analysis for autoregression and state space models.
|
| 211 |
+
|
| 212 |
+
# ACKNOWLEDGMENTS
|
| 213 |
+
|
| 214 |
+
This work was supported by the National Megaproject for New Generation AI (2020AAA0109201), National Natural Science Foundation of China (62022050 and 62021002), Beijing Nova Program (Z201100006820041), and BNRist Innovation Fund (BNR2021RC01002).
|
| 215 |
+
|
| 216 |
+
# REFERENCES
|
| 217 |
+
|
| 218 |
+
Ahmed Abdulaal, Zhuanghua Liu, and Tomer Lancewicki. Practical approach to asynchronous multivariate time series anomaly detection and localization. KDD, 2021.
|
| 219 |
+
Ryan Prescott Adams and David J. C. MacKay. Bayesian online changepoint detection. arXiv preprint arXiv:0710.3742, 2007.
|
| 220 |
+
O. Anderson and M. Kendall. Time-series. 2nd edn. J. R. Stat. Soc. (Series D), 1976.
|
| 221 |
+
Paul Boniol and Themis Palpanas. Series2graph: Graph-based subsequence anomaly detection for time series. Proc. VLDB Endow., 2020.
|
| 222 |
+
Markus M. Breunig, Hans-Peter Kriegel, Raymond T. Ng, and Jörg Sander. LOF: identifying density-based local outliers. In SIGMOD, 2000.
|
| 223 |
+
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In NeurIPS, 2020.
|
| 224 |
+
Zekai Chen, Dingshuo Chen, Zixuan Yuan, Xiuzhen Cheng, and Xiao Zhang. Learning graph structures with transformer for multivariate time series anomaly detection in IoT. ArXiv, abs/2104.03466, 2021.
|
| 225 |
+
Haibin Cheng, Pang-Ning Tan, Christopher Potter, and Steven A. Klooster. A robust graph-based algorithm for detection and characterization of anomalies in noisy multivariate time series. ICDM Workshops, 2008.
|
| 226 |
+
Haibin Cheng, Pang-Ning Tan, Christopher Potter, and Steven A. Klooster. Detection and characterization of anomalies in multivariate time series. In SDM, 2009.
|
| 227 |
+
Shohreh Deldari, Daniel V. Smith, Hao Xue, and Flora D. Salim. Time series change point detection with self-supervised contrastive predictive coding. In WWW, 2021.
|
| 228 |
+
Ailin Deng and Bryan Hooi. Graph neural network-based anomaly detection in multivariate time series. AAAI, 2021.
|
| 229 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL*, 2019.
|
| 230 |
+
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
|
| 231 |
+
I. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NeurIPS, 2014.
|
| 232 |
+
Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Ian Simon, Curtis Hawthorne, Noam Shazeer, Andrew M. Dai, Matthew D. Hoffman, Monica Dinculescu, and Douglas Eck. Music transformer. In ICLR, 2019.
|
| 233 |
+
Kyle Hundman, Valentino Constantinou, Christopher Laporte, Ian Colwell, and Tom Söderström. Detecting spacecraft anomalies using lstms and nonparametric dynamic thresholding. KDD, 2018.
|
| 234 |
+
|
| 235 |
+
Eamonn J. Keogh, Taposh Roy, U. Naik, and A. Agrawal. Multi-dataset time-series anomaly detection competition. Competition of International Conference on Knowledge Discovery & Data Mining, 2021. URL https://compete.hexagon-ml.com/practice/competition/39/.
|
| 236 |
+
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
|
| 237 |
+
Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. In *ICLR*, 2020.
|
| 238 |
+
Kwei-Herng Lai, D. Zha, Junjie Xu, and Yue Zhao. Revisiting time series outlier detection: Definitions and benchmarks. In NeurIPS Dataset and Benchmark Track, 2021.
|
| 239 |
+
Dan Li, Dacheng Chen, Lei Shi, Baihong Jin, Jonathan Goh, and See-Kiong Ng. Mad-gan: Multi-variate anomaly detection for time series data with generative adversarial networks. In ICANN, 2019a.
|
| 240 |
+
Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu-Xiang Wang, and Xifeng Yan. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. In NeurIPS, 2019b.
|
| 241 |
+
Zhihan Li, Youjian Zhao, Jiaqi Han, Ya Su, Rui Jiao, Xidao Wen, and Dan Pei. Multivariate time series anomaly detection and interpretation using hierarchical inter-metric and temporal embedding. KDD, 2021.
|
| 242 |
+
F. Liu, K. Ting, and Z. Zhou. Isolation forest. ICDM, 2008.
|
| 243 |
+
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Ching-Feng Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. ICCV, 2021.
|
| 244 |
+
Aditya P. Mathur and Nils Ole Tippenhauer. Swat: a water treatment testbed for research and training on ICS security. In CySWATER, 2016.
|
| 245 |
+
Radford M. Neal. Pattern recognition and machine learning. Technometrics, 2007.
|
| 246 |
+
Daehyung Park, Yuuna Hoshi, and Charles C. Kemp. A multimodal anomaly detector for robot-assisted feeding using an LSTM-based variational autoencoder. RA-L, 2018.
|
| 247 |
+
Adam Paszke, S. Gross, Francisco Massa, A. Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Z. Lin, N. Gimelshein, L. Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019.
|
| 248 |
+
Mathias Perslev, Michael Jensen, Sune Darkner, Poul Jørgen Jennum, and Christian Igel. U-time: A fully convolutional network for time series segmentation applied to sleep staging. In NeurIPS. 2019.
|
| 249 |
+
Lukas Ruff, Nico Gornitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Robert A. Vandermeulen, Alexander Binder, Emmanuel Müller, and M. Kloft. Deep one-class classification. In ICML, 2018.
|
| 250 |
+
T. Schlegl, Philipp Seebock, S. Waldstein, G. Langs, and U. Schmidt-Erfurth. f-anogan: Fast unsupervised anomaly detection with generative adversarial networks. Med. Image Anal., 2019.
|
| 251 |
+
B. Scholkopf, John C. Platt, J. Shawe-Taylor, Alex Smola, and R. C. Williamson. Estimating the support of a high-dimensional distribution. Neural Comput., 2001.
|
| 252 |
+
Lifeng Shen, Zhuocong Li, and James T. Kwok. Timeseries anomaly detection using temporal hierarchical one-class network. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, MariaFlorina Balcan, and Hsuan-Tien Lin (eds.), NeurIPS, 2020.
|
| 253 |
+
Youjin Shin, Sangyup Lee, Shahroz Tariq, Myeong Shin Lee, Okchul Jung, Daewon Chung, and Simon S. Woo. Itad: Integrative tensor-based anomaly detection system for reducing false positives of satellite systems. CIKM, 2020.
|
| 254 |
+
|
| 255 |
+
Ya Su, Y. Zhao, Chenhao Niu, Rong Liu, W. Sun, and Dan Pei. Robust anomaly detection for multivariate time series through stochastic recurrent neural network. KDD, 2019.
|
| 256 |
+
Jian Tang, Zhixiang Chen, A. Fu, and D. Cheung. Enhancing effectiveness of outlier detections for low density patterns. In PAKDD, 2002.
|
| 257 |
+
Shahroz Tariq, Sangyup Lee, Youjin Shin, Myeong Shin Lee, Okchul Jung, Daewon Chung, and Simon S. Woo. Detecting anomalies in space using multivariate convolutional LSTM with mixtures of probabilistic pca. KDD, 2019.
|
| 258 |
+
D. Tax and R. Duin. Support vector data description. Mach. Learn., 2004.
|
| 259 |
+
Robert Tibshirani, Guenther Walther, and Trevor Hastie. Estimating the number of clusters in a dataset via the gap statistic. J. R. Stat. Soc. (Series B), 2001.
|
| 260 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017.
|
| 261 |
+
Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. Autoformer: Decomposition transformers with Auto-Correlation for long-term series forecasting. In NeurIPS, 2021.
|
| 262 |
+
Haowen Xu, Wenxiao Chen, N. Zhao, Zeyan Li, Jiahao Bu, Zhihan Li, Y. Liu, Y. Zhao, Dan Pei, Yang Feng, Jian Jhen Chen, Zhaogang Wang, and Honglin Qiao. Unsupervised anomaly detection via variational auto-encoder for seasonal kpis in web applications. WWW, 2018.
|
| 263 |
+
Takehisa Yairi, Naoya Takeishi, Tetsuo Oda, Yuta Nakajima, Naoki Nishimura, and Noboru Takata. A data-driven health monitoring method for satellite housekeeping data based on probabilistic clustering and dimensionality reduction. IEEE Trans. Aerosp. Electron. Syst., 2017.
|
| 264 |
+
Hang Zhao, Yujing Wang, Juanyong Duan, Congrui Huang, Defu Cao, Yunhai Tong, Bixiong Xu, Jing Bai, Jie Tong, and Qi Zhang. Multivariate time-series anomaly detection via graph attention network. ICDM, 2020.
|
| 265 |
+
Bin Zhou, Shenghua Liu, Bryan Hooi, Xueqi Cheng, and Jing Ye. Beatgan: Anomalous rhythm detection using adversarially generated time series. In *IJCAI*, 2019.
|
| 266 |
+
Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In AAAI, 2021.
|
| 267 |
+
Bo Zong, Qi Song, Martin Renqiang Min, Wei Cheng, Cristian Lumezanu, Dae-ki Cho, and Haifeng Chen. Deep autoencoding gaussian mixture model for unsupervised anomaly detection. In ICLR, 2018.
|
| 268 |
+
|
| 269 |
+
# A PARAMETER SENSITIVITY
|
| 270 |
+
|
| 271 |
+
We set the window size to 100 throughout the main text, which balances the temporal information, memory and computation efficiency. We set the loss weight $\lambda$ based on the convergence property of the training curve.
|
| 272 |
+
|
| 273 |
+
Furthermore, Figure 7 provides the model performance under different choices of the window size and the loss weight. We observe that our model is stable with respect to the window size over extensive datasets (Figure 7 left). Note that a larger window size implies a larger memory cost and a smaller number of sliding windows. Considering performance alone, the best window size is determined by the data pattern; for example, our model performs better with a window size of 50 on the SMD dataset. Besides, we adopt the loss weight $\lambda$ in Equation 5 to trade off the reconstruction loss and the association part. We find that $\lambda$ is stable and easy to tune in the range of 2 to 4. The above results verify that our model is robust to these hyper-parameters, which is essential for applications.
|
| 274 |
+
|
| 275 |
+

|
| 276 |
+
Figure 7: Parameter sensitivity for the sliding window size (left) and the loss weight $\lambda$ (right). The model with $\lambda = 0$ still adopts the association-based criterion but is supervised only by the reconstruction loss.
|
| 277 |
+
|
| 278 |
+

|
| 279 |
+
|
| 280 |
+
# B IMPLEMENTATION DETAILS
|
| 281 |
+
|
| 282 |
+
We present the pseudo-code of Anomaly-Attention in Algorithm 1.
|
| 283 |
+
|
| 284 |
+
Algorithm 1 Anomaly-Attention Mechanism (multi-head version).
|
| 285 |
+
Input: $\mathcal{X}\in \mathbb{R}^{N\times d_{\mathrm{model}}}$ : input; $\mathcal{D} = \left((j - i)^2\right)_{i,j\in \{1,\dots ,N\}}$ $\in \mathbb{R}^{N\times N}$ : relative distance matrix
|
| 286 |
+
Layer params: $\mathrm{MLP}_{\mathrm{input}}$: linear projector for input; $\mathrm{MLP}_{\mathrm{output}}$: linear projector for output
|
| 287 |
+
1: $\mathcal{Q},\mathcal{K},\mathcal{V},\sigma = \mathrm{Split}\big(\mathrm{MLP}_{\mathrm{input}}(\mathcal{X}),\dim = 1\big)$
2: for $(\mathcal{Q}_m,\mathcal{K}_m,\mathcal{V}_m,\sigma_m)$ in $(\mathcal{Q},\mathcal{K},\mathcal{V},\sigma)$ do $\triangleright\ \mathcal{Q}_{m},\mathcal{K}_{m},\mathcal{V}_{m}\in \mathbb{R}^{N\times \frac{d_{\mathrm{model}}}{h}},\ \sigma_{m}\in \mathbb{R}^{N\times 1}$
|
| 288 |
+
3: $\sigma_{m} = \mathrm{Broadcast}(\sigma_{m},\dim = 1)$
4: $\mathcal{P}_m = \frac{1}{\sqrt{2\pi}\sigma_m}\exp\Bigl(-\frac{\mathcal{D}}{2\sigma_m^2}\Bigr)$
5: $\mathcal{P}_m = \mathcal{P}_m / \mathrm{Broadcast}\big(\mathrm{Sum}(\mathcal{P}_m,\dim = 1)\big)$
6: $\mathcal{S}_{m} = \mathrm{Softmax}\left(\sqrt{\frac{h}{d_{\mathrm{model}}}}\mathcal{Q}_{m}\mathcal{K}_{m}^{\mathrm{T}}\right)$
7: $\widehat{\mathcal{Z}}_m = \mathcal{S}_m\mathcal{V}_m$
8: $\widehat{\mathcal{Z}} = \mathrm{MLP}_{\mathrm{output}}\Big(\mathrm{Concat}([\widehat{\mathcal{Z}}_1,\dots,\widehat{\mathcal{Z}}_h],\dim = 1)\Big)$
9: Return $\widehat{\mathcal{Z}}$ $\triangleright$ Keep $\mathcal{P}_m$ and $\mathcal{S}_{m}$, $m = 1,\dots,h$
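As a complement to the pseudo-code above, here is a minimal NumPy sketch of one multi-head Anomaly-Attention layer following Algorithm 1. The helper names, the joint projection matrix `W_in`, and the positivity trick applied to $\sigma$ are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def anomaly_attention(X, W_in, W_out, h):
    """One multi-head Anomaly-Attention layer (Algorithm 1), NumPy sketch.

    X:     (N, d_model) input window
    W_in:  (d_model, 3*d_model + h) joint projection producing Q, K, V and sigma
    W_out: (d_model, d_model) output projection
    h:     number of heads
    Returns the reconstruction-branch output Z_hat and the kept prior-/series-
    associations P, S with shape (h, N, N).
    """
    N, d_model = X.shape
    d_head = d_model // h
    proj = X @ W_in                                               # step 1: joint projection
    Q, K, V, sigma = np.split(proj, [d_model, 2 * d_model, 3 * d_model], axis=1)
    sigma = np.abs(sigma) + 1e-5                                  # keep the scale positive (assumption)

    idx = np.arange(N)
    D = (idx[None, :] - idx[:, None]) ** 2                        # relative distance matrix (j - i)^2

    P_heads, S_heads, Z_heads = [], [], []
    for m in range(h):                                            # step 2: per-head loop
        Qm = Q[:, m * d_head:(m + 1) * d_head]
        Km = K[:, m * d_head:(m + 1) * d_head]
        Vm = V[:, m * d_head:(m + 1) * d_head]
        sm = sigma[:, m:m + 1]                                    # step 3: broadcast sigma over each row
        Pm = np.exp(-D / (2.0 * sm ** 2)) / (np.sqrt(2 * np.pi) * sm)   # step 4: Gaussian kernel prior
        Pm = Pm / Pm.sum(axis=1, keepdims=True)                   # step 5: row-normalize the prior
        Sm = softmax(Qm @ Km.T / np.sqrt(d_head))                 # step 6: series-association
        Z_heads.append(Sm @ Vm)                                   # step 7: reconstruction branch
        P_heads.append(Pm)
        S_heads.append(Sm)

    Z_hat = np.concatenate(Z_heads, axis=1) @ W_out               # step 8: output projection
    return Z_hat, np.stack(P_heads), np.stack(S_heads)            # step 9: keep P and S
```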
|
| 289 |
+
|
| 290 |
+
# C MORE SHOWCASES
|
| 291 |
+
|
| 292 |
+
To obtain an intuitive comparison of main results (Table 1), we visualize the criterion of various baselines. Anomaly Transformer can present the most distinguishable criterion (Figure 8). Besides,
|
| 293 |
+
|
| 294 |
+
for the real-world datasets, Anomaly Transformer also detects the anomalies correctly. Especially for the SWaT dataset (Figure 9(d)), our model can detect the anomalies at an early stage, which is meaningful for real-world applications, such as the early warning of malfunctions.
|
| 295 |
+
|
| 296 |
+

|
| 297 |
+
Figure 8: Visualization of learned criterion for the NeurIPS-TS dataset. Anomalies are labeled by red circles and red segments (first row). The failure cases of the baselines are bounded by red boxes.
|
| 298 |
+
|
| 299 |
+

|
| 300 |
+
Figure 9: Visualization of the model learned criterion in real-world datasets. We select one dimension of the data for visualization. These showcases are from the test set of corresponding datasets.
|
| 301 |
+
|
| 302 |
+
# D ABLATION OF ASSOCIATION DISCREPANCY
|
| 303 |
+
|
| 304 |
+
We present the pseudo-code of the calculation in Algorithm 2.
|
| 305 |
+
|
| 306 |
+
# D.1 ABLATION OF MULTI-LEVEL QUANTIFICATION
|
| 307 |
+
|
| 308 |
+
We average the association discrepancy from multiple layers to obtain the final results (Equation 6). We further investigate the model performance when only a single layer is used. As shown in Table 4, the multiple-layer design achieves the best results, which verifies the effectiveness of the multi-level quantification.
|
| 309 |
+
|
| 310 |
+
Table 4: Model performance under different selections of model layers for the association discrepancy.
|
| 311 |
+
|
| 312 |
+
<table><tr><td>Dataset</td><td colspan="3">SMD</td><td colspan="3">MSL</td><td colspan="3">SMAP</td><td colspan="3">SWaT</td><td colspan="3">PSM</td></tr><tr><td>Metric</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>layer 1</td><td>87.15</td><td>92.87</td><td>89.92</td><td>90.36</td><td>94.11</td><td>92.19</td><td>93.65</td><td>99.03</td><td>96.26</td><td>92.61</td><td>91.92</td><td>92.27</td><td>97.20</td><td>97.50</td><td>97.35</td></tr><tr><td>layer 2</td><td>87.22</td><td>95.17</td><td>91.02</td><td>90.82</td><td>92.41</td><td>91.60</td><td>93.69</td><td>98.75</td><td>96.15</td><td>92.48</td><td>92.50</td><td>92.49</td><td>96.12</td><td>98.62</td><td>97.35</td></tr><tr><td>layer 3</td><td>87.27</td><td>93.89</td><td>90.46</td><td>91.61</td><td>88.81</td><td>90.19</td><td>93.40</td><td>98.83</td><td>96.04</td><td>88.75</td><td>91.22</td><td>89.96</td><td>77.25</td><td>94.53</td><td>85.02</td></tr><tr><td>Multiple-layer</td><td>89.40</td><td>95.45</td><td>92.33</td><td>92.09</td><td>95.15</td><td>93.59</td><td>94.13</td><td>99.40</td><td>96.69</td><td>91.55</td><td>96.73</td><td>94.07</td><td>96.91</td><td>98.90</td><td>97.89</td></tr></table>
|
| 313 |
+
|
| 314 |
+
# D.2 ABLATION OF STATISTICAL DISTANCE
|
| 315 |
+
|
| 316 |
+
We select the following widely-used statistical distances to calculate the association discrepancy:
|
| 317 |
+
|
| 318 |
+
- Symmetrized Kullback-Leibler Divergence (Ours).
|
| 319 |
+
- Jensen-Shannon Divergence (JSD).
|
| 320 |
+
- Wasserstein Distance (Wasserstein).
|
| 321 |
+
- Cross-Entropy (CE).
|
| 322 |
+
- L2 Distance (L2).
|
| 323 |
+
|
| 324 |
+
Table 5: Model performance under different definitions of association discrepancy.
|
| 325 |
+
|
| 326 |
+
<table><tr><td>Dataset</td><td colspan="3">SMD</td><td colspan="3">MSL</td><td colspan="3">SMAP</td><td colspan="3">SWaT</td><td colspan="3">PSM</td></tr><tr><td>Metric</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>L2</td><td>85.26</td><td>74.80</td><td>79.69</td><td>85.58</td><td>81.30</td><td>83.39</td><td>91.25</td><td>56.77</td><td>70.00</td><td>79.90</td><td>87.45</td><td>83.51</td><td>70.24</td><td>96.34</td><td>81.24</td></tr><tr><td>CE</td><td>88.23</td><td>81.85</td><td>84.92</td><td>90.07</td><td>86.44</td><td>88.22</td><td>92.37</td><td>64.08</td><td>75.67</td><td>62.78</td><td>81.50</td><td>70.93</td><td>70.71</td><td>94.68</td><td>80.96</td></tr><tr><td>Wasserstein</td><td>78.80</td><td>71.86</td><td>75.17</td><td>60.77</td><td>36.47</td><td>45.58</td><td>90.46</td><td>57.62</td><td>70.40</td><td>92.00</td><td>71.63</td><td>80.55</td><td>68.25</td><td>92.18</td><td>78.43</td></tr><tr><td>JSD</td><td>85.33</td><td>90.09</td><td>87.64</td><td>91.19</td><td>92.42</td><td>91.80</td><td>94.83</td><td>95.14</td><td>94.98</td><td>83.75</td><td>96.75</td><td>89.78</td><td>95.33</td><td>98.58</td><td>96.93</td></tr><tr><td>Ours</td><td>89.40</td><td>95.45</td><td>92.33</td><td>92.09</td><td>95.15</td><td>93.59</td><td>94.13</td><td>99.40</td><td>96.69</td><td>91.55</td><td>96.73</td><td>94.07</td><td>96.91</td><td>98.90</td><td>97.89</td></tr></table>
|
| 327 |
+
|
| 328 |
+
As shown in Table 5, our proposed definition of the association discrepancy still achieves the best performance. We find that both CE and JSD provide fairly good results; they are close to our definition in principle and can also be used to represent the information gain. The L2 distance is not suitable for the discrepancy because it overlooks the discrete-distribution nature of the associations. The Wasserstein distance also fails on some datasets. The reason is that the prior-association and series-association are already matched exactly in their position indexes, while the Wasserstein distance is not computed point by point and additionally accounts for the distribution offset, which may bring noise to the optimization and detection.
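For reference, the following NumPy helpers sketch the candidate distances between two discrete distributions `p` and `q` (for example, one row of the prior- and series-association); the 1-D Wasserstein distance is taken over the position indexes via cumulative sums. These are generic textbook definitions, not the authors' code.

```python
import numpy as np

def sym_kl(p, q, eps=1e-12):
    """Symmetrized Kullback-Leibler divergence (Ours)."""
    return (np.sum(p * np.log((p + eps) / (q + eps)))
            + np.sum(q * np.log((q + eps) / (p + eps))))

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence (JSD)."""
    m = 0.5 * (p + q)
    return (0.5 * np.sum(p * np.log((p + eps) / (m + eps)))
            + 0.5 * np.sum(q * np.log((q + eps) / (m + eps))))

def wasserstein_1d(p, q):
    """1-D Wasserstein distance over the position indexes."""
    return np.sum(np.abs(np.cumsum(p) - np.cumsum(q)))

def cross_entropy(p, q, eps=1e-12):
    """Cross-entropy of q under p (CE)."""
    return -np.sum(p * np.log(q + eps))

def l2(p, q):
    """L2 distance between the two distribution vectors."""
    return np.sqrt(np.sum((p - q) ** 2))
```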
|
| 329 |
+
|
| 330 |
+
Algorithm 2 Association Discrepancy AssDis $(\mathcal{P},\mathcal{S};\mathcal{X})$ Calculation (multi-head version).
|
| 331 |
+
|
| 332 |
+
Input: time series length $N$ ; layers number $L$ ; heads number $h$ ; prior-association $\mathcal{P}_{\mathrm{all}} \in \mathbb{R}^{L \times h \times N \times N}$ ; series-association $\mathcal{S}_{\mathrm{all}} \in \mathbb{R}^{L \times h \times N \times N}$ ;
|
| 333 |
+
|
| 334 |
+
1: $\mathcal{P}' = \operatorname{Mean}(\mathcal{P}_{\mathrm{all}}, \dim = 1)$ $\triangleright \mathcal{P}' \in \mathbb{R}^{L \times N \times N}$
|
| 335 |
+
2: $\mathcal{S}' = \operatorname{Mean}(\mathcal{S}_{\mathrm{all}},\dim = 1)$
|
| 336 |
+
3: $\mathcal{R}' = \mathrm{KL}\Big((\mathcal{P}',\mathcal{S}'),\dim = -1\Big) + \mathrm{KL}\Big((\mathcal{S}',\mathcal{P}')$ , $\dim = -1\Big)$
|
| 337 |
+
4: $\mathcal{R} = \mathrm{Mean}(\mathcal{R}',\mathrm{dim} = 0)$ $\triangleright \mathcal{R}\in \mathbb{R}^{N\times 1}$
|
| 338 |
+
5: Return $\mathcal{R}$ $\triangleright$ The association discrepancy of each time point
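The following NumPy sketch mirrors Algorithm 2, assuming the prior- and series-associations are already row-normalized distributions; the small `eps` for numerical stability is an added assumption.

```python
import numpy as np

def association_discrepancy(P_all, S_all, eps=1e-12):
    """AssDis(P, S; X) from Algorithm 2, NumPy sketch.

    P_all, S_all: (L, h, N, N) prior- and series-associations whose rows are
                  discrete distributions over the N time points.
    Returns an (N,) vector: the association discrepancy of each time point.
    """
    P = P_all.mean(axis=1)                                             # step 1: average over heads -> (L, N, N)
    S = S_all.mean(axis=1)                                             # step 2
    kl_ps = np.sum(P * (np.log(P + eps) - np.log(S + eps)), axis=-1)   # KL(P || S), row-wise
    kl_sp = np.sum(S * (np.log(S + eps) - np.log(P + eps)), axis=-1)   # KL(S || P), row-wise
    R = kl_ps + kl_sp                                                  # step 3: symmetrized KL -> (L, N)
    return R.mean(axis=0)                                              # step 4: average over layers -> (N,)
```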
|
| 339 |
+
|
| 340 |
+
# D.3 ABLATION OF PRIOR-ASSOCIATION
|
| 341 |
+
|
| 342 |
+
In addition to the Gaussian kernel with a learnable scale parameter, we also try the power-law kernel $P(x; \alpha) = x^{-\alpha}$ with a learnable power parameter $\alpha$ as the prior-association, which is also a unimodal distribution. As shown in Table 6, the power-law kernel achieves good performance on most of the datasets. However, because the scale parameter is easier to optimize than the power parameter, the Gaussian kernel still surpasses the power-law kernel consistently.
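For illustration, the sketch below builds one row-normalized power-law prior-association; the `+1` offset on the distance only avoids a division by zero at $i = j$ and is an assumption not stated in the text. In training, a learnable $\alpha$ would replace the fixed argument.

```python
import numpy as np

def power_law_prior(N, alpha):
    """Row-normalized power-law prior-association P(i, j) ∝ (|i - j| + 1)^(-alpha)."""
    idx = np.arange(N)
    D = np.abs(idx[None, :] - idx[:, None]) + 1.0   # +1 offset: illustrative choice to avoid 0^(-alpha)
    P = D ** (-alpha)
    return P / P.sum(axis=1, keepdims=True)
```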
|
| 343 |
+
|
| 344 |
+
Table 6: Model performance under different definitions of prior-association. Our Anomaly Transformer adopts the Gaussian kernel as the prior. Power-law refers to the power-law kernel.
|
| 345 |
+
|
| 346 |
+
<table><tr><td>Dataset</td><td colspan="3">SMD</td><td colspan="3">MSL</td><td colspan="3">SMAP</td><td colspan="3">SWaT</td><td colspan="3">PSM</td></tr><tr><td>Metric</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>Power-law</td><td>89.41</td><td>92.46</td><td>90.91</td><td>90.95</td><td>85.87</td><td>88.34</td><td>91.95</td><td>58.24</td><td>71.31</td><td>92.52</td><td>93.29</td><td>92.90</td><td>96.46</td><td>98.15</td><td>97.30</td></tr><tr><td>Ours</td><td>89.40</td><td>95.45</td><td>92.33</td><td>92.09</td><td>95.15</td><td>93.59</td><td>94.13</td><td>99.40</td><td>96.69</td><td>91.55</td><td>96.73</td><td>94.07</td><td>96.91</td><td>98.90</td><td>97.89</td></tr></table>
|
| 347 |
+
|
| 348 |
+
# E ABLATION OF ASSOCIATION-BASED CRITERION
|
| 349 |
+
|
| 350 |
+
# E.1 CALCULATION
|
| 351 |
+
|
| 352 |
+
We present the pseudo-code of association-based criterion in Algorithm 3.
|
| 353 |
+
|
| 354 |
+
Algorithm 3 Association-based Criterion AnomalyScore $(\mathcal{X})$ Calculation
|
| 355 |
+
|
| 356 |
+
Input: time series length $N$ ; input time series $\mathcal{X} \in \mathbb{R}^{N \times d}$ ; reconstruction time series $\widehat{\mathcal{X}} \in \mathbb{R}^{N \times d}$ ; association discrepancy AssDis( $\mathcal{P}, \mathcal{S}$ ; $\mathcal{X}$ ) $\in \mathbb{R}^{N \times 1}$ ;
|
| 357 |
+
|
| 358 |
+
1: $\mathcal{C}_{\mathrm{AD}} = \mathrm{Softmax}(-\mathrm{AssDis}(\mathcal{P},\mathcal{S};\mathcal{X}),\dim = 0)$ $\triangleright \mathcal{C}_{\mathrm{AD}}\in \mathbb{R}^{N\times 1}$
|
| 359 |
+
2: $\mathcal{C}_{\mathrm{Recon}} = \mathrm{Mean}\Big((\mathcal{X} - \widehat{\mathcal{X}})^2,\dim = 1\Big)$
|
| 360 |
+
3: $\mathcal{C} = \mathcal{C}_{\mathrm{AD}}\times \mathcal{C}_{\mathrm{Recon}}$ $\triangleright \mathcal{C}\in \mathbb{R}^{N\times 1}$
|
| 361 |
+
|
| 362 |
+
4: Return $\mathcal{C}$ $\triangleright$ Anomaly score for each time point
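A minimal NumPy sketch of Algorithm 3 follows, reusing the association discrepancy from the Algorithm 2 sketch above; note that the softmax over the time dimension assigns small weights to points with a large discrepancy.

```python
import numpy as np

def anomaly_score(X, X_hat, ass_dis):
    """Association-based criterion (Algorithm 3), NumPy sketch.

    X, X_hat: (N, d) input window and its reconstruction
    ass_dis:  (N,) association discrepancy of each time point
    Returns an (N,) anomaly score.
    """
    z = -ass_dis - (-ass_dis).max()
    c_ad = np.exp(z) / np.exp(z).sum()             # step 1: Softmax(-AssDis) over the time dimension
    c_recon = np.mean((X - X_hat) ** 2, axis=1)    # step 2: point-wise reconstruction error
    return c_ad * c_recon                          # step 3: element-wise combination
```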
|
| 363 |
+
|
| 364 |
+
# E.2 ABLATION OF CRITERION DEFINITION
|
| 365 |
+
|
| 366 |
+
We explore the model performance under different definitions of anomaly criterion, including the pure association discrepancy, pure reconstruction performance and different combination methods for association discrepancy and reconstruction performance: addition and multiplication.
|
| 367 |
+
|
| 368 |
+
Association Discrepancy: AnomalyScore $(\mathcal{X}) = \mathrm{Softmax}\Big(-\mathrm{AssDis}(\mathcal{P},\mathcal{S};\mathcal{X})\Big)$
|
| 369 |
+
|
| 370 |
+
Reconstruction: AnomalyScore $(\mathcal{X}) = \left[\| \mathcal{X}_{i,:} - \widehat{\mathcal{X}}_{i,:}\| _2^2\right]_{i = 1,\dots ,N},$
|
| 371 |
+
|
| 372 |
+
$$
|
| 373 |
+
\text{Addition: AnomalyScore}(\mathcal{X}) = \mathrm{Softmax}\Big(-\mathrm{AssDis}(\mathcal{P},\mathcal{S};\mathcal{X})\Big) + \left[\| \mathcal{X}_{i,:} - \widehat{\mathcal{X}}_{i,:}\| _2^2\right]_{i = 1,\dots ,N},
|
| 374 |
+
$$
|
| 375 |
+
|
| 376 |
+
Multiplication (Ours): AnomalyScore $(\mathcal{X}) = \mathrm{Softmax}\Big(-\mathrm{AssDis}(\mathcal{P},\mathcal{S};\mathcal{X})\Big)\odot \Big[\| \mathcal{X}_{i,:} - \widehat{\mathcal{X}}_{i,:}\| _2^2\Big]_{i = 1,\dots ,N}.$ (7)
|
| 377 |
+
|
| 378 |
+
From Table 7, we find that directly using our proposed association discrepancy also achieves good performance, consistently surpassing the competitive baseline THOC (Shen et al., 2020). Besides, the multiplication combination that we use in Equation 6 performs best, since it brings a better collaboration between the reconstruction performance and the association discrepancy.
|
| 379 |
+
|
| 380 |
+
Table 7: Ablation of criterion definition. We also include the state-of-the-art deep model THOC (Shen et al., 2020) for comparison. AssDis and Recon represent the pure association discrepancy and the pure reconstruction performance respectively. Ours refers to our proposed association-based criterion with the multiplication combination.
|
| 381 |
+
|
| 382 |
+
<table><tr><td>Dataset</td><td colspan="3">SMD</td><td colspan="3">MSL</td><td colspan="3">SMAP</td><td colspan="3">SWaT</td><td colspan="3">PSM</td><td>Avg</td></tr><tr><td>Metric</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>F1(%)</td></tr><tr><td>THOC</td><td>79.76</td><td>90.95</td><td>84.99</td><td>88.45</td><td>90.97</td><td>89.69</td><td>92.06</td><td>89.34</td><td>90.68</td><td>83.94</td><td>86.36</td><td>85.13</td><td>88.14</td><td>90.99</td><td>89.54</td><td>88.01</td></tr><tr><td>Recon</td><td>78.63</td><td>65.29</td><td>71.35</td><td>79.15</td><td>78.07</td><td>78.61</td><td>89.38</td><td>56.35</td><td>69.12</td><td>76.81</td><td>86.89</td><td>81.53</td><td>69.84</td><td>94.73</td><td>80.40</td><td>76.20</td></tr><tr><td>AssDis</td><td>86.74</td><td>88.42</td><td>87.57</td><td>91.20</td><td>89.81</td><td>90.50</td><td>91.56</td><td>90.41</td><td>90.98</td><td>97.27</td><td>89.48</td><td>93.21</td><td>97.80</td><td>93.25</td><td>95.47</td><td>91.55</td></tr><tr><td>Addition</td><td>77.16</td><td>70.58</td><td>73.73</td><td>88.08</td><td>87.37</td><td>87.72</td><td>91.28</td><td>55.97</td><td>69.39</td><td>84.34</td><td>81.98</td><td>83.14</td><td>97.60</td><td>97.61</td><td>97.61</td><td>82.32</td></tr><tr><td>Ours</td><td>89.40</td><td>95.45</td><td>92.33</td><td>92.09</td><td>95.15</td><td>93.59</td><td>94.13</td><td>99.40</td><td>96.69</td><td>91.55</td><td>96.73</td><td>94.07</td><td>96.91</td><td>98.90</td><td>97.89</td><td>94.96</td></tr></table>
|
| 383 |
+
|
| 384 |
+
# F CONVERGENCE OF MINIMAX OPTIMIZATION
|
| 385 |
+
|
| 386 |
+
The total loss of our model (Equation 4) contains two parts: the reconstruction loss and the association discrepancy. Towards better control of the association learning, we adopt a minimax strategy for optimization (Equation 5). During the minimization phase, the optimization tends to minimize the association discrepancy and the reconstruction error. During the maximization phase, the optimization tends to maximize the association discrepancy while still minimizing the reconstruction error.
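Since Equations 4 and 5 are not reproduced in this section, the following PyTorch-style sketch only mirrors the two phases described above: a stop-gradient lets the minimization phase pull the prior-association toward the (detached) series-association, while the maximization phase pushes the series-association away from the (detached) prior; both phases keep the reconstruction loss. The function and variable names, and the sign convention, are illustrative assumptions rather than a verbatim transcription of the equations.

```python
import torch

def minimax_losses(x, x_hat, P, S, lam, assdis_fn):
    """Two-phase losses matching the described minimax behavior (PyTorch sketch).

    x, x_hat:  input window and its reconstruction, shape (N, d)
    P, S:      prior- and series-associations with gradients attached
    lam:       trade-off weight lambda
    assdis_fn: differentiable AssDis(P, S) returning a per-point vector
    """
    recon = torch.mean((x - x_hat) ** 2)
    # minimization phase: shrink the discrepancy (only the prior-association branch adapts)
    loss_min = recon + lam * assdis_fn(P, S.detach()).abs().mean()
    # maximization phase: enlarge the discrepancy (only the series-association branch adapts)
    loss_max = recon - lam * assdis_fn(P.detach(), S).abs().mean()
    return loss_min, loss_max
```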
|
| 387 |
+
|
| 388 |
+
We plot the change curve of the above two parts during the training procedure. As shown in Figures 10 and 11, both parts of the total loss can converge within limited iterations on all the five real-world datasets. This nice convergence property is essential for the optimization of our model.
|
| 389 |
+
|
| 390 |
+

|
| 391 |
+
Figure 10: Change curve of reconstruction loss $\| \mathcal{X} - \widehat{\mathcal{X}}\|_{\mathrm{F}}^2$ in real-world datasets during training.
|
| 392 |
+
|
| 393 |
+

|
| 394 |
+
|
| 395 |
+

|
| 396 |
+
|
| 397 |
+

|
| 398 |
+
|
| 399 |
+

|
| 400 |
+
|
| 401 |
+

|
| 402 |
+
Figure 11: Change curve of association discrepancy $\| \mathrm{AssDis}(\mathcal{P},\mathcal{S};\mathcal{X})\| _1$ in real-world datasets during the training process.
|
| 403 |
+
|
| 404 |
+

|
| 405 |
+
|
| 406 |
+

|
| 407 |
+
|
| 408 |
+

|
| 409 |
+
|
| 410 |
+

|
| 411 |
+
|
| 412 |
+
# G MODEL PARAMETER SENSITIVITY
|
| 413 |
+
|
| 414 |
+
In this paper, we set the hyper-parameters $L$ and $d_{\mathrm{model}}$ following the convention of Transformers (Vaswani et al., 2017; Zhou et al., 2021).
|
| 415 |
+
|
| 416 |
+
Furthermore, to evaluate model parameter sensitivity, we investigate the performance and efficiency under different choices for the number of layers $L$ and hidden channels $d_{\mathrm{model}}$ . Generally, increasing the model size can obtain better results but with larger memory and computation costs.
|
| 417 |
+
|
| 418 |
+
Table 8: Model performance under different choices of the number of layers $L$ .
|
| 419 |
+
|
| 420 |
+
<table><tr><td>Dataset</td><td colspan="3">SMD</td><td colspan="3">MSL</td><td colspan="3">SMAP</td><td colspan="3">SWaT</td><td colspan="3">PSM</td></tr><tr><td>Metric</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>L=1</td><td>89.24</td><td>93.73</td><td>91.43</td><td>91.99</td><td>97.59</td><td>94.71</td><td>93.58</td><td>99.35</td><td>96.38</td><td>91.57</td><td>95.33</td><td>93.42</td><td>96.74</td><td>98.09</td><td>97.41</td></tr><tr><td>L=2</td><td>89.26</td><td>94.33</td><td>91.72</td><td>91.89</td><td>94.73</td><td>93.29</td><td>93.79</td><td>98.91</td><td>96.28</td><td>92.37</td><td>94.59</td><td>93.47</td><td>97.22</td><td>98.23</td><td>97.72</td></tr><tr><td>L=3</td><td>89.40</td><td>95.45</td><td>92.33</td><td>92.09</td><td>95.15</td><td>93.59</td><td>94.13</td><td>99.40</td><td>96.69</td><td>91.55</td><td>96.73</td><td>94.07</td><td>96.91</td><td>98.90</td><td>97.89</td></tr><tr><td>L=4</td><td>89.59</td><td>95.76</td><td>92.58</td><td>91.88</td><td>95.40</td><td>93.61</td><td>93.75</td><td>99.13</td><td>96.37</td><td>93.37</td><td>93.45</td><td>93.41</td><td>97.30</td><td>97.58</td><td>97.44</td></tr></table>
|
| 421 |
+
|
| 422 |
+
Table 9: Model performance under different choices of the number of hidden channels $d_{\mathrm{model}}$ . Mem means the averaged GPU memory cost. Time is the averaged running time of 100 iterations during the training process.
|
| 423 |
+
|
| 424 |
+
<table><tr><td>Dataset</td><td colspan="3">SMD</td><td colspan="3">MSL</td><td colspan="3">SMAP</td><td colspan="3">SWaT</td><td colspan="3">PSM</td><td colspan="2">Mem Time</td></tr><tr><td>Metric</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>(GB)</td><td>(s)</td></tr><tr><td>dmodel = 256</td><td>88.83</td><td>91.82</td><td>90.30</td><td>91.96</td><td>97.60</td><td>94.70</td><td>93.74</td><td>99.47</td><td>96.52</td><td>93.91</td><td>93.99</td><td>93.95</td><td>97.38</td><td>98.16</td><td>97.77</td><td>4.9</td><td>0.12</td></tr><tr><td>dmodel = 512</td><td>89.40</td><td>95.45</td><td>92.33</td><td>92.09</td><td>95.15</td><td>93.59</td><td>94.13</td><td>99.40</td><td>96.69</td><td>91.55</td><td>96.73</td><td>94.07</td><td>96.91</td><td>98.90</td><td>97.89</td><td>5.5</td><td>0.15</td></tr><tr><td>dmodel = 1024</td><td>89.44</td><td>96.33</td><td>92.76</td><td>91.80</td><td>94.99</td><td>93.37</td><td>93.58</td><td>99.47</td><td>96.43</td><td>92.02</td><td>95.01</td><td>93.49</td><td>95.78</td><td>98.12</td><td>96.94</td><td>6.6</td><td>0.27</td></tr></table>
|
| 425 |
+
|
| 426 |
+
# H PROTOCOL OF THRESHOLD SELECTION
|
| 427 |
+
|
| 428 |
+
Our paper focuses on unsupervised time series anomaly detection. Experimentally, each dataset includes training, validation and testing subsets. Anomalies are only labeled in the testing subset. Thus, we select the hyper-parameters following the Gap Statistic method (Tibshirani et al., 2001) in K-Means. Here is the selection procedure:
|
| 429 |
+
|
| 430 |
+
- After the training phase, we apply the model to the validation subset (without label) and obtain the anomaly scores (Equation 6) of all time points.
|
| 431 |
+
- We count the frequency of the anomaly scores in the validation subset. We observe that the distribution of anomaly scores is separated into two clusters. The cluster with larger anomaly scores contains a ratio $r$ of the time points, and for our model, $r$ is close to $0.1\%$ , $0.5\%$ and $1\%$ for SWaT, SMD and the other datasets respectively (Table 10).
|
| 432 |
+
- Because the size of the test subset is still inaccessible in real-world applications, we have to fix the threshold to a value $\delta$ that guarantees the anomaly scores of the ratio $r$ of time points in the validation set are larger than $\delta$ , so that these points are detected as anomalies. A minimal sketch of this selection procedure is given after the list.
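The sketch below illustrates this protocol, assuming anomaly scores have already been computed with Equation 6: the threshold $\delta$ is chosen so that a ratio $r$ of validation points exceeds it, and is then applied unchanged to the test scores. The quantile-based helper and the placeholder arrays are illustrative.

```python
import numpy as np

def select_threshold(val_scores, r):
    """Pick delta so that a fraction r of validation anomaly scores lie above it."""
    return np.quantile(val_scores, 1.0 - r)

# usage sketch: r is close to 0.1% for SWaT, 0.5% for SMD and 1% for the other datasets
val_scores = np.random.rand(10_000)          # placeholder for Equation 6 scores on the validation subset
delta = select_threshold(val_scores, r=0.01)
test_scores = np.random.rand(20_000)         # placeholder for scores on the test subset
predictions = test_scores > delta            # points above delta are detected as anomalies
```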
|
| 433 |
+
|
| 434 |
+
Table 10: Statistical results of anomaly score distribution on the validation set. We count the number of time points with corresponding values in several intervals.
|
| 435 |
+
(a) SMD, MSL and SWaT datasets.
|
| 436 |
+
|
| 437 |
+
<table><tr><td>Anomaly Score Interval</td><td>SMD</td><td>MSL</td><td>SWaT</td></tr><tr><td>$(0,+\infty]$</td><td>141681</td><td>11664</td><td>99000</td></tr><tr><td>$[0,10^{-2}]$</td><td>140925</td><td>11537</td><td>98849</td></tr><tr><td>$(10^{-2},0.1]$</td><td>2</td><td>8</td><td>17</td></tr><tr><td>$(0.1,+\infty]$</td><td>754</td><td>119</td><td>134</td></tr><tr><td>Ratio of $(0.1,+\infty]$</td><td>0.53%</td><td>1.02%</td><td>0.14%</td></tr></table>
|
| 438 |
+
|
| 439 |
+
(b) SMAP and PSM datasets.
|
| 440 |
+
|
| 441 |
+
<table><tr><td>Anomaly Score Interval</td><td>SMAP</td><td>PSM</td></tr><tr><td>$(0,+\infty]$</td><td>27037</td><td>26497</td></tr><tr><td>$[0,10^{-3}]$</td><td>26732</td><td>26223</td></tr><tr><td>$(10^{-3},10^{-2}]$</td><td>0</td><td>5</td></tr><tr><td>$(10^{-2},+\infty]$</td><td>305</td><td>269</td></tr><tr><td>Ratio of $(10^{-2},+\infty]$</td><td>1.12%</td><td>1.01%</td></tr></table>
|
| 442 |
+
|
| 443 |
+
Note that directly setting $\delta$ is also feasible. According to the intervals in Table 10, we can fix $\delta$ to 0.1 for the SMD, MSL and SWaT datasets and 0.01 for the SMAP and PSM datasets, which yields performance quite close to setting $r$ (Table 11).
|
| 444 |
+
|
| 445 |
+
Table 11: Model performance. Choose by $\delta$ means that we fix $\delta$ as 0.1 for the SMD, MSL and SWaT datasets, 0.01 for the SMAP and PSM datasets. Choose by $r$ means that we select $r$ as $0.1\%$ for SWaT, $0.5\%$ for SMD and $1\%$ for the other datasets.
|
| 446 |
+
|
| 447 |
+
<table><tr><td>Dataset</td><td colspan="3">SMD</td><td colspan="3">MSL</td><td colspan="3">SMAP</td><td colspan="3">SWaT</td><td colspan="3">PSM</td></tr><tr><td>Metric</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>Choose by δ</td><td>88.65</td><td>97.17</td><td>92.71</td><td>91.86</td><td>95.15</td><td>93.47</td><td>97.69</td><td>98.24</td><td>97.96</td><td>86.02</td><td>95.01</td><td>90.29</td><td>97.69</td><td>98.24</td><td>97.96</td></tr><tr><td>Choose by r</td><td>89.40</td><td>95.45</td><td>92.33</td><td>92.09</td><td>95.15</td><td>93.59</td><td>94.13</td><td>99.40</td><td>96.69</td><td>91.55</td><td>96.73</td><td>94.07</td><td>96.91</td><td>98.90</td><td>97.89</td></tr></table>
|
| 448 |
+
|
| 449 |
+
In real-world applications, the number of anomalies to be inspected is usually decided by the available human resources. Under this consideration, setting the number of detected anomalies through the ratio $r$ is more practical and easier to adjust to the available resources.
|
| 450 |
+
|
| 451 |
+
# I MORE BASELINES
|
| 452 |
+
|
| 453 |
+
In addition to the time series anomaly detection methods, methods for change point detection and time series segmentation can also serve as valuable baselines. Thus, we also include BOCPD (Adams & MacKay, 2007) and TS-CP2 (Deldari et al., 2021) from change point detection and U-Time (Perslev et al., 2019) from time series segmentation for comparison. Anomaly Transformer still achieves the best performance.
|
| 454 |
+
|
| 455 |
+
Table 12: Additional quantitative results for Anomaly Transformer (Ours) in five real-world datasets. The $P, R$ and $F1$ represent the precision, recall and F1-score (as %) respectively. F1-score is the harmonic mean of precision and recall. For these metrics, a higher value indicates a better performance.
|
| 456 |
+
|
| 457 |
+
<table><tr><td>Dataset</td><td colspan="3">SMD</td><td colspan="3">MSL</td><td colspan="3">SMAP</td><td colspan="3">SWaT</td><td colspan="3">PSM</td></tr><tr><td>Metric</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>BOCPD</td><td>70.90</td><td>82.04</td><td>76.07</td><td>80.32</td><td>87.20</td><td>83.62</td><td>84.65</td><td>85.85</td><td>85.24</td><td>89.46</td><td>70.75</td><td>79.01</td><td>80.22</td><td>75.33</td><td>77.70</td></tr><tr><td>TS-CP2</td><td>87.42</td><td>66.25</td><td>75.38</td><td>86.45</td><td>68.48</td><td>76.42</td><td>87.65</td><td>83.18</td><td>85.36</td><td>81.23</td><td>74.10</td><td>77.50</td><td>82.67</td><td>78.16</td><td>80.35</td></tr><tr><td>U-Time</td><td>65.95</td><td>74.75</td><td>70.07</td><td>57.20</td><td>71.66</td><td>63.62</td><td>49.71</td><td>56.18</td><td>52.75</td><td>46.20</td><td>87.94</td><td>60.58</td><td>82.85</td><td>79.34</td><td>81.06</td></tr><tr><td>Ours</td><td>89.40</td><td>95.45</td><td>92.33</td><td>92.09</td><td>95.15</td><td>93.59</td><td>94.13</td><td>99.40</td><td>96.69</td><td>91.55</td><td>96.73</td><td>94.07</td><td>96.91</td><td>98.90</td><td>97.89</td></tr></table>
|
| 458 |
+
|
| 459 |
+
# J LIMITATIONS AND FUTURE WORK
|
| 460 |
+
|
| 461 |
+
Window size As shown in Figure 7 of Appendix A, the model may fail if the window size is too small for association learning. However, Transformers have quadratic complexity with respect to the window size, so a trade-off is needed for real-world applications.
|
| 462 |
+
|
| 463 |
+
Theoretical analysis As a well-established deep model, the empirical performance of Transformers has been explored in previous works, but the theory of such complex deep models is still under-explored. In the future, we will study the theory of Anomaly Transformer for better justification, in light of the classic analysis for autoregression and state space models.
|
| 464 |
+
|
| 465 |
+
# K DATASET
|
| 466 |
+
|
| 467 |
+
Here are the statistical details of the experimental datasets.
|
| 468 |
+
|
| 469 |
+
Table 13: Details of benchmarks. AR represents the true abnormal proportion of the whole dataset.
|
| 470 |
+
|
| 471 |
+
<table><tr><td>Benchmarks</td><td>Applications</td><td>Dimension</td><td>Window</td><td>#Training</td><td>#Validation</td><td>#Test (labeled)</td><td>AR (Truth)</td></tr><tr><td>SMD</td><td>Server</td><td>38</td><td>100</td><td>566,724</td><td>141,681</td><td>708,420</td><td>0.042</td></tr><tr><td>PSM</td><td>Server</td><td>25</td><td>100</td><td>105,984</td><td>26,497</td><td>87,841</td><td>0.278</td></tr><tr><td>MSL</td><td>Space</td><td>55</td><td>100</td><td>46,653</td><td>11,664</td><td>73,729</td><td>0.105</td></tr><tr><td>SMAP</td><td>Space</td><td>25</td><td>100</td><td>108,146</td><td>27,037</td><td>427,617</td><td>0.128</td></tr><tr><td>SWaT</td><td>Water</td><td>51</td><td>100</td><td>396,000</td><td>99,000</td><td>449,919</td><td>0.121</td></tr><tr><td>NeurIPS-TS</td><td>Various Anomalies</td><td>1</td><td>100</td><td>20,000</td><td>10,000</td><td>20,000</td><td>0.018</td></tr></table>
|
| 472 |
+
|
| 473 |
+
# L UCR DATASET
|
| 474 |
+
|
| 475 |
+
The UCR dataset is a very challenging and comprehensive dataset provided by the Multi-dataset Time Series Anomaly Detection Competition of KDD 2021 (Keogh et al., 2021). The whole dataset contains 250 sub-datasets, covering various real-world scenarios. Each UCR sub-dataset contains only one anomaly segment and has a single dimension. These sub-datasets range in length from 6,684 to 900,000 time points and are pre-divided into training and test sets.
|
| 476 |
+
|
| 477 |
+
We also experiment on the UCR dataset for a wider evaluation. As shown in Table 14, our Anomaly Transformer still achieves the state-of-the-art on this challenging benchmark.
|
| 478 |
+
|
| 479 |
+
Table 14: Quantitative results on the UCR dataset. IF refers to IsolationForest (Liu et al., 2008). Ours is our Anomaly Transformer. $P$, $R$ and $F1$ represent the precision, recall and F1-score (%) respectively.
|
| 480 |
+
|
| 481 |
+
<table><tr><td>Metric</td><td>LSTM-VAE</td><td>InterFusion</td><td>OmniAnomaly</td><td>THOC</td><td>Deep-SVDD</td><td>BeatGAN</td><td>LOF</td><td>OC-SVM</td><td>IF</td><td>Ours</td></tr><tr><td>P</td><td>62.08</td><td>60.74</td><td>64.21</td><td>54.61</td><td>47.08</td><td>45.20</td><td>41.47</td><td>41.14</td><td>40.77</td><td>72.80</td></tr><tr><td>R</td><td>97.60</td><td>95.20</td><td>86.93</td><td>80.83</td><td>88.91</td><td>88.42</td><td>98.80</td><td>94.00</td><td>93.60</td><td>99.60</td></tr><tr><td>F1</td><td>75.89</td><td>74.16</td><td>73.86</td><td>65.19</td><td>61.56</td><td>59.82</td><td>58.42</td><td>57.23</td><td>56.80</td><td>84.12</td></tr></table>
|
anomalytransformertimeseriesanomalydetectionwithassociationdiscrepancy/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:6b6abb116e0909fb13321a48d8f1c7846786d7966c409f506c208613e4ea8af1
|
| 3 |
+
size 1448820
|
anomalytransformertimeseriesanomalydetectionwithassociationdiscrepancy/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:af85ff86241d4a11a8318a2cf3ba98b855b07969dad6280d85e7f86969a9de8e
|
| 3 |
+
size 674392
|
assessinggeneralizationofsgdviadisagreement/9244bf2e-51d2-4901-8c9d-762b80ce69da_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:30701a1da52111edd04d83c70bba86ef5caa581603481d835a508c3de79712df
|
| 3 |
+
size 177795
|
assessinggeneralizationofsgdviadisagreement/9244bf2e-51d2-4901-8c9d-762b80ce69da_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b083a36f66b86635c62817c74f75fbf4bb8e35e453f6a68e140a08dfb200ad67
|
| 3 |
+
size 212335
|