Duplicate from IbrahimAlAzhar/limitation-generation-dataset-bagels
{
"File Number": "1081",
"Title": "So3krates: Equivariant attention for interactions on arbitrary length-scales in molecular systems",
"Limitation": "A limitation of the current implementation of SO3KRATES is that the spherical neighborhoods \\tilde{N} in eq. (14) are computed from all pairwise distances in SPHC space. An alternative implementation could use a space-partitioning scheme to find neighborhoods more efficiently. In a broader context, our work falls into the category of approaches that can help to reduce the vast computational complexity of molecular and material simulations. This can accelerate novel drug and material design, which holds the promise of tackling societal challenges such as climate change and sustainable energy supply [53]. Of course, our method could also be used for nefarious applications, e.g. the design of chemical warfare agents, but this is true for all quantum chemistry methods. Future research will focus on applications of SO3KRATES to materials and bio-molecules, which are typical examples of chemical systems where an accurate description of non-local effects is necessary to produce novel insights. Efficient treatment of non-local effects in point cloud data goes beyond the domain of quantum chemistry. One way of representing non-local dependencies is offered by non-local neural networks [54]. In comparison to the presented approach, they compute relations in feature space rather than in Euclidean space, making them incapable of capturing direct geometric relations in Euclidean space. Capturing such relations, however, might be necessary if the relative orientation of objects far apart from each other plays a role in distinguishing different objects.",
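The space-partitioning alternative mentioned in the limitation can be made concrete with a short sketch. The following is an illustrative comparison (not part of the SO3KRATES codebase) between an all-pairs neighborhood search in SPHC space and a uniform-grid partition; the array `chi` and the function names `neighbors_bruteforce`/`neighbors_grid` are hypothetical:

```python
import numpy as np
from collections import defaultdict
from itertools import product

def neighbors_bruteforce(chi, r):
    # All-pairs O(n^2) search, as in the current implementation:
    # every pairwise distance in SPHC space is computed explicitly.
    d = np.linalg.norm(chi[:, None, :] - chi[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude self
    return [set(np.flatnonzero(row <= r)) for row in d]

def neighbors_grid(chi, r):
    # Space-partitioning alternative: hash points into cells of edge length r,
    # then compare each point only against points in its own and adjacent cells.
    cells = defaultdict(list)
    keys = np.floor(chi / r).astype(int)
    for i, k in enumerate(map(tuple, keys)):
        cells[k].append(i)
    dims = chi.shape[1]
    offsets = list(product((-1, 0, 1), repeat=dims))
    out = []
    for i, k in enumerate(map(tuple, keys)):
        cand = [j for off in offsets
                for j in cells.get(tuple(np.add(k, off)), []) if j != i]
        out.append({j for j in cand if np.linalg.norm(chi[i] - chi[j]) <= r})
    return out
```

Both routines return identical neighbor sets; the grid variant avoids the quadratic distance matrix when points are spread over many cells (although its 3^d cell stencil limits it to low-dimensional coordinates).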
"Reviewer Comment": "Reviewer_2: Overall the authors have come up with a principled approach for including long ranged geometric features into a message passing architecture. I think the method is interesting and sufficiently novel for publication, and the results that are presented are favourable but perhaps slightly incomplete (see significance section below).\nOriginality\nThe idea to perform some sort of convolution with spherical harmonic wavelets to provide additional geometric input is not new, and nor is the discussion of the equivariance of this operation. However, the interpretation of these SPHC features as living in a coordinate space where interesting neighbourhoods can be learned for message passing that correspond to very distant communication in Euclidean space is an interesting new insight that I had not thought about before.\nQuality\nThe main takeaway for me was that the proposed model is (1) solving the specific case of cumulene highlighted in literature before (2) achieving similar performance on MD17 as a number of other methods, but with a small number of parameters and higher train/test speed. I have comments on (1) in the Questions section below and (2) in the significance section below. However I think overall the quality of results meets the publication bar because the lightweight nature of the model seems a promising direction.\nClarity\nThe paper is easy to follow.\nSignificance\nI feel like the significance of the paper suffers slightly from a couple of missing baselines:\n(1) I’m not an expert on ML forcefields, but I think SpookyNet makes more of a deliberate effort to model non-local behaviour than NequIP, so I was not sure why NequIP was chosen as the baseline in fig 2. A more considered description of how models like SpookyNet, PhysNet, sGDML etc capture non-locality and what sort of non-locality SPHC is particularly adept (or not!) 
at capturing might be more educational than the somewhat obvious (and already published) statement that a NequIP model cannot capture effects larger than it can see.\n(2) I’m not sure I know how to evaluate the differences between the numbers on MD17: Although the statement that the errors are low and the parameter counts are low are true, all the models in table 1 have ~10^6 parameters in order of magnitude and all have broadly similar performance on MD17. Do these differences matter in practice? The timings of So3krates in fig 1d seem the most convincing difference to existing methods, but some of the baseline models are missing from this plot and DimeNet (the closest competitor in timing) is missing from the rest of the paper.\n(3) SpookyNet has ~10-20 meV/A on the full QM7-X set, but this is not mentioned so I was not sure why So3krates was compared with sGDML rather than something like SpookyNet. Also I think I missed why So3krates is only tested on a subset of QM7-X.\nI also flag a slight concern that this paper mainly addresses the cumulene problem which is not of broad interest to NeurIPS, but I personally found this interesting so I did not factor this into my score.\nQuestions:\nThe proposed SPHC solution is well suited to the specific angle-dependent challenge of cumulene, however there are no other examples of non-local interaction such as dipole-dipole or charge-dipole interaction highlighted in the paper. Is there another case study where SPHC is the correct representation to capture the non-local effects and a 6A cutoff is so clearly deficient? How does So3krates do at modeling water clusters or the Au2-MgO system considered in the SpookyNet paper or other examples originating from Ko et al?\nLimitations:\nLocality is often built-in to neural-network based force fields to both (1) limit computational cost and (2) enable generalisation from smaller sub-systems to larger simulations. Do we lose these when we use a non-local model? 
To what extent can So3krates be scaled to something like the 100 million atom system recently run on 128 GPUs using Allegro [1]. Perhaps such an experiment is hard to run, but some evidence that the speed benefit of So3krates on MD17 translates to a large scale simulation would help the case for non-local model development.\n[1] https://arxiv.org/pdf/2204.05249.pdf\nEthics Flag: No\nSoundness: 3 good\nPresentation: 3 good\nContribution: 3 good\n\nReviewer_3: Strengths\nSPHC representation helps the model capture global dependencies in a natural way.\nThe entire model is equivariant, which is a desirable property to have for its data efficiency.\nThe proposed method requires very few parameters to achieve good performance.\nIt is also very fast for inference, which is an important property to have for many applications like MD simulations where millions of inference steps are needed.\nWeaknesses\nThe paper does not present results from multiple, diverse datasets (unlike many other papers in this area). So, it is unclear how generalizable the results are.\nThe results in table 1 show that Nequip obtains better results (though at higher training & inference cost). Therefore, this method is only suitable when computational efficiency is more important than accuracy.\nQuestions:\nHow does the performance vary with the number of nodes? Spherical neighborhoods computation and self-attention could be very expensive for large systems like bio-molecules and materials.\nMD17 is a small, relatively homogenous dataset. Do the authors have a sense for how the model would perform on a large and diverse dataset like OC20 (opencatalystproject.org)?\nLimitations:\nSeems adequate.\nEthics Flag: No\nSoundness: 2 fair\nPresentation: 3 good\nContribution: 2 fair\n\nReviewer_4: The proposal is sound and convincing, and echoes the long established methods in signal processing. 
Experimental demonstration and comparison to prior art are convincing.\nQuestions:\nThe idea of using harmonic functions like wavelets in deep learning has been pursued before [for example, see \"Wavelet Neural Networks: With Applications in Financial Engineering, Chaos, and Classification\", Antonios K. Alexandridis, Achilleas D. Zapranis, John Wiley & Sons, 2014]. How does the current proposal compare to those uses?\nLimitations:\nThe authors discuss the limits of the current applications to materials discovery, while projecting a wider scope of application to other spatial modeling needs like in computer vision. However, in those wider areas, there have been more active considerations of other representations to capture spatio-temporal dependences. Some review of where the proposal approach could distinguish itself can add to the argument.\nEthics Flag: No\nSoundness: 3 good\nPresentation: 3 good\nContribution: 3 good",
"abstractText": "The application of machine learning methods in quantum chemistry has enabled the study of numerous chemical phenomena, which are computationally intractable with traditional ab-initio methods. However, some quantum mechanical properties of molecules and materials depend on non-local electronic effects, which are often neglected due to the difficulty of modeling them efficiently. This work proposes a modified attention mechanism adapted to the underlying physics, which allows us to recover the relevant non-local effects. Namely, we introduce spherical harmonic coordinates (SPHCs) to reflect higher-order geometric information for each atom in a molecule, enabling a non-local formulation of attention in the SPHC space. Our proposed model SO3KRATES – a self-attention based message passing neural network – uncouples geometric information from atomic features, making them independently amenable to attention mechanisms. Thereby we construct spherical filters, which extend the concept of continuous filters in Euclidean space to SPHC space and serve as the foundation for a spherical self-attention mechanism. We show that, in contrast to other published methods, SO3KRATES is able to describe non-local quantum mechanical effects over arbitrary length scales. Further, we find evidence that the inclusion of higher-order geometric correlations increases data efficiency and improves generalization. SO3KRATES matches or exceeds state-of-the-art performance on popular benchmarks, notably requiring a significantly lower number of parameters (0.25–0.4x) while at the same time giving a substantial speedup (6–14x for training and 2–11x for inference) compared to other models.",
"1 Introduction": "Atomistic simulations use long time-scale molecular dynamics (MD) trajectories to predict macroscopic properties that arise from interactions on the microscopic scale [1–3]. Their predictive reliability is determined by the accuracy of the underlying force field (FF), which needs to be queried at every time step. This quickly becomes a computational bottleneck if the forces are determined from first principles, which may be required for accurate results. To that end, machine learning FFs (MLFFs) offer a computationally more efficient, yet accurate empirical alternative to expensive ab-initio methods [2, 4–24].\nContact: thorbenjan.frank@googlemail.com, klaus-robert.mueller@tu-berlin.de. Code: https://github.com/thorben-frank/mlff\n36th Conference on Neural Information Processing Systems (NeurIPS 2022).\nIn recent years, Geometric Deep Learning has become a popular design paradigm, which exploits relevant symmetry groups of the underlying learning problem by incorporating a geometric prior [12, 25, 26]. This effectively restricts the learnable functions of the model to a subspace with a meaningful inductive bias. Prominent examples of such models are convolutional neural networks (CNNs) [27], which are equivariant w.r.t. the group of translations, or graph neural networks (GNNs) [28], which are invariant w.r.t. node permutation.\nFor molecular property prediction, it has been shown that equivariance w.r.t. the 3D rotation group SO(3) greatly improves the data efficiency and accuracy of the learned FFs [29–32]. To achieve equivariance, architectures either rely on feature expansions in terms of spherical harmonics (SH) [33] or explicitly include (dihedral) angles [29, 34].
While the latter scales quadratically (cubically) in the number of neighboring atoms and has been shown to be geometrically incomplete [35], the calculation of spherical harmonics scales only linearly in the number of neighboring atoms, which makes them a fast and accurate alternative [30, 32, 36].\n* Reference times were taken from [34]. As our own timings were measured on a different GPU, we decreased the reported times according to the speedup factors reported in [37]. For full details, see appendix A.6.\nHowever, current higher-order geometric representations based on SHs usually result in expensive transformations, since an individual feature channel per SH degree (and order) is required. As a result, going to higher degrees is computationally expensive and comes at the price of increasing complexity, resulting in state-of-the-art (SOTA) models with millions of parameters [30, 32, 34]. Yet, in order to be applicable to large molecular structures, models are required to be both efficient and accurate on all length scales.\nNon-local electronic effects have been outlined as one of the major challenges for a new generation of MLFFs [21]. They result in non-local, higher-order geometric relations between atoms. Most current architectures implicitly assume locality of interactions (expressed through a local neighborhood), which prohibits an efficient description of all relevant atomic interactions at larger scales. Simply increasing the cutoff radius used to determine local neighborhoods is not an adequate solution, since it only shifts the problem to larger length scales [30].\nIn this work, we propose spherical harmonic coordinates (SPHCs), which encode higher-order geometric information for each node in a molecular graph (Fig. 1e). This is in stark contrast to current approaches, which consider molecules as three-dimensional point clouds with learned features and fixed atomic coordinates: we propose to make the SPHCs themselves a learned quantity.
Through localization in the space of SPHCs (Fig. 1f), models are able to efficiently describe electronic effects that are non-local in three-dimensional Euclidean space (Fig. 1g).\nWe then present SO3KRATES (Fig. 1b), a self-attention based message-passing neural network (MPNN), which decouples atomic features and SPHCs and updates them individually (Fig. 1a). This resembles ideas from equivariant graph neural networks [25], but allows going to arbitrarily high geometric orders. The separation of higher-order geometric and feature information makes it possible to overcome the parametric and computational complexity usually encountered in models with higher-order geometric representations, since we only require a single feature channel (instead of one per SH degree and order). Thus, SO3KRATES resembles some early architectures like SCHNET [10] or PHYSNET [19] in parametric simplicity. We further show that SO3KRATES outperforms the popular SGDML [38] kernel model by a large margin in the low-data regime, a domain which has so far been considered to be dominated by kernel machines [21]. Numerical evidence suggests that the data efficiency of SO3KRATES is directly related to the maximal degree of geometric information encoded in the SPHCs. We then apply SO3KRATES to the well-established MD17 benchmark and show that our model achieves SOTA results, despite its light-weight structure and having only 0.25–0.4x the number of parameters of competitive architectures (Fig. 1c), while achieving speedups of 6–14x and 2–11x for training and inference, respectively (Fig. 1d).\nAlthough we focus on quantum chemistry applications in this work, the developed methods are also applicable to other fields where long-ranged correlations in three-dimensional data are relevant. For example, models based on SPHCs may also be applicable to tasks like 3D shape classification or computer vision.",
"2 Preliminaries and Related Work": "In the following, we review the most important concepts our method builds upon and relate it to prior work.\nMessage Passing Neural Networks MPNNs [14] carry over many of the benefits of convolutions to unstructured domains and have thus become one of the most promising approaches for the description of molecular properties. Their general working principle relies on the repeated iteration of message passing (MP) steps, which can be phrased as follows [14, 25]:\nm_{ij} = m(f_i, f_j, \\vec{r}_{ij}) \\qquad (1)\nm_i = \\sum_{j \\in N(i)} m_{ij} \\qquad (2)\nf'_i = u(f_i, m_i) \\qquad (3)\nHere, m_{ij} is the message between atoms i and j computed with the message function m(\\cdot), m_i is the aggregation of all messages in the neighborhood N(i) of atom i, and u(\\cdot) is an update function returning updated features f'_i based on the current features f_i and message m_i. The neighborhood N(i) consists of all atoms which lie within a given cutoff radius around the atomic position \\vec{r}_i, which ensures linear scaling in the number of atoms n. While earlier variants parametrized messages only in terms of inter-atomic distances [13, 19], more recent approaches also take higher-order geometric information into account [25, 29, 31, 32, 39, 40].\nMolecules as Point Clouds A molecule can be considered as a point cloud of n atoms P_{3D}(R, F), where R = (\\vec{r}_1, ..., \\vec{r}_n) denotes the set of atomic positions \\vec{r}_i \\in R^3 and F = (f_1, ..., f_n) is the set of rotationally invariant atomic descriptors, or features, f_i \\in R^F. We write the distance vector pointing from the position of atom i to the position of atom j as \\vec{r}_{ij} = \\vec{r}_j - \\vec{r}_i, the distance as r_{ij} = \\|\\vec{r}_{ij}\\|_2 and the normalized distance vector as \\hat{r}_{ij} = \\vec{r}_{ij}/r_{ij}. Given the point cloud, a density over Euclidean space assigning a vector value to each point \\vec{r} can be constructed as\n\\rho(\\vec{r}) = \\sum_{i=1}^{n} \\delta(\\|\\vec{r}_i - \\vec{r}\\|_2) \\cdot f_i \\qquad (4)\nwhere \\delta is the Dirac delta function. It can be shown that applying a convolutional filter on \\rho(\\vec{r}) resembles the update steps used in MPNNs [41].\nEquivariance Given a set of transformations that act on a vector space A as S_g : A \\mapsto A, to which we associate an abstract group G, a function f : A \\mapsto B is said to be equivariant w.r.t. G if\nf(S_g x) = T_g f(x) \\qquad (5)\nwhere T_g : B \\mapsto B is an equivalent transformation on the output space [25]. Thus, in order to say that f is equivariant, it must hold that under transformation of the input, the output transforms “in the same way”. While equivariance has been a popular concept in signal processing for decades (cf. e.g. [42] or wavelet neural networks [43]), recent years have seen efforts to design group-equivariant NNs and kernel methods, since respecting relevant symmetries builds an important inductive bias [12, 44, 45]. Examples are CNNs [27], which are equivariant w.r.t. translation, GNNs [14, 28], which are invariant (T_g = I) w.r.t. permutation, or architectures which are equivariant w.r.t. the SO(3) group [25, 31, 33, 36, 46]. In this work, we consider the SO(3) group of rotations, such that A is the Euclidean space R^3, where the corresponding group actions are given by rotation matrices R_\\theta \\in R^{3 \\times 3}.\nSpherical Harmonics The spherical harmonics are special functions defined on the surface of the sphere S^2 = {\\hat{r} \\in R^3 : \\|\\hat{r}\\|_2 = 1} and form an orthonormal basis for the irreducible representations (irreps) of SO(3). In the context of tensor field networks [33], they have been introduced as elementary building blocks for SO(3)-equivariant neural networks. The spherical harmonics are commonly denoted as Y^m_l(\\hat{r}) : S^2 \\mapsto R, where the degree l determines all possible values of the order m \\in {-l, ..., +l}. They transform under rotation as\nY^m_l(R_\\theta \\hat{r}) = \\sum_{m'} D^l_{mm'}(R_\\theta) Y^{m'}_l(\\hat{r}) \\qquad (6)\nwhere D^l_{mm'}(R_\\theta) are the entries of the Wigner-D matrix D^l(R_\\theta) \\in R^{(2l+1) \\times (2l+1)} [47]. Based on the spherical harmonics, we define a vector-valued function Y^{(l)} : S^2 \\mapsto R^{2l+1} for each degree l, with entries Y^m_l for all valid orders m of a given degree l. Since Y^{(l)}(R_\\theta \\hat{r}) = D^l(R_\\theta) Y^{(l)}(\\hat{r}) (cf. eq. (6)), Y^{(l)} is equivariant w.r.t. SO(3).\nTensor Product Contractions The irreps Y^{(l_1)} and Y^{(l_2)} can be coupled by computing their tensor product Y^{(l_1)} \\otimes Y^{(l_2)}, which can equivalently be expressed as a direct sum [33, 48]\nY^{(l_1)} \\otimes Y^{(l_2)} = \\bigoplus_{l_3=|l_1-l_2|}^{l_1+l_2} \\tilde{Y}^{(l_3)}, \\quad \\tilde{Y}^{(l_3)} := Y^{(l_1)} \\otimes_{l_3} Y^{(l_2)} \\qquad (7)\nwhere the entry of order m_3 for the coupled irreps \\tilde{Y}^{(l_3)} is given by\n\\tilde{Y}^{l_3}_{m_3} = \\sum_{m_1=-l_1}^{l_1} \\sum_{m_2=-l_2}^{l_2} C^{l_1,l_2,l_3}_{m_1,m_2,m_3} Y^{l_1}_{m_1} Y^{l_2}_{m_2} \\qquad (8)\nand C^{l_1,l_2,l_3}_{m_1,m_2,m_3} are the so-called Clebsch-Gordan coefficients. In the following, we denote the tensor product of degrees l_1 and l_2 followed by “contraction” to l_3 (meaning the irreps of degree l_3 in the direct sum representation of their tensor product) as Y^{(l_1)} \\otimes_{l_3} Y^{(l_2)}, which is a mapping of the form R^{(2l_1+1) \\times (2l_2+1)} \\mapsto R^{2l_3+1}, since m_3 \\in {-l_3, ..., l_3}.",
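The generic update in eqs. (1)-(3) can be illustrated with a minimal sketch. The message weight below (an exponential of the distance) and the residual update are illustrative placeholders, not the learned functions of any model discussed here:

```python
import numpy as np

def mp_step(R, F, r_cut=5.0):
    """One message-passing step in the sense of eqs. (1)-(3): messages m_ij
    built from geometry and features, summed over the neighborhood N(i),
    followed by an update of the per-atom features."""
    n = len(R)
    d = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    neighborhood = (d <= r_cut) & ~np.eye(n, dtype=bool)  # N(i): atoms within r_cut
    w = np.where(neighborhood, np.exp(-d), 0.0)           # toy message m_ij = exp(-r_ij) * f_j
    m = w @ F                                             # eq. (2): m_i = sum over j in N(i)
    return F + m                                          # eq. (3): toy residual update u(f_i, m_i)
```

Because distances and the neighborhood mask are symmetric under relabeling of the atoms, the step is permutation-equivariant, which is the invariance property GNN-based FFs inherit.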
"3 Methods": "In the following, we describe the main methodological contributions of this work. We introduce the concept of an adapted point cloud P_{3D}(R, X, F), which incorporates the set of spherical harmonic coordinates (SPHCs) X = (\\chi_1, ..., \\chi_n) (see below) in addition to features F and Euclidean coordinates R. However, contrary to R, the SPHCs X are refined during the message-passing updates. Having SPHCs as part of the molecular point cloud extends the idea of current MPNNs, which learn message functions on R only. Instead, we learn a message function m (cf. eq. (1)) on both the (fixed) atomic coordinates R and the SPHCs X. This adapted message-passing scheme allows the model to learn non-local geometric corrections. Based on these design principles, we propose the SO3KRATES architecture.\nInitialization Feature vectors are initialized from the atomic numbers z_i \\in N (denoting which chemical element an atom belongs to) by an embedding map\nf_i = f_{emb}(z_i) \\qquad (9)\nwhere f_{emb} : N \\mapsto R^F. We define SPHCs as the concatenation over the degrees L := {l_{min}, ..., l_{max}}\n\\chi = [\\chi^{(l_{min})}, ..., \\chi^{(l_{max})}] \\in R^{\\sum_{l \\in L} (2l+1)} \\qquad (10)\nsuch that their transformation under rotation can be expressed in terms of concatenated Wigner-D matrices (see appendix A.1). The short-hand \\chi^{(l)} \\in R^{2l+1} refers to the subset of SPHCs with degree l. They are initialized as\n\\chi^{(l)}_i = \\frac{1}{C_i} \\sum_{j \\in N(i)} \\phi_{r_{cut}}(r_{ij}) \\cdot Y^{(l)}(\\hat{r}_{ij}) \\qquad (11)\nwhere C_i = \\sum_{j \\in N(i)} \\phi_{r_{cut}}(r_{ij}), \\phi_{r_{cut}} : R \\mapsto R is the cosine cutoff function [6], and the sum runs over the neighborhood N(i) of atom i.\nMessage Passing Update Two branches of attention-weighted MP steps are defined for the feature vectors f and the SPHCs \\chi (see Fig. 1a). After initialization (eqs. (9) and (11)), the features are updated as\nf'_i = f_i + \\sum_{j \\in N(i)} \\phi_{r_{cut}}(r_{ij}) \\cdot \\alpha_{ij} \\cdot f_j \\qquad (12)\nwhere \\alpha_{ij} \\in R are self-attention [49, 50] coefficients (see below). In analogy to the feature vectors, it is possible to define an MP update for the SPHCs as\n\\chi'^{(l)}_i = \\chi^{(l)}_i + \\sum_{j \\in N(i)} \\phi_{r_{cut}}(r_{ij}) \\cdot \\alpha^{(l)}_{ij} \\cdot Y^{(l)}(\\hat{r}_{ij}) \\qquad (13)\nwhere individual attention coefficients \\alpha^{(l)}_{ij} \\in R for each degree of the SPHCs are computed using multi-head attention [49]. However, with this definition, both MP updates are limited to local neighborhoods N(i). To be able to model non-local effects, we introduce the SPHC distance matrix X_\\chi \\in R^{n \\times n} with entries \\chi_{ij} = \\|\\chi_i - \\chi_j\\|_2, i.e. distances between two atoms i and j in SPHC space for all possible pair-wise combinations of the n atoms. To have uniform scales, we further apply the softmax along each row of X_\\chi to generate a rescaled matrix \\tilde{X}_\\chi = softmax(X_\\chi) with entries \\tilde{\\chi}_{ij}. A polynomial cutoff function \\phi_{\\tilde{\\chi}_{cut}} [29] is then applied to \\tilde{X}_\\chi to define spherical neighborhoods \\tilde{N}(i) (see A.2), which may include atoms that are far away in Euclidean space (see Fig. 1f). The spherical cutoff distance is chosen as \\tilde{\\chi}_{cut} = 1/n to ensure that spherical neighborhoods remain small, even when going to larger molecules. We then incorporate non-local geometric corrections into the MP update of the SPHCs as\n\\chi'^{(l)}_i = \\chi^{(l)}_i + \\underbrace{\\sum_{j \\in N(i)} \\phi_{r_{cut}}(r_{ij}) \\cdot \\alpha^{(l)}_{ij} \\cdot Y^{(l)}(\\hat{r}_{ij})}_{local in R^3} + \\underbrace{\\sum_{j \\in \\tilde{N}(i)} \\phi_{\\tilde{\\chi}_{cut}}(\\tilde{\\chi}_{ij}) \\cdot \\alpha^{(l)}_{ij} \\cdot Y^{(l)}(\\hat{r}_{ij})}_{local in \\chi, but non-local in R^3} \\qquad (14)\nIn the first part of the experiments we show how geometric corrections from SPHC space allow for modelling non-local quantum effects that are inaccessible to current architectures. In the second part, we use a SO3KRATES model without geometric corrections, which makes it a traditional MPNN in the sense of only localizing in R^3. We find this architecture to be highly parameter-, data- and time-efficient while capable of reaching SOTA results.\nSpherical Filter and Self-Attention The self-attention coefficients in eqs. (12)-(14) are calculated as\n\\alpha_{ij} = f^T_i (w_{ij} \\odot f_j) / \\sqrt{F} \\qquad (15)\nwhere w_{ij} \\in R^F is the output of a filter-generating function and \\odot denotes the element-wise product. The filter maps the Euclidean distance r_{ij} and the per-degree SPHC distances \\chi^{(l)}_{ij} = \\|\\chi^{(l)}_j - \\chi^{(l)}_i\\|_2 between the current SPHCs of atoms i and j into the feature space R^F (as a short-hand, we write the vector containing all per-degree SPHC distances as [\\chi^{(l)}_{ij}]_{l \\in L}). It is built as the linear combination of two filter-generating functions\nw_{ij} = \\underbrace{\\phi_r(r_{ij})}_{radial filter} + \\underbrace{\\phi_s([\\chi^{(l)}_{ij}]_{l \\in L})}_{spherical filter} \\qquad (16)\nwhich separately act on the Euclidean and SPHC distances. We call \\phi_r : R \\mapsto R^F the radial filter function and \\phi_s : R^{|L|} \\mapsto R^F the spherical filter function (an ablation study for \\phi_s can be found in appendix A.4). Since per-atom features f_i, inter-atomic distances r_{ij}, and per-degree distances \\chi^{(l)}_{ij} are invariant under rotations (proof in appendix A.1), so are the self-attention coefficients \\alpha_{ij}.\nWhile we choose to pass the per-degree norms directly into the filter-generating function \\phi_s, future work might explore alternative metrics (instead of the L2 norm) or an expansion in terms of basis functions, as is common practice for inter-atomic distances (see appendix A.3, eq. (30)).\nAtomwise Interaction After each MP update, features and SPHCs are coupled with each other according to\nf'_i = f_i + \\phi_1(f_i, [\\bar{\\chi}^{(l)}_i]_{l \\in L}, [\\bar{\\tilde{\\chi}}^{(l)}_i]_{l \\in L}) \\qquad (17)\n\\chi'^{(l)}_i = \\chi^{(l)}_i + \\phi^{(l)}_2(f_i, [\\bar{\\chi}^{(l)}_i]_{l \\in L}, [\\bar{\\tilde{\\chi}}^{(l)}_i]_{l \\in L}) \\cdot \\chi^{(l)}_i + \\phi^{(l)}_3([\\bar{\\tilde{\\chi}}^{(l)}_i]_{l \\in L}) \\cdot \\tilde{\\chi}^{(l)}_i \\qquad (18)\nwhere \\phi_1 : R^{F+2|L|} \\mapsto R^F, \\phi^{(l)}_2 : R^{F+2|L|} \\mapsto R, and \\phi^{(l)}_3 : R^{|L|} \\mapsto R. In the inputs to \\phi_{1,2,3}, degree-wise scalars \\bar{\\chi}^{(l)} = \\|\\chi^{(l)}\\|_2 are used to preserve equivariance. The coupling step additionally includes cross-degree coupled SPHCs \\tilde{\\chi}^{(l)}_i for each degree l. Following [48], they are constructed as\n\\tilde{\\chi}^{(l)}_i = \\sum_{l_1=l_{min}}^{l_{max}} \\sum_{l_2=l_1+1}^{l_{max}} k_{l_1,l_2,l} (\\chi^{(l_1)}_i \\otimes_l \\chi^{(l_2)}_i) \\qquad (19)\nwhere k_{l_1,l_2,l} \\in R are learnable coefficients for all valid combinations of l_1, l_2 given l, and the term in brackets is the contraction of degrees l_1 and l_2 into degree l (eq. (8)).\nSO3KRATES architecture Using the design paradigm above, we build the transformer network SO3KRATES, which consists of a self-attention block on F and X (eqs. (12) and (13)) as well as an interaction block (eqs. (17) and (18)) per layer. After initialization of the features and the SPHCs according to eqs. (9) and (11), they are updated iteratively by passing through n_l layers. Atomic energy contributions E_i \\in R are predicted from the features of the final layer using a two-layered output block. The individual contributions are summed to the total energy prediction E = \\sum^n_i E_i. See Fig. 1b for an overview. More details on the implementation, training details and network hyperparameters are given in appendices A.3 and A.13.",
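The SPHC initialization in eq. (11) and the spherical-neighborhood construction can be sketched numerically. The sketch below restricts to degree l = 1, where the real spherical harmonics are, up to a constant, the components of the unit vector, and replaces the paper's polynomial cutoff on the rescaled SPHC distances with a hard threshold at 1/n; all function names are illustrative, not taken from the actual codebase:

```python
import numpy as np

def cosine_cutoff(r, r_cut):
    # phi_rcut from [6]: decays smoothly from 1 at r = 0 to 0 at r = r_cut
    return 0.5 * (np.cos(np.pi * np.minimum(r, r_cut) / r_cut) + 1.0)

def init_sphc_l1(R, r_cut=5.0):
    # Eq. (11) for degree l = 1 only: chi_i is the cutoff-weighted average of
    # Y^(1)(r_hat_ij) over the Euclidean neighborhood; for l = 1 the unit
    # vectors r_hat_ij are used directly (constant prefactor omitted).
    rij = R[None, :, :] - R[:, None, :]                   # vector from atom i to atom j
    d = np.linalg.norm(rij, axis=-1)
    np.fill_diagonal(d, np.inf)                           # exclude self-interaction
    w = cosine_cutoff(d, r_cut)                           # phi_rcut(r_ij), 0 beyond r_cut
    C = np.maximum(w.sum(axis=1, keepdims=True), 1e-12)   # C_i = sum_j phi_rcut(r_ij)
    Y1 = rij / d[..., None]                               # r_hat_ij ~ Y^(1)(r_hat_ij)
    return (w[..., None] * Y1).sum(axis=1) / C

def spherical_neighborhoods(chi):
    # Softmax-rescaled SPHC distance matrix; keep pairs below the 1/n cutoff
    # (hard threshold used here in place of the paper's polynomial cutoff,
    # and self-pairs excluded as an implementation choice).
    n = len(chi)
    D = np.linalg.norm(chi[:, None, :] - chi[None, :, :], axis=-1)
    E = np.exp(D - D.max(axis=1, keepdims=True))          # numerically stable softmax
    Dt = E / E.sum(axis=1, keepdims=True)
    return (Dt <= 1.0 / n) & ~np.eye(n, dtype=bool)
```

Rotating all positions rotates the l = 1 SPHCs in the same way, which is the equivariance property eq. (11) is built around; distances in SPHC space, and hence the spherical neighborhoods, are rotation-invariant.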
"4 Experiments": "In the first subsection, we show how non-local quantum effects can be incorporated by using non-local corrections from the space of SPHCs. In the second part of the experiments, we remove the non-local part, which yields a traditional, R^3-local MPNN that reaches SOTA results on established benchmarks while requiring much less computational time and fewer parameters than competitive models. A scaling analysis as well as an accuracy comparison for both model variants can be found in appendices A.8 and A.5.\nNon-Local Geometric Interactions For efficiency reasons, MPNNs only consider interactions between atoms in local neighborhoods, i.e. within a cutoff radius r_cut. Thus, information can only be propagated over a distance of r_cut within a single MP step. Although multiple MP updates increase the effective cutoff distance, because information can “hop” between different neighborhoods as long as they share at least one atom, each MP step is accompanied by an undesirable loss of information, which limits the accuracy that can be obtained. Consequently, MPNNs are unable to describe non-local effects on length scales that exceed the effective cutoff distance. To illustrate this problem, we consider the challenging open task [21] of learning the potential energy of cumulene molecules of different sizes (see Fig. 2a). Here, the relative orientation of the hydrogen rotors at the far ends of the molecule strongly influences its energy due to non-local electronic effects [21]. In order to successfully learn the energy profile with a local model, the effective cutoff has to be large enough to allow information to propagate from one hydrogen rotor to the other.\nAs a representative example for MPNNs, we consider the recently proposed NEQUIP model [32], which achieves SOTA performance on several benchmarks.
We find that even when the effective cutoff radius is large enough in principle, an MPNN with n_l = 4, r_cut = 2.5 Å, and l_max ≤ 1 fails to learn the correct energy profile. This is due to the fact that the relevant geometric information “cancels out” (similar to the addition of vectors oriented in opposite directions) within each neighborhood, underlining the limited expressiveness of mean-field interactions in MPNNs. Only by including higher-order geometric correlations, e.g. going to l_max = 3, can the correct energy profile be recovered (at the cost of computational efficiency). When going to even larger cumulene structures, however, the effective cutoff becomes too small and it is necessary to increase the number of MP layers to solve the task (again, at the cost of lower computational efficiency), which is illustrated in appendix A.11. Neither increasing the maximum degree of interactions l_max, nor the number of layers n_l, is a satisfactory workaround: instead of offering a general solution to describe non-local interactions, both options decrease computational efficiency while only shifting the problem to larger length scales or higher-order geometric correlations.\nWe further apply three additional models to the cumulene structure with nine carbon atoms. To that end, we use an invariant SCHNET model with varying cutoff distances (6 Å and 12 Å), an inherently global but invariant SGDML model, and the SPOOKYNET architecture, which explicitly includes global effects using a non-local block. We find that none of the three is capable of describing the rotor energy profile of cumulene.\nIn contrast, our proposed SO3KRATES architecture is able to reproduce the energy profile for cumulene molecules of all sizes, independent of the effective cutoff radius. Crucially, even with l_max = 1, the predicted energy matches the ab-initio reference faithfully. We find that geometric corrections in the MP update of the SPHCs (cf. eq. (14)) are responsible for the increased capability of describing higher-order geometric correlations, as a SO3KRATES model with a naive MP update (cf. eq. (13)) fails to solve this task with l_max = 1 (see Fig. 2a). We further confirm that the model picks up on the physically relevant interaction between the hydrogen rotors by analysing the attention values after training (see Fig. 8, appendix A.7). To illustrate how SO3KRATES is able to describe non-local effects, we show a low-dimensional projection of the atomic SPHCs before and after training for the largest of the cumulene molecules (Fig. 3). After training, the SPHCs for hydrogen atoms at opposite ends of the molecule are embedded close together in SPHC space, allowing SO3KRATES to efficiently model the non-local geometric dependence between the hydrogen rotors.\nGeneralization to structures larger than those in the training data is usually associated with the re-usability of the learned, local representations. For that reason, it is unclear whether this property still holds when non-local corrections are used. As we show in appendix A.5, a SO3KRATES model with non-local corrections still generalizes well to completely unknown and larger structures.\nBenchmarks, Data Efficiency and Generalization As pointed out in [31] and [32], equivariant features not only increase performance, but also improve data efficiency. The latter is particularly important, as ab-initio methods for reference data generation can become exceedingly expensive when high accuracy is required. Here, we use a subset of the recently introduced QM7-X data set [51], which we call QM7-X250. It contains 250 different molecular structures, each with 80 data points for training, 10 data points for validation and 11-3748 data points for testing (for details, see appendix A.9). The small number of training/validation samples per molecule makes it particularly suited for evaluating model behavior in the low-data regime.
In the following, we train (1) one model\nper structure in QM7-X250 and (2) one model for all structures in QM7-X250 (250 × 80 = 20k training points), which we refer to as individual and joint models, respectively.\nWe start by investigating the performance as a function of the maximal degree lmax and find that the error strongly decreases with higher lmax (Fig. 4a). As kernel methods are known to perform well in the low-data regime [21], we compare our results to SGDML [38] kernel models, which only use distances as a molecular descriptor (corresponding to lmax = 0). For lmax = 0, we find SGDML gives competitive results, whereas for lmax = 1, SO3KRATES starts to outperform SGDML. As soon as lmax ≥ 2, however, the prediction accuracy of SO3KRATES is greater than that of SGDML by a large margin. Thus, increasing the order of geometric information in the SPHCs leads to strong improvements in the low-data regime. For jointly trained models, we find that SO3KRATES outperforms SGDML even for lmax = 0, with continuous improvement for increasing lmax. In appendix A.10 we report energy and force errors across degrees and further experimental details.\nThe generalization capability of SO3KRATES is tested by applying a jointly trained model to 25 completely unknown molecules from the QM7-X data set (see Fig. 4b; details in appendix A.10). Again, we find that force MAEs decrease with increasing lmax. For reference, we compare SO3KRATES to individually trained SGDML models and find that SO3KRATES performs on par, or even slightly better, for lmax ≥ 2. Going beyond lmax = 2 is found to only marginally improve generalization. In addition, we report results for a model trained on the full QM7-X data set in appendix A.5, following [30].\nFor completeness, we also apply SO3KRATES to the popular MD17 benchmark (see Table 1). We find that SO3KRATES outperforms networks that have the same parameter complexity by a large margin (PAINN and NEWTONNET). 
Notably, it requires significantly fewer parameters than other SH-based architectures (NEQUIP and SPOOKYNET), while performing only slightly worse or even on par with them. Furthermore, SO3KRATES outperforms DIMENET, its closest competitor in timing (cf. Fig. 1d), consistently by a large margin. Compared to current SH-based approaches, GEMNETQ needs fewer parameters (still ∼2.5× more than SO3KRATES) to achieve competitive results. However, it requires the explicit calculation of dihedral angles, which scales cubically in the number of neighboring atoms. Due to its linear scaling (see A.8) and lightweight structure, SO3KRATES can significantly reduce the time for training and inference (see Fig. 1d and A.6).",
"5 Discussion and Conclusion": "Due to the locality assumption used in most MPNNs, they are unable to model non-local electronic effects, which result in global geometric dependencies between different parts of a molecule. The length-scales of such interactions often greatly exceed the cutoff radius used in the MP step, and even though stacking multiple MP layers increases the effective cutoff, ultimately, MPNNs are not capable of efficiently modeling geometric dependencies on arbitrary length scales.\nIn this work, we contribute conceptually by proposing an efficient and scalable solution to this problem. We suggest a set of refinable, equivariant coordinates for point clouds in Euclidean space, called spherical harmonic coordinates (SPHCs). Non-local geometric effects can then be efficiently\nmodeled by including geometric corrections, which are localized in the space of SPHCs, but non-local in Euclidean space. Further, we show that introducing spherical filter functions acting on the SPHCs increases geometric resolution and predictive accuracy.\nWe then propose the SO3KRATES architecture, a self-attention based MPNN, which decouples atomic features from higher-order geometric information. This allows us to drastically decrease the parametric complexity while still achieving SOTA prediction accuracy. We show evidence that increasing the geometric order of the SPHCs greatly improves model performance in the low-data regime, as well as generalization to unknown molecules.\nA limitation of the current implementation of SO3KRATES is that the spherical neighborhoods N in eq. (14) are computed from all pairwise distances in SPHC space. An alternative implementation could use a space partitioning scheme to find neighborhoods more efficiently. In a broader context, our work falls into the category of approaches that can help to reduce the vast computational complexity of molecular and material simulations. 
This can accelerate novel drug and material designs, which holds the promise of tackling societal challenges, such as climate change and sustainable energy supply [53]. Of course, our method could also be used for nefarious applications, e.g. the design of chemical warfare agents, but this is true for all quantum chemistry methods.\nFuture research will focus on applications of SO3KRATES to materials and bio-molecules, which are typical examples of chemical systems where the accurate description of non-local effects is necessary to produce novel insights. Efficient treatment of non-local effects in point cloud data goes beyond the domain of quantum chemistry. One way of representing non-local dependencies is non-local neural networks [54]. In comparison to the presented approach, they compute relations in feature space rather than in Euclidean space, making them incapable of capturing direct geometric relations in Euclidean space. However, capturing such relations might be necessary if the relative orientation of objects far apart from each other plays a role in identifying different objects.",
"6 Acknowledgements": "All authors acknowledge support by the Federal Ministry of Education and Research (BMBF) for BIFOLD (01IS18037A). KRM was partly supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (No. 2019-0-00079, Artificial Intelligence Graduate School Program, Korea University and No. 2022-0-00984, Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation), and was partly supported by the German Ministry for Education and Research (BMBF) under Grants 01IS14013A-E, AIMM, 01GQ1115, 01GQ0850, 01IS18025A and 01IS18037A, and by the German Research Foundation (DFG). We thank Stefan Chmiela, Mihail Bogojeski and Nicklas Schmitz for helpful discussions and feedback on the manuscript.",
"Reviewer Summary": "Reviewer_2: The authors consider the problem of capturing non-local effects in molecular modelling. To address this, they construct a model based on:\nSpherical Harmonic Coordinates (SPHC): A wavelet transform with different orders of spherical harmonics is used to represent the molecule geometry in a SPHC space. An equivariance relation between rotations in the Euclidean input and Wigner-D transformations in SPHC space gives the model the right symmetries, and the intention is that non-local interactions in Euclidean space may become local in SPHC space.\nAn adapted message passing scheme which passes messages in local neighbourhoods both in Euclidean and SPHC space while preserving equivariance.\nThe authors demonstrate their method on an educational example of cumulene twisting as well as larger benchmarking on QM7-X250 and MD17.\n\nReviewer_3: The authors propose a new ML force field based on higher-order interactions and equivariant self-attention. The model represents atomic features using spherical harmonic coordinates that are updated during the forward pass through the network, and spherical neighborhoods are constructed using these representations. This helps the model learn relations between distant atoms. As a result, the proposed method is computationally efficient, requires few parameters and achieves comparable performance to recent methods like NequIP.\n\nReviewer_4: The paper proposes to use separate representations of spherical harmonic coordinates jointly with representations of atomic features in learning models of molecular force fields. This allows message passing to local neighborhoods in the spherical harmonic coordinates which translate to non-local neighborhoods in Euclidean space. As a result the method is able to model long-range interaction effects that are prohibitively expensive to model with existing, hop-limited message-passing graph neural networks. 
It is also helpful in low-data cases, since the smaller number of network parameters requires less training data to estimate."
}