Commit a2ee22c (verified) by SlowGuess · 1 parent: 048ff8e

Add Batch 370e241e-1b84-4b80-9e28-9dd4dc009592

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. acceleratingdatadrivenalgorithmselectionforcombinatorialpartitioningproblems/a22701cc-3dd4-480f-8a1b-06791826807f_content_list.json +3 -0
  2. acceleratingdatadrivenalgorithmselectionforcombinatorialpartitioningproblems/a22701cc-3dd4-480f-8a1b-06791826807f_model.json +3 -0
  3. acceleratingdatadrivenalgorithmselectionforcombinatorialpartitioningproblems/a22701cc-3dd4-480f-8a1b-06791826807f_origin.pdf +3 -0
  4. acceleratingdatadrivenalgorithmselectionforcombinatorialpartitioningproblems/full.md +0 -0
  5. acceleratingdatadrivenalgorithmselectionforcombinatorialpartitioningproblems/images.zip +3 -0
  6. acceleratingdatadrivenalgorithmselectionforcombinatorialpartitioningproblems/layout.json +3 -0
  7. acceleratingvisualpolicylearningthroughparalleldifferentiablesimulation/c49a2369-aac6-40eb-82cb-07484fd037ff_content_list.json +3 -0
  8. acceleratingvisualpolicylearningthroughparalleldifferentiablesimulation/c49a2369-aac6-40eb-82cb-07484fd037ff_model.json +3 -0
  9. acceleratingvisualpolicylearningthroughparalleldifferentiablesimulation/c49a2369-aac6-40eb-82cb-07484fd037ff_origin.pdf +3 -0
  10. acceleratingvisualpolicylearningthroughparalleldifferentiablesimulation/full.md +0 -0
  11. acceleratingvisualpolicylearningthroughparalleldifferentiablesimulation/images.zip +3 -0
  12. acceleratingvisualpolicylearningthroughparalleldifferentiablesimulation/layout.json +3 -0
  13. accelerationviasilverstepsizeonriemannianmanifoldswithapplicationstowassersteinspace/cfae93c8-b98c-473d-992f-48e2c6fdf019_content_list.json +3 -0
  14. accelerationviasilverstepsizeonriemannianmanifoldswithapplicationstowassersteinspace/cfae93c8-b98c-473d-992f-48e2c6fdf019_model.json +3 -0
  15. accelerationviasilverstepsizeonriemannianmanifoldswithapplicationstowassersteinspace/cfae93c8-b98c-473d-992f-48e2c6fdf019_origin.pdf +3 -0
  16. accelerationviasilverstepsizeonriemannianmanifoldswithapplicationstowassersteinspace/full.md +0 -0
  17. accelerationviasilverstepsizeonriemannianmanifoldswithapplicationstowassersteinspace/images.zip +3 -0
  18. accelerationviasilverstepsizeonriemannianmanifoldswithapplicationstowassersteinspace/layout.json +3 -0
  19. accidentanticipationviatemporaloccurrenceprediction/e9065540-c358-4771-af21-704cceef0888_content_list.json +3 -0
  20. accidentanticipationviatemporaloccurrenceprediction/e9065540-c358-4771-af21-704cceef0888_model.json +3 -0
  21. accidentanticipationviatemporaloccurrenceprediction/e9065540-c358-4771-af21-704cceef0888_origin.pdf +3 -0
  22. accidentanticipationviatemporaloccurrenceprediction/full.md +518 -0
  23. accidentanticipationviatemporaloccurrenceprediction/images.zip +3 -0
  24. accidentanticipationviatemporaloccurrenceprediction/layout.json +3 -0
  25. accuquantsimulatingmultipledenoisingstepsforquantizingdiffusionmodels/cfe4e2ab-fab2-4658-b42e-17015f3cc925_content_list.json +3 -0
  26. accuquantsimulatingmultipledenoisingstepsforquantizingdiffusionmodels/cfe4e2ab-fab2-4658-b42e-17015f3cc925_model.json +3 -0
  27. accuquantsimulatingmultipledenoisingstepsforquantizingdiffusionmodels/cfe4e2ab-fab2-4658-b42e-17015f3cc925_origin.pdf +3 -0
  28. accuquantsimulatingmultipledenoisingstepsforquantizingdiffusionmodels/full.md +0 -0
  29. accuquantsimulatingmultipledenoisingstepsforquantizingdiffusionmodels/images.zip +3 -0
  30. accuquantsimulatingmultipledenoisingstepsforquantizingdiffusionmodels/layout.json +3 -0
  31. accurateandefficientlowrankmodelmergingincorespace/8ade3df5-965f-48fb-909c-12368106b32c_content_list.json +3 -0
  32. accurateandefficientlowrankmodelmergingincorespace/8ade3df5-965f-48fb-909c-12368106b32c_model.json +3 -0
  33. accurateandefficientlowrankmodelmergingincorespace/8ade3df5-965f-48fb-909c-12368106b32c_origin.pdf +3 -0
  34. accurateandefficientlowrankmodelmergingincorespace/full.md +0 -0
  35. accurateandefficientlowrankmodelmergingincorespace/images.zip +3 -0
  36. accurateandefficientlowrankmodelmergingincorespace/layout.json +3 -0
  37. accuratekvcacheevictionviaanchordirectionprojectionforefficientllminference/44562f8a-d0b7-4b34-8a29-f2efb2ef2d63_content_list.json +3 -0
  38. accuratekvcacheevictionviaanchordirectionprojectionforefficientllminference/44562f8a-d0b7-4b34-8a29-f2efb2ef2d63_model.json +3 -0
  39. accuratekvcacheevictionviaanchordirectionprojectionforefficientllminference/44562f8a-d0b7-4b34-8a29-f2efb2ef2d63_origin.pdf +3 -0
  40. accuratekvcacheevictionviaanchordirectionprojectionforefficientllminference/full.md +0 -0
  41. accuratekvcacheevictionviaanchordirectionprojectionforefficientllminference/images.zip +3 -0
  42. accuratekvcacheevictionviaanchordirectionprojectionforefficientllminference/layout.json +3 -0
  43. accuratelypredictingproteinmutationaleffectsviaahierarchicalmanybodyattentionnetwork/c8796483-a35f-4cf7-8ab6-3c6539557a05_content_list.json +3 -0
  44. accuratelypredictingproteinmutationaleffectsviaahierarchicalmanybodyattentionnetwork/c8796483-a35f-4cf7-8ab6-3c6539557a05_model.json +3 -0
  45. accuratelypredictingproteinmutationaleffectsviaahierarchicalmanybodyattentionnetwork/c8796483-a35f-4cf7-8ab6-3c6539557a05_origin.pdf +3 -0
  46. accuratelypredictingproteinmutationaleffectsviaahierarchicalmanybodyattentionnetwork/full.md +681 -0
  47. accuratelypredictingproteinmutationaleffectsviaahierarchicalmanybodyattentionnetwork/images.zip +3 -0
  48. accuratelypredictingproteinmutationaleffectsviaahierarchicalmanybodyattentionnetwork/layout.json +3 -0
  49. acereasonnemotronadvancingmathandcodereasoningthroughreinforcementlearning/e2ffb506-f6e4-4cc7-9714-e8f9959a8dec_content_list.json +3 -0
  50. acereasonnemotronadvancingmathandcodereasoningthroughreinforcementlearning/e2ffb506-f6e4-4cc7-9714-e8f9959a8dec_model.json +3 -0
acceleratingdatadrivenalgorithmselectionforcombinatorialpartitioningproblems/a22701cc-3dd4-480f-8a1b-06791826807f_content_list.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:16e6b08abbf084adbdf25c95c2f8fa0e6395eff5051ae04b8c2f5ed3e24df114
+size 360002
acceleratingdatadrivenalgorithmselectionforcombinatorialpartitioningproblems/a22701cc-3dd4-480f-8a1b-06791826807f_model.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f1c6b51f4698201b1b830314fbda1adfdea59cee8138822d80596f0ead8d687b
+size 436707
acceleratingdatadrivenalgorithmselectionforcombinatorialpartitioningproblems/a22701cc-3dd4-480f-8a1b-06791826807f_origin.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb04ad6ab05205451ecad012f6207dfb2341d464e107353849ceeb14f593856d
+size 2282810
acceleratingdatadrivenalgorithmselectionforcombinatorialpartitioningproblems/full.md ADDED
The diff for this file is too large to render.
acceleratingdatadrivenalgorithmselectionforcombinatorialpartitioningproblems/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b5c4ff20db31ec7fb7015e84ad1c94105da2e92a0eb6a8297e317f852fc58ac
+size 2021346
acceleratingdatadrivenalgorithmselectionforcombinatorialpartitioningproblems/layout.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab45a986a14c0a20847ce1f3798c2cb937de5f1b696ab975fce3170f00bc424f
+size 2795665
acceleratingvisualpolicylearningthroughparalleldifferentiablesimulation/c49a2369-aac6-40eb-82cb-07484fd037ff_content_list.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d788f6256b50a744d1672abade36feb4e233fdfca6ade627e01e48523c94e3d
+size 184132
acceleratingvisualpolicylearningthroughparalleldifferentiablesimulation/c49a2369-aac6-40eb-82cb-07484fd037ff_model.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35593b419d31449641699d73077267ba5b94b8322c56c59569c8cbb153bf0f94
+size 227324
acceleratingvisualpolicylearningthroughparalleldifferentiablesimulation/c49a2369-aac6-40eb-82cb-07484fd037ff_origin.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41c6be8b19d5afcc10590b399c1cba9eb8a53491031ee7b8a276cefefee32a6e
+size 18719068
acceleratingvisualpolicylearningthroughparalleldifferentiablesimulation/full.md ADDED
The diff for this file is too large to render.
acceleratingvisualpolicylearningthroughparalleldifferentiablesimulation/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17c9091185f74f28ae75f333bf6a72f8c3039f4a47cab69a303225e24e704231
+size 1330074
acceleratingvisualpolicylearningthroughparalleldifferentiablesimulation/layout.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:21123240f10bb90cab63d121ba2262f1d8c4ce82674a3c63315155ec2463fb15
+size 957938
accelerationviasilverstepsizeonriemannianmanifoldswithapplicationstowassersteinspace/cfae93c8-b98c-473d-992f-48e2c6fdf019_content_list.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3e89ae79cecffdf0bc8af4161aec1c8117366402be79d0e8579fad7d0ad4ac07
+size 383737
accelerationviasilverstepsizeonriemannianmanifoldswithapplicationstowassersteinspace/cfae93c8-b98c-473d-992f-48e2c6fdf019_model.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eca9818bb8efbfc55c398e3894b8256d3606fc62ff0b1070fc10f64bd2d8f9ce
+size 468850
accelerationviasilverstepsizeonriemannianmanifoldswithapplicationstowassersteinspace/cfae93c8-b98c-473d-992f-48e2c6fdf019_origin.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bbb4ad46126cdb43ef4d27e2964c323fcc15688eb43feea1baf7781b4bfbc9e2
+size 2162575
accelerationviasilverstepsizeonriemannianmanifoldswithapplicationstowassersteinspace/full.md ADDED
The diff for this file is too large to render.
accelerationviasilverstepsizeonriemannianmanifoldswithapplicationstowassersteinspace/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96201355a19692c981bb837f40ae6b8453f078994f0f875d0663cc6e6cd7195b
+size 2098400
accelerationviasilverstepsizeonriemannianmanifoldswithapplicationstowassersteinspace/layout.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b51780e9c50216901fd542b7a5806c3f088e615634d4ee9350ec0feecc2c4e4
+size 2732301
accidentanticipationviatemporaloccurrenceprediction/e9065540-c358-4771-af21-704cceef0888_content_list.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f8dd541a8bb0c7515d9c4dd66799487103c618d55cb6e611189df64d7704577
+size 111306
accidentanticipationviatemporaloccurrenceprediction/e9065540-c358-4771-af21-704cceef0888_model.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48795953957de33edde29f4cd5eb1c31e3869e26a1c085c767512c2631315069
+size 148004
accidentanticipationviatemporaloccurrenceprediction/e9065540-c358-4771-af21-704cceef0888_origin.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:777ec00a82ab6f80fcbc5ea78ce8265aba89f0249b96d33ccacbbb9f12d5a935
+size 5901426
accidentanticipationviatemporaloccurrenceprediction/full.md ADDED
@@ -0,0 +1,518 @@
# Accident Anticipation via Temporal Occurrence Prediction

Tianhao Zhao $^{1,2}$ Yiyang Zou $^{1}$ Zihao Mao $^{1}$ Peilun Xiao $^{3}$ Yulin Huang $^{3}$ Hongda Yang $^{3}$ Yuxuan Li $^{3}$ Qun Li $^{3}$ Guobin Wu $^{3}$ Yutian Lin $^{1*}$

$^{1}$ School of Computer Science, Wuhan University $^{2}$ Zhongguancun Academy, Beijing, China $^{3}$ Didi Chuxing

{happytianhao,yutian.lin}@whu.edu.cn
# Abstract

Accident anticipation aims to predict potential collisions in an online manner, enabling timely alerts to enhance road safety. Existing methods typically predict frame-level risk scores as indicators of hazard. However, these approaches rely on ambiguous binary supervision—labeling all frames in accident videos as positive—despite the fact that risk varies continuously over time, leading to unreliable learning and false alarms. To address this, we propose a novel paradigm that shifts the prediction target from current-frame risk scoring to directly estimating accident scores at multiple future time steps (e.g., 0.1s–2.0s ahead), leveraging precisely annotated accident timestamps as supervision. Our method employs a snippet-level encoder to jointly model spatial and temporal dynamics, and a Transformer-based temporal decoder that predicts accident scores for all future horizons simultaneously using dedicated temporal queries. Furthermore, we introduce a refined evaluation protocol that reports Time-to-Accident (TTA) and recall—evaluated at multiple pre-accident intervals (0.5s, 1.0s, and 1.5s)—only when the false alarm rate (FAR) remains within an acceptable range, ensuring practical relevance. Experiments show that our method achieves superior performance in both recall and TTA under realistic FAR constraints. Project page: https://happytianhao.github.io/TOP/
# 1 Introduction

Driving accidents pose a significant threat to public safety, resulting in substantial human casualties and economic losses. This issue often arises when drivers, due to fatigue or distraction, fail to notice potential hazards in their surroundings, ultimately leading to accidents. Recently, the task of accident anticipation [1, 2] has been widely studied to analyze the risk in driving scenarios captured by dashcams in an online manner, assessing the likelihood of an impending accident with a risk score, as shown in Figure 1 (a). If the risk score exceeds a preset threshold, the system can promptly alert the driver to take evasive action, thereby reducing the chances of an accident or mitigating its severity.

Previous works [3, 4, 5, 6, 7, 8, 9] train models to predict frame-wise risk scores using binary supervision: all frames from accident videos are labeled as 1, and all frames from safe videos as 0. To reflect the intuition that risk increases near the crash, these methods typically assign exponentially decaying loss weights to frames based on their temporal distance to the accident—earlier frames receive lower weights, while those near the crash receive higher ones. However, this approach treats risk as a static binary signal, ignoring its dynamic and continuously varying nature. In reality, risk levels differ significantly across frames even within the same accident sequence, and assigning a uniform label of 1 fails to capture these temporal nuances. Such imprecise supervision misguides learning and often leads to unreliable predictions or false alarms.

![](images/cf352de4d2b0958a242e16cbbe07c8ffb5812d4a7b9cb0446f30d8cb3dd54e82.jpg)
(a) Previous works

![](images/d01cc328c21c8d9d243b7183375680865b745f75a931207ec9787ed2d6c87caf.jpg)
(b) Ours

Figure 1: Comparison of accident anticipation paradigms. (a) Previous works predict a single risk score for the current frame, which is ambiguous and hard to supervise accurately. (b) Our method predicts a sequence of accident scores at multiple future time steps (e.g., 0.1s, 0.2s, ..., 2.0s ahead), where each score indicates the likelihood of an accident occurring exactly at that future time.
We observe that, unlike ambiguous risk labels, the timestamp of actual accident occurrence can be precisely annotated in real-world driving videos. To leverage this reliable supervision, we shift the prediction target from per-frame risk scores to directly forecasting future accident timing. Specifically, as shown in Figure 1 (b), our model outputs a sequence of accident scores at multiple future time steps $\{a_1, a_2, \ldots, a_T\}$ (e.g., 0.1s, 0.2s, ..., 2.0s ahead), where each score $a_t$ indicates the model's confidence that an accident occurs exactly at that time. During training, only the score at the ground-truth accident time is labeled as 1; all others are 0. At inference, an alert is triggered if any score exceeds a preset threshold, indicating an impending collision. This formulation offers two key advantages: (1) it uses precise temporal annotations for more stable and accurate training, and (2) by modeling when an accident may occur—rather than just whether the scene is "risky"—it yields more interpretable and actionable predictions.

To implement this paradigm online, we adopt an encoder-decoder architecture that processes current and past frames to predict accident scores for multiple future time steps (e.g., 0.1s, 0.2s, ..., 2.0s ahead), as shown in Figure 1 (b). Unlike previous works [3, 4, 5, 6, 7, 9], which typically use frame-level encoders with RNNs to model temporal dynamics, our method employs a snippet-level encoder that jointly captures spatial and temporal information across short clips of consecutive frames, enabling a comprehensive understanding of object positions, speeds, and motion trajectories. Furthermore, we design a Transformer-based temporal decoder that simultaneously predicts accident scores for all future time steps using distinct learnable temporal queries—each corresponding to a specific future horizon—to explicitly model accident likelihood at each time offset, supporting accurate and efficient frame-by-frame online prediction.

To evaluate the effectiveness of accident anticipation methods, previous works [3, 4, 5, 6, 7, 9] primarily use AP (Average Precision), AUC (Area Under the ROC Curve), and TTA (Time-to-Accident). However, we observe that in real-world applications, an excessively high false alarm rate (FAR)—e.g., exceeding 1 false alarm per minute—causes disruptive alerts that are unacceptable; under such conditions, high recall may stem from indiscriminate alarming rather than genuine prediction, rendering recall and TTA misleading. To address this, we propose a novel evaluation protocol that reports mean recall and TTA only when FAR is within an acceptable range. Furthermore, we evaluate recall at different pre-accident intervals (0.5s, 1.0s, and 1.5s before crashes) for a more granular assessment of anticipative capability. Finally, we identify limitations in existing TTA calculations that yield inflated values and propose a more reasonable approach, as detailed in Section 4.

Our contributions can be summarized as follows:

- We propose a novel accident anticipation paradigm that shifts from predicting ambiguous per-frame risk scores to directly estimating accident scores at multiple future time steps (e.g., 0.1s, 0.2s, ..., 2.0s ahead), leveraging precise accident timestamps as supervision for more accurate and interpretable predictions.
- We design an effective encoder-decoder architecture featuring a snippet-level encoder to jointly capture spatial and temporal features from driving scenarios, and a Transformer-based temporal decoder that predicts accident scores for all future time steps simultaneously using dedicated temporal queries, enabling online frame-by-frame anticipation.
- We introduce a refined evaluation protocol that computes recall and Time-to-Accident (TTA) only when the false alarm rate (FAR) is within an acceptable range, evaluates recall at multiple pre-accident intervals (0.5s, 1.0s, 1.5s), and adopts an improved TTA calculation method that avoids inflated values, offering a more reliable and practical assessment.
# 2 Related Work

# 2.1 Temporal Modeling and Attention-Based Approaches

Early works on accident anticipation primarily rely on recurrent architectures to model temporal dynamics in dashcam videos. DSA [3] introduced the first large-scale dataset (DAD) and combined object-level and frame-level features with an RNN to predict per-frame risk scores, using an exponentially decaying loss that emphasizes frames closer to the accident. Subsequent methods enhanced temporal modeling through attention mechanisms. ACRA [2] proposed a soft-attention RNN to capture spatial and appearance interactions between the event-triggering agent and its surroundings. AdaLEA [10] improved early anticipation via an adaptive loss weighting strategy, while DSTA [11] introduced dynamic spatial-temporal attention to focus on relevant regions over time.

More recently, transformer-based architectures have emerged. AAT-DA [12] integrates driver attention into a transformer framework to jointly model spatial and temporal cues. LATTE [13] further advances temporal modeling by combining multiscale spatial features with memory-based attention and auxiliary self-attention for long-range dependency capture. Meanwhile, XAI [14] employs a GRU-based network to learn spatio-temporal relations, and RARE [15] achieves efficiency by leveraging intermediate features from a single pre-trained detector. THAT-Net [16] enhances motion understanding by fusing optical flow with spatial-temporal filtering to suppress distracting motions.

# 2.2 Graph-Based and Relational Reasoning Methods

To explicitly model interactions among traffic participants, several works adopt graph-based representations. GSC [17] formulates accident anticipation as a graph learning problem with spatio-temporal continuity constraints. Graph(Graph) [18] proposes a nested graph architecture to capture hierarchical agent relationships. DAA-GNN [19] introduces a dynamic attention-augmented graph network that adaptively weights interactions among detected entities. CRASH [20] designs an object-aware module that prioritizes high-risk agents by computing their spatial-temporal relationships.

CString [6] combines relational learning with uncertainty quantification, using Bayesian neural networks to model the stochasticity in agent interactions on its newly collected CCD dataset. AM-Net [21] employs an attention-guided multistream fusion strategy to localize hazardous agents by integrating appearance and motion cues.

# 2.3 Multimodal and Emerging Paradigms

Recent efforts explore richer input modalities and novel learning frameworks. AccNet [22] and CCAF-Net [23] incorporate monocular depth cues to enable 3D-aware scene understanding, fusing RGB and depth features for improved risk prediction. DADA [8] and DADA-2000 [24] frame anticipation as a driver attention prediction task, linking gaze behavior to accident likelihood.

Other innovative directions include reinforcement learning and language grounding. DRIVE [7] models risk prediction as a Markov decision process and uses deep reinforcement learning with visual explanations. CAP [9] establishes a multimodal benchmark for cognitive accident anticipation, integrating behavioral and visual signals. DEDBM [25] fuses dashcam videos with textual accident reports in a dual-branch architecture, enabling cross-modal knowledge transfer. Most recently, WWW [26] leverages large language models (LLMs) to jointly reason about the what, when, and where of potential accidents, marking a shift toward interpretable, language-augmented anticipation systems.

![](images/8bce0f0ba204c2b3b473abc8c2bb4fbd156591d13ba46885ce570bf47afa1f29.jpg)
Figure 2: Overview of our encoder-decoder method. We feed the observed frames into the snippet encoder and use the temporal decoder to predict accident scores for unobserved future frames at multiple time steps, where the accident score label is 1 for the accident frame and 0 otherwise.
# 3 Method

# 3.1 Problem Definition

Accident anticipation involves online analysis of dashcam footage to determine whether an accident is likely to occur in the near future. If there is a potential risk of an accident, the system promptly alerts the driver to take precautionary measures, helping to reduce both the likelihood and severity of collisions.

Specifically, given the current frame $f_{0}$ and past frames $\{f_{-1}, f_{-2}, \ldots\}$ captured by the dashcam, previous works determine whether an accident will occur in the near future by predicting a risk score $r_{0}$ at the current time $t_{0}$. When the risk score $r_{0}$ exceeds a preset threshold $\tau$, the system automatically triggers an alert to warn the driver.

In contrast, our method assesses the likelihood of future accidents by predicting a sequence of accident scores $\{a_1, a_2, \ldots\}$ for multiple future time steps $\{t_1, t_2, \ldots\}$. When any accident score $a_{i}$ ($i \geq 1$) for some future time $t_i$ exceeds a preset threshold $\tau$, the model predicts that an accident will occur at time $t_i$. Consequently, the system automatically triggers an alert at the current time $t_0$ to warn the driver to take evasive action.
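The alert rule above amounts to thresholding the predicted future scores. A minimal sketch of this decision (our illustration; the function name, score values, and the particular $\tau$ are assumptions, not from the paper):

```python
def should_alert(accident_scores, tau):
    """Trigger an alert at the current time t0 if any predicted
    future accident score a_i (i >= 1) exceeds the threshold tau."""
    return any(a > tau for a in accident_scores)

# Hypothetical scores for future offsets 0.1s ... 2.0s ahead (T = 20 steps).
scores = [0.02] * 14 + [0.1, 0.3, 0.7, 0.9, 0.4, 0.1]
print(should_alert(scores, tau=0.5))  # -> True (scores near ~1.7s exceed tau)
```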
In this task, the model's optimization objective is to maximize the recall and earliness of accident anticipation while ensuring that the false alarm rate (FAR) does not exceed an acceptable limit.

# 3.2 Temporal Occurrence Prediction

To anticipate accidents more accurately, we propose a novel paradigm that predicts a sequence of accident scores for an accident occurring at multiple future time steps, rather than a single risk score for the current frame, which provides a more interpretable and reliable anticipation.

To implement this paradigm, we design an encoder-decoder model, as shown in Figure 2. The structural details are as follows:

Snippet encoder. To better understand the movement of objects in driving scenarios, we take the current frame $f_{0}$ along with past frames $\{f_{-1}, f_{-2}, \ldots, f_{-(S - 1)}\}$ as a snippet input to the model, where $S$ is the length of the snippet. We then employ a 3D CNN as the snippet encoder, rather than the frame-level-encoder-plus-RNN architectures widely adopted in previous works, to simultaneously capture the spatial and temporal information within the snippet. Next, we apply only spatial pooling to the features output by the snippet encoder, preserving their temporal resolution and yielding features $\{z_{0}, z_{-1}, \ldots, z_{-(S - 1)}\}$, each corresponding to one of the time steps $\{t_{0}, t_{-1}, \ldots, t_{-(S - 1)}\}$. In this way, the model not only understands the motion of objects but also maintains a one-to-one correspondence between frames and their features.
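The spatial-only pooling step can be sketched as follows. This is an illustrative stand-in, using nested lists for a (C, S, H, W) encoder output with made-up shapes and values, not the paper's implementation:

```python
def spatial_pool(features):
    """features[c][s][h][w] -> list of S vectors, each of length C.
    Averages over the spatial axes (H, W) only, keeping the temporal
    axis so each frame retains its own feature vector z_t."""
    C = len(features)
    S = len(features[0])
    pooled = []
    for s in range(S):
        vec = []
        for c in range(C):
            plane = features[c][s]                          # H x W spatial map
            n = sum(len(row) for row in plane)
            vec.append(sum(sum(row) for row in plane) / n)  # mean over H, W
        pooled.append(vec)                                  # feature for time step s
    return pooled

# 2 channels, 3 time steps, 2x2 spatial maps
feats = [[[[1, 1], [1, 1]], [[2, 2], [2, 2]], [[3, 3], [3, 3]]],
         [[[0, 0], [0, 0]], [[4, 4], [4, 4]], [[8, 8], [8, 8]]]]
print(spatial_pool(feats))  # -> [[1.0, 0.0], [2.0, 4.0], [3.0, 8.0]]
```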
Temporal decoder. To predict the temporal sequence of accident scores $\{a_1, a_2, \ldots, a_T\}$ at future time steps $\{t_1, t_2, \ldots, t_T\}$, where $T$ is the length of the predicted sequence, from the features $\{z_0, z_{-1}, \ldots, z_{-(S - 1)}\}$ that the snippet encoder extracts for time steps $\{t_0, t_{-1}, \ldots, t_{-(S - 1)}\}$, we design a temporal decoder based on the Transformer decoder [27]. To distinguish the future time steps, we define $T$ learnable temporal queries $\{q_1, q_2, \ldots, q_T\}$, one for each future time step in $\{t_1, t_2, \ldots, t_T\}$ relative to the current time $t_0$. We feed the features $\{z_0, z_{-1}, \ldots, z_{-(S - 1)}\}$ from the snippet encoder into the temporal decoder as the memory, and feed the temporal queries $\{q_1, q_2, \ldots, q_T\}$ into the decoder to predict the accident scores $\{a_1, a_2, \ldots, a_T\}$ for all $T$ future time steps simultaneously.
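A toy sketch of the query-based decoding, reduced to a single head of cross-attention over the encoder features followed by a sigmoid score head. The real model is a full Transformer decoder; the weights, dimensions, and values here are illustrative assumptions:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def temporal_decoder(memory, queries, w_out):
    """Each temporal query q_i attends over the encoder features (memory)
    and its attended context is mapped to one accident score a_i."""
    d = len(queries[0])
    scores = []
    for q in queries:
        attn = softmax([dot(q, z) / math.sqrt(d) for z in memory])
        ctx = [sum(a * z[k] for a, z in zip(attn, memory)) for k in range(d)]
        scores.append(1.0 / (1.0 + math.exp(-dot(w_out, ctx))))  # sigmoid head
    return scores

# S = 3 encoder features, T = 4 temporal queries, feature dim d = 2
memory = [[0.5, 0.1], [0.2, 0.4], [0.9, 0.3]]
queries = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [1.0, 1.0]]
scores = temporal_decoder(memory, queries, w_out=[1.0, -1.0])
print(len(scores), all(0.0 < s < 1.0 for s in scores))  # -> 4 True
```

Because every query is processed against the same memory, all $T$ future scores come out of one forward pass, which is what enables efficient frame-by-frame online prediction.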
Sampling strategy. During training, we randomly sample a continuous segment of $S$ frames from the accident video as the input snippet for the model. Let $f_{-(S-1)}$ be the starting frame of the snippet; the last frame is then $f_0$, the current frame. To ensure the relevance of training data, snippets are sampled only from frames at or before the accident, while frames after the accident are excluded.

During testing, we adopt a sliding-window approach to sample snippets from all available frames in the video, including those before, during, and after the accident. Specifically, we slide a window of $S$ consecutive frames across the entire video sequence with a fixed stride, ensuring a comprehensive evaluation of the model's performance over time. This allows the model to make predictions at every time step, reflecting real-world deployment scenarios where the exact timing of an accident is unknown.
Labeling strategy. During training, given a snippet spanning time steps $\{t_0, t_{-1}, \ldots, t_{-(S - 1)}\}$ as input, the model outputs the sequence of accident scores $\{a_{1}, a_{2}, \ldots, a_{T}\}$ at future time steps $\{t_1, t_2, \ldots, t_T\}$. If the accident occurrence time step $t_A$ falls within this range, i.e., $1 \leq A \leq T$, we assign its label $y_{A} = 1$ and set the labels $y_{t}$ of all other time steps ($1 \leq t \leq T$, $t \neq A$) to 0, as shown in Figure 2. The model is then trained using the Binary Cross-Entropy (BCE) loss function:

$$
\mathcal{L}_{BCE} = -\frac{1}{T}\left[ w_{+}\log a_{A} + \sum_{\substack{t=1 \\ t \neq A}}^{T} \log\left(1 - a_{t}\right) \right], \tag{1}
$$

where $w_{+}$ is the weight assigned to the positive time step.
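A sketch of Equation 1 for one training snippet whose accident falls inside the prediction window; the value of $w_{+}$ and the scores here are arbitrary placeholders:

```python
import math

def bce_loss(scores, accident_idx, w_pos=10.0):
    """Weighted BCE over T future accident scores: the ground-truth
    accident step is the only positive, all other steps are negatives.
    w_pos stands in for the positive weight w+ in Equation 1."""
    T = len(scores)
    loss = -w_pos * math.log(scores[accident_idx])      # positive term
    for t, a in enumerate(scores):
        if t != accident_idx:
            loss -= math.log(1.0 - a)                   # negative terms
    return loss / T

scores = [0.05, 0.1, 0.8, 0.1]  # model expects the accident at step index 2
print(round(bce_loss(scores, accident_idx=2), 4))  # -> 0.6234
```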
99
+
100
+ # 4 Evaluation Metrics
101
+
102
+ Previous works [9, 28, 29, 30, 31] primarily used AP (Average Precision), AUC (Area Under the ROC Curve), and TTA (Time-To-Accident) to evaluate accident anticipation methods. However, unlike traditional binary classification tasks, we observe that in real-world applications, excessively high false alarm rates (FAR) can cause unacceptable disturbances to drivers. Therefore, when FAR exceeds a reasonable threshold, comparing recall rates and TTA becomes meaningless. Existing metrics allow FAR to range from 0 to 1, which could lead to suboptimal model selection for practical deployment. To address this, we introduce a threshold $\lambda$ for FAR and only evaluate cumulative recall and TTA when FAR remains below $\lambda$ . Furthermore, while current metrics measure overall anticipation capability, they fail to assess performance at specific pre-accident time intervals. Following the approach in [32], we analyze anticipation recall rates at different time intervals before accidents. Finally, we identify limitations in conventional TTA calculation methods and propose an improved alternative. The detailed metrics are described below:

Area under the ROC curve (AUC). We employ AUC (Area Under the ROC Curve) to calculate the average recall rate of accident anticipation models under varying false alarm rates (FAR). Notably, when the false alarm rate exceeds a certain level, comparing recall rates across different models becomes meaningless. Therefore, we specifically compute the average recall rate only when the false alarm rate remains below a predefined threshold $\lambda$, denoted as $\mathrm{AUC}^{\lambda}$ and defined in Equation 2:

$$
\mathrm{AUC}^{\lambda} = \sum_{i=1}^{n} \frac{\mathrm{TPR}_{i} + \mathrm{TPR}_{i-1}}{2} \cdot \left(\mathrm{FPR}_{i} - \mathrm{FPR}_{i-1}\right), \quad \mathrm{FPR}_{n} \leq \lambda, \tag{2}
$$

where FPR is equivalent to the false alarm rate (FAR), TPR is equivalent to the recall rate, and $\lambda$ is set to 0.1 by default.
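
Equation 2 is the usual trapezoidal ROC area, accumulated only while the false alarm rate stays within the budget. A minimal sketch (the function name and the requirement that the ROC points be sorted by increasing FPR and start at (0, 0) are our assumptions):

```python
def auc_below_far(fpr, tpr, lam=0.1):
    """Trapezoidal AUC^lambda (Eq. 2): area under the ROC curve,
    accumulated only over segments whose FPR stays within lam.

    fpr, tpr: ROC points sorted by increasing FPR, starting at (0, 0).
    """
    area = 0.0
    for i in range(1, len(fpr)):
        if fpr[i] > lam:          # stop once the FAR budget is exceeded
            break
        area += 0.5 * (tpr[i] + tpr[i - 1]) * (fpr[i] - fpr[i - 1])
    return area
```

$\mathrm{mAUC}^{\lambda}$ (Equation 3 below) is then simply the mean of this quantity over the 0.5s, 1.0s, and 1.5s positive sets.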

Additionally, to evaluate the capability of the accident anticipation models at different horizons before an accident occurs, we extracted video clips from 0.5s-1.0s, 1.0s-1.5s, and 1.5s-2.0s before the accident as positive samples, while capturing an equal number of 0.5s-long video segments from accident-free driving scenarios as negative samples. We then calculated the $\mathrm{AUC}^{\lambda}$ for different time intervals, denoted as $\mathrm{AUC}_{0.5s}^{\lambda}$, $\mathrm{AUC}_{1.0s}^{\lambda}$, and $\mathrm{AUC}_{1.5s}^{\lambda}$ (e.g., $\mathrm{AUC}_{1.5s}^{\lambda}$ represents the model's capability in anticipating accidents 1.5 seconds before they occur). Finally, we computed the model's mean $\mathrm{AUC}^{\lambda}$ using Equation 3:

$$
\mathrm{mAUC}^{\lambda} = \frac{\mathrm{AUC}_{0.5s}^{\lambda} + \mathrm{AUC}_{1.0s}^{\lambda} + \mathrm{AUC}_{1.5s}^{\lambda}}{3}. \tag{3}
$$

Time-To-Accident (TTA). We adopt Time-To-Accident (TTA) to evaluate the earliness of the accident anticipation. Specifically, for each frame in an accident video, if the model's predicted anomaly score exceeds a preset threshold $\tau$, an alarm will be triggered, and the time gap (in seconds) between this alarm moment and the actual accident occurrence is recorded as TTA. Generally, as the threshold $\tau$ decreases, both TTA and the false alarm rate (FAR) increase simultaneously. Therefore, we only compute the mean TTA when the false alarm rate remains below $\lambda$, as illustrated in Equation 4:

$$
\mathrm{mTTA}^{\lambda} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{TTA}_{i}, \quad \mathrm{FPR}_{n} \leq \lambda. \tag{4}
$$

However, as illustrated in Figure 3, the TTA calculation method used in previous works can lead to misleadingly high TTA values. Specifically, when a model raises a false alarm in a safe driving scenario—i.e., it predicts an accident that is not causally linked to any actual future crash—the method still computes TTA based on the time until a subsequent accident (even if unrelated). This results in artificially inflated TTA estimates, sometimes exceeding 3 seconds, despite the prediction being a false alarm.

To address this issue, we propose a revised TTA calculation method: we only compute TTA for alarms triggered after the anomaly appears. This is because the anomaly appearance time is the earliest moment annotated by humans as reliably indicating an impending accident. Alarms issued before this point are considered false alarms, not valid early warnings, as the perceived risk lacks a causal connection to the eventual accident. Under our method, the maximum achievable average TTA is bounded by the average interval between anomaly appearance and accident occurrence in the dataset—1.86 seconds on CAP [9] and 1.66 seconds on DADA [8]. Moreover, if the model fails to predict an accident at all, its TTA is set to 0, ensuring a fair and meaningful evaluation.
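
Under stated assumptions (per-frame scores, an alarm threshold $\tau$, and frame times in seconds — the function and argument names are ours), the revised rule can be sketched as:

```python
def revised_tta(frame_times, scores, t_anomaly, t_accident, tau=0.5):
    """TTA under the revised rule: only alarms raised at or after the
    annotated anomaly-appearance time t_anomaly count as valid early
    warnings; earlier alarms are treated as false alarms. Returns 0.0
    when the model never alarms in the valid window (missed accident).
    """
    for t, s in zip(frame_times, scores):
        if t < t_anomaly or t > t_accident:
            continue                     # outside the valid window
        if s >= tau:
            return t_accident - t        # seconds of early warning
    return 0.0                           # no valid alarm -> TTA = 0
```

Note that a confident alarm before `t_anomaly` contributes nothing here, whereas the conventional calculation would have credited it with a long (spurious) lead time.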

![](images/5f434c020636ea5acab2231803b90fd4d803c0993f013c00d47184624a2e05cc.jpg)
Figure 3: Comparison of the TTA calculation approaches between previous works and our method. We only compute TTA after the anomaly appears, because the moment the anomaly appears is the earliest time annotated by humans as being predictive of the accident. If a model issues an alert earlier than this moment, the perceived risk is typically unrelated to the actual accident that eventually occurs. We consider such alerts as false alarms rather than early warnings. However, previous works did not account for this when calculating TTA, leading to inflated TTA values—sometimes even exceeding 3 seconds.

# 5 Experiments

# 5.1 Experimental Setup

Datasets. We conduct experiments on the MM-AU dataset [33], a large-scale ego-view traffic accident benchmark collected from public sources including existing datasets (CCD [6], A3D [4], DoTA [5], DADA-2000 [24]) and video platforms (YouTube, Bilibili, Tencent), encompassing diverse weather (sunny, rainy, snowy, foggy), lighting (day, night), scenes (highway, tunnel, mountain, urban, rural), and road types (arterial roads, curves, intersections, T-junctions, ramps)—which enables robust evaluation of model generalizability, in contrast to prior works that primarily train on limited datasets like DAD and CCD. MM-AU consists of two subsets: CAP [9] with 11,727 videos (2,195,613 frames) and DADA [8] with 2,000 videos (658,476 frames), both providing annotations for 58 accident categories and temporal labels for key events ("anomaly appear", "accident occur", and "accident end"). We refined and validated the frame rates and annotations for all ego-involved accidents, and selected approximately $20\%$ of the data as the test set; for evaluation, we extract clips from the first frame to the "anomaly appear" frame as negative samples to compute the false alarm rate (FAR), and clips from "anomaly appear" to "accident occur" as positive samples to assess anticipation recall and Time-to-Accident (TTA).

Implementation details. We preprocess input videos by resizing each frame to $224 \times 224$ and resampling at 10 FPS, so that each frame corresponds to a 0.1s time interval. During training, we sample snippets only from the period before the accident occurs; during testing, we apply a sliding window over the entire video. Each input snippet consists of $S = 5$ consecutive frames, which are fed into a snippet-level encoder (SlowOnly [34], initialized with ImageNet pre-trained weights) to extract spatiotemporal features. The model then predicts a sequence of accident scores of length $T = 20$, corresponding to future time steps from 0.1s to 2.0s ahead. To decode these scores, we employ a Transformer-based temporal decoder with 2 layers and cosine positional encodings as queries for each future horizon. We optimize the model using SGD with a batch size of 64 on 8 NVIDIA 4090 GPUs. The binary cross-entropy loss is weighted with $w_{+} = 10$ for positive samples, and the initial learning rate is set to 0.01, decayed to $10\%$ of its value every 20 epochs over 50 total epochs.
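
As an indexing sketch of the test-time sliding window (the encoder/decoder call is omitted, and the generator name is ours): at 10 FPS, every time step with at least $S$ frames of history yields one snippet of the $S$ most recent frames, which the model maps to $T = 20$ future scores.

```python
def sliding_snippets(num_frames, S=5, stride=1):
    """Yield, for every time step with enough history, the indices of
    the S most recent frames [t-S+1, ..., t] forming one input snippet.
    Each snippet would then be encoded and decoded into T = 20 accident
    scores for the horizons 0.1s, 0.2s, ..., 2.0s."""
    for t in range(S - 1, num_frames, stride):
        yield list(range(t - S + 1, t + 1))
```

With `stride=1` this produces one prediction per 0.1s frame, matching the online-anticipation setting described above.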

# 5.2 Quantitative Results

Quantitative comparison. We evaluate our method against prior approaches and baselines on the CAP [9] and DADA [8] datasets under a unified experimental protocol. For fair comparison, all methods—including CAP [9], DRIVE [7], DSTA [11], and GSC [17]—are trained and tested using the same data splits and evaluation metrics (Section 4). As baselines, we include (1) a ResNet+LSTM architecture that predicts a single per-frame risk score (identical to Experiment I in our ablation study), and (2) a variant of our full model without the temporal decoder.

As shown in Table 1, our method achieves significant gains on CAP. At the 0.0s horizon—where the task reduces to precise accident detection—our AUC reaches 0.8381, far surpassing all prior works and the baseline (0.4357). This demonstrates our model's strong capability in recognizing the exact moment of collision. At short-term horizons (0.5s), we obtain an AUC of 0.6747, nearly doubling the best prior result (GSC: 0.4177). The improvement gradually diminishes at longer horizons (1.0s and 1.5s), where risk signals are inherently weaker, yet our method still maintains the highest performance (AUC: 0.3982 and 0.2141, respectively), yielding a mean AUC (mAUC) of 0.4290 and the best mean Time-to-Accident ($\mathrm{mTTA} = 0.8644\mathrm{s}$). These trends are further visualized in the ROC curves of Figure 4, which show consistently higher recall across all false alarm rates, especially near the accident onset.

![](images/fdf98a0b35ea988d1b97ffa0f3d59eec2e67f98fafb08182ac3ba722d51e2d8d.jpg)
Figure 4: ROC curves of our method on CAP [9] at different accident anticipation horizons: 0.0s, 0.5s, 1.0s, and 1.5s before the accident.

Table 1: Quantitative results comparison of different methods on the CAP [9] dataset.

<table><tr><td>Method</td><td>$\mathrm{AUC}^{0.1}\uparrow$</td><td>$\mathrm{AUC}^{0.1}_{0.5s}\uparrow$</td><td>$\mathrm{AUC}^{0.1}_{1.0s}\uparrow$</td><td>$\mathrm{AUC}^{0.1}_{1.5s}\uparrow$</td><td>$\mathrm{mAUC}^{0.1}\uparrow$</td><td>$\mathrm{mTTA}^{0.1}$ (s)$\uparrow$</td></tr><tr><td>CAP [9]</td><td>0.0421</td><td>0.0400</td><td>0.0296</td><td>0.0373</td><td>0.0357</td><td>0.6372</td></tr><tr><td>DRIVE [7]</td><td>0.1288</td><td>0.1167</td><td>0.1079</td><td>0.1231</td><td>0.1159</td><td>0.3954</td></tr><tr><td>DSTA [11]</td><td>0.5593</td><td>0.3862</td><td>0.2817</td><td>0.1913</td><td>0.2864</td><td>0.8039</td></tr><tr><td>GSC [17]</td><td>0.6093</td><td>0.4177</td><td>0.2969</td><td>0.1994</td><td>0.3046</td><td>0.8165</td></tr><tr><td>Baseline</td><td>0.4357</td><td>0.3938</td><td>0.2770</td><td>0.1777</td><td>0.2829</td><td>0.5389</td></tr><tr><td>Ours</td><td>0.8381</td><td>0.6747</td><td>0.3982</td><td>0.2141</td><td>0.4290</td><td>0.8644</td></tr></table>

Table 2: Quantitative results comparison of different methods on the DADA [8] dataset.

<table><tr><td>Method</td><td>$\mathrm{AUC}^{0.1}\uparrow$</td><td>$\mathrm{AUC}^{0.1}_{0.5s}\uparrow$</td><td>$\mathrm{AUC}^{0.1}_{1.0s}\uparrow$</td><td>$\mathrm{AUC}^{0.1}_{1.5s}\uparrow$</td><td>$\mathrm{mAUC}^{0.1}\uparrow$</td><td>$\mathrm{mTTA}^{0.1}$ (s)$\uparrow$</td></tr><tr><td>CAP [9]</td><td>0.0317</td><td>0.0365</td><td>0.0670</td><td>0.0643</td><td>0.0560</td><td>0.4964</td></tr><tr><td>DRIVE [7]</td><td>0.1005</td><td>0.0628</td><td>0.0770</td><td>0.0885</td><td>0.0761</td><td>0.2257</td></tr><tr><td>DSTA [11]</td><td>0.4728</td><td>0.3276</td><td>0.2207</td><td>0.1345</td><td>0.2276</td><td>0.6952</td></tr><tr><td>GSC [17]</td><td>0.5142</td><td>0.3495</td><td>0.2382</td><td>0.1392</td><td>0.2423</td><td>0.7034</td></tr><tr><td>Baseline</td><td>0.3411</td><td>0.3046</td><td>0.2099</td><td>0.1251</td><td>0.2132</td><td>0.4138</td></tr><tr><td>Ours</td><td>0.7903</td><td>0.5669</td><td>0.2877</td><td>0.1399</td><td>0.3315</td><td>0.8848</td></tr></table>

Similar patterns are observed on DADA (Table 2). Our method achieves 0.7903 AUC at 0.0s and 0.5669 at 0.5s, substantially outperforming previous methods. Although the gains at 1.0s-1.5s are more modest, our mAUC (0.3315) and mTTA (0.8848 s) remain the best, confirming the robustness of our approach across datasets. Overall, the results validate that shifting supervision from ambiguous risk labels to precise future accident timing enables more accurate and reliable anticipation, particularly in the critical moments just before a crash.

Threshold variation. To evaluate the robustness of our accident anticipation capability under different false alarm constraints, we vary the FAR tolerance threshold $\lambda$, defined as the maximum allowable false alarm rate used when computing metrics. When $\lambda = 1$, no constraint is applied, and the evaluation aligns with conventional protocols used in prior works. As $\lambda$ decreases (e.g., to 0.1 or 0.01), metrics are computed only over predictions that satisfy the stricter FAR requirement.

As shown in Table 3, under the practical setting of $\lambda = 0.1$ (i.e., FAR $\leq 10\%$), our method achieves mAUC of 0.4290 on CAP and 0.3315 on DADA, with strong short-term anticipation performance ($\mathrm{AUC}_{0.5s}^{0.1} = 0.6747$ and 0.5669, respectively). Even under a stringent constraint of $\lambda = 0.01$ (FAR $\leq 1\%$), our model retains non-trivial performance, particularly at the 0.5s horizon (AUC = 0.3371 on CAP, 0.1183 on DADA).

Notably, as $\lambda$ decreases, AUC drops more sharply at longer horizons (1.0s-1.5s) than at shorter ones (0.0s-0.5s), indicating that early false alarms are effectively suppressed under tight FAR constraints. This confirms that our model's early predictions are often spurious, while its near-crash anticipation remains reliable—a behavior aligned with real-world safety requirements. The corresponding reduction in mTTA reflects the inherent trade-off between false alarm suppression and anticipation lead time.

# 5.3 Ablation Study

Temporal occurrence prediction. Our temporal occurrence prediction (TOP) module replaces the conventional single risk score with a sequence of accident scores over future time steps (0.1s-2.0s), enabling explicit modeling of when an accident may occur. As shown in Table 4, adding TOP (Experiment III vs. I) improves performance at the 0.0s horizon (AUC from 0.4357 to 0.5700), but yields limited gains at longer horizons, suggesting that TOP alone—without strong spatiotemporal modeling—struggles to capture early precursors of accidents. However, when combined with the snippet encoder (Experiment IV), TOP contributes significantly to overall anticipation accuracy, confirming that forecasting future accident timing provides more informative supervision than frame-level risk scoring.

Table 3: Quantitative results comparison across different $\lambda$ of our method on the CAP [9] and DADA [8] datasets.

<table><tr><td>Dataset</td><td>$\lambda$</td><td>$\mathrm{AUC}^{\lambda}\uparrow$</td><td>$\mathrm{AUC}^{\lambda}_{0.5s}\uparrow$</td><td>$\mathrm{AUC}^{\lambda}_{1.0s}\uparrow$</td><td>$\mathrm{AUC}^{\lambda}_{1.5s}\uparrow$</td><td>$\mathrm{mAUC}^{\lambda}\uparrow$</td><td>$\mathrm{mTTA}^{\lambda}$ (s)$\uparrow$</td></tr><tr><td rowspan="3">CAP [9]</td><td>1</td><td>0.9760</td><td>0.9389</td><td>0.8377</td><td>0.7164</td><td>0.8310</td><td>1.5908</td></tr><tr><td>0.1</td><td>0.8381</td><td>0.6747</td><td>0.3982</td><td>0.2141</td><td>0.4290</td><td>0.8644</td></tr><tr><td>0.01</td><td>0.7329</td><td>0.3371</td><td>0.0882</td><td>0.0227</td><td>0.1494</td><td>0.4394</td></tr><tr><td rowspan="3">DADA [8]</td><td>1</td><td>0.9666</td><td>0.8946</td><td>0.7399</td><td>0.6400</td><td>0.7582</td><td>1.4328</td></tr><tr><td>0.1</td><td>0.7903</td><td>0.5669</td><td>0.2877</td><td>0.1399</td><td>0.3315</td><td>0.8848</td></tr><tr><td>0.01</td><td>0.3177</td><td>0.1183</td><td>0.0203</td><td>0.0068</td><td>0.0484</td><td>0.5153</td></tr></table>

Table 4: Ablation study on the CAP [9] dataset. TOP: temporal occurrence prediction; SE: snippet encoder.

<table><tr><td>Experiment</td><td>TOP</td><td>SE</td><td>$\mathrm{AUC}^{0.1}\uparrow$</td><td>$\mathrm{AUC}^{0.1}_{0.5s}\uparrow$</td><td>$\mathrm{AUC}^{0.1}_{1.0s}\uparrow$</td><td>$\mathrm{AUC}^{0.1}_{1.5s}\uparrow$</td><td>$\mathrm{mAUC}^{0.1}\uparrow$</td><td>$\mathrm{mTTA}^{0.1}$ (s)$\uparrow$</td></tr><tr><td>I (Baseline)</td><td>×</td><td>×</td><td>0.4357</td><td>0.3938</td><td>0.2770</td><td>0.1777</td><td>0.2829</td><td>0.5389</td></tr><tr><td>II</td><td>×</td><td>✓</td><td>0.6027</td><td>0.5550</td><td>0.3607</td><td>0.1931</td><td>0.3696</td><td>0.7330</td></tr><tr><td>III</td><td>✓</td><td>×</td><td>0.5700</td><td>0.3432</td><td>0.2284</td><td>0.1721</td><td>0.2479</td><td>0.4595</td></tr><tr><td>IV (Ours)</td><td>✓</td><td>✓</td><td>0.8381</td><td>0.6747</td><td>0.3982</td><td>0.2141</td><td>0.4290</td><td>0.8644</td></tr></table>

Snippet encoder. The snippet encoder (SE) processes short clips of consecutive frames to jointly model spatial and temporal dynamics, which is crucial for understanding motion patterns and scene evolution. Comparing Experiment II (SE only) with the baseline (I), SE alone boosts $\mathrm{AUC}_{0.5s}$ from 0.3938 to 0.5550 and mTTA from 0.5389s to 0.7330s. More importantly, when SE is combined with TOP (Experiment IV), the model achieves the best results across all metrics: $\mathrm{AUC}^{0.1} = 0.8381$, mAUC $= 0.4290$, and mTTA $= 0.8644$s. This demonstrates that SE and TOP are highly complementary—SE provides rich spatiotemporal context, while TOP leverages precise temporal supervision to produce well-calibrated anticipation outputs.

# 6 Conclusion

In this work, we propose a novel accident anticipation paradigm that shifts the prediction target from ambiguous per-frame risk scores to directly estimating accident scores at multiple future time steps (e.g., 0.1s-2.0s ahead), leveraging precisely annotated accident occurrences as supervision. Our method employs a snippet encoder and a Transformer-based temporal decoder to jointly model spatiotemporal dynamics and enable online anticipation. Furthermore, we introduce a practical evaluation protocol that reports recall and Time-to-Accident (TTA) only under acceptable false alarm rates, aligning metrics with real-world deployment needs. Experiments show that our approach achieves state-of-the-art performance, particularly in the critical moments just before a crash.

Limitations. While our method significantly improves anticipation accuracy near the accident onset, its performance at longer horizons (e.g., $>1.0\mathrm{s}$) remains limited, indicating challenges in capturing subtle early precursors. Additionally, even under constrained false alarm rates, spurious alerts can still occur in complex scenes, which may affect user trust. These issues point to key directions for future work.

Potential societal impacts. Our system has the potential to enhance road safety by providing timely warnings. However, over-reliance on automated alerts might reduce driver vigilance. Careful human-in-the-loop design and user education are essential to maximize safety benefits while mitigating behavioral risks.

# Acknowledgments and Disclosure of Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62471344), the Zhongguancun Academy (Project No. 20240304), and the CCF-DiDi GAIA Collaborative Research Funds for Young Scholars.

# References

[1] J. Fang, J. Qiao, J. Xue, and Z. Li, "Vision-based traffic accident detection and anticipation: A survey," IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 4, pp. 1983-1999, 2023.
[2] K.-H. Zeng, S.-H. Chou, F.-H. Chan, J. Carlos Niebles, and M. Sun, "Agent-centric risk assessment: Accident anticipation and risky region localization," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2222-2230.
[3] F.-H. Chan, Y.-T. Chen, Y. Xiang, and M. Sun, "Anticipating accidents in dashcam videos," in ACCV. Springer, 2017, pp. 136-153.
[4] Y. Yao, M. Xu, Y. Wang, D. J. Crandall, and E. M. Atkins, "Unsupervised traffic accident detection in first-person videos," in IROS. IEEE, 2019, pp. 273-280.
[5] Y. Yao, X. Wang, M. Xu, Z. Pu, Y. Wang, E. Atkins, and D. J. Crandall, "Dota: Unsupervised detection of traffic anomaly in driving videos," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 1, pp. 444-459, 2022.
[6] W. Bao, Q. Yu, and Y. Kong, "Uncertainty-based traffic accident anticipation with spatio-temporal relational learning," in ACMMM, 2020, pp. 2682-2690.
[7] W. Bao, Y. Qi, and Y. Kong, "Drive: Deep reinforced accident anticipation with visual explanation," in ICCV, 2021, pp. 7619-7628.
[8] J. Fang, D. Yan, J. Qiao, J. Xue, and H. Yu, "Dada: Driver attention prediction in driving accident scenarios," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 6, pp. 4959-4971, 2021.
[9] J. Fang, L.-L. Li, K. Yang, Z. Zheng, J. Xue, and T.-S. Chua, "Cognitive accident prediction in driving scenes: A multimodality benchmark," IEEE Intelligent Transportation Systems Magazine, 2024.
[10] T. Suzuki, H. Kataoka, Y. Aoki, and Y. Satoh, "Anticipating traffic accidents with adaptive loss and large-scale incident db," in CVPR, 2018, pp. 3521-3529.
[11] M. M. Karim, Y. Li, R. Qin, and Z. Yin, "A dynamic spatial-temporal attention network for early anticipation of traffic accidents," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 9590-9600, 2022.
[12] Y. Kumamoto, K. Ohtani, D. Suzuki, M. Yamataka, and K. Takeda, "Aat-da: Accident anticipation transformer with driver attention," in 2025 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), 2025, pp. 1052-1061.
[13] J. Zhang, Y. Guan, C. Wang, H. Liao, G. Zhang, and Z. Li, "LATTE: A real-time lightweight attention-based traffic accident anticipation engine," Information Fusion, vol. 122, p. 103173, 2025.
[14] M. M. Karim, Y. Li, and R. Qin, "Toward explainable artificial intelligence for early anticipation of traffic accidents," Transportation Research Record, vol. 2676, no. 6, pp. 743-755, 2022.
[15] I. Song and J. Lee, "Real-time traffic accident anticipation with feature reuse," in 2025 IEEE International Conference on Image Processing (ICIP), 2025, pp. 2312-2317.
[16] W. Liu, T. Zhang, Y. Lu, J. Chen, and L. Wei, "That-net: Two-layer hidden state aggregation based two-stream network for traffic accident prediction," Inf. Sci., vol. 634, pp. 744-760, 2023.
[17] T. Wang, K. Chen, G. Chen, B. Li, Z. Li, Z. Liu, and C. Jiang, "Gsc: A graph and spatio-temporal continuity based framework for accident anticipation," IEEE Transactions on Intelligent Vehicles, vol. 9, no. 1, pp. 2249-2261, 2023.
[18] N. Thakur, P. Gouripeddi, and B. Li, "Graph (graph): A nested graph-based framework for early accident anticipation," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2024, pp. 7533-7541.
[19] W. Song, S. Li, T. Chang, K. Xie, A. Hao, and H. Qin, "Dynamic attention augmented graph network for video accident anticipation," Pattern Recognition, vol. 147, p. 110071, 2024.
[20] H. Liao, H. Sun, H. Shen, C. Wang, C. Tian, K. Tam, L. Li, C. Xu, and Z. Li, "Crash: Crash recognition and anticipation system harnessing with context-aware and temporal focus attentions," in Proceedings of the 32nd ACM International Conference on Multimedia, 2024, pp. 11041-11050.
[21] M. M. Karim, Z. Yin, and R. Qin, "An attention-guided multistream feature fusion network for early localization of risky traffic agents in driving videos," IEEE Transactions on Intelligent Vehicles, pp. 1-12, 2023.
[22] H. Liao, Y. Li, Z. Li, Z. Bian, J. Lee, Z. Cui, G. Zhang, and C. Xu, "Real-time accident anticipation for autonomous driving through monocular depth-enhanced 3d modeling," Accident Analysis & Prevention, vol. 207, p. 107760, 2024.
[23] W. Liu, Y. Li, T. Zhang, Y. Gao, L. Wei, and J. Chen, "Ccaf-net: Cascade complementarity-aware fusion network for traffic accident prediction in dashcam videos," Neurocomput., vol. 624, 2025.
[24] J. Fang, D. Yan, J. Qiao, J. Xue, H. Wang, and S. Li, "Dada-2000: Can driving accident be predicted by driver attention? Analyzed by a benchmark," in ITSC. IEEE, 2019, pp. 4303-4309.
[25] Y. Guan, H. Liao, C. Wang, B. Wang, J. Zhang, J. Hu, and Z. Li, "Domain-enhanced dual-branch model for efficient and interpretable accident anticipation," Communications in Transportation Research, vol. 5, p. 100214, 2025.
[26] H. Liao, Y. Li, C. Wang, Y. Guan, K. Tam, C. Tian, L. Li, C. Xu, and Z. Li, "When, where, and what? a novel benchmark for accident anticipation and localization with large language models," ACM MM, 2024.
[27] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," NeurIPS, vol. 30, 2017.
[28] L. Chen, J. Lu, Z. Song, and J. Zhou, "Recurrent semantic preserving generation for action prediction," IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 1, pp. 231-245, 2020.
[29] T. J. Schoonbeek, F. J. Piva, H. R. Abdelhay, and G. Dubbelman, "Learning to predict collision risk from simulated video data," in 2022 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2022, pp. 943-951.
[30] A. P. Shah, J.-B. Lamare, T. Nguyen-Anh, and A. Hauptmann, "Cadp: A novel dataset for cctv traffic camera based accident analysis," in 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2018, pp. 1-9.
[31] T. You and B. Han, "Traffic accident benchmark for causality recognition," in ECCV. Springer, 2020, pp. 540-556.
[32] D. C. Moura, S. Zhu, and O. Zvitia, "Nexar dashcam collision prediction dataset and challenge," 2025. [Online]. Available: https://arxiv.org/abs/2503.03848
[33] J. Fang, L.-l. Li, J. Zhou, J. Xiao, H. Yu, C. Lv, J. Xue, and T.-S. Chua, "Abductive ego-view accident video understanding for safe driving perception," in CVPR, 2024, pp. 22030-22040.
[34] C. Feichtenhofer, H. Fan, J. Malik, and K. He, "Slowfast networks for video recognition," in CVPR, 2019, pp. 6202-6211.

# A Qualitative Results

![](images/d2ee53eb24b8fb2ce5c0ba1affed1bb37ccc3596bd88572f19f6a172e7fb758b.jpg)
(a) A case where the ego vehicle collides with a motorcyclist who suddenly emerges from a blind spot.

![](images/146c2cb6289e86de33ced6b80b1309aaec27446caeb9b6c92d647c4a1d9f17ba.jpg)
(b) A case where the ego vehicle collides with a cyclist who suddenly cuts across the road.

![](images/366e6848e717acec496cc58fc655bea0fd7c717b005d7b70f1ffd356cdad05de.jpg)
(c) A case where the ego vehicle collides with a suddenly lane-changing car.

![](images/606cfa98b6dfc9453391122ee39f9c04e685a9a5c901f3bcf072c9e306f1e0b3.jpg)
(d) A case where the ego vehicle rear-ends the lead car.
Figure 5: Qualitative results on CAP [9]. Each case shows the trend of the maximum accident score predicted over future time steps; an alarm is triggered if this maximum exceeds the threshold.

We present the qualitative results of our method on the CAP dataset [9] in Figure 5, where different colors denote the temporal annotations in the dataset. Four distinct cases are demonstrated: (a) and (b) involve ego-vehicle collisions with vulnerable road users, while (c) and (d) involve collisions with other vehicles.

We trigger an alarm if any accident score within the 2.0s prediction horizon exceeds a predefined threshold (e.g., 0.5). In cases (a) and (b), the model issues an alert immediately when a motorcyclist emerges from a blind spot or a cyclist begins to cut across the road. In cases (c) and (d), the model accurately responds to sudden lane changes and abrupt braking of the lead vehicle, demonstrating reliable anticipation under diverse hazardous scenarios.
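
The trigger rule described above reduces to a max over the predicted horizon; a minimal sketch (the function name is ours):

```python
def should_alarm(future_scores, tau=0.5):
    """Trigger an alarm if any accident score within the prediction
    horizon (here T = 20 steps, 0.1s-2.0s ahead) exceeds tau."""
    return max(future_scores) > tau
```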

The average Time-to-Accident (TTA) across these four cases is 1.0s, consistent with the typical duration between anomaly onset and collision in real-world accidents. Notably, previously reported TTAs exceeding 3 seconds in prior works [7, 9] stem from flawed calculation methods that count false alarms far before the anomaly as valid early predictions—rather than genuine long-term anticipation capability. This underscores the necessity of our revised TTA metric.

Furthermore, we observed inconsistencies in the dataset's "anomaly appear" annotations. For instance, cases (b) and (d) were labeled too early, while case (c) was labeled too late. Such subjectivity introduces noise when using anomaly onset as a training or evaluation boundary.

# NeurIPS Paper Checklist

# 1. Claims

Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?

Answer: [Yes]

Justification: We propose a novel accident anticipation paradigm that shifts from traditional per-frame risk score prediction to directly estimating accident scores at multiple future time steps (e.g., 0.1s-2.0s ahead), leveraging precisely annotated accident occurrences as supervision. This provides more accurate training signals and yields more interpretable and temporally grounded predictions.

Guidelines:

- The answer NA means that the abstract and introduction do not include the claims made in the paper.
- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

# 2. Limitations

Question: Does the paper discuss the limitations of the work performed by the authors?

Answer: [Yes]

Justification: We discussed the limitations of our work in the conclusion section.

Guidelines:

- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
- The authors are encouraged to create a separate "Limitations" section in their paper.
- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

# 3. Theory assumptions and proofs

Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?

Answer: [NA]

Justification: Our paper does not include theoretical results.

Guidelines:

- The answer NA means that the paper does not include theoretical results.
- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
- All assumptions should be clearly stated or referenced in the statement of any theorems.
- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
- Theorems and Lemmas that the proof relies upon should be properly referenced.
309
+ # 4. Experimental result reproducibility
+
+ Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
+
+ Answer: [Yes]
+
+ Justification: The project page, which includes code, model checkpoints, and implementation details, is provided in the abstract (https://happytianhao.github.io/TOP/).
+
+ Guidelines:
+
+ - The answer NA means that the paper does not include experiments.
+ - If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
+ - If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
+ - Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
+ - While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example:
+ (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
+ (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
+ (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
+ (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
+
+ # 5. Open access to data and code
+
+ Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
+
+ Answer: [Yes]
+
+ Justification: All datasets used in this work (CAP [9], DADA [8], and MM-AU [33]) are publicly available. Our code, model checkpoints, and detailed instructions for reproducing the main results are provided on the project page: https://happytianhao.github.io/TOP/.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not include experiments requiring code.
+ - Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
+ - While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
+ - The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
+ - The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
+ - The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
+ - At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
+ - Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
+
+ # 6. Experimental setting/details
+
+ Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
+
+ Answer: [Yes]
+
+ Justification: We have specified all the training and test details in the implementation details section.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not include experiments.
+ - The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
+ - The full details can be provided either with the code, in appendix, or as supplemental material.
+
+ # 7. Experiment statistical significance
+
+ Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
+
+ Answer: [Yes]
+
+ Justification: All reported results are averaged over three independent runs with different random seeds. While error bars are omitted in the tables for clarity (following common practice in video accident anticipation benchmarks), the use of mean values ensures stable and reproducible comparisons. The performance gaps between our method and baselines are consistent across runs and substantially larger than any observed variance.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not include experiments.
+ - The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
+ - The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
+ - The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
+ - The assumptions made should be given (e.g., Normally distributed errors).
+ - It should be clear whether the error bar is the standard deviation or the standard error of the mean.
+ - It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified.
+ - For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
+ - If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
+
+ # 8. Experiments compute resources
+
+ Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
+
+ Answer: [Yes]
+
+ Justification: We have specified all the training and test details in the implementation details section.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not include experiments.
+ - The paper should indicate the type of compute workers (CPU or GPU, internal cluster, or cloud provider), including relevant memory and storage.
+ - The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
+ - The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
+
+ # 9. Code of ethics
+
+ Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
+
+ Answer: [Yes]
+
+ Justification: The research conducted in the paper conforms, in every respect, with the NeurIPS Code of Ethics.
+
+ Guidelines:
+
+ - The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
+ - If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
+ - The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
+
+ # 10. Broader impacts
+
+ Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
+
+ Answer: [Yes]
+
+ Justification: We discussed both potential positive societal impacts and negative societal impacts of our work in the conclusion section.
+
+ Guidelines:
+
+ - The answer NA means that there is no societal impact of the work performed.
+ - If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
+ - Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
+ - The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate deepfakes faster.
+ - The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
+ - If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
+
+ # 11. Safeguards
+
+ Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
+
+ Answer: [NA]
+
+ Justification: Our work poses no such risks.
+
+ Guidelines:
+
+ - The answer NA means that the paper poses no such risks.
+ - Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
+ - Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
+ - We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
+
+ # 12. Licenses for existing assets
+
+ Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
+
+ Answer: [Yes]
+
+ Justification: Yes, all third-party assets used in this work are properly credited, in full compliance with their licensing terms.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not use existing assets.
+ - The authors should cite the original paper that produced the code package or dataset.
+ - The authors should state which version of the asset is used and, if possible, include a URL.
+ - The name of the license (e.g., CC-BY 4.0) should be included for each asset.
+ - For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
+ - If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
+ - For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
+ - If this information is not available online, the authors are encouraged to reach out to the asset's creators.
+
+ # 13. New assets
+
+ Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
+
+ Answer: [Yes]
+
+ Justification: Yes, all newly introduced assets in this work are thoroughly documented following FAIR principles (Findable, Accessible, Interoperable, Reusable).
+
+ Guidelines:
+
+ - The answer NA means that the paper does not release new assets.
+ - Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
+ - The paper should discuss whether and how consent was obtained from people whose asset is used.
+ - At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
+
+ # 14. Crowdsourcing and research with human subjects
+
+ Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
+
+ Answer: [NA]
+
+ Justification: Our work includes no crowdsourcing experiments or research with human subjects.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
+ - Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
+ - According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
+
+ # 15. Institutional review board (IRB) approvals or equivalent for research with human subjects
+
+ Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
+
+ Answer: [NA]
+
+ Justification: Our work poses no such risks.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
+ - Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
+ - We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
+ - For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
+
+ # 16. Declaration of LLM usage
+
+ Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
+
+ Answer: [NA]
+
+ Justification: Our core method development does not involve LLMs as any important, original, or non-standard components.
+
+ Guidelines:
+
+ - The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
+ - Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
accidentanticipationviatemporaloccurrenceprediction/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d194634fd20d9d6afc578ae651b69aca99a27e6197baa862c436aa86427ebacf
+ size 575921
accidentanticipationviatemporaloccurrenceprediction/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91602beed7a441ef07760307a7fd2d02fc2951cef6cf5c9f4a7c8f0af44a081c
+ size 586516
accuquantsimulatingmultipledenoisingstepsforquantizingdiffusionmodels/cfe4e2ab-fab2-4658-b42e-17015f3cc925_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3240411394fa335de8e10c1253aff1112fa130f6b9e89cdea90604517412e359
+ size 192216
accuquantsimulatingmultipledenoisingstepsforquantizingdiffusionmodels/cfe4e2ab-fab2-4658-b42e-17015f3cc925_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3425e11b7108f293d50865e43e19247cfe06d18eef4b0ec60fe26f9bbf11271
+ size 236572
accuquantsimulatingmultipledenoisingstepsforquantizingdiffusionmodels/cfe4e2ab-fab2-4658-b42e-17015f3cc925_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a98ca281af68d537eb7b0d3833aa831fb60b4500a73046116a3bad745052fa3f
+ size 21864382
accuquantsimulatingmultipledenoisingstepsforquantizingdiffusionmodels/full.md ADDED
The diff for this file is too large to render. See raw diff
 
accuquantsimulatingmultipledenoisingstepsforquantizingdiffusionmodels/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:59abbad2df9cd1da03bd48aa4e799458a4ad00c6a20da63433d0f1ec05c07404
+ size 3132896
accuquantsimulatingmultipledenoisingstepsforquantizingdiffusionmodels/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd09dc8ece3b378ac9dbc40a7a9a33625a493bcc9a2751122c341d59814289dc
+ size 955056
accurateandefficientlowrankmodelmergingincorespace/8ade3df5-965f-48fb-909c-12368106b32c_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:089e1f105b0ae993860cda768cbd759089f80b4505bc6149071876d44f9a28f1
+ size 211696
accurateandefficientlowrankmodelmergingincorespace/8ade3df5-965f-48fb-909c-12368106b32c_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3005d0be398d9339f366fabcf8bbb1c9688245298c530dcf9a545f784fa4577
+ size 271556
accurateandefficientlowrankmodelmergingincorespace/8ade3df5-965f-48fb-909c-12368106b32c_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0058c42b51182830f2a215ffa85d4e17a5d0ea131d233c80b8481bfb51bd583e
+ size 1238162
accurateandefficientlowrankmodelmergingincorespace/full.md ADDED
The diff for this file is too large to render. See raw diff
 
accurateandefficientlowrankmodelmergingincorespace/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f70f2c2228775a344d0145a2c505220fe769f2e67863001301fe149dfc8c76f
+ size 1422998
accurateandefficientlowrankmodelmergingincorespace/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fe6b8ced8de16498bb9d2d47ed0c122a8658ff699d1d5bbb9a959ae1dd58f473
+ size 1277625
accuratekvcacheevictionviaanchordirectionprojectionforefficientllminference/44562f8a-d0b7-4b34-8a29-f2efb2ef2d63_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bcd5338b740c09aa273fce270da10097e0735c90a7323c28734adea1ac35f1c3
+ size 221549
accuratekvcacheevictionviaanchordirectionprojectionforefficientllminference/44562f8a-d0b7-4b34-8a29-f2efb2ef2d63_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4be0ea830139269973a5906338d419a2bd4f922b8c82efbc85e2ee01cf45fa97
+ size 277097
accuratekvcacheevictionviaanchordirectionprojectionforefficientllminference/44562f8a-d0b7-4b34-8a29-f2efb2ef2d63_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f855e0bd44ef871c4cb31023e59974fa0199446a1c34765eedf40092fec6b359
+ size 4265510
accuratekvcacheevictionviaanchordirectionprojectionforefficientllminference/full.md ADDED
The diff for this file is too large to render. See raw diff
 
accuratekvcacheevictionviaanchordirectionprojectionforefficientllminference/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13c894ee33caad1cdf8cf415b3eef45d822429255741a735777238c56b0df2db
+ size 2639121
accuratekvcacheevictionviaanchordirectionprojectionforefficientllminference/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:968c286b07f092d8e63a98c71f7c4c71f6554e7899427cbd660b85e2ea0319f0
+ size 1153883
accuratelypredictingproteinmutationaleffectsviaahierarchicalmanybodyattentionnetwork/c8796483-a35f-4cf7-8ab6-3c6539557a05_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:39611993bdd871eb2e01409db0757a27f4f49b6dd3f17680b2cd9060ace619e3
+ size 145481
accuratelypredictingproteinmutationaleffectsviaahierarchicalmanybodyattentionnetwork/c8796483-a35f-4cf7-8ab6-3c6539557a05_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05cc975441275f197ebe56bacefe9748d5874163fc82bd3e5f75fdd6ae6d22e7
+ size 184340
accuratelypredictingproteinmutationaleffectsviaahierarchicalmanybodyattentionnetwork/c8796483-a35f-4cf7-8ab6-3c6539557a05_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:559864427d41d7f8f3131637f03aeb354d0fd43a825afff013c2b344f695c4c3
+ size 7357644
accuratelypredictingproteinmutationaleffectsviaahierarchicalmanybodyattentionnetwork/full.md ADDED
@@ -0,0 +1,681 @@
+ # Accurately Predicting Protein Mutational Effects via a Hierarchical Many-Body Attention Network
+
+ Dahao Xu $^{1,\ast}$, Jiahua Rao $^{1,\ast}$, Mingming Zhu $^{1}$, Jixian Zhang $^{2}$, Wei Lu $^{2}$, Shuangjia Zheng $^{3,\dagger}$, Yuedong Yang $^{1,\dagger}$
+
+ *Equal Contribution †Corresponding Authors
+
+ <sup>1</sup>Sun Yat-sen University <sup>2</sup>Aureka Biotechnologies <sup>3</sup>Shanghai Jiao Tong University
+
+ shuangjia.zheng@sjtu.edu.cn
+
+ yangyd25@mail.sysu.edu.cn
+
+ # Abstract
+
+ Predicting changes in binding free energy $(\Delta \Delta G)$ is essential for understanding protein-protein interactions, which are critical in drug design and protein engineering. However, existing methods often rely on pre-trained knowledge and heuristic features, limiting their ability to accurately model complex mutation effects, particularly higher-order and many-body interactions. To address these challenges, we propose H3-DDG, a Hypergraph-driven Hierarchical network to capture Higher-order many-body interactions across multiple scales. By introducing a hierarchical communication mechanism, H3-DDG effectively models both local and global mutational effects. Experimental results demonstrate state-of-the-art performance on multiple benchmarks. On the SKEMPI v2 dataset, H3-DDG achieves a Pearson correlation of 0.75, improving multi-point mutation prediction by $12.10\%$. On the challenging BindingGYM dataset, it outperforms Prompt-DDG and BA-DDG by $62.61\%$ and $34.26\%$, respectively. Ablation and efficiency analyses demonstrate its robustness and scalability, while a case study on SARS-CoV-2 antibodies highlights its practical value in improving binding affinity for therapeutic design.
+
+ # 1 Introduction
+
+ Protein-protein interactions (PPIs) [19, 9, 12, 32] are fundamental to numerous biological processes, driving key cellular functions such as signal transduction [21], immune response [35, 22], and metabolic regulation [40]. A precise understanding of how mutations alter binding free energy $(\Delta \Delta G)$ in PPIs is critical for a wide range of applications, including drug design [2, 7, 20, 31], protein engineering [5], and elucidating the molecular basis of disease [33].
+
+ Binding free energy quantifies the thermodynamic stability of protein complexes and is inherently governed by the physical interactions between amino acids. These interactions span multiple spatial and structural scales, from pairwise atomic forces—such as hydrogen bonding and van der Waals interactions—to higher-order many-body effects, including hydrogen bond networks and $\pi-\pi$ stacking interactions. Accurately modeling these intricate interactions is essential for predicting the functional and structural consequences of mutations [11, 13]. This becomes particularly challenging in multi-point mutation scenarios, where complex interdependencies between mutation sites often emerge, further complicating the prediction task.
+
+ Accurately predicting $\Delta \Delta G$ remains a critical yet challenging task, as existing computational methods face inherent limitations. These methods can be categorized into structure-based approaches and inverse folding-based models. Structure-based approaches leverage well-designed training tasks, such as protein inverse folding [39], side-chain modeling [26, 23, 28], masked modeling [37], and data augmentation [38], to extract protein representations from structural data. However, they often fall short in effectively transferring the learned structural knowledge to $\Delta \Delta G$ prediction. This shortcoming arises from their limited capacity to explicitly model higher-order and many-body interactions, which are crucial for capturing the physicochemical impacts of mutations.
+
+ On the other hand, inverse folding-based models, such as ProteinMPNN-DDG [10] and BA-DDG [18], predict $\Delta \Delta G$ by estimating likelihood differences between native and mutant sequences relative to stability. While these models perform well for single-point mutations, they rely on indirect proxies, such as Boltzmann-Alignment, to approximate binding free energy. This reliance limits their effectiveness in multi-point mutation scenarios, which involve intricate interdependencies among mutation sites and require a deeper understanding of protein energetics and many-body interaction dynamics. These limitations underscore the need for an approach capable of capturing higher-order interactions and adapting to the complexity of multi-point mutation scenarios.
+
+ In this work, we introduce H3-DDG, a Hypergraph-driven Hierarchical network to capture Higher-order many-body interactions across multiple scales for $\Delta \Delta G$ predictions. By leveraging a many-body attention communication mechanism, H3-DDG effectively models higher-order and many-body interactions across multiple scales, enabling precise predictions of binding free energy changes, particularly in challenging multi-point mutation contexts.
+
+ Experimental results demonstrate state-of-the-art performance across multiple benchmarks. On the SKEMPI v2 dataset [17], H3-DDG achieves a Pearson correlation of 0.75, improving multi-point mutation prediction by $12.10\%$. On the challenging BindingGYM dataset [24], it outperforms Prompt-DDG and BA-DDG by $62.61\%$ and $34.26\%$, respectively. Notably, in ablation studies, the many-body attention mechanism proves to be critical, contributing a $5.4\%$ improvement in prediction accuracy when incorporated. Additionally, the hypergraph-driven hierarchical design enables the model to effectively capture long-range dependencies, which are essential for accurately modeling complex mutation scenarios. A case study on SARS-CoV-2 antibodies further highlights H3-DDG's ability to predict binding affinity changes with high precision, demonstrating its practical utility in real-world applications. These results underscore the capability of H3-DDG to handle intricate mutational landscapes and its potential for broad applicability in protein engineering and drug design. The key contributions of this work are as follows:
+
+ - We introduce a hierarchical communication mechanism to model local and global interactions, capturing higher-order effects like hydrogen bond networks and $\pi - \pi$ stacking.
34
+ - We propose a many-body attention network to explicitly model higher-order and many-body interactions, enabling robust predictions in complex mutational scenarios.
35
+ - H3-DDG achieves state-of-the-art performance on multiple benchmark datasets, outperforming BA-DDG by $12.10\%$ in multi-point mutation prediction on the SKEMPI v2 dataset and by $34.26\%$ on the BindingGYM dataset.

# 2 Related Work

Efforts to predict the change in binding free energy $(\Delta \Delta G)$ have resulted in a variety of computational approaches, broadly categorized into two main types: (1) methods leveraging structure-based training tasks and (2) methods utilizing inverse folding models for $\Delta \Delta G$ prediction. While these approaches have shown promise, they also highlight significant limitations, particularly in capturing higher-order interactions and complex multi-point mutation effects [30, 32].

The first category focuses on designing training tasks that utilize structural information to enhance the modeling of protein energetics. These include techniques such as protein inverse folding [39], side-chain modeling [26, 23, 28], and masked modeling [37]. These approaches attempt to extract meaningful representations of protein structures by optimizing for tasks aligned with physical principles of molecular interactions. However, their reliance on pre-trained knowledge limits their ability to explicitly model higher-order and many-body interactions, which are critical for understanding the thermodynamic effects of mutations. This gap restricts their utility in directly predicting $\Delta \Delta G$, particularly in complex mutation scenarios.

The second category focuses on adapting inverse folding models to predict $\Delta \Delta G$ [25]. Models such as ProteinMPNN-DDG [10] and BA-DDG [18] leverage the ability of inverse folding models to assess the likelihood of native and mutant amino acids within a structural context. These methods predict $\Delta \Delta G$ by estimating the differences in likelihood scores between native and mutant sequences with respect to their relative stability. However, likelihood-based scoring serves as an indirect proxy for binding free energy changes and often fails to capture explicit physical interactions critical to $\Delta \Delta G$ prediction. Moreover, their performance degrades significantly in multi-point mutation scenarios, where the interdependencies between mutations require a nuanced understanding of higher-order effects.

![](images/3668c7bc437cc63c78eb23c0e70c5193c85ddc398c8209b5939490e68a65580b.jpg)

![](images/f247f350e51404b9ac8f2f2611fd51e83cbef3823da1cff4f2b341103fbf5e5a.jpg)

![](images/0acf5d1750655e9a9e8834fbde9c016f5a6fb28b990415f2c9505a3dc797669e.jpg)
Figure 1: (a) Overall framework of H3-DDG. (b) Many-body attention mechanisms across hierarchical levels and their attention patterns. The first layer constructs a full-residue graph and applies 2-body message passing; the second layer performs spatial pooling to form a mutation-centered hypergraph and applies 3-body attention between hyperedges; the third layer extracts fine-grained subgraphs around mutation sites and applies localized 4-body attention within each mutation subgraph. (c) Detailed computational flow of the 4-body attention module within the mutation subgraph.

To address these challenges, we introduce a hierarchical communication mechanism and a many-body attention network to explicitly model local and global interactions across multiple scales. These innovations capture higher-order and many-body effects, such as hydrogen bond networks and $\pi -\pi$ stacking, while explicitly modeling mutation interdependencies in multi-point mutation scenarios.

# 3 Methodology

In this section, we introduce H3-DDG, a Hypergraph-driven, Hierarchical, and Higher-order interaction network designed to predict binding free energy changes $(\Delta \Delta G)$ in protein-protein interactions (PPIs). The overall workflow is illustrated in Figure 1.

# 3.1 Preliminaries and Notations

We represent a protein graph as $G = (V, E)$, where each node $i \in V$ represents a residue with feature vector $\mathbf{h}_i$, and each edge $(i, j) \in E$ encodes interactions via feature vector $\mathbf{e}_{ij}$.

Building on ProteinMPNN-DDG [10] and BA-DDG [18], which link $\Delta \Delta G$ prediction with inverse folding, we also fine-tune a pretrained inverse folding model (ProteinMPNN [6]) to tackle this task. H3-DDG extends the capabilities of the inverse folding model by introducing hierarchical graph construction and many-body higher-order attention, enabling it to effectively capture complex dependencies across multiple mutation sites and improve protein representation learning.

To predict binding free energy $\Delta G$, we compute amino acid probabilities and use negative log-likelihoods as a surrogate. For a wild-type (wt) complex and its mutant (mut) counterpart, the binding free energy change is defined as:

$$
\Delta \Delta G = \Delta G_{\mathrm{mut}} - \Delta G_{\mathrm{wt}}. \tag{1}
$$

# 3.2 Hierarchical Graph Construction for Multi-Scale Modeling

Residue-Level Graph. As described in Section 3.1, each residue $i$ is represented by a feature vector $\mathbf{h}_i$, and each edge $(i,j)$ encodes spatial or chemical interactions via feature vector $\mathbf{e}_{ij}$.

Following the approach of ProteinMPNN [6], residues are represented using key atoms: N, $C\alpha$, C, O, and $C\beta$. To construct the amino acid-level graph, edges are formed by identifying the top-$k$ nearest neighbors based on $C\alpha$-$C\alpha$ distances. Each edge $(i,j)$ incorporates geometric features using radial basis functions (RBFs) [4] over 25 atom-pair distances (e.g., N-N, C-C). For a given distance $d_{ij}$, the RBF is defined as:

$$
\mathrm{RBF}_{ij}^{(k)} = \exp \left(- \left(\frac{d_{ij} - \mu_{k}}{\sigma}\right)^{2}\right), \quad k = 1, \dots, K, \tag{2}
$$

where the $\mu_{k}$ are $K$ predefined centers and $\sigma$ is the bandwidth.

These $\mathrm{RBF}_{ij}$ features are concatenated with relative positional encodings $\mathrm{PE}_{ij}$, which include sequence offsets and chain identities. The final edge features $\mathbf{e}_{ij}$ are computed as:

$$
\mathbf{e}_{ij} = \mathrm{LayerNorm}\left(W_{e} \cdot \left[ \mathrm{PE}_{ij} \| \mathrm{RBF}_{ij} \right]\right), \tag{3}
$$

where $W_{e}$ is a learnable projection matrix and $[\cdot \| \cdot]$ denotes concatenation. This graph effectively captures residue-level spatial and chemical interactions while preserving sequence context.
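
To make the edge featurization of Eqs. (2)-(3) concrete, here is a minimal NumPy sketch; the number of basis functions, the distance range, and the random projection matrix are illustrative assumptions, not the trained parameters:

```python
import numpy as np

def rbf_features(d, K=16, d_min=2.0, d_max=22.0):
    """Expand distances d (any shape) into K Gaussian radial basis
    functions with evenly spaced centers mu_k, as in Eq. (2)."""
    mu = np.linspace(d_min, d_max, K)     # predefined centers mu_k
    sigma = (d_max - d_min) / K           # shared bandwidth (a common choice)
    return np.exp(-(((d[..., None] - mu) / sigma) ** 2))

def edge_features(pe_ij, d_ij, W_e):
    """Eq. (3): project [PE || RBF] and apply a parameter-free LayerNorm."""
    x = np.concatenate([pe_ij, rbf_features(d_ij).reshape(-1)])
    z = W_e @ x
    return (z - z.mean()) / (z.std() + 1e-5)

rng = np.random.default_rng(0)
pe_ij = rng.normal(size=8)                # relative positional encoding (toy)
d_ij = rng.uniform(3.0, 15.0, size=25)    # 25 atom-pair distances
W_e = rng.normal(size=(32, 8 + 25 * 16))  # stand-in for the learned projection
e_ij = edge_features(pe_ij, d_ij, W_e)
print(e_ij.shape)                         # (32,)
```

The sketch omits the learnable affine parameters of LayerNorm for brevity.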

Mutation-Centered Hypergraph. Directly applying higher-order attention on amino acid-level graphs is computationally expensive, with complexity scaling as $\mathcal{O}(N^3)$ or $\mathcal{O}(N^4)$, as demonstrated in [16, 30]. To address this challenge, we construct a mutation-centered hypergraph that reduces computational complexity while preserving critical local and global interactions.

We construct the hypergraph by clustering residues into hyperedges that capture higher-order spatial dependencies. Clustering is initialized at mutation sites to prioritize biologically salient regions, then expanded by iteratively adding the residue with maximal Euclidean distance from the existing centroids. Each residue is assigned to its nearest centroid based on $\mathrm{C}\alpha$ coordinates. Hyperedge features $\mathbf{h}^{(e)}$ are computed via mean-pooling over constituent residue embeddings $\mathbf{h}$, yielding a compact representation that preserves structural topology and improves computational efficiency.

This mutation-centered hypergraph construction ensures that the influence of mutation sites is retained during graph compression and facilitates the efficient modeling of higher-order interactions. For further implementation details, refer to Appendix A.1.
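
The clustering step above amounts to farthest-point sampling seeded at mutation sites. A minimal sketch, assuming toy random coordinates and embeddings (the function names are our own, not the released implementation):

```python
import numpy as np

def mutation_centered_clusters(coords, mutation_idx, n_clusters):
    """Seed centroids at mutation sites, grow the centroid set by
    farthest-point sampling, then assign each residue to its nearest
    centroid by C-alpha distance."""
    centers = list(mutation_idx)
    while len(centers) < n_clusters:
        d = np.linalg.norm(coords[:, None, :] - coords[centers][None, :, :], axis=-1)
        centers.append(int(d.min(axis=1).argmax()))  # residue farthest from all centroids
    d = np.linalg.norm(coords[:, None, :] - coords[centers][None, :, :], axis=-1)
    return np.asarray(centers), d.argmin(axis=1)

def hyperedge_features(h, assign, n_clusters):
    """Mean-pool residue embeddings within each cluster (hyperedge)."""
    return np.stack([h[assign == c].mean(axis=0) for c in range(n_clusters)])

rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 50.0, size=(100, 3))  # toy C-alpha coordinates
h = rng.normal(size=(100, 16))                  # toy residue embeddings
centers, assign = mutation_centered_clusters(coords, mutation_idx=[3, 42], n_clusters=8)
h_e = hyperedge_features(h, assign, 8)
print(h_e.shape)                                # (8, 16)
```

Because every centroid is itself a residue, each cluster is guaranteed non-empty, so the mean-pooling is well defined.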

Fine-Grained Mutation Subgraph. To capture the local effects of mutations, we extract fine-grained subgraphs centered on each mutation site. These subgraphs include all residues with $C\alpha$ atoms within an 8Å radius of the mutation centroid in 3D space, preserving key structural and biochemical contexts. Node and edge features in these subgraphs, denoted as $\mathbf{h}^{(m)}$ and $\mathbf{e}^{(m)}$, respectively, enable localized analysis of conformational changes and energetic impacts, providing a focused view of mutation-induced effects on binding affinity.

# 3.3 Many-Body Attention Network

In this section, we describe the many-body attention mechanisms applied at different hierarchical levels of the graph. These include pairwise (2-body) message passing over the residue-level graph, triplet (3-body) attention between hyperedges in the mutation-centered hypergraph, and quadruplet (4-body) attention within fine-grained mutation subgraphs, as illustrated in Figure 1(b).

2-body Message Passing in Residue-level Graph. We utilize a message passing mechanism to jointly update node features $\mathbf{h}_i$ and edge features $\mathbf{e}_{ij}$ in the residue-level graph (Section 3.2). Node-wise messages $\Delta \mathbf{h}_i$ are computed by projecting concatenated node and edge features through a multi-layer perceptron (MLP):

$$
\Delta \mathbf{h}_{i} = \sum_{j \in \mathcal{N}(i)} W_{m3} \cdot \phi \left(W_{m2} \cdot \phi \left(W_{m1} \cdot \left[ \mathbf{h}_{i} \| \mathbf{e}_{ij} \right]\right)\right), \tag{4}
$$

where $[\cdot \| \cdot]$ denotes vector concatenation, $\phi$ is the GELU activation, and $W_{m1}$, $W_{m2}$, $W_{m3}$ are learnable projection matrices. The node features $\mathbf{h}_i$ are updated via a residual connection with layer normalization and a position-wise feed-forward network (FFN):

$$
\mathbf{h}_{i} \leftarrow \mathrm{LayerNorm}\left(\mathbf{h}_{i} + \mathrm{Dropout}\left(\Delta \mathbf{h}_{i}\right)\right), \tag{5}
$$

$$
\mathbf{h}_{i} \leftarrow \mathrm{LayerNorm}\left(\mathbf{h}_{i} + \mathrm{Dropout}(\mathrm{FFN}(\mathbf{h}_{i}))\right). \tag{6}
$$

Edge features $\mathbf{e}_{ij}$ are updated using the updated node features through a similar MLP-based mechanism:

$$
\mathbf{e}_{ij} \leftarrow \mathrm{LayerNorm}\left(\mathbf{e}_{ij} + \mathrm{Dropout}\left(W_{m3}^{\prime} \cdot \phi \left(W_{m2}^{\prime} \cdot \phi \left(W_{m1}^{\prime} \cdot \left[ \mathbf{h}_{i} \| \mathbf{e}_{ij} \right]\right)\right)\right)\right). \tag{7}
$$

This mechanism enables the model to integrate residue-level spatial and chemical interactions efficiently across the residue-level graph.
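
One 2-body update step can be sketched in NumPy as follows; this inference-style toy omits dropout and the FFN of Eq. (6), and the neighbor-list layout and weight shapes are our own assumptions:

```python
import numpy as np

def gelu(x):
    """tanh approximation of the GELU activation (phi in Eq. 4)."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def two_body_update(h, e, W1, W2, W3):
    """Sum MLP messages over the k neighbor slots of each node (Eq. 4),
    then apply the residual + LayerNorm update (Eq. 5).
    h: (N, d) node features; e: (N, k, d) edge features per neighbor slot."""
    dh = np.zeros_like(h)
    for i in range(h.shape[0]):
        for slot in range(e.shape[1]):
            m = np.concatenate([h[i], e[i, slot]])   # [h_i || e_ij]
            dh[i] += W3 @ gelu(W2 @ gelu(W1 @ m))    # three-layer MLP message
    return layer_norm(h + dh)

rng = np.random.default_rng(2)
N, k, d = 10, 3, 8
h = rng.normal(size=(N, d))
e = rng.normal(size=(N, k, d))    # toy edge features to k nearest neighbors
W1, W2, W3 = (rng.normal(size=s) * 0.1 for s in [(d, 2 * d), (d, d), (d, d)])
h_new = two_body_update(h, e, W1, W2, W3)
print(h_new.shape)                # (10, 8)
```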

3-body Attention Mechanism for Hyperedge Interactions. Building on the global structural context learned from 2-body message passing in the full-residue graph, we implement a 3-body attention mechanism to capture higher-order interactions between hyperedges in the mutation-centered hypergraph (Section 3.2). Here, each hyperedge is represented by its feature vector $\mathbf{h}^{(e)}$, capturing residue cluster-level information.

To model pairwise relationships between hyperedges, we first construct a pairwise interaction tensor $f_{ij}^{(e)} \in \mathbb{R}^{N \times N \times D}$, where $N$ is the number of hyperedges and $D$ is the feature dimension. Here, $i$ and $j$ index hyperedges in the hypergraph. Each element $f_{ij}^{(e)}$ is computed as:

$$
f_{ij}^{(e)} = \phi \left(\mathbf{h}_{i}^{(e)}\right) \odot \phi \left(\mathbf{h}_{j}^{(e)}\right), \tag{8}
$$

where $\phi$ is a non-linear activation function (e.g., GELU) and $\odot$ denotes the element-wise (Hadamard) product, so that each pair $(i,j)$ yields a $D$-dimensional feature, consistent with $f^{(e)} \in \mathbb{R}^{N \times N \times D}$.

An asymmetric attention mechanism is employed by linearly projecting $f_{ij}^{(e)}$ through learnable weight matrices and biases to obtain the query vector $\mathbf{q}_{ij} = W_q f_{ij}^{(e)} + b_q$, key vector $\mathbf{k}_{jk} = W_k f_{jk}^{(e)} + b_k$, and value vector $\mathbf{v}_{jk} = W_v f_{jk}^{(e)} + b_v$. In addition, the mechanism introduces a bias term $\mathbf{b}_{ik}$ and a gating vector $\mathbf{g}_{ik}$ via separate linear projections for adaptive modulation. The triplet attention scores $a_{ijk}$ are then computed using a gated softmax function:

$$
a_{ijk} = \mathrm{Softmax}\left(\frac{\mathbf{q}_{ij} \cdot \mathbf{k}_{jk}}{\sqrt{d}} + \mathbf{b}_{ik}\right) \cdot \sigma \left(\mathbf{g}_{ik}\right), \tag{9}
$$

where the Softmax ensures that the attention weights are normalized over hyperedge neighbors, and $\sigma(\mathbf{g}_{ik})$ applies a sigmoid activation to adaptively modulate the attention.

Each hyperedge aggregates attention-weighted messages from its spatial neighbors, and the aggregated features are fused with the corresponding full-residue node feature $\mathbf{h}_{r(i)}$ via residual addition, followed by layer normalization:

$$
\mathbf{h}_{r(i)} \leftarrow \mathrm{LayerNorm}\left(\mathbf{h}_{r(i)} + \mathrm{Dropout}\left(\sum_{j \in \mathcal{N}(i)} \sum_{k} a_{ijk} \cdot \mathbf{v}_{jk}\right)\right). \tag{10}
$$

Here, $r(i)$ denotes the mapping from hyperedge index $i$ to the corresponding residue index in the residue-level graph.
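
The gated triplet attention of Eqs. (8)-(10) can be sketched as follows. For clarity, this toy version uses scalar bias and gate terms (obtained by averaging over the feature dimension) and dense attention over all hyperedge pairs, which are simplifications of the formulation above:

```python
import numpy as np

def softmax(x):
    """Softmax over all entries of x (normalizes a whole logit array)."""
    x = x - x.max()
    ex = np.exp(x)
    return ex / ex.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def triplet_attention(h_e, Wq, Wk, Wv, Wb, Wg):
    """Gated 3-body attention over hyperedges: f_ij is the elementwise
    interaction (Eq. 8); logits mix q_ij with k_jk plus a bias b_ik,
    are normalized per query i over all (j, k) pairs, and gated by
    sigmoid(g_ik) before aggregation (Eqs. 9-10)."""
    N, D = h_e.shape
    f = h_e[:, None, :] * h_e[None, :, :]                 # (N, N, D) pairwise tensor
    q, k, v = f @ Wq, f @ Wk, f @ Wv
    b = (f @ Wb).mean(-1)                                 # scalar bias per (i, k)
    g = (f @ Wg).mean(-1)                                 # scalar gate per (i, k)
    logits = np.einsum('ijd,jkd->ijk', q, k) / np.sqrt(D) + b[:, None, :]
    out = np.empty_like(h_e)
    for i in range(N):
        a = softmax(logits[i])                            # normalize over (j, k)
        a = a * sigmoid(g[i])[None, :]                    # gate by sigma(g_ik)
        out[i] = np.einsum('jk,jkd->d', a, v)             # aggregate messages
    return out

rng = np.random.default_rng(3)
N, D = 6, 8
h_e = rng.normal(size=(N, D))
Wq, Wk, Wv, Wb, Wg = (rng.normal(size=(D, D)) * 0.1 for _ in range(5))
msg = triplet_attention(h_e, Wq, Wk, Wv, Wb, Wg)
print(msg.shape)  # (6, 8)
```

In the paper's setting, attention would be restricted to spatial neighbors $\mathcal{N}(i)$ rather than computed densely.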

4-body Attention Mechanism around Mutation Sites. To capture higher-order interactions in the mutation environment, we extend the 3-body attention mechanism to 4-body attention on the fine-grained mutation subgraph (Section 3.2), where $\mathbf{h}^{(m)}$ and $\mathbf{e}^{(m)}$ denote the node and edge features of the mutation subgraph. This subgraph focuses on residues within an 8Å radius of the mutation site, enabling the model to learn localized interactions among residue quadruples. The extension increases computational complexity by only a constant factor, preserving scalability while capturing more complex interactions. The full computational process is illustrated in Figure 1(c).

Specifically, we construct two interaction tensors to encode localized multi-residue interactions as inputs to the 4-body attention mechanism:

$$
f_{ij}^{(m)} = \phi \left(\mathbf{h}_{i}^{(m)}\right) \odot \phi \left(\mathbf{h}_{j}^{(m)}\right), \quad f_{jkl}^{(m)} = \phi \left(\mathbf{h}_{j}^{(m)}\right) \odot \phi \left(\mathbf{e}_{kl}^{(m)}\right). \tag{11}
$$

Here, $\phi$ is a non-linear activation function (e.g., GELU), and $\odot$ denotes the element-wise (Hadamard) product. The tensors $f_{ij}^{(m)}\in \mathbb{R}^{N\times N\times D}$ and $f_{jkl}^{(m)}\in \mathbb{R}^{N\times E\times D}$ respectively encode node-node and node-edge interactions, where $N$ and $E$ are the numbers of residues and edges in the mutation subgraph, and $D$ is the feature dimension. Indices $i$ and $j$ refer to nodes, while $(k,l)$ indexes an edge within the mutation subgraph.

Inspired by cross-attention, the tensors $f_{ij}^{(m)}$ and $f_{jkl}^{(m)}$ are projected into query, key, and value vectors:

$$
\mathbf{q}_{ij} = W_{q} \cdot f_{ij}^{(m)}, \quad \mathbf{k}_{jkl} = W_{k} \cdot f_{jkl}^{(m)}, \quad \mathbf{v}_{jkl} = W_{v} \cdot f_{jkl}^{(m)}, \tag{12}
$$

$$
\mathbf{b}_{ikl} = W_{b} \cdot f_{ikl}^{(m)}, \quad \mathbf{g}_{ikl} = W_{g} \cdot f_{ikl}^{(m)}, \tag{13}
$$

where $W_{q}, W_{k}, W_{v}, W_{b}$, and $W_{g}$ are learnable projection matrices.

The quadruplet attention scores $a_{ijkl}$ are computed via a gated softmax function:

$$
a_{ijkl} = \mathrm{Softmax}\left(\frac{\mathbf{q}_{ij} \cdot \mathbf{k}_{jkl}}{\sqrt{d}} + \mathbf{b}_{ikl}\right) \cdot \sigma \left(\mathbf{g}_{ikl}\right). \tag{14}
$$

Each node in the mutation subgraph aggregates attention-weighted messages from its spatial neighbors, and the aggregated features are fused with the original node features via a residual connection, followed by dropout and layer normalization:

$$
\mathbf{h}_{i}^{(m)} \leftarrow \mathrm{LayerNorm}\left(\mathbf{h}_{i}^{(m)} + \mathrm{Dropout}\left(\sum_{j \in \mathcal{N}(i)} \sum_{k, l} a_{ijkl} \cdot \mathbf{v}_{jkl}\right)\right). \tag{15}
$$

This 4-body attention mechanism enables the model to encode complex, localized interaction motifs around mutation sites, thereby improving its ability to predict binding free energy changes with structural and biophysical fidelity.

# 3.4 Prediction Module and Learning Objective

Following the multi-scale representation learning with many-body attention in Section 3.3, we obtain a residue-level representation $\mathbf{h} \in \mathbb{R}^{N \times d}$. Let $s = \{s_1, \dots, s_N\}$ be the amino acid sequence. Inspired by autoregressive models like ProteinMPNN, we model $P(s_i \mid s_{<i}, \mathbf{h})$ to compute the negative log-likelihoods $\mathcal{E}_{\mathrm{wt}}$ and $\mathcal{E}_{\mathrm{mut}}$ for the wild-type and mutant sequences. As in RDE-Network [26] and BA-DDG [18], single-chain contributions are subtracted to isolate binding effects, yielding energy-like scores:

$$
\Delta \Delta G_{\mathrm{pred}} = \Delta G_{\mathrm{mut}} - \Delta G_{\mathrm{wt}} = \left(\mathcal{E}_{\mathrm{mut}}^{\mathrm{complex}} - \mathcal{E}_{\mathrm{mut}}^{\mathrm{monomer}}\right) - \left(\mathcal{E}_{\mathrm{wt}}^{\mathrm{complex}} - \mathcal{E}_{\mathrm{wt}}^{\mathrm{monomer}}\right). \tag{16}
$$
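
A minimal sketch of the scoring pipeline behind Eq. (16), assuming the model exposes per-position amino-acid logits; the random logits below stand in for actual model outputs:

```python
import numpy as np

def log_softmax(x):
    x = x - x.max(-1, keepdims=True)
    return x - np.log(np.exp(x).sum(-1, keepdims=True))

def sequence_nll(logits, seq):
    """Energy-like score E: negative log-likelihood of an integer-encoded
    amino-acid sequence under per-position logits of shape (N, 20)."""
    return -log_softmax(logits)[np.arange(len(seq)), seq].sum()

def ddg_pred(lg_mut_cx, lg_mut_mono, lg_wt_cx, lg_wt_mono, seq_mut, seq_wt):
    """Eq. (16): subtract monomer from complex scores for mutant and
    wild type, then take the difference."""
    dG_mut = sequence_nll(lg_mut_cx, seq_mut) - sequence_nll(lg_mut_mono, seq_mut)
    dG_wt = sequence_nll(lg_wt_cx, seq_wt) - sequence_nll(lg_wt_mono, seq_wt)
    return dG_mut - dG_wt

rng = np.random.default_rng(4)
N = 30
seq_wt = rng.integers(0, 20, size=N)
seq_mut = seq_wt.copy()
seq_mut[[5, 17]] = (seq_mut[[5, 17]] + 1) % 20  # a toy two-point mutation
logits = {name: rng.normal(size=(N, 20)) for name in ('mc', 'mm', 'wc', 'wm')}
print(ddg_pred(logits['mc'], logits['mm'], logits['wc'], logits['wm'], seq_mut, seq_wt))
```

A sanity check on this construction: feeding identical logits and sequences for all four terms yields exactly zero, as the subtraction structure of Eq. (16) requires.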

We minimize the mean squared error between $\Delta \Delta G_{\mathrm{pred}}$ and $\Delta \Delta G_{\mathrm{true}}$ over $n$ training samples:

$$
\mathcal{L}_{\mathrm{MSE}} = \frac{1}{n} \sum_{i = 1}^{n} \left(\Delta \Delta G_{\mathrm{pred}}^{(i)} - \Delta \Delta G_{\mathrm{true}}^{(i)}\right)^{2}. \tag{17}
$$

# 4 Experiments

# 4.1 Experimental Settings

Datasets. We used SKEMPI v2 [17], a benchmark with 7,085 mutations across 348 protein complexes, to evaluate $\Delta \Delta G$ prediction. Following prior work [26, 37], we split the data into three non-overlapping folds by complex to avoid data leakage. Additionally, we evaluated on BindingGYM [24], the largest dataset for protein-protein interactions, with 508,962 curated entries and a high proportion of multi-point mutations. We used the hardest inter-assay split, focusing on the fold with the most multi-point mutations for testing (details in Appendix A.2).

Table 1: Mean results of 3-fold cross-validation on SKEMPI v2 under single-, multi-, and all-point mutations. Bold and underline indicate the best and second-best results.

<table><tr><td rowspan="2">Method</td><td rowspan="2">Mutations</td><td colspan="5">Overall</td><td colspan="2">Per-Structure</td></tr><tr><td>Pearson↑</td><td>Spear.↑</td><td>RMSE↓</td><td>MAE↓</td><td>AUROC↑</td><td>Pearson↑</td><td>Spear.↑</td></tr><tr><td rowspan="3">Rosetta</td><td>all</td><td>0.3113</td><td>0.3468</td><td>1.6173</td><td>1.1311</td><td>0.6562</td><td>0.3284</td><td>0.2988</td></tr><tr><td>single</td><td>0.3250</td><td>0.3670</td><td>1.1830</td><td>0.9870</td><td>0.6740</td><td>0.3510</td><td>0.4180</td></tr><tr><td>multiple</td><td>0.1990</td><td>0.2300</td><td>2.6580</td><td>2.0240</td><td>0.6210</td><td>0.1910</td><td>0.0830</td></tr><tr><td rowspan="3">FoldX</td><td>all</td><td>0.3120</td><td>0.4071</td><td>1.9080</td><td>1.3089</td><td>0.6582</td><td>0.3789</td><td>0.3693</td></tr><tr><td>single</td><td>0.3150</td><td>0.3610</td><td>1.6510</td><td>1.1460</td><td>0.6570</td><td>0.3820</td><td>0.3600</td></tr><tr><td>multiple</td><td>0.2560</td><td>0.4180</td><td>2.6080</td><td>1.9260</td><td>0.7040</td><td>0.3330</td><td>0.3400</td></tr><tr><td rowspan="3">RDE-Network</td><td>all</td><td>0.6447</td><td>0.5584</td><td>1.5799</td><td>1.1123</td><td>0.7454</td><td>0.4448</td><td>0.4010</td></tr><tr><td>single</td><td>0.6421</td><td>0.5271</td><td>1.3333</td><td>0.9392</td><td>0.7367</td><td>0.4687</td><td>0.4333</td></tr><tr><td>multiple</td><td>0.6288</td><td>0.5900</td><td>2.0980</td><td>1.5747</td><td>0.7749</td><td>0.4233</td><td>0.3926</td></tr><tr><td rowspan="3">DiffAffinity</td><td>all</td><td>0.6609</td><td>0.5560</td><td>1.5350</td><td>1.0930</td><td>0.7440</td><td>0.4220</td><td>0.3970</td></tr><tr><td>single</td><td>0.6720</td><td>0.5230</td><td>1.2880</td><td>0.9230</td><td>0.7330</td><td>0.4290</td><td>0.4090</td></tr><tr><td>multiple</td><td>0.6500</td><td>0.6020</td><td>2.0510</td><td>1.5400</td><td>0.7840</td><td>0.4140</td><td>0.3870</td></tr><tr><td rowspan="3">Prompt-DDG</td><td>all</td><td>0.6772</td><td>0.5910</td><td>1.5207</td><td>1.0770</td><td>0.7568</td><td>0.4712</td><td>0.4257</td></tr><tr><td>single</td><td>0.6596</td><td>0.5450</td><td>1.3072</td><td>0.9191</td><td>0.7355</td><td>0.4736</td><td>0.4392</td></tr><tr><td>multiple</td><td>0.6780</td><td>0.6433</td><td>1.9831</td><td>1.4837</td><td>0.8187</td><td>0.4448</td><td>0.3961</td></tr><tr><td rowspan="3">ProMIM</td><td>all</td><td>0.6720</td><td>0.5730</td><td>1.5160</td><td>1.0890</td><td>0.7600</td><td>0.4640</td><td>0.4310</td></tr><tr><td>single</td><td>0.6680</td><td>0.5340</td><td>1.2790</td><td>0.9240</td><td>0.7380</td><td>0.4660</td><td>0.4390</td></tr><tr><td>multiple</td><td>0.6660</td><td>0.6140</td><td>1.9630</td><td>1.4910</td><td>0.8250</td><td>0.4580</td><td>0.4250</td></tr><tr><td rowspan="3">BA-DDG</td><td>all</td><td>0.7118</td><td>0.6346</td><td>1.4516</td><td>1.0151</td><td>0.7726</td><td>0.5453</td><td>0.5134</td></tr><tr><td>single</td><td>0.7321</td><td>0.6157</td><td>1.1848</td><td>0.8409</td><td>0.7662</td><td>0.5606</td><td>0.5192</td></tr><tr><td>multiple</td><td>0.6650</td><td>0.6293</td><td>2.0151</td><td>1.4944</td><td>0.7875</td><td>0.4924</td><td>0.4959</td></tr><tr><td rowspan="3">H3-DDG</td><td>all</td><td>0.7501</td><td>0.6604</td><td>1.3665</td><td>0.9612</td><td>0.7920</td><td>0.5686</td><td>0.5281</td></tr><tr><td>single</td><td>0.7471</td><td>0.6374</td><td>1.1560</td><td>0.8080</td><td>0.7803</td><td>0.5750</td><td>0.5295</td></tr><tr><td>multiple</td><td>0.7341</td><td>0.6913</td><td>1.8320</td><td>1.3880</td><td>0.8309</td><td>0.5520</td><td>0.5323</td></tr><tr><td>ΔBA-DDG</td><td>multiple</td><td>+10.39%</td><td>+9.85%</td><td>+9.08%</td><td>+7.12%</td><td>+5.51%</td><td>+12.10%</td><td>+7.34%</td></tr></table>

Baselines. We compared H3-DDG against unsupervised and supervised methods. Unsupervised approaches include energy-based models (e.g., Rosetta [1], FoldX [8]), evolutionary sequence models (e.g., ESM-1v [27], Tranception [29]), and structure-guided pretrained models (e.g., ESM-IF [15], MIF-Δlogits [39]). Supervised methods include end-to-end architectures (e.g., DDGPred [34]) and pretraining-finetuning frameworks (e.g., MIF-Network [39], RDE-Network [26], DiffAffinity [23], Prompt-DDG [37], ProMIM [28], Surface-VQMAE [36], MSM-Mut [14], BA-DDG [18]).

Metrics. We use five metrics: Pearson, Spearman, RMSE, MAE, and AUROC. For SKEMPI, in addition to the global metrics, we also report per-structure metrics. For BindingGYM, metrics are reported per DMS (deep mutational scanning) assay. Detailed metric definitions and calculations are provided in Appendix A.3.

Implementation details. The hyperparameters and hardware details are provided in Appendix A.4. The code is available at https://github.com/biomed-AI/H3-DDG.

# 4.2 Results on SKEMPI v2 Dataset

As shown in Table 1, H3-DDG outperforms existing baseline approaches across all evaluation metrics in the all-, single-, and multi-point mutation scenarios. Notably, in practical applications where affinity modulation through multi-point amino acid mutations is particularly critical, H3-DDG demonstrates significant advantages in the multi-mutation prediction task: it achieves a Pearson correlation coefficient of 0.7341, which is $10.39\%$ higher than the second-best method; at the per-structure level, this metric reaches 0.5520, surpassing the second-best approach by $12.10\%$. Additional comparisons with more baseline methods are provided in Appendix B.2, Table 6.

![](images/0dcf3206c621e2547b94e0921c21966ec28523518a7346e4d8f19b28816468e5.jpg)
Figure 2: Distribution of per-structure Pearson and Spearman correlation coefficients for multi-point mutations, evaluated across five representative methods.

![](images/09a7c6695a14ac09ab657a75a229a1ea8698ced4401d2c3c88bcf533508aefcd.jpg)

Table 2: Performance comparison under $< 3$, $\geq 3$, and all-point mutations on BindingGYM, where bold and underline denote the best and second-best results under each setting.

<table><tr><td rowspan="2">Method</td><td rowspan="2">Mutations</td><td colspan="4">Per-DMS</td></tr><tr><td>Pearson↑</td><td>Spearman↑</td><td>AUROC↑</td><td>RMSE↓</td></tr><tr><td rowspan="3">ProteinMPNN</td><td>ALL</td><td>0.0998</td><td>0.2050</td><td>0.5341</td><td>3.4974</td></tr><tr><td>&lt;3</td><td>0.1137</td><td>0.2439</td><td>0.5404</td><td>3.2328</td></tr><tr><td>≥3</td><td>0.0734</td><td>0.1614</td><td>0.5930</td><td>5.5921</td></tr><tr><td rowspan="3">BA-Cycle</td><td>ALL</td><td>0.1320</td><td>0.1217</td><td>0.5658</td><td>1.2419</td></tr><tr><td>&lt;3</td><td>0.1385</td><td>0.1386</td><td>0.5620</td><td>1.0925</td></tr><tr><td>≥3</td><td>0.0830</td><td>0.1955</td><td>0.6190</td><td>2.5822</td></tr><tr><td rowspan="3">Prompt-DDG</td><td>ALL</td><td>0.1880</td><td>0.1818</td><td>0.5198</td><td>1.5216</td></tr><tr><td>&lt;3</td><td>0.2160</td><td>0.2179</td><td>0.5273</td><td>1.3499</td></tr><tr><td>≥3</td><td>0.1841</td><td>0.2008</td><td>0.6070</td><td>3.5747</td></tr><tr><td rowspan="3">BA-DDG</td><td>ALL</td><td>0.2277</td><td>0.2142</td><td>0.5310</td><td>1.1182</td></tr><tr><td>&lt;3</td><td>0.2553</td><td>0.2471</td><td>0.5395</td><td>0.9716</td></tr><tr><td>≥3</td><td>0.2037</td><td>0.2259</td><td>0.6307</td><td>2.5191</td></tr><tr><td rowspan="3">H3-DDG</td><td>ALL</td><td>0.3057</td><td>0.2725</td><td>0.5703</td><td>1.1294</td></tr><tr><td>&lt;3</td><td>0.3322</td><td>0.3031</td><td>0.5745</td><td>1.0758</td></tr><tr><td>≥3</td><td>0.2472</td><td>0.2755</td><td>0.6734</td><td>2.4976</td></tr><tr><td>ΔBA-DDG</td><td>≥3</td><td>+21.35%</td><td>+21.96%</td><td>+6.77%</td><td>+0.85%</td></tr></table>

These results are enabled by our introduction of a hierarchical communication mechanism combined with a many-body attention network. This framework effectively models both local and global interactions, capturing higher-order effects such as $\pi -\pi$ stacking and explicitly addressing complex synergistic and many-body interactions in mutational scenarios. As further illustrated in Figure 2, the per-structure distributions of Pearson and Spearman correlation coefficients across five representative methods show that H3-DDG achieves significantly higher correlation values, with distributions concentrated in high-correlation regions, highlighting its robustness compared to other approaches.

# 4.3 Results on BindingGYM Dataset

To validate the robustness of H3-DDG, we evaluate it on the larger and more challenging BindingGYM dataset, demonstrating its scalability and generalizability. As shown in Table 2, H3-DDG outperforms all baseline methods across all metrics, achieving the highest Pearson correlation (0.3057), which surpasses the second-best method (BA-DDG) by $34.26\%$. In the $\geq 3$ mutation scenario in particular, it also achieves the best Spearman correlation (0.2755), exceeding the second-best method by $21.96\%$. Additionally, H3-DDG achieves the best AUROC across all mutations and demonstrates competitive RMSE. These results highlight H3-DDG's robustness, scalability, and effectiveness for accurately predicting protein-protein binding in complex mutational landscapes, especially in multi-point mutation scenarios.

While H3-DDG's RMSE under the $<3$ mutation setting is slightly higher than that of the strongest baseline (BA-DDG), this can be attributed to the BindingGYM dataset spanning different DMS experiments, where the absolute $\Delta \Delta G$ values vary significantly across experiments. However, we focus primarily on the ranking and correlation within each DMS experiment, making the Pearson and Spearman metrics more critical for evaluation.

# 4.4 Ablation Study

Ablation Study on Pooling Types. As shown in Table 3, H3-DDG outperforms other graph pooling methods like DiffPool [41] and MinCutPool [3], which rely on predicted assignment matrices and fail to predefine mutation sites as central nodes. By explicitly designating mutation sites as cluster centers, H3-DDG effectively captures long-range dependencies, enabling superior performance in modeling many-body interactions.

Table 3: Ablation study of pooling and many-body attention mechanisms under multi-point mutations on SKEMPI v2.

<table><tr><td rowspan="2">Pooling Type</td><td rowspan="2">Attn. Around Mut. Sites</td><td colspan="4">Overall</td></tr><tr><td>Pearson↑</td><td>Spear↑</td><td>RMSE↓</td><td>MAE↓</td></tr><tr><td colspan="2">BA-DDG</td><td>0.6650</td><td>0.6293</td><td>2.0151</td><td>1.4944</td></tr><tr><td>DiffPool</td><td>-</td><td>0.6959</td><td>0.6558</td><td>1.9375</td><td>1.4843</td></tr><tr><td>MinCutPool</td><td>-</td><td>0.7004</td><td>0.6589</td><td>1.9256</td><td>1.4693</td></tr><tr><td rowspan="3">H3-DDG</td><td>-</td><td>0.7040</td><td>0.6676</td><td>1.9161</td><td>1.4528</td></tr><tr><td>3-body Attn.</td><td>0.7144</td><td>0.6732</td><td>1.8880</td><td>1.4127</td></tr><tr><td>4-body Attn.</td><td>0.7341</td><td>0.6913</td><td>1.8320</td><td>1.3880</td></tr></table>

Ablation Study on Attention Mechanisms. Table 3 also demonstrates the importance of many-body attention in $\Delta \Delta G$ prediction. While 3-body attention improves performance by modeling local interactions around mutation sites, extending to 4-body attention further enhances accuracy by capturing more complex dependencies. These findings highlight many-body attention as a key factor in H3-DDG's strong performance and efficient modeling. Our model strikes a balance between efficiency and performance, influenced by two main factors: the number of hyperedges in the hypergraph and the number of edges in the 4-body attention. Notably, our method achieves a Pearson correlation of 0.7341, surpassing BA-DDG's 0.6650, with a relatively small efficiency trade-off. Additional details are provided in Appendix B.4 and B.5.

# 4.5 SARS-CoV-2 Antibody Optimization

Predicting $\Delta \Delta G$ is essential for identifying affinity-enhancing mutations. We target five favorable mutations of a human antibody against SARS-CoV-2 [34] within the heavy-chain CDRs (26 residues, 494 single-point variants). Models are fine-tuned on SKEMPI v2.0 to rank mutations by predicted $\Delta \Delta G$, with lower values indicating stronger binding. Table 4 benchmarks performance against top baselines, highlighting mutations ranked in the top $10\%$. Only our model achieves an average rank below $10\%$, demonstrating strong generalization and practical utility in antibody design.

Table 4: Rankings of the five favorable mutations on the human antibody against SARS-CoV-2 by various $\Delta \Delta G$ prediction methods.

<table><tr><td>Method</td><td>TH31W</td><td>AH53F</td><td>NH57L</td><td>RH103M</td><td>LH104F</td><td>Average</td></tr><tr><td>MIF-Network</td><td>24.49%</td><td>4.05%</td><td>6.48%</td><td>80.36%</td><td>36.23%</td><td>30.32%</td></tr><tr><td>RDE-Network</td><td>1.62%</td><td>2.02%</td><td>20.65%</td><td>61.54%</td><td>5.47%</td><td>18.26%</td></tr><tr><td>DiffAffinity</td><td>7.28%</td><td>3.64%</td><td>18.82%</td><td>81.78%</td><td>10.93%</td><td>24.49%</td></tr><tr><td>Prompt-DDG</td><td>2.02%</td><td>6.88%</td><td>3.24%</td><td>34.81%</td><td>6.48%</td><td>10.69%</td></tr><tr><td>MSM-Mut</td><td>6.48%</td><td>10.12%</td><td>16.19%</td><td>19.23%</td><td>20.04%</td><td>14.41%</td></tr><tr><td>BA-DDG</td><td>5.26%</td><td>15.58%</td><td>2.22%</td><td>40.28%</td><td>7.69%</td><td>14.21%</td></tr><tr><td>H3-DDG</td><td>3.44%</td><td>7.48%</td><td>2.02%</td><td>32.79%</td><td>2.63%</td><td>9.67%</td></tr></table>
266
+
267
+ # 5 Conclusion
268
+
269
+ In this work, we introduce H3-DDG, a novel framework that leverages hierarchical communication mechanisms and many-body attention to tackle the challenges of protein-protein binding prediction. By explicitly modeling higher-order interactions and designating mutation sites as cluster centers, H3-DDG captures complex dependencies and synergistic effects, particularly excelling in multi-point mutation scenarios. Evaluations on SKEMPI v2 and BindingGYM show that H3-DDG outperforms state-of-the-art methods across most metrics, demonstrating robust performance and scalability. While H3-DDG shows promise for protein design and affinity prediction, its performance on large datasets and diverse mutations needs further study. Integrating it with experimental workflows will be key to validating real-world applicability. Future work will tackle these challenges to expand its impact.
272
+
273
+ # Acknowledgments and Disclosure of Funding
274
+
275
+ This study has been supported by the Guangdong S&T Program [2024B0101040005], the Guangdong S&T Program [2023B1111030002], the National Natural Science Foundation of China [62041209], the Natural Science Foundation of Shanghai [24ZR1440600], the China Postdoctoral Science Foundation [2025M771540, GZB20250391], the Guangdong Basic and Applied Basic Research Foundation [2025A1515060011], and the Lingang Laboratory [LGL-8888].
276
+
277
+ # References
278
+
279
+ [1] Rebecca F Alford, Andrew Leaver-Fay, Jeliazko R Jeliazkov, Matthew J O'Meara, Frank P DiMaio, Hahnbeom Park, Maxim V Shapovalov, P Douglas Renfrew, Vikram K Mulligan, Kalli Kappel, et al. The rosetta all-atom energy function for macromolecular modeling and design. Journal of chemical theory and computation, 13(6):3031–3048, 2017.
280
+ [2] Amro Abd-Al-Fattah Amara. Pharmaceutical and industrial protein engineering: where we are? Pakistan Journal of Pharmaceutical Sciences, 26(1), 2013.
281
+ [3] Filippo Maria Bianchi, Daniele Grattarola, and Cesare Alippi. Spectral clustering with graph neural networks for graph pooling. In International conference on machine learning, pages 874-883. PMLR, 2020.
282
+ [4] Martin Dietrich Buhmann. Radial basis functions. Acta numerica, 9:1-38, 2000.
283
+ [5] Huali Cao, Jingxue Wang, Liping He, Yifei Qi, and John Z Zhang. Deepddg: predicting the stability change of protein point mutations using neural networks. Journal of chemical information and modeling, 59(4):1508-1514, 2019.
284
+ [6] Justas Dauparas, Ivan Anishchenko, Nathaniel Bennett, Hua Bai, Robert J Ragotte, Lukas F Milles, Basile IM Wicky, Alexis Courbet, Rob J de Haas, Neville Bethel, et al. Robust deep learning-based protein sequence design using proteinmpnn. Science, 378(6615):49-56, 2022.
285
+ [7] Carla CCR De Carvalho. Enzymatic and whole cell catalysis: finding new strategies for old processes. Biotechnology advances, 29(1):75-83, 2011.
286
+ [8] Javier Delgado, Leandro G Radusky, Damiano Cianferoni, and Luis Serrano. Foldx 5.0: working with rna, small molecules and a new graphical interface. Bioinformatics, 35(20):4168-4169, 2019.
287
+ [9] Jesse Durham, Jing Zhang, Ian R Humphreys, Jimin Pei, and Qian Cong. Recent advances in predicting and modeling protein-protein interactions. Trends in biochemical sciences, 48(6): 527-538, 2023.
288
+ [10] Oliver Dutton, Sandro Bottaro, Michele Invernizzi, Istvan Redl, Albert Chung, Falk Hoffmann, Louie Henderson, Stefano Ruschetta, Fabio Airoldi, Benjamin MJ Owens, et al. Improving inverse folding models at protein stability prediction without additional training or data. bioRxiv, pages 2024-06, 2024.
289
+ [11] Bowen Gao, Yinjun Jia, Yuanle Mo, Yuyan Ni, Weiying Ma, Zhiming Ma, and Yanyan Lan. Profsa: Self-supervised pocket pretraining via protein fragment-surroundings alignment. arXiv preprint arXiv:2310.07229, 2023.
290
+ [12] Ziqi Gao, Chenran Jiang, Jiawen Zhang, Xiaosen Jiang, Lanqing Li, Peilin Zhao, Huanming Yang, Yong Huang, and Jia Li. Hierarchical graph learning for protein-protein interaction. Nature Communications, 14(1):1093, 2023.
291
+ [13] Jiaqi Guan, Jiahan Li, Xiangxin Zhou, Xingang Peng, Sheng Wang, Yunan Luo, Jian Peng, and Jianzhu Ma. Group ligands docking to protein pockets. arXiv preprint arXiv:2501.15055, 2025.
292
+
293
+ [14] Ruihan Guo, Rui Wang, Ruidong Wu, Zhizhou Ren, Jiahan Li, Shitong Luo, Zuofan Wu, Qiang Liu, Jian Peng, and Jianzhu Ma. Enhancing protein mutation effect prediction through a retrieval-augmented framework. Advances in Neural Information Processing Systems, 37: 49130-49153, 2024.
294
+ [15] Chloe Hsu, Robert Verkuil, Jason Liu, Zeming Lin, Brian Hie, Tom Sercu, Adam Lerer, and Alexander Rives. Learning inverse folding from millions of predicted structures. In International conference on machine learning, pages 8946-8970. PMLR, 2022.
295
+ [16] Md Shamim Hussain, Mohammed J Zaki, and Dharmashankar Subramanian. Triplet interaction improves graph transformers: Accurate molecular graph learning with triplet graph transformers. arXiv preprint arXiv:2402.04538, 2024.
296
+ [17] Justina Jankauskaite, Brian Jiménez-García, Justas Dapkūnas, Juan Fernández-Recio, and Iain H Moal. Skempi 2.0: an updated benchmark of changes in protein-protein binding energy, kinetics and thermodynamics upon mutation. Bioinformatics, 35(3):462–469, 2019.
297
+ [18] Xiaoran Jiao, Weian Mao, Wengong Jin, Peiyuan Yang, Hao Chen, and Chunhua Shen. Boltzmann-aligned inverse folding model as a predictor of mutational effects on protein-protein interactions. arXiv preprint arXiv:2410.09543, 2024.
298
+ [19] Susan Jones and Janet M Thornton. Principles of protein-protein interactions. Proceedings of the National Academy of Sciences, 93(1):13-20, 1996.
299
+ [20] Edward King, Erick Aitchison, Han Li, and Ray Luo. Recent developments in free energy calculations for drug discovery. Frontiers in molecular biosciences, 8:712085, 2021.
300
+ [21] Xue Li, Peifu Han, Gan Wang, Wenqi Chen, Shuang Wang, and Tao Song. Sdnn-ppi: self-attention with deep neural network effect on protein-protein interaction prediction. BMC genomics, 23(1):474, 2022.
301
+ [22] Yang Li, Min Li, Caijie Qu, Yongxi Li, Zhanli Tang, Zhike Zhou, Zengzhao Yu, Xu Wang, Linlin Xin, and Tongxin Shi. The polygenic map of keloid fibroblasts reveals fibrosis-associated gene alterations in inflammation and immune responses. Frontiers in Immunology, 12:810290, 2022.
302
+ [23] Shiwei Liu, Tian Zhu, Milong Ren, Chungong Yu, Dongbo Bu, and Haicang Zhang. Predicting mutational effects on protein-protein binding via a side-chain diffusion probabilistic model. Advances in Neural Information Processing Systems, 36:48994-49005, 2023.
303
+ [24] Wei Lu, Jixian Zhang, Ming Gu, and Shuangjia Zheng. Bindinggym: A large-scale mutational dataset toward deciphering protein-protein interactions. bioRxiv, pages 2024-12, 2024.
304
+ [25] Wei Lu, Jixian Zhang, Jiahua Rao, Zhongyue Zhang, and Shuangjia Zheng. Alphafold3, a secret sauce for predicting mutational effects on protein-protein interactions. bioRxiv, pages 2024-05, 2024.
305
+ [26] Shitong Luo, Yufeng Su, Zuofan Wu, Chenpeng Su, Jian Peng, and Jianzhu Ma. Rotamer density estimator is an unsupervised learner of the effect of mutations on protein-protein interaction. bioRxiv, pages 2023-02, 2023.
306
+ [27] Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu, and Alex Rives. Language models enable zero-shot prediction of the effects of mutations on protein function. Advances in neural information processing systems, 34:29287-29303, 2021.
307
+ [28] Yuanle Mo, Xin Hong, Bowen Gao, Yinjun Jia, and Yanyan Lan. Multi-level interaction modeling for protein mutational effect prediction. arXiv preprint arXiv:2405.17802, 2024.
308
+ [29] Pascal Notin, Mafalda Dias, Jonathan Frazer, Javier Marchena-Hurtado, Aidan N Gomez, Debora Marks, and Yarin Gal. Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval. In International Conference on Machine Learning, pages 16990-17017. PMLR, 2022.
309
+ [30] Jiahua Rao, Dahao Xu, Wentao Wei, Yicong Chen, Mingjun Yang, and Yuedong Yang. Quadruple attention in many-body systems for accurate molecular property predictions. In *Forty-second International Conference on Machine Learning*.
310
+ [31] Jiahua Rao, Jiancong Xie, Qianmu Yuan, Deqin Liu, Zhen Wang, Yutong Lu, Shuangjia Zheng, and Yuedong Yang. A variational expectation-maximization framework for balanced multi-scale learning of protein and drug interactions. Nature Communications, 15(1):4476, 2024.
311
+
312
+ [32] Jiahua Rao, Deqin Liu, Xiaolong Zhou, Qianmu Yuan, Wentao Wei, Wei Lu, Jixian Zhang, Yu Rong, Yuedong Yang, and Shuangjia Zheng. Accurate protein-protein interactions modeling through physics-informed geometric invariant learning. bioRxiv, pages 2025-07, 2025.
313
+ [33] Jeffrey A Ruffolo, Lee-Shin Chu, Sai Pooja Mahajan, and Jeffrey J Gray. Fast, accurate antibody structure prediction from deep learning on massive set of natural antibodies. Nature communications, 14(1):2389, 2023.
314
+ [34] Sisi Shan, Shitong Luo, Ziqing Yang, Junxian Hong, Yufeng Su, Fan Ding, Lili Fu, Chenyu Li, Peng Chen, Jianzhu Ma, et al. Deep learning guided optimization of human antibody against sars-cov-2 variants with broad neutralization. Proceedings of the National Academy of Sciences, 119(11):e2122954119, 2022.
315
+ [35] Changfa Shu, Jianfeng Li, Jin Rui, Dacheng Fan, Qiankun Niu, Ruiyang Bai, Danielle Cicka, Sean Doyle, Alafate Wahafu, Xi Zheng, et al. Uncovering the rewired iap-jak regulatory axis as an immune-dependent vulnerability of lkb1-mutant lung cancer. Nature Communications, 16 (1):2324, 2025.
316
+ [36] Fang Wu and Stan Z Li. Surface-vqmae: Vector-quantized masked auto-encoders on molecular surfaces. In *Forty-first International Conference on Machine Learning*, 2024.
317
+ [37] Lirong Wu, Yijun Tian, Haitao Lin, Yufei Huang, Siyuan Li, Nitesh V Chawla, and Stan Z Li. Learning to predict mutation effects of protein-protein interactions by microenvironment-aware hierarchical prompt learning. arXiv preprint arXiv:2405.10348, 2024.
318
+ [38] Lirong Wu, Yunfan Liu, Haitao Lin, Yufei Huang, Guojiang Zhao, Zhifeng Gao, and Stan Z Li. A simple yet effective ddg predictor is an unsupervised antibody optimizer and explainer. arXiv preprint arXiv:2502.06913, 2025.
319
+ [39] Fang Yang, Kunjie Fan, Dandan Song, and Huakang Lin. Graph-based prediction of protein-protein interactions with attributed signed graph embedding. BMC bioinformatics, 21:1-16, 2020.
320
+ [40] Deqi Yin, Ning Jiang, Chang Cheng, Xiaoyu Sang, Ying Feng, Ran Chen, and Qijun Chen. Protein lactylation and metabolic regulation of the zoonotic parasite toxoplasma gondii. Genomics, proteomics & bioinformatics, 21(6):1163-1181, 2023.
321
+ [41] Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. Advances in neural information processing systems, 31, 2018.
322
+
323
+ # A Additional Details
324
+
325
+ # A.1 Graph Pooling Method of H3-DDG
326
+
327
+ To enable scalable learning on biomolecular graphs, H3-DDG employs a deterministic graph pooling strategy that compresses node representations while preserving spatial and functional context. Given residue-wise features $\mathbf{h}$ and their $\mathrm{C}\alpha$ coordinates $\mathbf{x} \in \mathbb{R}^{N \times 3}$ , we form a reduced set of hyper-nodes via Farthest Point Sampling (FPS) with mutation-aware initialization. Specifically, mutation sites are first selected as initial cluster centroids $c_k$ , and additional centroids are iteratively sampled to maximize coverage under Euclidean distance.
328
+
329
+ Each residue is then assigned to the nearest cluster centroid based on the Euclidean distance between its $\mathbf{C}\alpha$ coordinates and the centroids. Feature aggregation is performed within each cluster by computing the mean of the features of all residues assigned to that cluster:
330
+
331
+ $$
332
+ \mathbf {h} _ {k} = \frac {1}{| S _ {k} |} \sum_ {i \in S _ {k}} \mathbf {h} _ {i}, \tag {18}
333
+ $$
334
+
335
+ where $S_{k}$ is the set of residues assigned to cluster $k$ . This produces a set of $K$ cluster-level representations $\{\mathbf{h}_k\}_{k=1}^K$ , with $K$ dynamically determined as $K = \lfloor L / R \rfloor$ , where $L$ is the number of valid residues and $R$ is a predefined reduction ratio.
336
+
337
+ Anchored to mutation sites and guided by spatial diversity, our pooling method preserves locality and biological relevance. It is non-parametric and efficient, suitable for large or variable-length proteins.
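The pooling procedure above can be sketched compactly. The snippet below is a minimal NumPy illustration (the function and variable names are ours, not the authors' implementation), combining mutation-anchored farthest point sampling with the mean aggregation of Eq. 18:

```python
import numpy as np

def mutation_aware_fps_pool(x, h, mutation_idx, reduction_ratio):
    """Mutation-anchored farthest point sampling followed by mean pooling.

    x: (L, 3) C-alpha coordinates; h: (L, F) residue features;
    mutation_idx: indices of mutation sites, used as the initial centroids.
    Returns (K, F) pooled features and the (L,) cluster assignment.
    """
    x, h = np.asarray(x, float), np.asarray(h, float)
    L = x.shape[0]
    K = max(L // reduction_ratio, len(mutation_idx))  # K = floor(L / R)
    centroids = list(mutation_idx)
    # Distance from every residue to its nearest selected centroid.
    d = np.min(np.linalg.norm(x[:, None] - x[centroids][None], axis=-1), axis=1)
    while len(centroids) < K:
        nxt = int(np.argmax(d))          # farthest residue becomes a new centroid
        centroids.append(nxt)
        d = np.minimum(d, np.linalg.norm(x - x[nxt], axis=-1))
    # Assign each residue to its nearest centroid and mean-pool features (Eq. 18).
    assign = np.argmin(np.linalg.norm(x[:, None] - x[centroids][None], axis=-1), axis=1)
    pooled = np.stack([h[assign == k].mean(axis=0) for k in range(K)])
    return pooled, assign
```

Because every centroid is trivially its own nearest centroid, no cluster is empty, and mutation sites always head their own clusters.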
338
+
339
+ # A.2 Details of the Inter-Assay Split in BindingGYM
340
+
341
+ To evaluate generalization to unseen protein-protein interactions, we adopt the inter-assay split strategy from the BindingGYM dataset, following the approach of [24]. In this setting, assays are first clustered into five groups based on the sequences of their mutated proteins. Data from one cluster are held out for testing, while the remaining four are used for training. This split evaluates a model's ability to generalize to new assays, which is of considerable practical importance.
342
+
343
+ We evaluate performance under three levels of mutational depth: ALL (all mutants), $< 3$ (mutants with fewer than 3 mutations), and $\geq 3$ (mutants with 3 or more mutations).
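The inter-assay split can be reproduced with a simple group-wise hold-out; the sketch below assumes precomputed sequence-based cluster labels (the function name is illustrative, not from the BindingGYM code):

```python
import numpy as np

def inter_assay_folds(assay_cluster):
    """Yield (train_idx, test_idx) pairs, holding out one assay cluster at a time.

    assay_cluster[i] is the cluster id of sample i, obtained by clustering
    assays on their mutated-protein sequences (five clusters in BindingGYM).
    """
    assay_cluster = np.asarray(assay_cluster)
    for held_out in np.unique(assay_cluster):
        test = np.flatnonzero(assay_cluster == held_out)
        train = np.flatnonzero(assay_cluster != held_out)
        yield train, test
```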
344
+
345
+ # A.3 Details of metric definitions and calculations
346
+
347
+ For the SKEMPI v2 dataset, we use seven quantitative metrics, including five standard global criteria: Pearson and Spearman correlation coefficients, root mean square error (RMSE), mean absolute error (MAE), and area under the ROC curve (AUROC). To calculate AUROC, mutations are classified based on the sign of $\Delta \Delta G$ . In practice, correlations within individual protein complexes are often more relevant. To this end, following RDE-Network [26], we group mutations by protein structure, exclude groups with fewer than 10 mutations, and compute correlations separately for each structure. For the BindingGYM dataset, we use per-DMS (deep mutational scanning) metrics, including Pearson, Spearman, AUROC, and RMSE.
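The per-structure correlation protocol can be sketched as follows; the grouping column and the minimum group size follow the text, while the function name and inputs are illustrative:

```python
import numpy as np

def per_structure_pearson(struct_ids, y_true, y_pred, min_group=10):
    """Average Pearson correlation computed within each protein structure.

    Groups with fewer than `min_group` mutations are excluded, following
    the per-structure protocol of RDE-Network.
    """
    struct_ids = np.asarray(struct_ids)
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    corrs = []
    for s in np.unique(struct_ids):
        m = struct_ids == s
        if m.sum() < min_group:
            continue
        yt, yp = y_true[m], y_pred[m]
        if yt.std() == 0 or yp.std() == 0:
            continue  # correlation is undefined for constant groups
        corrs.append(float(np.corrcoef(yt, yp)[0, 1]))
    return float(np.mean(corrs)) if corrs else float("nan")
```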
348
+
349
+ # A.4 Training Details
350
+
351
+ # A.4.1 Hyper-parameters
352
+
353
+ We used the Adam optimizer with a learning rate of 4e-4 and a batch size of 1 or 2, depending on GPU memory and graph size. The model was trained for 20,000 iterations with 4 attention heads and a hidden dimension of 128. The number of hyperedges was selected from $L / 10$ , $L / 6$ , and $L / 4$ , and the number of edges in the 4-body attention module from $N$ , $2N$ , and $3N$ , where $L$ and $N$ denote the numbers of residues and nodes, respectively. The pre-trained ProteinMPNN module used its default 3-layer configuration.
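For reference, the reported hyper-parameters can be collected into a single configuration sketch; the key names below are hypothetical and not taken from the released code:

```python
# Hyper-parameters reported in A.4.1; key names are illustrative only.
TRAIN_CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 4e-4,
    "batch_size": 1,             # 1 or 2, depending on GPU memory and graph size
    "num_iterations": 20_000,
    "attention_heads": 4,
    "hidden_dim": 128,
    "hyperedge_divisor": 4,      # K = L // 4 hyperedges (chosen from L/10, L/6, L/4)
    "four_body_edge_factor": 3,  # 3N edges in the 4-body attention module
    "proteinmpnn_layers": 3,     # pre-trained ProteinMPNN default
}
```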
354
+
355
+ # A.4.2 Hardware
356
+
357
+ Experiments were run on dual Xeon Gold 6248R CPUs and an RTX 4090 GPU under Ubuntu 22.04.
358
+
359
+ # B Additional Results
360
+
361
+ # B.1 H3-DDG Performance on Predicted Complex Structures
362
+
363
+ To evaluate robustness, we used AlphaFold 3 (AF3) to predict the complex structures for SKEMPI v2 and applied H3-DDG to them. While AF3 achieves high-quality structure prediction, its accuracy may still fall short of experimentally determined structures. Importantly, H3-DDG showed only a slight decrease in performance on AF3-predicted structures, with a drop of $5.4\%$ in Pearson correlation, demonstrating strong robustness and practical applicability.
364
+
365
+ # B.2 Extended Baseline Comparisons on SKEMPI v2
366
+
367
+ Table 6 reports extended baseline results on SKEMPI v2, including additional methods to complement the main text comparisons. These include energy function-based, sequence-based, and unsupervised approaches, providing a broader context for evaluating performance across diverse methodological categories.
368
+
369
+ # B.3 Ablation Results under Different Mutation Depths
370
+
371
+ Table 7 presents the full ablation results of different pooling strategies and many-body attention mechanisms across mutation types, including single-point, multi-point, and all-point mutations on SKEMPI v2. The results demonstrate that our proposed H3-DDG consistently outperforms baseline pooling methods (e.g., DiffPool [41] and MinCutPool [3]) under all mutation settings. Furthermore, incorporating 4-body attention around mutation sites yields the best performance across all metrics, validating the effectiveness of higher-order attention in capturing complex interaction patterns.
372
+
373
+ # B.4 Ablation study on Edge Count in 4-body Attention
374
+
375
+ We investigate how the number of edges in the 4-body attention mechanism around mutation sites affects $\Delta \Delta G$ prediction. The edge count is directly proportional to the computational complexity: as shown in Table 8, where $\mathcal{N}$ denotes the set of nodes involved in the 4-body attention computation, selecting $2\cdot |\mathcal{N}|$ edges yields a complexity of $O(2\cdot |\mathcal{N}|^3)$ . The results indicate that increasing the number of edges incurs only a constant-factor increase in computational cost yet delivers sustained improvements in predictive performance; for example, the Pearson correlation rises from an already high 0.7352 to 0.7501. These findings suggest that denser many-body attention near mutation sites substantially enhances the model's capacity to capture complex many-body interactions while keeping the computational overhead manageable.
376
+
377
+ # B.5 Efficiency Analysis
378
+
379
+ Our model balances efficiency and performance. As shown in Table 9, increasing the number of hyperedges from $L / 10$ to $L / 4$ steadily improves the Pearson correlation, reaching a peak of 0.7501 with $L / 4$ hyperedges and $3N$ 4-body edges. This setting maintains acceptable efficiency (4.34 it/s), only slightly slower than BA-DDG (6.25 it/s, Pearson: 0.7118). Similarly, increasing the number of 4-body edges from $1.5N$ to $3N$ yields incremental gains, indicating that denser subgraph modeling enhances accuracy at manageable cost. Additionally, as shown in Table 10, larger cutoff radii improve performance by capturing richer structural context, but the gains diminish beyond 8 Å while the cost rises. We therefore choose 8 Å as the cutoff, offering the best accuracy-efficiency trade-off.
380
+
381
+ Table 5: H3-DDG Performance on Experimental and AF3-Predicted Structures.
382
+
383
+ <table><tr><td>Structure</td><td>Pearson↑</td><td>RMSE↓</td><td>AUROC↑</td></tr><tr><td>AlphaFold3</td><td>0.7117</td><td>1.4518</td><td>0.7755</td></tr><tr><td>Experimental</td><td>0.7501</td><td>1.3665</td><td>0.7920</td></tr></table>
384
+
385
+ Table 6: Mean results of 3-fold cross-validation on SKEMPI v2 under single-, multi-, and all-point mutations. Bold and underline indicate the best and second-best results.
386
+
387
+ <table><tr><td rowspan="2">Method</td><td rowspan="2">Mutations</td><td colspan="5">Overall</td><td colspan="2">Per-Structure</td></tr><tr><td>Pearson↑</td><td>Spear.↑</td><td>RMSE↓</td><td>MAE↓</td><td>AUROC↑</td><td>Pearson↑</td><td>Spear.↑</td></tr><tr><td rowspan="3">Rosetta</td><td>all</td><td>0.3113</td><td>0.3468</td><td>1.6173</td><td>1.1311</td><td>0.6562</td><td>0.3284</td><td>0.2988</td></tr><tr><td>single</td><td>0.3250</td><td>0.3670</td><td>1.1830</td><td>0.9870</td><td>0.6740</td><td>0.3510</td><td>0.4180</td></tr><tr><td>multiple</td><td>0.1990</td><td>0.2300</td><td>2.6580</td><td>2.0240</td><td>0.6210</td><td>0.1910</td><td>0.0830</td></tr><tr><td rowspan="3">FoldX</td><td>all</td><td>0.3120</td><td>0.4071</td><td>1.9080</td><td>1.3089</td><td>0.6582</td><td>0.3789</td><td>0.3693</td></tr><tr><td>single</td><td>0.3150</td><td>0.3610</td><td>1.6510</td><td>1.1460</td><td>0.6570</td><td>0.3820</td><td>0.3600</td></tr><tr><td>multiple</td><td>0.2560</td><td>0.4180</td><td>2.6080</td><td>1.9260</td><td>0.7040</td><td>0.3330</td><td>0.3400</td></tr><tr><td rowspan="3">DDGPred</td><td>all</td><td>0.6580</td><td>0.4687</td><td>1.4998</td><td>1.0821</td><td>0.6992</td><td>0.3750</td><td>0.3407</td></tr><tr><td>single</td><td>0.6515</td><td>0.4390</td><td>1.3285</td><td>0.9618</td><td>0.6858</td><td>0.3711</td><td>0.3427</td></tr><tr><td>multiple</td><td>0.5938</td><td>0.5150</td><td>2.1813</td><td>1.6699</td><td>0.7590</td><td>0.3912</td><td>0.3896</td></tr><tr><td rowspan="3">End-to-End</td><td>all</td><td>0.6373</td><td>0.4882</td><td>1.6198</td><td>1.1761</td><td>0.7172</td><td>0.3873</td><td>0.3587</td></tr><tr><td>single</td><td>0.6605</td><td>0.4594</td><td>1.3148</td><td>0.9569</td><td>0.7019</td><td>0.3818</td><td>0.3426</td></tr><tr><td>multiple</td><td>0.5858</td><td>0.4942</td><td>2.1971</td><td>1.7087</td><td>0.7532</td><td>0.4178</td><td>0.4034</td></tr><tr><td 
rowspan="3">RDE-Network</td><td>all</td><td>0.6447</td><td>0.5584</td><td>1.5799</td><td>1.1123</td><td>0.7454</td><td>0.4448</td><td>0.4010</td></tr><tr><td>single</td><td>0.6421</td><td>0.5271</td><td>1.3333</td><td>0.9392</td><td>0.7367</td><td>0.4687</td><td>0.4333</td></tr><tr><td>multiple</td><td>0.6288</td><td>0.5900</td><td>2.0980</td><td>1.5747</td><td>0.7749</td><td>0.4233</td><td>0.3926</td></tr><tr><td rowspan="3">DiffAffinity</td><td>all</td><td>0.6609</td><td>0.5560</td><td>1.5350</td><td>1.0930</td><td>0.7440</td><td>0.4220</td><td>0.3970</td></tr><tr><td>single</td><td>0.6720</td><td>0.5230</td><td>1.2880</td><td>0.9230</td><td>0.7330</td><td>0.4290</td><td>0.4090</td></tr><tr><td>multiple</td><td>0.6500</td><td>0.6020</td><td>2.0510</td><td>1.5400</td><td>0.7840</td><td>0.4140</td><td>0.3870</td></tr><tr><td rowspan="3">Prompt-DDG</td><td>all</td><td>0.6772</td><td>0.5910</td><td>1.5207</td><td>1.0770</td><td>0.7568</td><td>0.4712</td><td>0.4257</td></tr><tr><td>single</td><td>0.6596</td><td>0.5450</td><td>1.3072</td><td>0.9191</td><td>0.7355</td><td>0.4736</td><td>0.4392</td></tr><tr><td>multiple</td><td>0.6780</td><td>0.6433</td><td>1.9831</td><td>1.4837</td><td>0.8187</td><td>0.4448</td><td>0.3961</td></tr><tr><td rowspan="3">ProMIM</td><td>all</td><td>0.6720</td><td>0.5730</td><td>1.5160</td><td>1.0890</td><td>0.7600</td><td>0.4640</td><td>0.4310</td></tr><tr><td>single</td><td>0.6680</td><td>0.5340</td><td>1.2790</td><td>0.9240</td><td>0.7380</td><td>0.4660</td><td>0.4390</td></tr><tr><td>multiple</td><td>0.6660</td><td>0.6140</td><td>1.9630</td><td>1.4910</td><td>0.8250</td><td>0.4580</td><td>0.4250</td></tr><tr><td 
rowspan="3">BA-DDG</td><td>all</td><td>0.7118</td><td>0.6346</td><td>1.4516</td><td>1.0151</td><td>0.7726</td><td>0.5453</td><td>0.5134</td></tr><tr><td>single</td><td>0.7321</td><td>0.6157</td><td>1.1848</td><td>0.8409</td><td>0.7662</td><td>0.5606</td><td>0.5192</td></tr><tr><td>multiple</td><td>0.6650</td><td>0.6293</td><td>2.0151</td><td>1.4944</td><td>0.7875</td><td>0.4924</td><td>0.4959</td></tr><tr><td rowspan="3">H3-DDG</td><td>all</td><td>0.7501</td><td>0.6604</td><td>1.3665</td><td>0.9612</td><td>0.7920</td><td>0.5686</td><td>0.5281</td></tr><tr><td>single</td><td>0.7471</td><td>0.6374</td><td>1.1560</td><td>0.8080</td><td>0.7803</td><td>0.5750</td><td>0.5295</td></tr><tr><td>multiple</td><td>0.7341</td><td>0.6913</td><td>1.8320</td><td>1.3880</td><td>0.8309</td><td>0.5520</td><td>0.5323</td></tr><tr><td rowspan="3">ΔBA-DDG</td><td>all</td><td>+5.38%</td><td>+4.07%</td><td>+5.86%</td><td>+5.31%</td><td>+2.51%</td><td>+4.27%</td><td>+2.86%</td></tr><tr><td>single</td><td>+2.05%</td><td>+3.52%</td><td>+2.43%</td><td>+3.91%</td><td>+1.84%</td><td>+2.57%</td><td>+1.98%</td></tr><tr><td>multiple</td><td>+10.39%</td><td>+9.85%</td><td>+9.09%</td><td>+7.12%</td><td>+5.51%</td><td>+12.10%</td><td>+7.34%</td></tr></table>
388
+
389
+ # B.6 Scatter Plot of Prediction Results
390
+
391
+ In Figure 3, we present scatter plots comparing experimental and predicted $\Delta \Delta G$ values for the three methods under multi-point mutation scenarios.
+
+ ![](images/5f421d7ceb4463e1a88c3a0eb9007f1265f1cdd1146a87cf630a721490c4a483.jpg)
+
+ Figure 3: Comparison of predicted and experimental $\Delta \Delta G$ across methods.
394
+
395
+ Table 7: Ablation study of pooling types and many-body attention mechanisms across mutation types on SKEMPI v2.
396
+
397
+ <table><tr><td rowspan="2">Pooling Type</td><td rowspan="2">Attn. Between Hyperedges</td><td rowspan="2">Attn. Around Mutation Sites</td><td rowspan="2">Mutations</td><td colspan="4">Overall</td></tr><tr><td>Pearson↑</td><td>Spear↑</td><td>RMSE↓</td><td>MAE↓</td></tr><tr><td rowspan="3">DiffPool</td><td rowspan="3">3-body Attn.</td><td rowspan="3">-</td><td>all</td><td>0.7261</td><td>0.6407</td><td>1.4209</td><td>1.0079</td></tr><tr><td>single</td><td>0.7344</td><td>0.6256</td><td>1.1804</td><td>0.8336</td></tr><tr><td>multiple</td><td>0.6959</td><td>0.6558</td><td>1.9375</td><td>1.4843</td></tr><tr><td rowspan="3">MinCutPool</td><td rowspan="3">3-body Attn.</td><td rowspan="3">-</td><td>all</td><td>0.7275</td><td>0.6456</td><td>1.4178</td><td>1.0027</td></tr><tr><td>single</td><td>0.7324</td><td>0.6333</td><td>1.1841</td><td>0.8342</td></tr><tr><td>multiple</td><td>0.7004</td><td>0.6589</td><td>1.9256</td><td>1.4693</td></tr><tr><td rowspan="3">H3-DDG</td><td rowspan="3">3-body Attn.</td><td rowspan="3">-</td><td>all</td><td>0.7317</td><td>0.6485</td><td>1.4085</td><td>0.9923</td></tr><tr><td>single</td><td>0.7383</td><td>0.6273</td><td>1.1728</td><td>0.8242</td></tr><tr><td>multiple</td><td>0.7040</td><td>0.6676</td><td>1.9161</td><td>1.4528</td></tr><tr><td rowspan="3">H3-DDG</td><td rowspan="3">3-body Attn.</td><td rowspan="3">3-body Attn.</td><td>all</td><td>0.7352</td><td>0.6509</td><td>1.4007</td><td>0.9805</td></tr><tr><td>single</td><td>0.7355</td><td>0.6326</td><td>1.1782</td><td>0.8243</td></tr><tr><td>multiple</td><td>0.7144</td><td>0.6732</td><td>1.8880</td><td>1.4127</td></tr><tr><td rowspan="3">H3-DDG</td><td rowspan="3">3-body Attn.</td><td rowspan="3">4-body Attn.</td><td>all</td><td>0.7501</td><td>0.6604</td><td>1.3665</td><td>0.9612</td></tr><tr><td>single</td><td>0.7471</td><td>0.6374</td><td>1.1560</td><td>0.8080</td></tr><tr><td>multiple</td><td>0.7341</td><td>0.6913</td><td>1.8320</td><td>1.3880</td></tr></table>
398
+
399
+ Table 8: Ablation on the number of edges near mutations within 4-body attention on SKEMPI v2.
400
+
401
+ <table><tr><td rowspan="2">Complexity</td><td rowspan="2">Mutations</td><td colspan="5">Overall</td><td colspan="2">Per-Structure</td></tr><tr><td>Pearson↑</td><td>Spear.↑</td><td>RMSE↓</td><td>MAE↓</td><td>AUROC↑</td><td>Pearson↑</td><td>Spear.↑</td></tr><tr><td rowspan="3">O(|N|3)</td><td>all</td><td>0.7352</td><td>0.6509</td><td>1.4007</td><td>0.9805</td><td>0.7895</td><td>0.5663</td><td>0.5227</td></tr><tr><td>single</td><td>0.7355</td><td>0.6326</td><td>1.1782</td><td>0.8243</td><td>0.7851</td><td>0.5758</td><td>0.5246</td></tr><tr><td>multiple</td><td>0.7144</td><td>0.6732</td><td>1.8880</td><td>1.4127</td><td>0.8098</td><td>0.5372</td><td>0.5156</td></tr><tr><td rowspan="3">O(2·|N|3)</td><td>all</td><td>0.7461</td><td>0.6570</td><td>1.3760</td><td>0.9719</td><td>0.7941</td><td>0.5660</td><td>0.5231</td></tr><tr><td>single</td><td>0.7455</td><td>0.6365</td><td>1.1591</td><td>0.8128</td><td>0.7850</td><td>0.5790</td><td>0.5341</td></tr><tr><td>multiple</td><td>0.7279</td><td>0.6815</td><td>1.8500</td><td>1.4079</td><td>0.8262</td><td>0.5449</td><td>0.5310</td></tr><tr><td rowspan="3">O(3·|N|3)</td><td>all</td><td>0.7501</td><td>0.6604</td><td>1.3665</td><td>0.9612</td><td>0.7920</td><td>0.5686</td><td>0.5281</td></tr><tr><td>single</td><td>0.7471</td><td>0.6374</td><td>1.1560</td><td>0.8080</td><td>0.7803</td><td>0.5750</td><td>0.5295</td></tr><tr><td>multiple</td><td>0.7341</td><td>0.6913</td><td>1.8320</td><td>1.3880</td><td>0.8309</td><td>0.5520</td><td>0.5323</td></tr></table>
402
+
403
+ Table 9: Impact of hypergraph and subgraph configurations on efficiency and performance. $L$ is the residue count, and $L/k$ denotes the number of hyperedges (floor division). $3N$ means the 4-body attention has three times as many edges as nodes.
404
+
405
+ <table><tr><td>Method</td><td>Number of Hyperedges</td><td>Number of Edges in 4-body Attn.</td><td>Pearson↑</td><td>Training Speed (iterations/sec)</td><td>Training Time (mins/epoch)</td></tr><tr><td>BA-DDG</td><td>-</td><td>-</td><td>0.7118</td><td>6.25</td><td>5.94</td></tr><tr><td rowspan="6">H3-DDG</td><td>L/10</td><td></td><td>0.7418</td><td>5.11</td><td>7.27</td></tr><tr><td>L/6</td><td>3N</td><td>0.7482</td><td>4.68</td><td>7.93</td></tr><tr><td>L/4</td><td></td><td>0.7501</td><td>4.34</td><td>8.55</td></tr><tr><td rowspan="3">L/4</td><td>1.5N</td><td>0.7420</td><td>4.40</td><td>8.44</td></tr><tr><td>2N</td><td>0.7461</td><td>4.38</td><td>8.48</td></tr><tr><td>3N</td><td>0.7501</td><td>4.34</td><td>8.55</td></tr></table>
406
+
407
+ Table 10: Impact of Cutoff Radius on Model Performance and Efficiency.
408
+
409
+ <table><tr><td>Cutoff of Mut. Subgraph (Å)</td><td>Pearson↑</td><td>MAE↓</td><td>Training Speed (iterations/sec)</td></tr><tr><td>5</td><td>0.7378</td><td>0.9834</td><td>4.52</td></tr><tr><td>8</td><td>0.7501</td><td>0.9612</td><td>4.32</td></tr><tr><td>12</td><td>0.7511</td><td>0.9602</td><td>4.03</td></tr></table>
412
+
413
+ # NeurIPS Paper Checklist
414
+
415
+ # 1. Claims
416
+
417
+ Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
418
+
419
+ Answer: [Yes]
420
+
421
+ Justification: We confirm that the main claims in the abstract and introduction accurately reflect the paper's contributions and scope.
422
+
423
+ Guidelines:
424
+
425
+ - The answer NA means that the abstract and introduction do not include the claims made in the paper.
426
+ - The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
427
+ - The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
428
+ - It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
429
+
+ # 2. Limitations
+
+ Question: Does the paper discuss the limitations of the work performed by the authors?
+
+ Answer: [Yes]
+
+ Justification: We have discussed the limitations of the work in the conclusion section. Specifically, we note that H3-DDG requires further evaluation on large-scale datasets and highly diverse mutation scenarios. Additionally, integration with experimental workflows is needed to validate its real-world applicability. These challenges are acknowledged as directions for future work.
+
+ Guidelines:
+
+ - The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
+ - The authors are encouraged to create a separate "Limitations" section in their paper.
+ - The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
+ - The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
+ - The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
+ - The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
+ - If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
+ - While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
+
+ # 3. Theory assumptions and proofs
+
+ Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
+
+ Answer: [NA]
+
+ Justification: Our paper does not include complex theoretical results, and therefore, there are no assumptions or proofs to provide.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not include theoretical results.
+ - All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
+ - All assumptions should be clearly stated or referenced in the statement of any theorems.
+ - The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
+ - Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
+ - Theorems and Lemmas that the proof relies upon should be properly referenced.
+
+ # 4. Experimental result reproducibility
+
+ Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
+
+ Answer: [Yes]
+
+ Justification: The training details and datasets are provided in the paper and supplementary material, and we also plan to open-source our model.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not include experiments.
+
+ - If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
+
+ - If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
+
+ - Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
+
+ - While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
+
+ (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
+ (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
+ (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
+ (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
+
+ # 5. Open access to data and code
+
+ Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
+
+ Answer: [Yes]
+
+ Justification: We have provided the source code in the supplementary material and will open-source our data and code.
+
+ Guidelines:
+
+ - The answer NA means that paper does not include experiments requiring code.
+ - Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
+ - While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
+ - The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
+ - The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
+ - The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
+ - At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
+ - Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
+
+ # 6. Experimental setting/details
+
+ Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
+
+ Answer: [Yes]
+
+ Justification: We have specified all the training and test details in Section 4.1 and Appendix A.4.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not include experiments.
+ - The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
+ - The full details can be provided either with the code, in appendix, or as supplemental material.
+
+ # 7. Experiment statistical significance
+
+ Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
+
+ Answer: [Yes]
+
+ Justification: We have shown the complete distribution of our results in Fig. 2 and Fig. 3, highlighting our robustness compared to other approaches.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not include experiments.
+ - The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
+ - The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
+ - The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
+ - The assumptions made should be given (e.g., Normally distributed errors).
+ - It should be clear whether the error bar is the standard deviation or the standard error of the mean.
+ - It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified.
+ - For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
+ - If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
+
+ # 8. Experiments compute resources
+
+ Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
+
+ Answer: [Yes]
+
+ Justification: We have provided sufficient information on the computer resources in Appendix A.4.2 and B.5.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not include experiments.
+ - The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
+ - The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
+ - The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
+
+ # 9. Code of ethics
+
+ Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
+
+ Answer: [Yes]
+
+ Justification: Our research conforms with the NeurIPS Code of Ethics.
+
+ Guidelines:
+
+ - The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
+ - If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
+ - The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
+
+ # 10. Broader impacts
+
+ Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
+
+ Answer: [Yes]
+
+ Justification: We have discussed it in the introduction section. These results underscore the capability of H3-DDG to handle intricate mutational landscapes and its potential for broad applicability in protein engineering and drug design.
+
+ Guidelines:
+
+ - The answer NA means that there is no societal impact of the work performed.
+ - If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
+ - Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
+ - The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
+ - The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
+ - If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
+
+ # 11. Safeguards
+
+ Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
+
+ Answer: [NA]
+
+ Justification: This paper does not have such risks.
+
+ Guidelines:
+
+ - The answer NA means that the paper poses no such risks.
+ - Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
+ - Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
+ - We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
+
+ # 12. Licenses for existing assets
+
+ Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
+
+ Answer: [Yes]
+
+ Justification: All the assets are properly cited in the paper.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not use existing assets.
+ - The authors should cite the original paper that produced the code package or dataset.
+ - The authors should state which version of the asset is used and, if possible, include a URL.
+ - The name of the license (e.g., CC-BY 4.0) should be included for each asset.
+ - For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
+ - If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
+ - For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
+ - If this information is not available online, the authors are encouraged to reach out to the asset's creators.
+
+ # 13. New assets
+
+ Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
+
+ Answer: [Yes]
+
+ Justification: We introduce a new model to capture higher-order many-body interactions across multiple scales for $\Delta \Delta G$ predictions, and we have provided the details of the model in this paper.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not release new assets.
+ - Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
+ - The paper should discuss whether and how consent was obtained from people whose asset is used.
+ - At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
+
+ # 14. Crowdsourcing and research with human subjects
+
+ Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
+
+ Answer: [NA]
+
+ Justification: This paper does not involve crowdsourcing nor research with human subjects.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
+ - Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
+ - According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
+
+ # 15. Institutional review board (IRB) approvals or equivalent for research with human subjects
+
+ Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
+
+ Answer: [NA]
+
+ Justification: This paper does not involve crowdsourcing nor research with human subjects.
+
+ Guidelines:
+
+ - The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
+ - We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
+ - For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
+
+ # 16. Declaration of LLM usage
+
+ Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
+
+ Answer: [NA]
+
+ Justification: This paper does not involve LLMs as any important, original, or non-standard components.
+
+ Guidelines:
+
+ - The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
+ - Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
accuratelypredictingproteinmutationaleffectsviaahierarchicalmanybodyattentionnetwork/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:702221a6d413f2234e1fe4ab1108189f303a50ae72bde5705e5656b21022dd5e
+ size 1104516
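The three-line file added above is a Git LFS pointer (spec v1): the repository stores only this stub, while the actual payload lives in LFS storage, addressed by the SHA-256 `oid` and sanity-checked against `size`. As a minimal sketch of how such a pointer relates to its payload — the helper name and the sample payload below are illustrative, not part of this commit:

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file (spec v1) into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Hypothetical payload; in a real repo the oid is the SHA-256 of the tracked file.
payload = b"example payload"
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{hashlib.sha256(payload).hexdigest()}\n"
    f"size {len(payload)}\n"
)

fields = parse_lfs_pointer(pointer)
# Verifying a download amounts to re-hashing the payload and comparing to oid.
assert fields["oid"] == "sha256:" + hashlib.sha256(payload).hexdigest()
assert int(fields["size"]) == len(payload)
```

This is why the diff for each LFS-tracked asset (`images.zip`, `layout.json`, the `.json` model files below) is always exactly three lines regardless of the asset's actual size.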
accuratelypredictingproteinmutationaleffectsviaahierarchicalmanybodyattentionnetwork/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac343e5045694b24f5c5c930ca659f0055b5e041086392fe5e23d6adfdd5b108
+ size 757845
acereasonnemotronadvancingmathandcodereasoningthroughreinforcementlearning/e2ffb506-f6e4-4cc7-9714-e8f9959a8dec_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:150e878c4a362ced0aa28aa811e01d9d2c5790292ef0feb200a9a6d9892b9926
+ size 146509
acereasonnemotronadvancingmathandcodereasoningthroughreinforcementlearning/e2ffb506-f6e4-4cc7-9714-e8f9959a8dec_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:008a088bf8dc317f83a8036746e1cef5d27b8c954ece1dd44e08edfc7f95631d
+ size 189855