Chelsea707 committed (verified)
Commit eca22f3 · 1 parent: fe63e53

Add Batch 0e58a516-d418-41ec-a363-c74937049739

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_content_list.json +3 -0
  2. ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_model.json +3 -0
  3. ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_origin.pdf +3 -0
  4. ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/full.md +295 -0
  5. ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/images.zip +3 -0
  6. ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/layout.json +3 -0
  7. ICLR/2025/ReGenesis_ LLMs can Grow into Reasoning Generalists via Self-Improvement/0c89f697-687d-41b7-a130-75534656ad65_content_list.json +3 -0
  8. ICLR/2025/ReGenesis_ LLMs can Grow into Reasoning Generalists via Self-Improvement/0c89f697-687d-41b7-a130-75534656ad65_model.json +3 -0
  9. ICLR/2025/ReGenesis_ LLMs can Grow into Reasoning Generalists via Self-Improvement/0c89f697-687d-41b7-a130-75534656ad65_origin.pdf +3 -0
  10. ICLR/2025/ReGenesis_ LLMs can Grow into Reasoning Generalists via Self-Improvement/full.md +0 -0
  11. ICLR/2025/ReGenesis_ LLMs can Grow into Reasoning Generalists via Self-Improvement/images.zip +3 -0
  12. ICLR/2025/ReGenesis_ LLMs can Grow into Reasoning Generalists via Self-Improvement/layout.json +3 -0
  13. ICLR/2025/Reasoning Elicitation in Language Models via Counterfactual Feedback/dcb10e07-9632-4319-b754-2801acc182b7_content_list.json +3 -0
  14. ICLR/2025/Reasoning Elicitation in Language Models via Counterfactual Feedback/dcb10e07-9632-4319-b754-2801acc182b7_model.json +3 -0
  15. ICLR/2025/Reasoning Elicitation in Language Models via Counterfactual Feedback/dcb10e07-9632-4319-b754-2801acc182b7_origin.pdf +3 -0
  16. ICLR/2025/Reasoning Elicitation in Language Models via Counterfactual Feedback/full.md +0 -0
  17. ICLR/2025/Reasoning Elicitation in Language Models via Counterfactual Feedback/images.zip +3 -0
  18. ICLR/2025/Reasoning Elicitation in Language Models via Counterfactual Feedback/layout.json +3 -0
  19. ICLR/2025/Representation Alignment for Generation_ Training Diffusion Transformers Is Easier Than You Think/592f505a-86ec-4bfc-849e-2e1a62f76390_content_list.json +3 -0
  20. ICLR/2025/Representation Alignment for Generation_ Training Diffusion Transformers Is Easier Than You Think/592f505a-86ec-4bfc-849e-2e1a62f76390_model.json +3 -0
  21. ICLR/2025/Representation Alignment for Generation_ Training Diffusion Transformers Is Easier Than You Think/592f505a-86ec-4bfc-849e-2e1a62f76390_origin.pdf +3 -0
  22. ICLR/2025/Representation Alignment for Generation_ Training Diffusion Transformers Is Easier Than You Think/full.md +0 -0
  23. ICLR/2025/Representation Alignment for Generation_ Training Diffusion Transformers Is Easier Than You Think/images.zip +3 -0
  24. ICLR/2025/Representation Alignment for Generation_ Training Diffusion Transformers Is Easier Than You Think/layout.json +3 -0
  25. ICLR/2025/Residual Deep Gaussian Processes on Manifolds/531f0646-0657-4be8-8520-e4cf167b7bf2_content_list.json +3 -0
  26. ICLR/2025/Residual Deep Gaussian Processes on Manifolds/531f0646-0657-4be8-8520-e4cf167b7bf2_model.json +3 -0
  27. ICLR/2025/Residual Deep Gaussian Processes on Manifolds/531f0646-0657-4be8-8520-e4cf167b7bf2_origin.pdf +3 -0
  28. ICLR/2025/Residual Deep Gaussian Processes on Manifolds/full.md +627 -0
  29. ICLR/2025/Residual Deep Gaussian Processes on Manifolds/images.zip +3 -0
  30. ICLR/2025/Residual Deep Gaussian Processes on Manifolds/layout.json +3 -0
  31. ICLR/2025/Restructuring Vector Quantization with the Rotation Trick/44dd8cbd-5744-4d91-9f29-85768a064ef2_content_list.json +3 -0
  32. ICLR/2025/Restructuring Vector Quantization with the Rotation Trick/44dd8cbd-5744-4d91-9f29-85768a064ef2_model.json +3 -0
  33. ICLR/2025/Restructuring Vector Quantization with the Rotation Trick/44dd8cbd-5744-4d91-9f29-85768a064ef2_origin.pdf +3 -0
  34. ICLR/2025/Restructuring Vector Quantization with the Rotation Trick/full.md +0 -0
  35. ICLR/2025/Restructuring Vector Quantization with the Rotation Trick/images.zip +3 -0
  36. ICLR/2025/Restructuring Vector Quantization with the Rotation Trick/layout.json +3 -0
  37. ICLR/2025/Rethinking Reward Modeling in Preference-based Large Language Model Alignment/5b2ee4cc-09f5-403a-a3c3-3f13f822cf03_content_list.json +3 -0
  38. ICLR/2025/Rethinking Reward Modeling in Preference-based Large Language Model Alignment/5b2ee4cc-09f5-403a-a3c3-3f13f822cf03_model.json +3 -0
  39. ICLR/2025/Rethinking Reward Modeling in Preference-based Large Language Model Alignment/5b2ee4cc-09f5-403a-a3c3-3f13f822cf03_origin.pdf +3 -0
  40. ICLR/2025/Rethinking Reward Modeling in Preference-based Large Language Model Alignment/full.md +0 -0
  41. ICLR/2025/Rethinking Reward Modeling in Preference-based Large Language Model Alignment/images.zip +3 -0
  42. ICLR/2025/Rethinking Reward Modeling in Preference-based Large Language Model Alignment/layout.json +3 -0
  43. ICLR/2025/Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation/050aa0c8-0272-4b38-a8cd-bc15e39bbc4b_content_list.json +3 -0
  44. ICLR/2025/Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation/050aa0c8-0272-4b38-a8cd-bc15e39bbc4b_model.json +3 -0
  45. ICLR/2025/Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation/050aa0c8-0272-4b38-a8cd-bc15e39bbc4b_origin.pdf +3 -0
  46. ICLR/2025/Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation/full.md +0 -0
  47. ICLR/2025/Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation/images.zip +3 -0
  48. ICLR/2025/Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation/layout.json +3 -0
  49. ICLR/2025/Retrieval Head Mechanistically Explains Long-Context Factuality/b88a8af7-9bc9-4014-ad52-3e112ce4bc09_content_list.json +3 -0
  50. ICLR/2025/Retrieval Head Mechanistically Explains Long-Context Factuality/b88a8af7-9bc9-4014-ad52-3e112ce4bc09_model.json +3 -0
ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32890b6664c85432ef2c59ba0799b33e8c22d02c0e1deffdf37db9c0a23df8e1
+ size 95205
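The three `+` lines above form a complete Git LFS pointer file: the binary payload lives in LFS storage, keyed by its SHA-256 digest, while the repository tracks only this small text stub. A minimal sketch of reading such a pointer (the `parse_lfs_pointer` helper is illustrative, not part of this commit):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file: 'key value' lines (version, oid, size)."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # oid carries its hash algorithm as a prefix, e.g. "sha256:<hex digest>"
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "algo": algo,
        "digest": digest,
        "size": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:32890b6664c85432ef2c59ba0799b33e8c22d02c0e1deffdf37db9c0a23df8e1
size 95205"""
info = parse_lfs_pointer(pointer)
```

The `size` field is the byte length of the real object, which is why every pointer diff here shows `+3 -0` regardless of how large the underlying PDF or JSON file is.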
ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5aff2c08b7657bb3c0f7568c1c21342036b06baf19fbc6c11eef28e02974815f
+ size 119143
ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/9d39cf39-580f-4f0f-82fe-7a5a25e558da_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a93a8f85a341577b61c89053d91981070d340fc0be3f0740b5b0b146a45ed0a4
+ size 6213250
ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/full.md ADDED
@@ -0,0 +1,295 @@
+ # $\mathrm{F}^3\mathrm{SET}$: TOWARDS ANALYZING FAST, FREQUENT, AND FINE-GRAINED EVENTS FROM VIDEOS
+
+ Zhaoyu Liu $^{1,2}$ , Kan Jiang $^{2}$ , Murong Ma $^{2}$ , Zhe Hou $^{3}$ , Yun Lin $^{4}$ , Jin Song Dong $^{2}$
+
+ $^{1}$ Ningbo University $^{2}$ National University of Singapore $^{3}$ Griffith University
+
+ $^{4}$ Shanghai Jiao Tong University
+
+ {liuzy, jiangkan}@nus.edu.sg, murongma@u.nus.edu
+
+ z.hou@griffith.edu.au, lin-yun@sjtu.edu.cn, dcsdjs@nus.edu.sg
+
+ # ABSTRACT
+
+ Analyzing Fast, Frequent, and Fine-grained ($\mathrm{F}^3$) events presents a significant challenge in video analytics and multi-modal LLMs. Current methods struggle to identify events that satisfy all the $\mathrm{F}^3$ criteria with high accuracy due to challenges such as motion blur and subtle visual discrepancies. To advance research in video understanding, we introduce $\mathrm{F}^3\mathrm{Set}$, a benchmark that consists of video datasets for precise $\mathrm{F}^3$ event detection. Datasets in $\mathrm{F}^3\mathrm{Set}$ are characterized by their extensive scale and comprehensive detail, usually encompassing over 1,000 event types with precise timestamps and supporting multi-level granularity. Currently, $\mathrm{F}^3\mathrm{Set}$ contains several sports datasets, and this framework may be extended to other applications as well. We evaluated popular temporal action understanding methods on $\mathrm{F}^3\mathrm{Set}$, revealing substantial challenges for existing techniques. Additionally, we propose a new method, $\mathrm{F}^3\mathrm{ED}$, for $\mathrm{F}^3$ event detection, achieving superior performance. The dataset, model, and benchmark code are available at https://github.com/F3Set/F3Set.
+
+ # 1 INTRODUCTION
+
+ Recognizing sequences of fast (fast-paced), frequent (many actions in a short period), and fine-grained (diverse types) events with precise timestamps (with a tolerance of 1-2 frames) is a challenging problem for both current video analytics methods and multi-modal large language models (LLMs). Despite advances in fine-grained action recognition [31; 58; 51], temporal action localization [60; 6; 40; 59], segmentation [67; 33; 71; 2], and video captioning [63; 56; 49; 36], limited attention has been paid to this problem. This task is critical for various real-world applications, such as sports analytics, where action forecasting [21; 65], strategic and tactical analysis [44; 45; 46; 48], and player performance evaluation [10; 55] depend on a detailed understanding of event sequences. Other examples include industrial inspection [42], crucial for detecting subtle irregularities in high-speed production lines to ensure quality and safety; computer vision in autonomous driving [25], essential for accurate and instantaneous vehicle control and obstacle detection; and surveillance [53], important for the precise identification of abnormal or sudden events to enhance security. However, existing methods and the datasets foundational to their development only partially address the $\mathrm{F}^3$ scenario.
+
+ To facilitate the study of $\mathrm{F}^3$ event understanding, we propose a new benchmark, $\mathrm{F}^3\mathrm{Set}$, for precise temporal event detection and recognition. $\mathrm{F}^3\mathrm{Set}$ datasets usually have a large number of event types (on the order of 1,000), annotated with exact timestamps, and offer multi-level granularity to capture comprehensive event details. Although $\mathrm{F}^3$ is a general problem, creating such a dataset requires domain-specific knowledge for labeling and processing; thus, in this paper, we use tennis as a case study. We also introduce a general annotation pipeline and toolchain to support domain experts in creating new $\mathrm{F}^3$ datasets. Using this pipeline, we have also been building datasets for table tennis and badminton, and a community of users is actively expanding these with other applications.
+
+ Unlike other video analysis tasks, tennis actions are characterized by their rapid succession and diversity, as illustrated in Figure 1. Understanding detailed event attributes such as shot direction, technique, and outcome is essential. For example, analyzing patterns in serve directions (e.g., "T", "body", "wide", defined in Appendix B) or success rates can reveal players' habits and skill levels, offering strategic insights for competitive advantage [15]. This detailed analysis supports coaches and players in developing tailored strategies against different opponents [16; 47]. However, detecting
+
+ ![](images/8526ca45bfee79bcfc0c3c5b2b0d7993870af42ca8429903f1f62a82caf83a97.jpg)
+ Figure 1: Example of detecting fast, frequent, and fine-grained events with precise moments.
+
+ $\mathrm{F}^3$ events from videos poses significant challenges, such as subtle visual differences, motion-induced blurring, and the need for precise event localization. Current video understanding methods are inadequately equipped to address these challenges. For instance, traditional fine-grained action recognition [9; 58; 28] assigns a single label to an entire video rather than identifying a sequence of events. Temporal action localization (TAL) and temporal action segmentation (TAS) often depend on pre-trained or modestly fine-tuned input features [39; 14], which lack the specificity required to capture the subtle and domain-specific visual details necessary for recognizing diverse events with temporal precision. Some studies [24; 36; 43] attempt to address these issues through dense frame sampling and end-to-end training. However, this makes targeted events temporally sparse (e.g., only a few events over hundreds of consecutive frames). As a result, long-term temporal correlation modules on dense visual features struggle to capture event-wise causal correlations effectively.
+
+ Moreover, Large Language Models (LLMs) [54; 61; 38] have expanded their capabilities to include multi-modal inference, encompassing text, visuals, and audio. Recognizing this potential, we conducted preliminary experiments on $\mathrm{F}^3\mathrm{Set}$ using GPT-4 and observed that it understood basic video contexts, such as sports types, contextual information (e.g., court type and scoreboard), and simple actions. However, it struggled with understanding $\mathrm{F}^3$ events and temporal relations between frames (e.g., shot directions). See Appendix A for details. Consequently, GPT-4 yields poor results compared to the other methods for $\mathrm{F}^3$ problems, and we do not use it in the experiments. By introducing $\mathrm{F}^3\mathrm{Set}$, we hope to help advance multi-modal LLM capabilities in $\mathrm{F}^3$ video understanding in the future.
+
+ Leveraging $\mathrm{F}^3\mathrm{Set}$, we extensively evaluate existing temporal action understanding methods, aiming to reveal the challenges of $\mathrm{F}^3$ event understanding. To provide guidelines for future research, we conduct a number of ablation studies on modeling choices. Addressing the shortcomings of existing methods, we also propose a simple yet efficient model, $\mathrm{F}^3\mathrm{ED}$, that is designed for $\mathrm{F}^3$ event detection tasks and can be trained quickly on a single GPU. It outperforms existing models and can serve as a baseline for further development.
+
+ Contributions. The key contributions of this paper are as follows:
+
+ - We create $\mathrm{F}^3\mathrm{Set}$, a new benchmark with datasets that feature over 1,000 precisely timestamped event types with multi-level granularity, designed to challenge and advance the state-of-the-art in temporal action understanding.
+ - We introduce a general annotation toolchain that enables domain experts to create new $\mathrm{F}^3$ datasets.
+ - We propose an end-to-end model named $\mathrm{F}^3\mathrm{ED}$, which can accurately detect $\mathrm{F}^3$ event sequences from videos through visual features and contextual sequence refinement on a single GPU.
+ - We assess the performance of leading temporal action understanding methods on $\mathrm{F}^3\mathrm{Set}$ through comprehensive evaluations and ablation studies and analyze the results.
+
+ # 2 RELATED WORK
+
+ Existing $\mathrm{F}^3$-related datasets. Although datasets have been developed for temporal action understanding, few focus on $\mathrm{F}^3$ events. Table 1 compares existing datasets with $\mathrm{F}^3\mathrm{Set}$ by scale ("# Vid.", "# Clips") and characteristics like action speed ("Evt. Len."), frequency ("# Evt. / sec"), and granularity ("# Classes"), which correspond to "fast", "frequent", and "fine-grained" respectively. Datasets such as THUMOS14 [27] and Breakfast [30] focus on coarse-grained actions, where background context provides clear cues, and actions span seconds to minutes. In contrast, FineAction [41] and ActivityNet [4] cover a wide range of daily activities with diverse action categories, while FineGym [58] delves into detailed action types within gymnastics. Like FineGym, $\mathrm{F}^3\mathrm{Set}$ emphasizes domain-specific
+
+ Table 1: Comparison of existing $\mathrm{F}^3$-related datasets and $\mathrm{F}^3\mathrm{Set}$. "Evt. Len." is the average duration of each event, and "# Evt. / sec" is the average number of events per second.
+
+ <table><tr><td>Datasets</td><td># Vid.</td><td># Clips</td><td>Avg. Clip Len.</td><td># Classes</td><td>Evt. Len.</td><td># Evt. / sec</td></tr><tr><td colspan="7">(a) Fine-grained</td></tr><tr><td>FineAction [41]</td><td>-</td><td>16,732</td><td>149.5s</td><td>101</td><td>6.9s</td><td>0.3</td></tr><tr><td>ActivityNet [4]</td><td>-</td><td>19,994</td><td>116.7s</td><td>200</td><td>49.2s</td><td>0.01</td></tr><tr><td>FineGym [58]</td><td>303</td><td>32,697</td><td>50.3s</td><td>530</td><td>1.7s</td><td>0.3</td></tr><tr><td colspan="7">(b) Fast</td></tr><tr><td>CCTV-Pipe [42]</td><td>575</td><td>575</td><td>549.3s</td><td>16</td><td>&lt; 0.1s</td><td>0.02</td></tr><tr><td>SoccerNetV2 [11]</td><td>9</td><td>9</td><td>99.6min</td><td>12</td><td>&lt; 0.1s</td><td>0.3</td></tr><tr><td colspan="7">(c) Frequent</td></tr><tr><td>FineDiving [69]</td><td>135</td><td>3,000</td><td>4.2s</td><td>29</td><td>1.1s</td><td>~1</td></tr><tr><td colspan="7">(d) Fast &amp; Frequent</td></tr><tr><td>ShuttleSet [66]</td><td>44</td><td>3,685</td><td>10.9s</td><td>18</td><td>&lt; 0.1s</td><td>~1</td></tr><tr><td>P2ANet [3]</td><td>200</td><td>2,721</td><td>360.0s</td><td>14</td><td>&lt; 0.1s</td><td>~2</td></tr><tr><td colspan="7">(e) Fast &amp; Frequent &amp; Fine-grained</td></tr><tr><td>F3Set</td><td>114</td><td>11,584</td><td>8.4s</td><td>1,108</td><td>&lt; 0.1s</td><td>~1</td></tr></table>
+
+ granularity with subtle visual differences but encounters additional challenges due to faster and more frequent actions. Moreover, unlike FineGym's typical single-player focus, $\mathrm{F}^3\mathrm{Set}$ (e.g., tennis) features two players and a fast-moving ball, with both players rapidly moving across the court, occupying only small portions of the scene, thus increasing task difficulty. CCTV-Pipe [42] targets temporal defect detection in urban pipe systems, providing single-frame annotations for rapid event detection, though it is limited in frequency and event types. Research in the sports domain has explored the detection of fast and frequent actions. FineDiving [69] segments diverse diving events, while ShuttleSet [66] and $\mathrm{P}^2\mathrm{ANet}$ [3] focus on identifying strokes in fast-paced racket sports. Volleyball [26] and NSVA (basketball) [68] focus on team sports understanding and video captioning, while SoccerNetV2's [11] ball action spotting task focuses on identifying the timing and type of ball-related actions. However, these datasets typically cover coarser event types and are limited to specific $\mathrm{F}^3$ aspects.
+
+ In contrast, our proposed $\mathrm{F}^3\mathrm{Set}$ is characterized by 1) rapid events that occur instantaneously, 2) a high frequency of approximately one event per second, and 3) extensive granularity with a larger number of detailed event classes. These attributes introduce novel challenges.
+
+ $\mathbf{F}^3$ event understanding. Detecting $\mathrm{F}^3$ events poses unique challenges due to their rapid temporal dynamics, high occurrence rates, and subtle visual distinctions, requiring precise temporal and contextual understanding. Fine-grained action detection has been explored in tasks covering diverse daily activities [4; 41], using features extracted by video encoders pre-trained on datasets like Kinetics-400 [29] and a detection head for classification. However, such pre-trained extractors often miss domain-specific nuances. Domain-specific methods in FineGym [58] and FineDiving [69] utilize end-to-end training to incorporate domain knowledge. These methods often encode videos into non-overlapping snippets or downsample frames, yielding coarse temporal features insufficient for detecting fast-paced events spanning only 1–2 frames. Related works such as ShuttleSet [66] and $\mathrm{P}^2\mathrm{ANet}$ [3] address fast and frequent event detection in racket sports by employing end-to-end models that extract frame-wise features and use detection heads (e.g., BMN [37] or GRU [8]) to classify each frame. To address class imbalance, the loss weight of the foreground classes is set higher than that of the background during training [24]. While these approaches achieve precise temporal spotting, their scalability to larger action classes is limited by challenges like long-tail class distributions and inadequate modeling of event-wise correlations. Our proposed $\mathrm{F}^3\mathrm{ED}$ overcomes these issues through frame-wise dense processing, a multi-label classification head to handle minor event differences and class imbalances, and a contextual module to refine predictions by leveraging event-wise causal relationships, enhancing both precision and robustness in $\mathrm{F}^3$ event detection.
+
+ # 3 $\mathrm{F}^3\mathrm{SET}$: A BENCHMARK DATASET FOR $\mathrm{F}^3$ EVENT DETECTION
+
+ Recognizing the limitations in existing video datasets for $\mathrm{F}^3$ event understanding, we introduce $\mathrm{F}^3\mathrm{Set}$, a new benchmark for precise temporal $\mathrm{F}^3$ event detection and recognition. Given the need for
+
+ ![](images/c86553b6db71a6a3151810a53a9050194253efdec49b531e25fa07e31eb7d152.jpg)
+ Figure 2: Breakdown of $\mathrm{F}^3\mathrm{Set}$ event class annotation.
+
+ domain-specific expertise in creating $\mathrm{F}^3$ datasets, this section uses tennis as a case study to illustrate $\mathrm{F}^3\mathrm{Set}$'s event descriptions, construction process, and key properties. We also propose a general annotation pipeline and toolchain that empowers domain experts to develop new $\mathrm{F}^3$ datasets for diverse applications. Applying the same approach, we have also built $\mathrm{F}^3$ datasets for other domains, including tennis doubles, badminton, and table tennis (see link).
+
+ # 3.1 $\mathrm{F}^3\mathrm{SET}$ EVENT DESCRIPTION
+
+ We use tennis to illustrate $\mathrm{F}^3$ event descriptions, introducing key lexicon and defining $\mathrm{F}^3$ events. Datasets have been built for other $\mathrm{F}^3$ domains, including tennis doubles, badminton, and table tennis, with similar event definitions. Details are in Appendix C.
+
+ Lexicon. A tennis court is divided into deuce, middle, and ad regions. The initial shot, a "serve," targets the T, Body (B), or Wide (W) areas. A "return" follows if the receiver's shot lands in bounds. Subsequent shots, or "strokes", can be directed "cross-court" (CC), "down the line" (DL), "down the middle" (DM), "inside-in" (II), or "inside-out" (IO) using either "forehand" (fh) or "backhand" (bh). Players may "approach" (apr) the net on shorter balls. Shot techniques include "ground stroke/top spin" (gs), "slice", "volley", and "lob", with outcomes: "in-bound", "winner", "forced error", or "unforced error". More detailed definitions can be found in Appendix B.
+
+ $\mathbf{F}^3$ events. Formally, each event consists of 8 sub-classes, denoted as $sc_1, sc_2, \ldots, sc_8$:
+
+ $sc_1$ - hit by which player: (1) near- or (2) far-end player;
+
+ $sc_2$ - hit from which court location: (3) deuce, (4) middle, or (5) ad court;
+
+ $sc_3$ - hit at which side of the body: (6) forehand or (7) backhand;
+
+ $sc_4$ - shot type: (8) serve, (9) return, or (10) stroke;
+
+ $sc_5$ - shot direction: (11) T, (12) B, (13) W, (14) CC, (15) DL, (16) DM, (17) II, or (18) IO;
+
+ $sc_6$ - shot technique: (19) gs, (20) slice, (21) volley, (22) lob, (23) drop, or (24) smash;
+
+ $sc_7$ - player movement: (25) approach;
+
+ $sc_8$ - shot outcome: (26) in, (27) winner, (28) forced error, or (29) unforced error.
+
+ Altogether, there are 29 elements and 1,108 event types based on various combinations (Figure 2).
+
+ Similarly, for other domains, badminton contains 6 sub-classes, 28 elements, and 1,008 event types; table tennis contains 7 sub-classes, 23 elements, and 1,296 event types; and tennis doubles contains 26 elements and 744 event types. Compared to existing racket sports video datasets [3; 66], $\mathrm{F}^3\mathrm{Set}$ offers additional dimensions, such as shot direction and outcomes, which are crucial for identifying playing patterns and success rates. Please refer to Appendix C for more details.
+
+ # 3.2 $\mathrm{F}^3\mathrm{SET}$ DATASET CONSTRUCTION
+
+ Video collection. For tennis, we collected publicly available high-resolution singles matches (2012-2023) from YouTube, including Grand Slams, Olympics, and major ATP/WTA tournaments. The dataset includes various court surfaces (hard, clay, grass), male and female players, and both right- and left-handed competitors. These videos feature complete rallies, match footage, and detailed player data. Similar criteria were used for tennis doubles, badminton, and table tennis videos.
+
+ ![](images/b1626f68e6860e5b0eb4a1ec3d4efc08329b29e311b94e02aa9fd6a0b3eabbe4.jpg)
+
+ ![](images/077821c2e3fe6df38744f547a59563d836c7bf77ca2d711bc971eb46de3d4d8b.jpg)
+ Figure 3: An interface of the labeling tool. The panel on the right is application-customizable.
+
+ ![](images/8128374de07374ce4a6d69081829832542998298ea65d7cc87360926b8196fbd.jpg)
+
+ Annotation pipeline and toolchain. After data collection, we use a three-stage annotation process designed to maximize automation and minimize manual effort. This pipeline is adaptable to various sports broadcast videos and broader domains:
+
+ (1) Video segmentation: The first stage segments a full broadcast video into shorter clips using a context-aware scene detector [1] that automatically identifies jump cuts within the video.
+ (2) Clip selection: The second stage selects targeted clips (e.g., clips containing tennis rallies) using a Siamese network to compare each clip with a "base image" indicative of the scene of interest.
+ (3) $F^3$ event annotation: The final stage identifies the precise event moments (e.g., frames when a player hits the ball) and records the corresponding event types through an annotation tool.
+
+ The first two steps are automated and applicable to a range of sports videos, facilitating the efficient breakdown of lengthy videos into relevant clips. For the final phase, we developed an interactive annotation interface, shown in Figure 3. The tool allows users to navigate clips quickly (e.g., in 1-second increments) or review them frame by frame, enabling efficient identification of key events (e.g., hitting moments). It supports selecting shot types and identifying court positions through direct clicks on the video, with each click displayed for immediate verification. Object-level detection can assist the process, and a foolproof design minimizes errors from accidental clicks or misjudgments. This tool is adaptable to other sports by incorporating domain-specific knowledge, broadening its applicability.
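Stage (2) above reduces to a similarity test between each clip and a reference scene. A toy sketch of that selection step, using cosine similarity over precomputed embeddings in place of the actual Siamese network (the function names and the 0.8 threshold are illustrative assumptions, not values from the paper):

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_clips(clip_embs, base_emb, threshold=0.8):
    """Keep clips whose embedding is close to the 'base image' embedding,
    mimicking the Siamese-network comparison in stage (2)."""
    return [i for i, emb in enumerate(clip_embs)
            if cosine_sim(emb, base_emb) >= threshold]

base = [1.0, 0.0, 1.0]            # embedding of the rally "base image"
clips = [[0.9, 0.1, 1.1],         # rally-like scene
         [0.0, 1.0, 0.0],         # crowd shot
         [1.0, 0.2, 0.8]]         # rally-like scene
kept = select_clips(clips, base)  # indices of clips that pass the test
```

In the real pipeline the embeddings would come from the trained Siamese branches; the thresholding logic is the part sketched here.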
+
+ Our annotation team consists of 8 members. We provided them with specialized training and rigorous pre-tests before beginning the official annotation work, along with supporting materials such as slides and demonstrations. Each annotator was assigned an equal portion of the dataset, totaling 1,450 clips (rallies) each. The manual labeling took roughly 30 hours per annotator to finish all 1,450 clips. Following the initial annotation phase, we conducted multiple rounds of cross-validation involving random sampling of rallies and quality checks among annotators to ensure the accuracy of the event-based labels. In cases where conflicting annotations arose, annotators were asked to input the labels they believed to be correct. The final label was determined based on a majority vote among the annotators.
+
+ # 3.3 $\mathrm{F}^3\mathrm{SET}$ DATASET STATISTICS AND PROPERTIES
+
+ Key statistics for the $\mathrm{F}^3\mathrm{Set}$ tennis dataset are summarized in Table 2. Statistics for other $\mathrm{F}^3$ datasets, including badminton, table tennis, and tennis doubles, are provided in Appendix D. We employ a training, validation, and testing split of 3:1:1, with the training and validation sets drawn from the same video sources, while the test set features clips from distinct videos.
+
+ Event timestamp. Unlike typical TAL and TAS tasks, where an action spans several frames or seconds, the duration of actions in racket sports is often ambiguous. Thus, stroke actions are defined as instantaneous events, recording only the moment of ball-racket contact [62], as shown in Figure 1.
+
+ Table 2: Summary of $\mathrm{F}^3\mathrm{Set}$ tennis dataset statistics.
+
+ <table><tr><td>Category</td><td>Details</td></tr><tr><td>Matches</td><td>114 broadcast matches</td></tr><tr><td>Players</td><td>75 (30 men, 45 women)</td></tr><tr><td>Handedness</td><td>68 right-handed, 7 left-handed</td></tr><tr><td>Frame Rate (FPS)</td><td>25–30 FPS</td></tr></table>
+
+ <table><tr><td>Category</td><td>Details</td></tr><tr><td>Clips</td><td>11,584 rallies</td></tr><tr><td>Average Clip Duration</td><td>8.4 sec</td></tr><tr><td>Total Shots</td><td>42,846</td></tr><tr><td>Shots Per Rally</td><td>1 to 34</td></tr></table>
+
+ Multi-level granularity. Depending on the requirements of the analytics task, $\mathrm{F}^3\mathrm{Set}$ can focus on a subset of sub-classes, enabling flexible granularity. We define a parameter $G \in \mathcal{P}(\{sc_1, \ldots, sc_8\})$ , where $\mathcal{P}(\{sc_1, \ldots, sc_8\})$ is the power set of $\{sc_1, \ldots, sc_8\}$ , to select sub-classes and form different levels of granularity. We define 3 granularity levels using $\mathrm{F}^3\mathrm{Set}$ tennis as an example. At the coarse level, $G_{\mathrm{low}} = \{sc_1, sc_3, sc_4, sc_8\}$ includes 4 sub-classes, 11 elements, and 38 event types. This level captures essential but broad information. At a finer level, $G_{\mathrm{mid}} = \{sc_1, \ldots, sc_6\}$ consists of 6 sub-classes, 24 elements, and 365 event types. This granularity provides more detailed event representations. At the most detailed level, $G_{\mathrm{high}} = \{sc_1, \ldots, sc_8\}$ encompasses all 8 sub-classes, 29 elements, and 1,108 event types. This level is ideal for precise and comprehensive event analysis. This multi-level granularity enhances $\mathrm{F}^3\mathrm{Set}$'s flexibility for diverse real-world tasks.
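The granularity parameter $G$ is simply a choice of which sub-classes to keep. A minimal sketch of projecting a fully annotated event onto a granularity level (the dictionary layout and the sample values are hypothetical, following the Section 3.1 lexicon):

```python
# Each event maps sub-class name -> value; a granularity level G is just
# the subset of sub-classes retained, as described in Section 3.3.
FULL_EVENT = {
    "sc1": "far-end player", "sc2": "deuce", "sc3": "forehand",
    "sc4": "serve", "sc5": "T", "sc6": "gs", "sc7": None, "sc8": "in",
}

G_LOW = {"sc1", "sc3", "sc4", "sc8"}                 # coarse: 38 event types
G_MID = {"sc1", "sc2", "sc3", "sc4", "sc5", "sc6"}   # finer: 365 event types

def project(event, granularity):
    """Project a fully annotated event onto a chosen granularity level."""
    return {sc: v for sc, v in event.items() if sc in granularity}

coarse = project(FULL_EVENT, G_LOW)
```

The same labels can thus serve several analytics tasks without re-annotation: only the projection set changes.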
126
+
127
+ # 3.4 ETHICAL CONSIDERATIONS
128
+
129
+ $\mathrm{F}^3$ Set is constructed from publicly available sports broadcasts, ensuring compliance with ethical and legal standards. We do not redistribute video content, providing only YouTube links to maintain adherence to copyright policies. The dataset focuses on professional players in public tournaments, avoiding private or off-court data and ensuring it is used strictly for academic research. While anonymization is not applied, as these players are public figures, we emphasize that the dataset should not be used for non-research purposes. A more detailed discussion on privacy, consent, and bias mitigation is provided in Appendix E.

# 4 OUR PROPOSED APPROACH: F$^3$ED

Acknowledging the challenges and limitations of existing approaches, we propose a simple yet effective method named the Fast Frequent Fine-grained Event Detection network (F$^3$ED), illustrated in Figure 4. It is designed for F$^3$ event detection and can serve as a baseline for further development.

Problem formulation. Let $X \in \mathbb{R}^{H \times W \times 3 \times N}$ denote the input, consisting of $N$ RGB frames of size $H \times W$. The output is a sequence of $M$ event-timestamp pairs $((E_1, t_1), \ldots, (E_M, t_M))$, where $E_i$ is the event type with $C$ classes and $t_i$ is the corresponding timestamp for $i \in \{1, \ldots, M\}$. Additionally, each event $E_i$ can also be expressed as a vector $[e_{i,1}, \ldots, e_{i,K}]$, with each element $e_{i,j} \in \{0,1\}$ indicating the presence or absence of the $j^{th}$ element in event $E_i$, for integer $j \in \{1, \ldots, K\}$. The parameter $K$ defines the number of elements in each event vector.
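As a minimal illustration of the formulation, the snippet below encodes an event's element set as the binary vector $[e_{i,1}, \ldots, e_{i,K}]$ over a toy element vocabulary. The vocabulary here is a hypothetical placeholder; $\mathrm{F}^3\mathrm{Set}$ tennis defines $K = 29$ elements.

```python
# Toy element vocabulary (hypothetical; F^3Set tennis uses K = 29 elements).
ELEMENTS = ["far", "near", "deuce", "ad", "fh", "bh", "serve", "stroke", "in", "err"]
K = len(ELEMENTS)

def encode_event(elements_present):
    """Return the K-dim binary vector [e_{i,1}, ..., e_{i,K}] for one event."""
    present = set(elements_present)
    return [1 if e in present else 0 for e in ELEMENTS]

vec = encode_event({"far", "ad", "serve", "in"})
```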

Video Encoder (VE). The first stage of both the baselines and our model extracts spatial-temporal frame-wise features. The video encoder (VE) consists of a visual backbone, followed by a bidirectional GRU to capture long-term visual dependencies: $\mathbf{F}_{emb} = \mathrm{VE}(X)$, with $\mathbf{F}_{emb} \in \mathbb{R}^{N \times d'}$.

Event Localization (LCL). Utilizing the frame-wise features $\mathbf{F}_{emb}$, the event localizer (LCL) employs a fully connected network with a Sigmoid activation function to perform dense binary classification, aiming to accurately identify specific event instances. For an $N$-frame clip, the output is represented as $(\hat{p}_1,\dots ,\hat{p}_N)$, where each $\hat{p}_i$ denotes the probability that an event occurs at the corresponding timestamp: $(\hat{p}_1,\dots ,\hat{p}_N) = \text{Sigmoid}(\mathrm{LCL}(\mathbf{F}_{emb}))$. Ground truth labels $(p_1,\ldots ,p_N)$ with $p_i\in \{0,1\}$ are used to compute the discrepancy between the predicted probabilities and the actual values using the binary cross-entropy loss: $L_{LCL} = -\frac{1}{N}\sum_{i = 1}^{N}p_i\cdot \log (\hat{p}_i) + (1 - p_i)\cdot \log (1 - \hat{p}_i)$.
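The localization loss can be sketched in plain Python as a direct reference implementation of the formula above, operating on raw LCL logits; in practice this would be a vectorized tensor operation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def localization_loss(logits, labels):
    """L_LCL: mean binary cross-entropy over N frame-wise predictions.
    `logits` are raw LCL outputs; `labels` are p_i in {0, 1}."""
    n = len(logits)
    total = 0.0
    for z, p in zip(logits, labels):
        p_hat = sigmoid(z)  # predicted probability that a frame is an event
        total += p * math.log(p_hat) + (1 - p) * math.log(1 - p_hat)
    return -total / n
```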

Multi-label Event Classifier (MLC). Upon detecting events, we proceed to categorize them into specific types using a multi-label classification module (MLC). This module, a fully connected network, takes the identified event features $f_{i}$ from $\mathbf{F}_{emb}$ as inputs to predict the event types: $\hat{E}_i = \text{Sigmoid}(\text{MLC}(f_i)) = [\hat{e}_{i,1},\dots,\hat{e}_{i,K}]$, where $K$ denotes the number of elements, $f_{i}$ represents the features for the event at the $i^{th}$ frame, $\hat{E}_i$ is the predicted event type, and $\hat{e}_{i,j} \in [0,1]$ is the probability of $\hat{E}_i$ containing the $j^{th}$ element. For a video clip with $M$ events, the ground truths are

![](images/9770433b10da7af04cefc1115fcf319e1b694285c63e83da659445c907a5d0eb.jpg)
Figure 4: Overview of $\mathbf{F}^3\mathbf{ED}$. RGB images are processed by VE to capture frame-wise spatial-temporal features, which are passed to LCL to identify event timestamps and MLC to predict labels. Outputs from LCL and MLC are combined ('plus' symbol) to form an event representation sequence, which is refined by the CTX module. 'Red squares' represent errors from purely visual predictions.

given as $(E_1, \ldots, E_M)$, with each $E_i$ represented as a vector of $K$ elements $[e_{i,1}, \ldots, e_{i,K}]$. The loss is $L_{MLC} = -\frac{1}{M}\sum_{i=1}^{M}\left(\frac{1}{K}\sum_{j=1}^{K}e_{i,j}\cdot\log(\hat{e}_{i,j}) + (1-e_{i,j})\cdot\log(1-\hat{e}_{i,j})\right)$.
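A reference sketch of $L_{MLC}$, averaging the element-wise binary cross-entropy over the $K$ elements of each event and then over the $M$ events (plain Python for clarity):

```python
import math

def multilabel_loss(pred, target):
    """L_MLC: pred[i][j] is the predicted probability e_hat_{i,j} in (0, 1);
    target[i][j] is the ground-truth element e_{i,j} in {0, 1}."""
    m = len(pred)
    total = 0.0
    for e_hat, e in zip(pred, target):
        k = len(e_hat)
        # BCE summed over the K elements of one event, then averaged.
        inner = sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                    for p, t in zip(e_hat, e))
        total += inner / k
    return -total / m
```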

Contextual module (CTX). Video encoders often struggle to extract insightful visual features from fast-paced videos due to motion blur, and objects of interest, such as players, may occupy only a small portion of the frame. This can result in the loss of crucial visual details for fine-grained action classification, particularly when resizing images to $224 \times 224$. Naively selecting the best-predicted event types might, therefore, produce invalid event sequences. To address this, we introduce a contextual module (CTX), designed to concurrently learn contextual knowledge from event sequences during end-to-end training: $(\mathbb{E}_1, \ldots, \mathbb{E}_M) = \mathrm{CTX}(\hat{E}_1, \ldots, \hat{E}_M)$. CTX employs a bidirectional GRU to process the predicted event sequence $\hat{E}$ and outputs a refined sequence $\mathbb{E}_i = [\mathbb{E}_{i,1}, \ldots, \mathbb{E}_{i,K}]$, integrating both visual-based predictions and contextual correlations across events. The loss is calculated for each refined event: $L_{CTX} = -\frac{1}{M} \sum_{i=1}^{M} \left( \frac{1}{K} \sum_{j=1}^{K} e_{i,j} \cdot \log(\mathbb{E}_{i,j}) + (1 - e_{i,j}) \cdot \log(1 - \mathbb{E}_{i,j}) \right)$.

# 5 EXPERIMENTS

In this section, we benchmark existing temporal action understanding methods, including TAL, TAS, and TASpot, on the $\mathrm{F}^3\mathrm{Set}$ dataset and conduct a series of ablation studies.

Evaluation metrics. The evaluation metrics used in our work are carefully chosen to comprehensively assess both the temporal precision and classification accuracy of detected events, which are critical for $\mathrm{F}^3$ event detection. These metrics align with evaluation standards in similar tasks. Edit Score [32] measures the similarity between predicted and ground truth event sequences using Levenshtein distance, capturing errors in event sequence structure, such as missing, additional, or misordered events. This metric is particularly valuable for evaluating models where the temporal order and completeness of event sequences are essential [23]. Mean F1 Score with Temporal Tolerance evaluates both classification and temporal localization accuracy [24; 23]. By counting a prediction as correct only when its timestamp falls within a strict temporal tolerance (e.g., $\pm 1$ frame) and its class is correctly identified, this metric ensures that models are assessed on their ability to achieve precise temporal spotting alongside accurate classification. Given the long-tail distribution of event types in the dataset, where some events are extremely rare, we report two variants of the mean F1 score to ensure a balanced evaluation: $F1_{evt}$, the average F1 score across all event types, and $F1_{elm}$, the average F1 score across all elements, which typically presents a more balanced distribution.
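Both metrics can be sketched as follows. This is a simplified re-implementation for illustration (greedy one-to-one matching for the tolerance-based F1, and a standard Levenshtein distance normalized to [0, 100] for the edit score), not the exact evaluation code.

```python
def levenshtein(a, b):
    """Edit distance between two event sequences."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return prev[n]

def edit_score(pred, gt):
    """Normalized sequence similarity in [0, 100]."""
    if not pred and not gt:
        return 100.0
    return 100.0 * (1 - levenshtein(pred, gt) / max(len(pred), len(gt)))

def f1_with_tolerance(pred, gt, tol=1):
    """Greedy matching: a prediction (label, t) is a true positive if an
    unmatched ground-truth event has the same label within +/- tol frames."""
    unmatched = list(gt)
    tp = 0
    for label, t in pred:
        for g in unmatched:
            if g[0] == label and abs(g[1] - t) <= tol:
                unmatched.remove(g)
                tp += 1
                break
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gt) if gt else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```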

Baselines. Existing temporal action understanding frameworks typically incorporate two key components: a video encoder for visual feature extraction and a head module for specific tasks such as detection or segmentation. Applying these models directly to our study presents challenges, as they generally utilize a two-stage training process, employing a static, pre-trained video encoder for feature extraction and training only the head module. This approach often fails to capture fine-grained, domain-specific events due to its reliance on temporally coarse, non-overlapping, or downsampled video segments. To address these limitations, we have adapted these temporal action understanding methods to develop new baselines better suited for detecting $\mathrm{F}^3$ events. Given the rapid pace and short duration of tennis shots, it is crucial to utilize frame-wise feature extraction [7] (discussed in Section 5.2). In addition, end-to-end training with video encoder fine-tuning is required to capture the subtle event differences. Moreover, the classification of some sub-classes (e.g., shot direction, outcome) demands long-term temporal reasoning to integrate information from subsequent frames.

Consequently, we focus on established feature extractors: TSN [64], SlowFast [20], I3D [5], VTN [52], and TSM [35], which enable frame-wise feature extraction and end-to-end training. We pair each encoder with five representative head module architectures from existing methods: MS-TCN [19] and ASFormer [71] from TAS, G-TAD [70] and ActionFormer [72] from TAL, and E2E-Spot [24] from TASpot, to establish a set of new baseline models for our study. To identify hitting moments and their respective event types, frame-wise dense multi-class classification is applied to classify each frame as either background or one of the event types.

Table 3: Experimental results on $\mathrm{F}^{3}$Set (tennis) with 3 levels of granularity. Full table in Appendix G.

<table><tr><td rowspan="2">Video encoder</td><td rowspan="2">Head arch.</td><td colspan="3">F3Set (Ghigh)</td><td colspan="3">F3Set (Gmid)</td><td colspan="3">F3Set (Glow)</td></tr><tr><td>F1evt</td><td>F1elm</td><td>Edit</td><td>F1evt</td><td>F1elm</td><td>Edit</td><td>F1evt</td><td>F1elm</td><td>Edit</td></tr><tr><td rowspan="3">TSN [64]</td><td>MS-TCN [19]</td><td>15.9</td><td>59.8</td><td>53.5</td><td>23.2</td><td>60.9</td><td>65.8</td><td>45.7</td><td>70.4</td><td>72.8</td></tr><tr><td>ActionFormer [72]</td><td>18.4</td><td>60.6</td><td>55.2</td><td>24.8</td><td>61.9</td><td>67.3</td><td>48.7</td><td>70.6</td><td>72.2</td></tr><tr><td>E2E-Spot [24]</td><td>24.7</td><td>65.3</td><td>60.1</td><td>31.5</td><td>66.2</td><td>71.0</td><td>53.5</td><td>73.6</td><td>75.0</td></tr><tr><td rowspan="3">SlowFast [20]</td><td>G-TAD [70]</td><td>23.0</td><td>66.1</td><td>64.0</td><td>29.6</td><td>66.5</td><td>74.2</td><td>53.3</td><td>76.0</td><td>77.9</td></tr><tr><td>ActionFormer [72]</td><td>28.7</td><td>70.0</td><td>67.6</td><td>35.5</td><td>70.9</td><td>76.4</td><td>59.3</td><td>77.1</td><td>81.5</td></tr><tr><td>E2E-Spot [24]</td><td>25.9</td><td>69.4</td><td>65.7</td><td>33.8</td><td>70.4</td><td>75.4</td><td>55.5</td><td>76.5</td><td>79.5</td></tr><tr><td>I3D [5]</td><td>E2E-Spot [24]</td><td>22.7</td><td>59.7</td><td>68.7</td><td>27.1</td><td>60.7</td><td>74.2</td><td>51.9</td><td>67.7</td><td>78.3</td></tr><tr><td>VTN [52]</td><td>E2E-Spot [24]</td><td>14.8</td><td>58.3</td><td>56.7</td><td>20.0</td><td>59.4</td><td>68.2</td><td>39.7</td><td>63.1</td><td>73.1</td></tr><tr><td rowspan="5">TSM [35]</td><td>MS-TCN [19]</td><td>21.7</td><td>67.3</td><td>58.6</td><td>30.4</td><td>69.5</td><td>73.0</td><td>50.2</td><td>74.0</td><td>75.3</td></tr><tr><td>ASformer [71]</td><td>17.6</td><td>61.9</td><td>57.5</td><td>25.5</td><td>64.0</td><td>74.2</td><td>46.0</td><td>72.9</td><td>74.0</td></tr><tr><td>G-TAD [70]</td><td>16.9</td><td>62.5</td><td>55.2</td><td>29.8</td><td>66.9</td><td>74.8</td><td>39.8</td><td>70.1</td><td>67.2</td></tr><tr><td>ActionFormer [72]</td><td>22.4</td><td>65.7</td><td>60.3</td><td>31.0</td><td>68.2</td><td>74.7</td><td>52.4</td><td>73.8</td><td>74.9</td></tr><tr><td>E2E-Spot [24]</td><td>31.4</td><td>71.4</td><td>68.7</td><td>39.5</td><td>72.3</td><td>77.9</td><td>60.6</td><td>78.4</td><td>82.1</td></tr><tr><td>TSM [35]</td><td>F3ED</td><td>40.3</td><td>75.2</td><td>74.0</td><td>48.0</td><td>76.5</td><td>82.4</td><td>68.4</td><td>80.0</td><td>87.2</td></tr></table>

Implementation details. We implement and train models on $\mathrm{F}^3\mathrm{Set}$ in an end-to-end manner. The video encoder takes the video clip $X$, down-scaled and cropped to $224\times 224$, to extract frame-wise visual features. Subsequently, each head module processes the per-frame features to identify a sequence of $\mathrm{F}^3$ events and their timestamps. For more implementation details, please refer to Appendix F.

# 5.1 RESULTS AND ANALYSIS

The evaluation results presented in Table 3 provide several critical insights into the performance of various methods across different levels of granularity ($G_{low}$, $G_{mid}$, and $G_{high}$). A general trend emerges where performance decreases as granularity increases, underscoring the growing challenges associated with finer granularity. While certain methods demonstrate some robustness, the overall efficacy across all approaches remains suboptimal, particularly at higher levels of granularity, indicating the difficulty of the precise $\mathrm{F}^3$ event detection task.

Simple 2D CNNs (e.g., TSN), which process frames independently, are inadequate for $\mathrm{F}^3$ event detection due to their inability to capture critical spatial-temporal correlations between frames. Lacking temporal modeling, they struggle to distinguish visually similar events, resulting in poor performance, especially at higher granularity levels. Advanced video encoders such as I3D [5], SlowFast [20], and the transformer-based VTN [52], which excel in other video understanding tasks, face significant challenges with $\mathrm{F}^3\mathrm{Set}$. These models process video data using techniques like non-overlapping snippets or frame downsampling, resulting in coarse temporal features. While effective for long-duration actions, such approaches struggle to detect the rapid, short-duration events in $\mathrm{F}^3$, which rely on precise temporal cues spanning only 1-2 frames. This suggests that increasing video encoder complexity does not necessarily improve performance for fast-action detection in $\mathrm{F}^3\mathrm{Set}$. Notably, simpler models like TSM, paired with advanced 2D CNNs such as RegNet-Y [57], outperform these complex encoders. This highlights the importance of capturing subtle visual differences over short temporal spans, demonstrating that the ability to extract fine-grained temporal cues is more impactful than model complexity.

Head modules such as the transformer-based ActionFormer and the GRU-based E2E-Spot generally outperform other methods. This advantage highlights their effectiveness in capturing long-term temporal dependencies through end-to-end training. Notably, E2E-Spot consistently outperforms ActionFormer across most settings, suggesting that GRU-based architectures may offer an advantageous trade-off between efficiency and representational power for certain types of temporal correlations.

Our proposed F$^3$ED model, leveraging the TSM video encoder, achieves the best performance across all granularity levels. This is attributable to two key design choices: the multi-label classifier and the contextual module. Detailed discussions of these design elements are presented in the next section.

Table 4: Ablation and analysis experiments. The default model takes stride size 2 and clip length 96.

<table><tr><td rowspan="2">Experiment</td><td colspan="3">F3Set (Ghigh)</td><td colspan="3">F3Set (Gmid)</td><td colspan="3">F3Set (Glow)</td></tr><tr><td>F1evt</td><td>F1elm</td><td>Edit</td><td>F1evt</td><td>F1elm</td><td>Edit</td><td>F1evt</td><td>F1elm</td><td>Edit</td></tr><tr><td>TSM + E2E-Spot</td><td>31.4</td><td>71.4</td><td>68.7</td><td>39.5</td><td>72.3</td><td>77.9</td><td>60.6</td><td>78.4</td><td>82.1</td></tr><tr><td>(a) Feature extractor</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>I3D [5] (clip-wise)</td><td>22.7</td><td>59.7</td><td>68.7</td><td>27.1</td><td>60.7</td><td>74.2</td><td>51.9</td><td>67.7</td><td>78.3</td></tr><tr><td>VTN [52] (video transformer)</td><td>14.8</td><td>58.3</td><td>56.7</td><td>20.0</td><td>59.4</td><td>68.2</td><td>39.7</td><td>63.1</td><td>73.1</td></tr><tr><td>ST-GCN++ [17] (skeleton-based)</td><td>25.4</td><td>62.1</td><td>56.1</td><td>32.4</td><td>63.9</td><td>63.5</td><td>55.1</td><td>69.4</td><td>73.2</td></tr><tr><td>PoseConv3D [18] (skeleton-based)</td><td>20.1</td><td>54.5</td><td>53.2</td><td>26.0</td><td>55.4</td><td>61.9</td><td>48.8</td><td>63.0</td><td>69.7</td></tr><tr><td>(b) Stride size = 4</td><td>25.9</td><td>69.2</td><td>62.7</td><td>33.4</td><td>69.9</td><td>73.0</td><td>60.0</td><td>77.9</td><td>78.8</td></tr><tr><td>Stride size = 8</td><td>14.0</td><td>56.7</td><td>44.3</td><td>18.5</td><td>57.4</td><td>54.8</td><td>40.4</td><td>67.0</td><td>59.2</td></tr><tr><td>(c) without GRU</td><td>27.6</td><td>69.0</td><td>60.6</td><td>38.0</td><td>71.3</td><td>75.3</td><td>54.7</td><td>74.1</td><td>73.4</td></tr><tr><td>(d) Clip Length = 32</td><td>26.3</td><td>67.4</td><td>54.5</td><td>35.5</td><td>69.4</td><td>71.8</td><td>53.2</td><td>75.1</td><td>68.9</td></tr><tr><td>Clip Length = 64</td><td>30.7</td><td>71.2</td><td>67.4</td><td>38.6</td><td>72.4</td><td>77.5</td><td>58.4</td><td>77.9</td><td>81.1</td></tr><tr><td>Clip Length = 192</td><td>29.3</td><td>70.3</td><td>65.7</td><td>37.3</td><td>71.4</td><td>77.0</td><td>58.8</td><td>77.1</td><td>80.4</td></tr><tr><td>(e) Multi-label</td><td>37.9</td><td>74.3</td><td>71.7</td><td>45.9</td><td>75.6</td><td>80.1</td><td>66.6</td><td>80.1</td><td>85.1</td></tr><tr><td>(f) Multi-label + CTX (Transformer)</td><td>39.0</td><td>74.3</td><td>72.8</td><td>50.5</td><td>75.5</td><td>81.8</td><td>63.4</td><td>79.6</td><td>86.8</td></tr><tr><td>Multi-label + CTX (BiGRU)</td><td>40.3</td><td>75.2</td><td>74.0</td><td>48.0</td><td>76.5</td><td>82.4</td><td>68.4</td><td>80.0</td><td>87.2</td></tr></table>

# 5.2 ABLATION STUDY

We selected the highest-performing baseline model (TSM + E2E-Spot) as our default configuration for the subsequent ablation studies. More ablation studies can be found in Appendix H.

Feature extractor. An effective feature extractor is crucial for accurate $\mathrm{F}^3$ event detection. Below, we summarize some key findings (details in Appendix H). (1) Frame-wise feature extraction outperforms clip-wise methods, which divide inputs into non-overlapping segments; our experiments show that clip-wise methods produce temporally coarse features and hinder precise event detection. (2) Transformer-based video encoders such as VTN [52] struggle on $\mathrm{F}^3\mathrm{Set}$ due to high computational costs and a limited ability to capture short-term temporal correlations. (3) In addition to RGB inputs, we also experimented with skeleton-based methods, including ST-GCN++ [17] and PoseConv3D [18], which take human key points as input. While they excel in efficiency and interpretability, they lack critical details such as shot direction, limiting performance on $\mathrm{F}^3\mathrm{Set}$.

Sparse sampling. Increasing the stride size allows for broader temporal coverage within a fixed sequence length. This sparse sampling technique is prevalent in many video understanding tasks [40; 34], offering high efficiency and reasonable accuracy. However, this approach proves inadequate for our task, where events are characterized by their rapid occurrence, frequency, and fine granularity. As illustrated in Table 4(b), increasing the stride size to 4 and 8 leads to a marked decline in performance, underscoring the importance of dense sampling for detecting $\mathrm{F}^3$ events.
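The trade-off is easy to see from the sampled frame indices: with a fixed number of samples, a larger stride covers a longer window but leaves gaps wider than the 1-2 frame hitting moments. A minimal sketch with illustrative numbers:

```python
def sample_indices(num_frames, clip_len, stride):
    """Frame indices visited by a clip of `clip_len` samples at `stride`."""
    return [i * stride for i in range(clip_len) if i * stride < num_frames]

dense = sample_indices(num_frames=200, clip_len=96, stride=2)
sparse = sample_indices(num_frames=200, clip_len=96, stride=8)
# With stride 8, any event shorter than 8 frames can fall entirely
# between two consecutive samples and is never observed.
```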

Long-term temporal reasoning. The default model employs a spatio-temporal video encoder (TSM), complemented by a bidirectional Gated Recurrent Unit (GRU) [13] head for enhanced long-term temporal integration. To assess the necessity of long-term temporal reasoning, we replaced the GRU module with a fully connected layer. The results, presented in Table 4(c), indicate a significant performance decline relative to the original configuration. This finding highlights the essential role of long-term temporal reasoning in analyzing sub-classes such as shot direction, outcomes, and player movements that require information from subsequent frames.

Clip length. The sensitivity of sequence models to varying input clip lengths, which encapsulate different temporal contexts, is notable. In F$^3$Set, the incidence of F$^3$ events correlates directly with clip length. Table 4(d) shows that shorter clips result in fewer events per sequence, hindering the model's ability to leverage long-term dependencies among consecutive events effectively. Conversely, while longer clip lengths yield improved results, the marginal gains diminish with increasing length.

Multi-class versus multi-label classification. The challenge of modeling over 1,000 possible event type combinations as a multi-class classification problem is formidable. For example, consider two events, $E_{1}$ (far_ad_bh_stroke_DL_slice_apr_in) and $E_{2}$ (far_ad_bh_stroke_DL_drop_apr_in), which differ only in shot technique (slice vs. drop). Although similar, multi-class classification treats these as distinct classes, thus reducing training efficiency and exacerbating the long-tail distribution bias towards more frequent classes. A more natural approach is multi-label classification, where each event can belong to multiple sub-class elements (e.g., ['far', 'ad', 'serve', 'W', 'in']). Thus, $E_1$ and $E_2$ differ only in shot technique but are identical in all other aspects. This adjustment facilitates more effective training and improves performance, as shown in Table 4(e).

Table 5: Experimental results on other "semi-F$^{3}$" datasets.

<table><tr><td rowspan="2">Head arch.</td><td colspan="2">ShuttleSet [66]</td><td colspan="2">FineDiving [69]</td><td colspan="2">FineGym [58]</td><td colspan="2">SoccerNetV2 [11]</td><td colspan="2">CCTV-Pipe [42]</td></tr><tr><td>F1evt</td><td>Edit</td><td>F1evt</td><td>Edit</td><td>F1evt</td><td>Edit</td><td>F1evt</td><td>Edit</td><td>F1evt</td><td>Edit</td></tr><tr><td>MS-TCN [19]</td><td>70.3</td><td>74.4</td><td>65.7</td><td>92.2</td><td>57.6</td><td>65.3</td><td>43.4</td><td>74.5</td><td>25.8</td><td>31.3</td></tr><tr><td>ASformer [71]</td><td>55.9</td><td>70.6</td><td>49.9</td><td>87.6</td><td>53.6</td><td>66.3</td><td>46.3</td><td>76.1</td><td>15.4</td><td>33.4</td></tr><tr><td>G-TAD [70]</td><td>48.2</td><td>61.1</td><td>52.1</td><td>82.6</td><td>45.8</td><td>51.4</td><td>42.3</td><td>72.3</td><td>31.3</td><td>33.6</td></tr><tr><td>ActionFormer [72]</td><td>62.1</td><td>67.5</td><td>68.3</td><td>92.4</td><td>54.0</td><td>59.7</td><td>43.0</td><td>64.6</td><td>18.8</td><td>29.5</td></tr><tr><td>E2E-Spot [24]</td><td>70.2</td><td>75.0</td><td>75.8</td><td>93.7</td><td>62.1</td><td>65.4</td><td>46.2</td><td>72.9</td><td>27.2</td><td>35.2</td></tr><tr><td>F3ED</td><td>70.7</td><td>77.1</td><td>77.6</td><td>95.1</td><td>70.9</td><td>70.7</td><td>48.1</td><td>76.6</td><td>37.0</td><td>39.5</td></tr></table>
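A toy count makes the multi-class versus multi-label contrast concrete. The per-sub-class element counts below are illustrative placeholders (they sum to the 29 elements of $\mathrm{F}^3\mathrm{Set}$ tennis but are not the actual taxonomy; the paper reports 1,108 valid event types after impossible combinations are excluded):

```python
# Illustrative element counts for sub-classes sc1..sc8 (placeholders that
# sum to 29; not the actual F^3Set taxonomy).
elements_per_subclass = [2, 2, 2, 5, 4, 6, 4, 4]

# Multi-class head: one output per combination of elements.
multiclass_outputs = 1
for n in elements_per_subclass:
    multiclass_outputs *= n

# Multi-label head: one Sigmoid output per element, shared across events.
multilabel_outputs = sum(elements_per_subclass)
```

The multi-label head keeps the output space linear in the number of elements, so rare combinations still share training signal through their common elements.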

Contextual knowledge. Beyond the statistical results in Table 3, analysis of predicted event sequences reveals that current baselines may produce invalid sequences due to logical errors or uncommon practices. For instance, a right-handed player cannot logically direct a forehand shot from the deuce court as "II" or "IO". Similarly, an event ending in a winner or error should logically conclude the sequence. Additionally, it is uncommon for a player to hit a backhand when the ball is played to their forehand side. Further examples are detailed in Appendix I. These observations indicate that existing baselines fail to effectively capture event-wise contextual correlations. By adding the CTX module, performance further increases, as shown in Table 4(f). We also compared BiGRU and Transformer encoder variants for the CTX module. BiGRU performed slightly better, likely due to its efficiency in modeling short event sequences (usually $< 20$ per clip) with fewer parameters.
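The CTX module learns such constraints implicitly from data, but two of the rules mentioned above can be written down explicitly as a sanity check on predicted sequences. This is a simplified, hypothetical rule set for illustration; the field names are placeholders, not the dataset schema.

```python
def violations(sequence):
    """Check two contextual rules from the text (simplified sketch):
    1) an event whose outcome is a winner or error must end the rally;
    2) consecutive shots must alternate court side ('far' vs 'near')."""
    problems = []
    for i, evt in enumerate(sequence):
        if evt["outcome"] in {"winner", "error"} and i != len(sequence) - 1:
            problems.append((i, "terminal outcome mid-sequence"))
        if i > 0 and evt["side"] == sequence[i - 1]["side"]:
            problems.append((i, "same player hit twice in a row"))
    return problems

rally = [
    {"side": "far", "outcome": "in"},
    {"side": "near", "outcome": "winner"},  # valid: winner ends the rally
]
bad = [
    {"side": "far", "outcome": "winner"},   # winner but rally continues
    {"side": "far", "outcome": "in"},       # same side twice in a row
]
```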

# 5.3 GENERALIZABILITY TO "SEMI-F$^3$" DATA

The $\mathrm{F}^3$ task possesses broad applicability across numerous real-world domains, such as sports, autonomous driving, surveillance, and production line inspection. Nevertheless, creating an $\mathrm{F}^3$ dataset necessitates substantial expertise and extensive labeling effort. We have found that existing video datasets often fail to fully address all three dimensions of the $\mathrm{F}^3$ task: "fast", "frequent", and "fine-grained". In this section, we conducted experiments on several "semi-$\mathrm{F}^3$" datasets that partially meet these criteria, including ShuttleSet [66] for badminton (racket sport), FineDiving [69] for diving (individual sports), FineGym [58] for gymnastics (individual sports), SoccerNetV2 [11] (team sports), and CCTV-Pipe [42] for pipe defect detection (industrial application). We report only the $\mathrm{F1}_{evt}$ and Edit score, as not all datasets necessitate multi-label classification given their limited event types. For the video encoder, we chose TSM, which consistently outperforms the others on average.

Performance across different domains can vary significantly depending on the difficulty of the tasks and the scale of the datasets. For instance, the CCTV-Pipe dataset, targeting temporal defect localization in urban pipe systems, shows suboptimal performance due to factors such as ambiguous single-frame annotations for each defect, multiple defects occurring at the same time, a long-tailed distribution of defect types, and limited dataset size. Our performance is nevertheless better than the results reported in [42]. Generally, methods that effectively handle $\mathrm{F}^3\mathrm{Set}$ tend to perform well across other applications, as indicated in Table 5. Our $\mathrm{F}^3\mathrm{ED}$ outperforms existing baselines on all datasets, demonstrating its robust generalizability for detecting "semi-$\mathrm{F}^3$" events across various domains. While $\mathrm{F}^3$ event detection benefits from accurate event localization, a high-performing LCL module is not a hard prerequisite (see Appendix J). Therefore, our method can be generalized to benefit broader applications.

# 6 CONCLUSION AND FUTURE WORK

In this study, we addressed the challenge of analyzing fast, frequent, and fine-grained ($\mathrm{F}^3$) events from videos by introducing $\mathrm{F}^3\mathrm{Set}$, a benchmark for precise temporal $\mathrm{F}^3$ event detection. $\mathrm{F}^3\mathrm{Set}$ datasets feature detailed event types (approximately 1,000), annotated with precise timestamps, and provide multi-level granularity. We have also developed a general annotation toolchain that enables domain experts to create $\mathrm{F}^3$ datasets, thereby facilitating further research in this field. Moreover, we proposed $\mathrm{F}^3\mathrm{ED}$, an end-to-end model that effectively detects complex event sequences from videos using a combination of visual features and contextual sequence refinement. Our comprehensive evaluations and ablation studies of leading methods in temporal action understanding on $\mathrm{F}^3\mathrm{Set}$ highlighted their performance and provided critical insights into their capabilities and limitations. Moving forward, we aim to extend the scope of the $\mathrm{F}^3$ task to more real-world scenarios and advance the development of $\mathrm{F}^3$ video understanding.

# ACKNOWLEDGMENTS

This research is supported by AI Singapore (AISG3-RP-2022-030). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the funding bodies.

# REFERENCES
219
+
220
+ [1] Pyscenedetect. https://www.scenedetect.com/.
221
+ [2] Nadine Behrmann, S Alireza Golestaneh, Zico Kolter, Juergen Gall, and Mehdi Noroozi. Unified fully and timestamp supervised temporal action segmentation via sequence to sequence translation. In European Conference on Computer Vision, pp. 52-68. Springer, 2022.
222
+ [3] Jiang Bian, Xuhong Li, Tao Wang, Qingzhong Wang, Jun Huang, Chen Liu, Jun Zhao, Feixiang Lu, Dejing Dou, and Haoyi Xiong. P2anet: A large-scale benchmark for dense action detection from table tennis match broadcasting videos. ACM Transactions on Multimedia Computing, Communications and Applications, 20(4):1-23, 2024.
223
+ [4] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 961-970, 2015.
224
+ [5] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In The IEEE Conference on Computer Vision and Pattern Recognition, pp. 6299-6308, 2017.
225
+ [6] Yu-Wei Chao, Sudheendra Vijayanarasimhan, Bryan Seybold, David A Ross, Jia Deng, and Rahul Sukthankar. Rethinking the faster r-cnn architecture for temporal action localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1130–1139, 2018.
226
+ [7] Minghao Chen, Fangyun Wei, Chong Li, and Deng Cai. Frame-wise action representations for long videos via sequence contrastive learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13801-13810, 2022.
227
+ [8] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
228
+ [9] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, et al. Scaling egocentric vision: The epic-kitchens dataset. In Proceedings of the European conference on computer vision (ECCV), pp. 720-736, 2018.
229
+ [10] Tom Decroos, Lotte Bransen, Jan Van Haaren, and Jesse Davis. Actions speak louder than goals: Valuing player actions in soccer. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pp. 1851-1861, 2019.
230
+ [11] Adrien Deliege, Anthony Cioppa, Silvio Giancola, Meisam J Seikavandi, Jacob V Dueholm, Kamal Nasrollahi, Bernard Ghanem, Thomas B Moeslund, and Marc Van Droogenbroeck. Soccernet-v2: A dataset and benchmarks for holistic understanding of broadcast soccer videos. In The IEEE/CVF conference on computer vision and pattern recognition, pp. 4508-4519, 2021.
231
+ [12] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. IEEE, 2009.
232
+ [13] Rahul Dey and Fathi M Salem. Gate-variants of gated recurrent unit (gru) neural networks. In 2017 IEEE 60th international midwest symposium on circuits and systems (MWSCAS), pp. 1597–1600. IEEE, 2017.
233
+
234
+ [14] Guodong Ding, Fadime Sener, and Angela Yao. Temporal action segmentation: An analysis of modern techniques. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.
235
+ [15] Jin Song Dong, Ling Shi, Kan Jiang, Jing Sun, et al. Sports strategy analytics using probabilistic reasoning. In 2015 20th International Conference on Engineering of Complex Computer Systems (ICECCS), pp. 182-185. IEEE, 2015.
236
+ [16] Jin Song Dong, Kan Jiang, Zhaoyu Liu, Chen Dong, Zhe Hou, Rajdeep Singh Hundal, Jingyu Guo, and Yun Lin. Sports analytics using probabilistic model checking and deep learning. In 2023 27th International Conference on Engineering of Complex Computer Systems (ICECCS), pp. 7-11. IEEE, 2023.
237
+ [17] Haodong Duan, Jiaqi Wang, Kai Chen, and Dahua Lin. Pyskl: Towards good practices for skeleton action recognition. In Proceedings of the 30th ACM International Conference on Multimedia, pp. 7351-7354, 2022.
238
+ [18] Haodong Duan, Yue Zhao, Kai Chen, Dahua Lin, and Bo Dai. Revisiting skeleton-based action recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2969-2978, 2022.
239
+ [19] Yazan Abu Farha and Jurgen Gall. Ms-tcn: Multi-stage temporal convolutional network for action segmentation. In The IEEE/CVF conference on computer vision and pattern recognition, pp. 3575-3584, 2019.
240
+ [20] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In The IEEE/CVF international conference on computer vision, pp. 6202-6211, 2019.
241
+ [21] Panna Felsen, Pulkit Agrawal, and Jitendra Malik. What will happen next? forecasting player moves in sports videos. In Proceedings of the IEEE international conference on computer vision, pp. 3342-3351, 2017.
+ [22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+ [23] Yuchen He, Zeqing Yuan, Yihong Wu, Liqi Cheng, Dazhen Deng, and Yingcai Wu. Vistec: Video modeling for sports technique recognition and tactical analysis. arXiv preprint arXiv:2402.15952, 2024.
+ [24] James Hong, Haotian Zhang, Michael Gharbi, Matthew Fisher, and Kayvon Fatahalian. Spotting temporally precise, fine-grained events in video. In European Conference on Computer Vision, pp. 33-51. Springer, 2022.
+ [25] Xinyu Huang, Xinjing Cheng, Qichuan Geng, Binbin Cao, Dingfu Zhou, Peng Wang, Yuanqing Lin, and Ruigang Yang. The apolloscape dataset for autonomous driving. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 954-960, 2018.
+ [26] Mostafa S Ibrahim, Srikanth Muralidharan, Zhiwei Deng, Arash Vahdat, and Greg Mori. A hierarchical deep temporal model for group activity recognition. In The IEEE conference on computer vision and pattern recognition, pp. 1971-1980, 2016.
+ [27] Haroon Idrees, Amir R Zamir, Yu-Gang Jiang, Alex Gorban, Ivan Laptev, Rahul Sukthankar, and Mubarak Shah. The thumos challenge on action recognition for videos "in the wild". Computer Vision and Image Understanding, 155:1-23, 2017.
+ [28] Kan Jiang, Masoumeh Izadi, Zhaoyu Liu, and Jin Song Dong. Deep learning application in broadcast tennis video annotation. In 2020 25th International Conference on Engineering of Complex Computer Systems (ICECCS), pp. 53-62. IEEE, 2020.
+ [29] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
+
+ [30] Hilde Kuehne, Juergen Gall, and Thomas Serre. An end-to-end generative framework for video segmentation and recognition. In Proc. IEEE Winter Applications of Computer Vision Conference (WACV 16), Lake Placid, Mar 2016.
+ [31] Colin Lea, René Vidal, and Gregory D Hager. Learning convolutional action primitives for fine-grained action recognition. In 2016 IEEE international conference on robotics and automation (ICRA), pp. 1642-1649. IEEE, 2016.
+ [32] Colin Lea, Michael D Flynn, Rene Vidal, Austin Reiter, and Gregory D Hager. Temporal convolutional networks for action segmentation and detection. In The IEEE Conference on Computer Vision and Pattern Recognition, pp. 156-165, 2017.
+ [33] Zhe Li, Yazan Abu Farha, and Jurgen Gall. Temporal action segmentation from timestamp supervision. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8365-8374, 2021.
+ [34] Chuming Lin, Chengming Xu, Donghao Luo, Yabiao Wang, Ying Tai, Chengjie Wang, Jilin Li, Feiyue Huang, and Yanwei Fu. Learning salient boundary feature for anchor-free temporal action localization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3320-3329, 2021.
+ [35] Ji Lin, Chuang Gan, and Song Han. Tsm: Temporal shift module for efficient video understanding. In The IEEE/CVF international conference on computer vision, pp. 7083-7093, 2019.
+ [36] Kevin Lin, Linjie Li, Chung-Ching Lin, Faisal Ahmed, Zhe Gan, Zicheng Liu, Yumao Lu, and Lijuan Wang. Swinbert: End-to-end transformers with sparse attention for video captioning. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17949-17958, 2022.
+ [37] Tianwei Lin, Xiao Liu, Xin Li, Errui Ding, and Shilei Wen. Bmn: Boundary-matching network for temporal action proposal generation. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 3889-3898, 2019.
+ [38] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36, 2024.
+ [39] Meng Liu, Liqiang Nie, Yunxiao Wang, Meng Wang, and Yong Rui. A survey on video moment localization. ACM Computing Surveys, 55(9):1-37, 2023.
+ [40] Xiaolong Liu, Song Bai, and Xiang Bai. An empirical study of end-to-end temporal action detection. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20010-20019, 2022.
+ [41] Yi Liu, Limin Wang, Yali Wang, Xiao Ma, and Yu Qiao. Fineaction: A fine-grained video dataset for temporal action localization. IEEE transactions on image processing, 31:6937-6950, 2022.
+ [42] Yi Liu, Xuan Zhang, Ying Li, Guixin Liang, Yabing Jiang, Lixia Qiu, Haiping Tang, Fei Xie, Wei Yao, Yi Dai, et al. Videopipe 2022 challenge: Real-world video understanding for urban pipe inspection. In 2022 26th International Conference on Pattern Recognition (ICPR), pp. 4967-4973. IEEE, 2022.
+ [43] Zhaoyu Liu, Jingyu Guo, Mo Wang, Ruicong Wang, Kan Jiang, and Jin Song Dong. Recognizing a sequence of events from tennis video clips: Addressing timestep identification and subtle class differences. In 2023 IEEE 28th Pacific Rim International Symposium on Dependable Computing (PRDC), pp. 337-341. IEEE, 2023.
+ [44] Zhaoyu Liu, Kan Jiang, Zhe Hou, Yun Lin, and Jin Song Dong. Insight analysis for tennis strategy and tactics. In 2023 IEEE International Conference on Data Mining (ICDM), pp. 1175-1180. IEEE, 2023.
+
+ [45] Zhaoyu Liu, Chen Dong, Chen Wang, Tian Yu Dong, and Kan Jiang. Exploring team strategy dynamics in tennis doubles matches. In International Sports Analytics Conference and Exhibition, pp. 104-115. Springer, 2024.
+ [46] Zhaoyu Liu, Murad Durrani, Leong Yu Xuan, Julian-Frederik Simon, and Tan Yong Feng Deon. Strategy analysis in nfl using probabilistic reasoning. In International Sports Analytics Conference and Exhibition, pp. 116-128. Springer, 2024.
+ [47] Zhaoyu Liu, Murong Ma, Kan Jiang, Zhe Hou, Ling Shi, and Jin Song Dong. Pcsp# denotational semantics with an application in sports analytics. In The Application of Formal Methods: Essays Dedicated to Jim Woodcock on the Occasion of His Retirement, pp. 71–102. Springer, 2024.
+ [48] Zhaoyu Liu, Chen Dong, Jia Wei Chen, Alvin Min Jun Jiang, Guanzhou Chen, Aayan Faraz Shaikh, Tian Yu Dong, Chen Wang, Kan Jiang, and Jin Song Dong. Analyzing the formation strategy in tennis doubles game. SN Computer Science, 6(2):100, 2025.
+ [49] Huaishao Luo, Lei Ji, Botian Shi, Haoyang Huang, Nan Duan, Tianrui Li, Jason Li, Taroon Bharti, and Ming Zhou. Univl: A unified video and language pre-training model for multimodal understanding and generation. arXiv preprint arXiv:2002.06353, 2020.
+ [50] Hassan Mkhallati, Anthony Cioppa, Silvio Giancola, Bernard Ghanem, and Marc Van Droogenbroeck. Soccernet-caption: Dense video captioning for soccer broadcasts commentaries. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5073-5084, 2023.
+ [51] Jonathan Munro and Dima Damen. Multi-modal domain adaptation for fine-grained action recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 122-132, 2020.
+ [52] Daniel Neimark, Omri Bar, Maya Zohar, and Dotan Asselmann. Video transformer network. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 3163-3172, 2021.
+ [53] Sangmin Oh, Anthony Hoogs, Amitha Perera, Naresh Cuntoor, Chia-Chih Chen, Jong Taek Lee, Saurajit Mukherjee, JK Aggarwal, Hyungtae Lee, Larry Davis, et al. A large-scale benchmark dataset for event recognition in surveillance video. In CVPR 2011, pp. 3153-3160. IEEE, 2011.
+ [54] OpenAI. Gpt-4 technical report, 2023.
+ [55] Luca Pappalardo, Paolo Cintia, Paolo Ferragina, Emanuele Massucco, Dino Pedreschi, and Fosca Giannotti. Playerank: data-driven performance evaluation and player ranking in soccer via a machine learning approach. ACM Transactions on Intelligent Systems and Technology (TIST), 10(5):1-27, 2019.
+ [56] Wenjie Pei, Jiyuan Zhang, Xiangrong Wang, Lei Ke, Xiaoyong Shen, and Yu-Wing Tai. Memory-attended recurrent network for video captioning. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8347–8356, 2019.
+ [57] Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Dollár. Designing network design spaces. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10428-10436, 2020.
+ [58] Dian Shao, Yue Zhao, Bo Dai, and Dahua Lin. Finegym: A hierarchical video dataset for fine-grained action understanding. In The IEEE/CVF conference on computer vision and pattern recognition, pp. 2616-2625, 2020.
+ [59] Dingfeng Shi, Yujie Zhong, Qiong Cao, Lin Ma, Jia Li, and Dacheng Tao. Tridet: Temporal action detection with relative boundary modeling. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18857-18866, 2023.
+ [60] Zheng Shou, Dongang Wang, and Shih-Fu Chang. Temporal action localization in untrimmed videos via multi-stage cnns. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1049-1058, 2016.
+
+ [61] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
+ [62] Roman Voeikov, Nikolay Falaleev, and Ruslan Baikulov. Ttnet: Real-time temporal and spatial video analysis of table tennis. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 884-885, 2020.
+ [63] Bairui Wang, Lin Ma, Wei Zhang, and Wei Liu. Reconstruction network for video captioning. In The IEEE conference on computer vision and pattern recognition, pp. 7622-7631, 2018.
+ [64] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks for action recognition in videos. IEEE transactions on pattern analysis and machine intelligence, 41(11):2740-2755, 2018.
+ [65] Wei-Yao Wang, Hong-Han Shuai, Kai-Shiang Chang, and Wen-Chih Peng. Shuttlenet: Position-aware fusion of rally progress and player styles for stroke forecasting in badminton. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 4219-4227, 2022.
+ [66] Wei-Yao Wang, Yung-Chang Huang, Tsi-Ui Ik, and Wen-Chih Peng. Shuttleset: A human-annotated stroke-level singles dataset for badminton tactical analysis. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 5126-5136, 2023.
+ [67] Zhenzhi Wang, Ziteng Gao, Limin Wang, Zhifeng Li, and Gangshan Wu. Boundary-aware cascade networks for temporal action segmentation. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXV 16, pp. 34-51. Springer, 2020.
+ [68] Dekun Wu, He Zhao, Xingce Bao, and Richard P Wildes. Sports video analysis on large-scale data. In European Conference on Computer Vision, pp. 19-36. Springer, 2022.
+ [69] Jinglin Xu, Yongming Rao, Xumin Yu, Guangyi Chen, Jie Zhou, and Jiwen Lu. Finediving: A fine-grained dataset for procedure-aware action quality assessment. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2949-2958, 2022.
+ [70] Mengmeng Xu, Chen Zhao, David S Rojas, Ali Thabet, and Bernard Ghanem. G-tad: Sub-graph localization for temporal action detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10156-10165, 2020.
+ [71] Fangqiu Yi, Hongyu Wen, and Tingting Jiang. Asformer: Transformer for action segmentation. arXiv preprint arXiv:2110.08568, 2021.
+ [72] Chen-Lin Zhang, Jianxin Wu, and Yin Li. Actionformer: Localizing moments of actions with transformers. In European Conference on Computer Vision, pp. 492-510. Springer, 2022.
ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc65e08ddc8b5f1c7cc975b40abf602d78946ab5c76244d1fd49fc8a2abc1544
+ size 508089
ICLR/2025/$F^3Set$_ Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:038d5d906411872d62534e6e1bc73dcf63c883607ca011496a200cd7f5318731
+ size 517390
ICLR/2025/ReGenesis_ LLMs can Grow into Reasoning Generalists via Self-Improvement/0c89f697-687d-41b7-a130-75534656ad65_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e524e07ee35dff45318ee3854a6a9ff414be646149fd8896fc432b5b423bc048
+ size 210893
ICLR/2025/ReGenesis_ LLMs can Grow into Reasoning Generalists via Self-Improvement/0c89f697-687d-41b7-a130-75534656ad65_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:43534797dad9fa8759a772b9fe88f38aa62fe3c9634da69c2957167902493595
+ size 257255
ICLR/2025/ReGenesis_ LLMs can Grow into Reasoning Generalists via Self-Improvement/0c89f697-687d-41b7-a130-75534656ad65_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:032be7779a55d71f5bb4c7900121fbdfc41e4de47014968c0a794d7545cb8ccc
+ size 1631984
ICLR/2025/ReGenesis_ LLMs can Grow into Reasoning Generalists via Self-Improvement/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ICLR/2025/ReGenesis_ LLMs can Grow into Reasoning Generalists via Self-Improvement/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c96f0e8280a53cc912e84823fe6de2f1f910eaa2a1473d3edbb5fd90d1f6c289
+ size 2606898
ICLR/2025/ReGenesis_ LLMs can Grow into Reasoning Generalists via Self-Improvement/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4689feb2535e45bf52d194499a00763bfb2b8f540299eb15937142c3bd2a4f38
+ size 972359
ICLR/2025/Reasoning Elicitation in Language Models via Counterfactual Feedback/dcb10e07-9632-4319-b754-2801acc182b7_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a3877b7035f61abcee7465b2eef8a5fb4d44d7c5848b92d6027e3b28db8908a2
+ size 225037
ICLR/2025/Reasoning Elicitation in Language Models via Counterfactual Feedback/dcb10e07-9632-4319-b754-2801acc182b7_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c66e47bf5e6cc38c63dc3135052868e838e2c367012c0ff3d41eea3d07b54819
+ size 281606
ICLR/2025/Reasoning Elicitation in Language Models via Counterfactual Feedback/dcb10e07-9632-4319-b754-2801acc182b7_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b38b8c6dcfcf4e361e287ad122875b9ead7b612d2c29411964b261e8657bd220
+ size 6497733
ICLR/2025/Reasoning Elicitation in Language Models via Counterfactual Feedback/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ICLR/2025/Reasoning Elicitation in Language Models via Counterfactual Feedback/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc282364a92f53d05a245df537188c904bb33e203094c6d2258f21e04788f888
+ size 1561575
ICLR/2025/Reasoning Elicitation in Language Models via Counterfactual Feedback/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:622393658470766bd9a1ce6142b281225db253c904d50669f339cc4bc108bf7b
+ size 1323647
ICLR/2025/Representation Alignment for Generation_ Training Diffusion Transformers Is Easier Than You Think/592f505a-86ec-4bfc-849e-2e1a62f76390_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3eed9d7aa1532cbbd590611a6cd7c27ca681296387430d5f787bb73649d27026
+ size 196801
ICLR/2025/Representation Alignment for Generation_ Training Diffusion Transformers Is Easier Than You Think/592f505a-86ec-4bfc-849e-2e1a62f76390_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:11f375353320d0274fb04b3785ea44f55008095406140bff09013098b17e4282
+ size 244488
ICLR/2025/Representation Alignment for Generation_ Training Diffusion Transformers Is Easier Than You Think/592f505a-86ec-4bfc-849e-2e1a62f76390_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f30070fc5308875d409d6ae2076b2c8e801a82a49300ca48020efa2d1e5d1439
+ size 34875449
ICLR/2025/Representation Alignment for Generation_ Training Diffusion Transformers Is Easier Than You Think/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ICLR/2025/Representation Alignment for Generation_ Training Diffusion Transformers Is Easier Than You Think/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60a2d91c69ee87ca5f20a2957e93e46b92a8de8d16c2dcc6435edcef9c42c020
+ size 4612000
ICLR/2025/Representation Alignment for Generation_ Training Diffusion Transformers Is Easier Than You Think/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d4cfe5ea2915bb846bb15b0dcdf0840326e147e5cf85d60ef2d7680e8da7cce
+ size 1107795
ICLR/2025/Residual Deep Gaussian Processes on Manifolds/531f0646-0657-4be8-8520-e4cf167b7bf2_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b5d8ed028d298baf7e299ba8295fb0b12c8d161b3ea047b9cf7dce3b2f46b503
+ size 149150
ICLR/2025/Residual Deep Gaussian Processes on Manifolds/531f0646-0657-4be8-8520-e4cf167b7bf2_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:09ba5edbb564e2d1256891e2e55ab070fc0c56d6f8b848d684dc6433757d36b4
+ size 177466
ICLR/2025/Residual Deep Gaussian Processes on Manifolds/531f0646-0657-4be8-8520-e4cf167b7bf2_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:65183897ae4cd877502095d25264b727caa289e08431bca8a15d397cab518607
+ size 9261041
ICLR/2025/Residual Deep Gaussian Processes on Manifolds/full.md ADDED
@@ -0,0 +1,627 @@
+ # RESIDUAL DEEP GAUSSIAN PROCESSES ON MANIFOLDS
+
+ Kacper Wyrwal
+
+ ETH Zürich
+
+ University of Edinburgh
+
+ Andreas Krause
+
+ ETH Zürich
+
+ Viacheslav Borovitskiy
+
+ ETH Zürich
+
+ # ABSTRACT
+
+ We propose practical deep Gaussian process models on Riemannian manifolds, similar in spirit to residual neural networks. With manifold-to-manifold hidden layers and an arbitrary last layer, they can model manifold- and scalar-valued functions, as well as vector fields. We target data inherently supported on manifolds, which is too complex for shallow Gaussian processes thereon. For example, while the latter perform well on high-altitude wind data, they struggle with the more intricate, nonstationary patterns at low altitudes. Our models significantly improve performance in these settings, enhancing prediction quality and uncertainty calibration, and remain robust to overfitting, reverting to shallow models when additional complexity is unneeded. We further showcase our models on Bayesian optimisation problems on manifolds, using stylised examples motivated by robotics, and obtain substantial improvements in later stages of the optimisation process. Finally, we show our models to have potential for speeding up inference for nonmanifold data, when, and if, it can be mapped to a proxy manifold well enough.
+
+ # 1 INTRODUCTION
+
+ Gaussian processes (GPs) are a widely adopted model class for learning functions within the Bayesian framework (Rasmussen and Williams, 2006). They offer accurate uncertainty estimates and perform well even when data is scarce. Consequently, GPs have found success in decision-making tasks, where well-calibrated uncertainty is key, including Bayesian optimisation (Snoek et al., 2012), active (Krause et al., 2008) and reinforcement (Kamthe and Deisenroth, 2018) learning.
+
+ In recent years, substantial work went into developing the analogues of practical GP models on various non-Euclidean domains (Borovitskiy et al., 2021; 2023; 2020; Fichera et al., 2023). By virtue of being geometry-aware, these analogues have demonstrated improved performance in a variety of tasks on non-Euclidean spaces. Their notable applications include Bayesian optimisation on manifolds for robotics (Jaquier et al., 2022), traffic flow interpolation on road networks (Borovitskiy et al., 2021), and wind velocity prediction on the globe (Hutchinson et al., 2021; Robert-Nicoud et al., 2024). These models were also used to speed up inference for Euclidean GPs by transferring data to a hypersphere and leveraging the attractive structure GPs thereon possess (Dutordoir et al., 2020).
+
+ Despite their advantages, GPs can sometimes fall short in modelling complex, irregular functions. To address this, deep Gaussian processes have been introduced as a sequential composition of GPs (Damianou and Lawrence, 2013), providing improved flexibility through their layered structure (Dai et al., 2016; Mattos et al., 2016). Many techniques developed for shallow GPs, such as variational inference (Salimbeni and Deisenroth, 2017) and efficient sampling techniques (Wilson et al., 2020), can be adapted for deep GPs, enabling them to be efficiently trained and deployed on large datasets. This scalability is vital when dealing with data with complex, irregular patterns.
+
+ Deep Gaussian processes have demonstrated success in handling complex data of Euclidean nature, often competing with Bayesian neural networks and deep ensembles. However, there has been limited work on expressive uncertainty-quantifying models on manifolds beyond shallow GPs. This gap leads to the natural question: how can we construct deep Gaussian processes on manifolds?
+
+ By analogy with the Euclidean case, a deep GP on a manifold should be a composition of GP layers which take inputs and produce outputs on the manifold of interest. While significant advancements have been made in handling manifold-input GPs, outputs on manifolds conflict with the fundamental
+
+
+ ![](images/b2cdcfbea842d71866db5adeee1d714fabf7301b718817fd0872a1f8a181d4d4.jpg)
+ Figure 1: Schematic illustration of a scalar-valued residual deep GP with $L$ hidden layers. The last layer is a scalar-valued GP on the manifold. If it is not present, the model is manifold-valued. If it is replaced with a Gaussian vector field (GVF), the model is a vector field on the manifold.
+
+ concept of a GP, which dictates that outputs must be Gaussian and thus Euclidean. Designing GPs with inputs or outputs on a manifold is challenging; with both, it is even more so.
+
+ Our solution to this problem is in part inspired by residual neural networks; thus we term our models residual deep Gaussian processes. Instead of constructing a manifold-to-manifold GP layer directly, we represent it as a Gaussian vector field (GVF) combined with an exponential map. The former represents a displacement vector, a deviation from the identity map, or a residual, while the latter translates the input by the given displacement vector. The mean of a layer is always the output of the previous layer. We visualise this architecture in Figure 1, with the sphere as the manifold of interest. Notably, by changing the last layer only, one can get manifold-valued or vector-valued deep GPs. As Section 3 will show, our residual deep Gaussian processes generalise the deep GP architecture of Salimbeni and Deisenroth (2017), perhaps the most successful architecture in the Euclidean case.
+
+ We examine residual deep GPs through synthetic and real-world experiments, demonstrating our models' superior performance over shallow geometry-aware GPs on tasks where complex data inherently lies on a manifold. Additionally, we show that our models offer prospective avenues for accelerating inference for inherently Euclidean data in the context of deep GPs. Our main focus is on hypersphere manifolds $\mathbb{S}_d$ , due to their importance in key applications such as climate modelling and robotics, as well as their particularly simple structure that allows for more powerful and specialised GVFs (Robert-Nicoud et al., 2024). However, the applicability of our model extends to all Riemannian manifolds, including ones represented by meshes, indicating an even broader potential.
+
+ # 2 BACKGROUND
+
+ Mathematically, a Gaussian process (GP) is a real-valued random function $f$ whose marginals are jointly Gaussian. The same term is also used for the respective distribution over functions. For such $f$ there always exist a mean $\mu \colon X \to \mathbb{R}$ on the input domain $X$ of $f$ and a kernel $k \colon X \times X \to \mathbb{R}$ such that $f(\pmb{x}) \sim \mathcal{N}(\mu(\pmb{x}), k(\pmb{x}, \pmb{x}))$ for all finite $\pmb{x} \subseteq X$. In this case, we write $f \sim \mathcal{GP}(\mu, k)$.
+
+ GPs define useful priors for learning functions from noisy observations $\pmb{y} \in \mathbb{R}^n$ at given input locations $\pmb{x} \subseteq X$ within the Bayesian framework. In fact, if the observation likelihood is assumed Gaussian $\pmb{y} \mid f(\pmb{x}) \sim \mathcal{N}(f(\pmb{x}), \pmb{\Sigma})$ , then the posterior is a GP (Rasmussen and Williams, 2006) with
+
+ $$
+ \mu_ {f | \boldsymbol {y}} (\cdot) = \mu (\cdot) + k (\cdot , \boldsymbol {x}) (k (\boldsymbol {x}, \boldsymbol {x}) + \boldsymbol {\Sigma}) ^ {- 1} (\boldsymbol {y} - \mu (\boldsymbol {x})), \tag {1}
+ $$
+
+ $$
+ k _ {f | \boldsymbol {y}} (\cdot , \cdot^ {\prime}) = k (\cdot , \cdot^ {\prime}) - k (\cdot , \boldsymbol {x}) (k (\boldsymbol {x}, \boldsymbol {x}) + \boldsymbol {\Sigma}) ^ {- 1} k (\boldsymbol {x}, \cdot^ {\prime}). \tag {2}
+ $$
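As a concrete illustration of equations (1) and (2), the posterior mean and covariance reduce to a few lines of linear algebra. The sketch below is generic, not code from the paper; the squared-exponential kernel, the toy inputs, and the noise level are illustrative choices.

```python
import numpy as np

def se_kernel(A, B, lengthscale=0.5, variance=1.0):
    """Squared exponential kernel, an illustrative stationary choice."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, kernel, noise_var):
    """Posterior mean and covariance of a zero-mean GP, following eqs. (1)-(2)."""
    K = kernel(X, X) + noise_var * np.eye(len(X))  # k(x, x) + Sigma
    Ks = kernel(Xs, X)                             # k(., x)
    mean = Ks @ np.linalg.solve(K, y)              # eq. (1) with mu = 0
    cov = kernel(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)  # eq. (2)
    return mean, cov

X = np.array([[0.0], [1.0], [2.0]])
y = np.sin(X).ravel()
Xs = np.array([[0.5], [1.5]])
mean, cov = gp_posterior(X, y, Xs, se_kernel, noise_var=1e-4)
```

Conditioning can only shrink the marginal variances relative to the prior, which is a quick sanity check on the covariance returned by eq. (2).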
+
+ The input domain $X$ can, in principle, be any set. However, a good kernel $k\colon X\times X\to \mathbb{R}$ is necessary to define practical GP models on $X$ . If $X = \mathbb{R}^d$ , the most widely used kernels are from the Matérn family (Rasmussen and Williams, 2006), including, as a limiting case, the especially popular squared exponential kernel. These kernels are attractive because they implement two natural inductive biases: (1) model behaviour should not change under any symmetries of the data—translations, in the case of $\mathbb{R}^d$ —and (2) the unknown function possesses a certain degree of smoothness. It turns out that the Matérn family can be generalised to various non-Euclidean domains $X$ in such a way that it still implements (1) and (2). We now discuss such a generalisation to Riemannian manifolds.
+
+ <sup>1</sup> Nevertheless, some practical heuristics for this exist in the literature (Mallasto and Feragen, 2018).
+ <sup>2</sup> This kernel has many names; it is also known as the RBF, Gaussian, heat, or diffusion kernel.
+ <sup>3</sup> For more details, see Azangulov et al. (2024), Borovitskiy et al. (2020), and Hutchinson et al. (2021).
+
+ # 2.1 GAUSSIAN PROCESSES ON RIEMANNIAN MANIFOLDS
+
+ A principled way of generalising the family of Matérn kernels to Riemannian manifolds was proposed by Lindgren et al. (2011) based on the ideas dating back to Whittle (1963). Borovitskiy et al. (2020) showed that the resulting kernels can be represented as the following infinite series
+
+ $$
+ k _ {\nu , \kappa , \sigma^ {2}} (x, x ^ {\prime}) = \frac {\sigma^ {2}}{C _ {\nu , \kappa}} \sum_ {j = 0} ^ {\infty} \Phi_ {\nu , \kappa} (\lambda_ {j}) \phi_ {j} (x) \phi_ {j} \left(x ^ {\prime}\right), \quad \Phi_ {\nu , \kappa} (\lambda) = \left\{ \begin{array}{l l} \left(\frac {2 \nu}{\kappa^ {2}} + \lambda\right) ^ {- \nu - \frac {d}{2}} & \nu < \infty \\ e ^ {- \frac {\kappa^ {2}}{2} \lambda} & \nu = \infty \end{array} \right. \tag {3}
+ $$
+
+ where $-\lambda_{j},\phi_{j}$ are the eigenpairs of the Laplace-Beltrami operator on $X$, $d$ is the dimension of $X$, and $C_{\nu ,\kappa}$ is a normalisation constant ensuring $\frac{1}{\mathrm{vol}X}\int_Xk_{\nu ,\kappa ,\sigma^2}(x,x)\mathrm{d}x = \sigma^2$. The infinite sum must be truncated for computational tractability; nevertheless, the rapid decay of the coefficients $\Phi_{\nu ,\kappa}(\lambda_j)$ makes this a sensible approximation with convergence guarantees (Rosa et al., 2023).
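To make the truncated series (3) concrete, the sketch below evaluates it on the circle (the one-dimensional sphere), where the Laplace-Beltrami eigenvalues are $\lambda_j = j^2$ and each cosine/sine eigenfunction pair collapses to $\cos(j(\theta - \theta'))$. This is an illustrative implementation under those assumptions, not the paper's code; hyperparameter values and the truncation level `J` are arbitrary.

```python
import numpy as np

def matern_kernel_circle(theta, theta_prime, nu=1.5, kappa=0.5, sigma2=1.0, J=200):
    """Truncated Matern kernel, eq. (3), on the circle.

    On the circle, the Laplace-Beltrami eigenpairs are lambda_j = j^2 with
    cos/sin eigenfunctions, and each cos/sin pair sums to cos(j (theta - theta')).
    """
    d = 1                                        # dimension of the circle
    j = np.arange(J + 1)
    Phi = (2 * nu / kappa**2 + j**2) ** (-nu - d / 2)     # finite-nu branch of (3)
    diff = np.subtract.outer(theta, theta_prime)          # pairwise angle differences
    k = Phi[0] + 2 * np.sum(Phi[1:, None, None] * np.cos(j[1:, None, None] * diff), axis=0)
    C = Phi[0] + 2 * np.sum(Phi[1:])                      # normalisation: k(x, x) = sigma2
    return sigma2 * k / C

thetas = np.linspace(0, 2 * np.pi, 8, endpoint=False)
K = matern_kernel_circle(thetas, thetas)
```

Because the coefficients $\Phi_{\nu,\kappa}(\lambda_j)$ are positive, the truncated cosine series remains a valid (positive semi-definite) kernel at any truncation level.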
+
+ Such models were used, e.g., in robotics (Jaquier et al., 2022; 2024) and medical (Coveney et al., 2020) applications. Their vector field counterparts, to which we return later in the paper, were used for modelling wind velocities on the globe (Hutchinson et al., 2021; Robert-Nicoud et al., 2024).
+
+ These models tend to perform well, especially when data is scarce and uncertainty quantification is crucial, but they can struggle to capture complex irregular patterns. One potential way to improve on this is to consider deep Gaussian processes, which—in the Euclidean case—we now review.
+
+ # 2.2 DEEP GAUSSIAN PROCESSES AND APPROXIMATE INFERENCE
+
+ A deep Gaussian process $F$ is a composition $F = f^{L} \circ \dots \circ f^{1}$ of multiple shallow GPs $f^{l}$ (Damianou and Lawrence, 2013). To allow for richer structure, the hidden layers $f^{l}$ , for $1 \leq l \leq L - 1$ , are typically vector-valued GPs $f^{l} \colon \mathbb{R}^{d} \to \mathbb{R}^{d}$ , i.e. vectors of scalar-valued GPs stacked together and potentially correlated with each other (Álvarez et al., 2012). The resulting random function $F$ is itself not a GP. Thus, even for Gaussian likelihoods $p(\boldsymbol{y} \mid F(\boldsymbol{x}))$ , inference for $F \mid \boldsymbol{y}$ is intractable.
+
+ To overcome this, various approximate inference techniques for deep GPs were proposed, perhaps the most popular being doubly stochastic variational inference (Salimbeni and Deisenroth, 2017). In it, the intractable posterior $F \mid \mathbf{y}$ is approximated, in terms of the KL divergence $D_{\mathrm{KL}}$, by the elements of a certain variational family of tractable distributions. This family itself consists of deep GPs whose layers are sparse GPs (Hensman et al., 2013; Titsias, 2009), which we now discuss.
+
+ Sparse GPs were originally proposed as a variational family for approximating shallow GPs. For them, approximate inference helps scale to big datasets or accommodate non-Gaussian likelihoods. Take some $f \sim \mathcal{GP}(\mu, k)$ . A sparse GP $f_{z,m,S}$ is a family of GPs parameterised by a set of $m$ inducing locations $z \subseteq X$ , as well as a mean vector $m \in \mathbb{R}^m$ and a covariance matrix $\mathbf{S} \in \mathbb{R}^{m \times m}$ which determine the corresponding inducing variable distribution $q(\boldsymbol{u}) = \mathcal{N}(\boldsymbol{m}, \mathbf{S})$ . Specifically,
85
+
86
+ $$
87
+ p \left(f _ {\boldsymbol {z}, \boldsymbol {m}, \boldsymbol {S}} (\cdot)\right) = \mathbb {E} _ {\boldsymbol {u} \sim q (\boldsymbol {u})} p (f (\cdot) \mid \boldsymbol {u}, z), \quad q (\boldsymbol {u}) = \mathcal {N} (\boldsymbol {m}, \boldsymbol {S}), \tag {4}
88
+ $$
89
+
90
+ where $p(f(\cdot) \mid u, z)$ is the prior $f$ conditioned on $f(z) = u$ . Intuitively, $z$ are the pseudo-inputs, $u$ are random pseudo-observations, and $p(f_{z,m,S}(\cdot))$ is a kind of pseudo-posterior. It is a GP with
91
+
92
+ $$
93
+ \mu_ {z, m, S} (\cdot) = \mu (\cdot) + k (\cdot , z) k (z, z) ^ {- 1} (m - \mu (z)), \tag {5}
94
+ $$
95
+
96
+ $$
97
+ k _ {z, \boldsymbol {m}, \boldsymbol {S}} (\cdot , \cdot^ {\prime}) = k (\cdot , \cdot^ {\prime}) - k (\cdot , \boldsymbol {z}) k (\boldsymbol {z}, \boldsymbol {z}) ^ {- 1} (k (\boldsymbol {z}, \boldsymbol {z}) - \mathbf {S}) k (\boldsymbol {z}, \boldsymbol {z}) ^ {- 1} k (\boldsymbol {z}, \cdot^ {\prime}). \tag {6}
98
+ $$
99
+
100
+ These can be readily generalised to the vector-valued setting, with $\mathbf{m} \in \mathbb{R}^{md}$ and $\mathbf{S} \in \mathbb{R}^{md \times md}$ .
101
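To make Equations (5) and (6) concrete, the following minimal NumPy sketch computes the pseudo-posterior moments of a sparse GP for one-dimensional inputs; the squared-exponential kernel and the zero prior mean are illustrative assumptions, not choices made in the text.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    # Squared-exponential kernel matrix k(a, b) for 1-D input arrays.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def sparse_gp_moments(x, z, m, S, kernel=rbf_kernel, jitter=1e-8):
    # Mean (Eq. 5) and covariance (Eq. 6) of the sparse GP f_{z,m,S}
    # at test inputs x, assuming a zero prior mean mu = 0.
    Kzz = kernel(z, z) + jitter * np.eye(len(z))
    A = kernel(x, z) @ np.linalg.inv(Kzz)       # k(., z) k(z, z)^{-1}
    mean = A @ m                                # Eq. (5) with mu = 0
    cov = kernel(x, x) - A @ (Kzz - S) @ A.T    # Eq. (6)
    return mean, cov

# Sanity check: with q(u) equal to the prior at z (m = 0, S = k(z, z)),
# the sparse GP reduces to the prior GP.
x = np.linspace(-1.0, 1.0, 5)
z = np.array([-0.5, 0.5])
mean, cov = sparse_gp_moments(x, z, np.zeros(2), rbf_kernel(z, z))
print(np.allclose(mean, 0.0), np.allclose(cov, rbf_kernel(x, x), atol=1e-6))  # → True True
```

The sanity check illustrates why the family contains the prior: setting $\boldsymbol{m} = \mu(\boldsymbol{z})$ and $\mathbf{S} = k(\boldsymbol{z}, \boldsymbol{z})$ makes the correction terms in Equations (5) and (6) vanish.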
For a deep GP, the respective variational family is the distribution of a composition of sparse GPs:

$$
F_{\boldsymbol{\theta}} = f_{\boldsymbol{z}^{L},\boldsymbol{m}^{L},\mathbf{S}^{L}} \circ \dots \circ f_{\boldsymbol{z}^{1},\boldsymbol{m}^{1},\mathbf{S}^{1}}, \quad \boldsymbol{\theta} = \left\{\boldsymbol{z}^{l}, \boldsymbol{m}^{l}, \mathbf{S}^{l}\right\}_{l=1}^{L}. \tag{7}
$$

The variational parameters $\boldsymbol{\theta}$ are found by minimising $D_{\mathrm{KL}}(p(F_{\boldsymbol{\theta}}) \parallel p(F \mid \boldsymbol{y}))$, which is equivalent (Salimbeni and Deisenroth, 2017) to maximising the following evidence lower bound (ELBO)

$$
\operatorname{ELBO} = \sum_{i=1}^{n} \mathbb{E}_{F(x_{i}) \sim p\left(F_{\boldsymbol{\theta}}(x_{i})\right)} \log p\left(y_{i} \mid F(x_{i})\right) - \sum_{l=1}^{L} D_{\mathrm{KL}}\left(q\left(\boldsymbol{u}^{l}\right) \,\|\, p\left(\boldsymbol{u}^{l}\right)\right), \tag{8}
$$

where $q(\boldsymbol{u}^l) = \mathcal{N}(\boldsymbol{m}^l, \mathbf{S}^l)$ and $p(\boldsymbol{u}^l) = \mathcal{N}(\mu^l(\boldsymbol{z}^l), k^l(\boldsymbol{z}^l, \boldsymbol{z}^l))$. The second term can be computed exactly using the formula for $D_{\mathrm{KL}}$ between two Gaussian vectors. The first term is intractable, but can be efficiently approximated by drawing a sample from $p(F_{\boldsymbol{\theta}})$, which can be done in a layerwise fashion, as Salimbeni and Deisenroth (2017) suggest, and subsampling the sum over a mini-batch of inputs $\boldsymbol{x}$. This way, optimisation proceeds by stochastic gradient descent.
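Since both $q(\boldsymbol{u}^l)$ and $p(\boldsymbol{u}^l)$ are Gaussian, each KL term in Equation (8) has a closed form. A minimal NumPy sketch of that standard formula:

```python
import numpy as np

def gaussian_kl(m_q, S_q, m_p, S_p):
    # KL(N(m_q, S_q) || N(m_p, S_p)): the exact form of each
    # D_KL(q(u^l) || p(u^l)) term in Eq. (8).
    d = len(m_q)
    Sp_inv = np.linalg.inv(S_p)
    diff = m_p - m_q
    _, logdet_p = np.linalg.slogdet(S_p)
    _, logdet_q = np.linalg.slogdet(S_q)
    return 0.5 * (np.trace(Sp_inv @ S_q) + diff @ Sp_inv @ diff
                  - d + logdet_p - logdet_q)

# The term vanishes exactly when q(u) matches the prior p(u).
S = np.array([[2.0, 0.5], [0.5, 1.0]])
m = np.array([0.3, -0.7])
print(abs(gaussian_kl(m, S, m, S)) < 1e-12)   # → True
```

In practice one would work with Cholesky factors for numerical stability; the explicit inverse above keeps the sketch close to the written formula.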
# 3 RESIDUAL DEEP GAUSSIAN PROCESSES ON MANIFOLDS

In this section, we introduce the new model class of residual deep Gaussian processes on manifolds. It generalises the notion of deep GPs to Riemannian manifolds, allowing for the modelling of scalar- and vector-valued functions, vector fields, and functions taking values in the input manifold itself.

# 3.1 THE ARCHITECTURE

Let $X$ be a Riemannian manifold. The key challenge in building a deep Gaussian process $F$ on $X$ is finding a practical notion of manifold-to-manifold GPs $f^l$ to serve as its hidden layers:

$$
F = f^{L} \circ f^{L-1} \circ \dots \circ f^{2} \circ f^{1}, \quad f^{l} \colon X \to X \text{ for } 1 \leq l \leq L - 1, \tag{9}
$$

where, to simplify exposition, we assume that the last layer $f^L$ is real-valued, although it can just as well be $X$-valued, vector-valued, or it can be a vector field, depending on the problem at hand.

While building a GP with inputs in $X$ amounts to finding an appropriate kernel $k \colon X \times X \to \mathbb{R}$, handling outputs in $X$ requires redefining the inherently Euclidean notion of a Gaussian. We aim to circumvent this difficulty. To explain how, we start by considering the popular Euclidean deep GP architecture of Salimbeni and Deisenroth (2017). There, each layer $f^l \colon \mathbb{R}^d \to \mathbb{R}^d$ is of the form

$$
f^{l}(x) = x + g^{l}(x), \tag{10}
$$

where $g^{l}$ is a zero-mean GP. That is, each layer $f^{l}$ displaces its input $x$ by a residual vector $g^{l}(x) = f^{l}(x) - x$, the difference to the identity transform, modelled by a GP, much like a residual connection in neural networks (He et al., 2016). On a manifold $X \neq \mathbb{R}^d$, when $x \in X$ and $g^{l}(x) \in \mathbb{R}^{d}$, the addition operation in $f^{l}(x) = x + g^{l}(x)$ is undefined. However, there is a natural generalisation:

$$
f^{l}(x) = \exp_{x}\left(g^{l}(x)\right). \tag{11}
$$

Here, $\exp_x \colon T_xX \to X$ is the exponential map, the canonical mapping of the tangent space $T_{x}X$ at $x \in X$ (i.e., the linear space of vectors tangent to $X$ at the point $x$) back to $X$ itself. That is, a point $x$ is still displaced by the vector $g^{l}(x)$, but in a geometrically sound manner.
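On the unit sphere, which recurs as an example throughout the paper, the exponential map has a well-known closed form: follow the great circle through $x$ in the direction of the tangent vector $v$ for a distance of $\|v\|$. A small NumPy sketch (the sphere is an illustrative choice; other manifolds have their own exponential maps):

```python
import numpy as np

def sphere_exp(x, v, eps=1e-12):
    # Exponential map exp_x(v) on the unit sphere S^{D-1} embedded in R^D:
    # rotate x along the great circle in the direction of the tangent
    # vector v, through an angle of ||v||.
    norm = np.linalg.norm(v)
    if norm < eps:
        return x
    return np.cos(norm) * x + np.sin(norm) * (v / norm)

# Starting at the north pole and moving a quarter turn along the
# tangent direction (1, 0, 0) lands on the equator at (1, 0, 0).
x = np.array([0.0, 0.0, 1.0])
v = np.array([np.pi / 2, 0.0, 0.0])
y = sphere_exp(x, v)
print(np.allclose(y, [1.0, 0.0, 0.0]))   # → True
```

Note that the output is always a point of the sphere again, which is exactly the property Equation (11) relies on.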
The beauty of Equation (11) is that it reduces modelling $f^l$ to modelling $g^l$. The latter is vector-valued, and thus compatible with the traditional notion of a Gaussian, making the problem conceptually much simpler. Still, a major technical difficulty remains: for different inputs $x$, the value $g^l(x)$ must lie in the different spaces $T_xX$. The mappings behaving like this are called vector fields, and their random Gaussian counterparts are called Gaussian vector fields, which we proceed to discuss.

# 3.2 KEY BUILDING BLOCKS: GAUSSIAN VECTOR FIELDS

A vector field on a manifold $X$ is a function that takes each $x \in X$ to an element of the tangent space $T_{x}X$ of $X$ at the point $x$. If $X$ is a submanifold of $\mathbb{R}^D$, as the 2-sphere is a submanifold of $\mathbb{R}^3$, the difference between a vector field and a general vector-valued function on $X$ is very intuitive: in the latter, a vector attached to a point $x \in X$ can point in any direction, while in the former it must always be tangential to the manifold $X$ at $x$. This difference can be seen in Figure 2a, which features a vector-valued function on the left and an actual vector field on the right.

A Gaussian vector field (GVF) can thus be thought of as a vector-valued GP whose outputs always happen to be tangential vectors. This notion is rigorously formalised in the appendices of Hutchinson et al. (2021). However, for simplicity, we do not dwell on the formalism here. Instead, we proceed to discuss three practicable GVF constructions that have been put forward in recent research.

Projected GVFs Hutchinson et al. (2021) propose a simple idea: build a GVF $g$ from any given vector-valued GP $h \colon X \subset \mathbb{R}^D \to \mathbb{R}^D$ by projecting its outputs onto the appropriate tangent spaces. Such a projection $P_{(\cdot)} \colon \mathbb{R}^D \to T_{(\cdot)}X$ exists because, if $X$ is a submanifold of $\mathbb{R}^D$, then any tangent space $T_{(\cdot)}X$ can be identified with a linear subspace of $\mathbb{R}^D$. Thus, $g(x) = P_x h(x)$ defines a random vector field (see Figure 2a), which turns out to be Gaussian because of the linearity of $P_{(\cdot)}$.
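For the unit sphere embedded in $\mathbb{R}^D$, the projection takes the simple form $P_x h = h - \langle x, h\rangle x$. A minimal sketch of the construction $g(x) = P_x h(x)$, with a plain normal draw standing in for a sample from the GP $h$:

```python
import numpy as np

def project_to_tangent(x, h):
    # P_x h = h - <x, h> x: orthogonal projection of the ambient vector
    # h in R^D onto the tangent space of the unit sphere at x.
    return h - np.dot(x, h) * x

# Any ambient vector becomes tangential after projection.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
x /= np.linalg.norm(x)          # a point on the sphere
h = rng.normal(size=3)          # stand-in for a draw from the GP h at x
g = project_to_tangent(x, h)
print(np.isclose(np.dot(x, g), 0.0))   # → True
```

Because the projection is linear, applying it pointwise to a Gaussian process preserves Gaussianity, which is the crux of the construction.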
![](images/a254a5f3e1d73bc6e40d3e95c115f9ba55199088ad6a798d62afa298034ad974.jpg)
(a) Projected GVF

![](images/fe7dc67bfb900a1ae69ad22e72b5c62b5c0bc6f5b14d7a659e1504f6524346cc.jpg)
(b) Coordinate frame GVF

![](images/e034dafe114a8d3cb372f59bfa62615eb21efa61e1a6483f13d22fe2e4a7a426.jpg)
(c) Hodge GVF

Figure 2: Gaussian vector field constructions on the sphere. In (b), orange vectors depict the frame.

Coordinate-frame-based GVFs Given any coordinate frame $\{e_i\}_{i=1}^d$, that is, a set of functions such that $\{e_i(\cdot)\}_{i=1}^d$ is a linear basis of $T_{(\cdot)}X$, and a vector-valued GP $h \colon X \to \mathbb{R}^d$ with components $h_i \colon X \to \mathbb{R}$, the equation $g(x) = \sum_{i=1}^d h_i(x) e_i(x)$ defines a GVF. This is shown in Figure 2b.
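The circle $\mathbb{S}_1$ admits a global smooth frame (unlike $\mathbb{S}_2$, by the hairy ball theorem mentioned in Section 4.3), which makes for a compact illustration of the construction; the scalar value standing in for a sample of $h(x)$ is an assumption of the sketch:

```python
import numpy as np

def frame_gvf_circle(x, h_val):
    # Coordinate-frame construction on the circle S^1 in R^2: the single
    # frame field e(x) = (-x_2, x_1) spans T_x S^1, so a scalar GP value
    # h(x) yields the tangent vector g(x) = h(x) e(x).
    e = np.array([-x[1], x[0]])
    return h_val * e

x = np.array([1.0, 0.0])        # a point on the circle
g = frame_gvf_circle(x, 2.5)    # pretend h(x) = 2.5 was sampled from a GP
print(np.isclose(np.dot(x, g), 0.0), np.linalg.norm(g))   # → True 2.5
```

On manifolds of dimension $d > 1$ the same recipe applies with $d$ frame fields and a $d$-dimensional GP $h$, though a globally smooth frame need not exist.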
Hodge GVFs Most recently, Robert-Nicoud et al. (2024) extended the generalisation of Matérn GPs from Section 2.1 to the setting of vector fields on compact manifolds. They derive an analogue of Equation (3), representing the respective Hodge Matérn kernels, as an infinite series<sup>5</sup>

$$
\boldsymbol{k}_{\nu,\kappa,\sigma^{2}}(x, x') = \frac{\sigma^{2}}{C_{\nu,\kappa}} \sum_{j=0}^{\infty} \Phi_{\nu,\kappa}(\lambda_{j})\, s_{j}(x) \otimes s_{j}(x'). \tag{12}
$$

Here, $s_j$ are the eigenfields of the Hodge Laplacian on $X$ that correspond to the eigenvalues $-\lambda_j$, $\otimes$ is the tensor product, and $\Phi_{\nu,\kappa}$ is exactly as in Equation (3). This family of kernels can be made more expressive by using different hyperparameters $\sigma^2$, $\kappa$, and $\nu$ for different types of eigenfields $s_j$: the pure-divergence $s_j$, the pure-curl $s_j$, and the harmonic $s_j$. The result is called Hodge-compositional Matérn kernels (Robert-Nicoud et al., 2024; Yang et al., 2023). For example, they can represent the inductive bias of divergence-free vector fields, i.e. vector fields having no "sinks" or "sources", like the wind velocity field at certain altitudes. This can be seen in Figure 2c.

The first two constructions are universal: by choosing an appropriate $h$, one can obtain any possible GVF. Although this might seem advantageous, it is also a major curse, as it is often unclear which particular $h$ to take to get good inductive biases. What is more, simple choices, such as $h$ with IID components, may lead to undesirable artefacts (Robert-Nicoud et al., 2024). The third construction, on the other hand, is canonical, in the same way as the Matérn family is canonical in the scalar Euclidean case, and it is based on the same simple and natural inductive biases.

Hodge GVFs seem to be the most attractive building blocks for deep GPs. However, although they are generally applicable in theory, Robert-Nicoud et al. (2024) only provide practical expressions for the eigenfields $s_j$ when $X$ is the circle $\mathbb{S}_1$, the 2-sphere $\mathbb{S}_2$, or a finite product of those. Thus, for manifolds that go beyond this simple form, other GVF constructions have to be used. With this, we finish introducing our models, and proceed to discuss Bayesian inference for them.

# 3.3 INFERENCE

Like their Euclidean counterparts, residual deep GPs constitute complex non-Gaussian priors, making exact Bayesian inference with them impossible. However, the doubly stochastic variational inference approach described in Section 2.2 is applicable to them after a few adjustments we detail below. What is more, for compact manifolds, the approach can be further modified to use certain interdomain inducing variables, which, as Section 4 shows, tends to offer superior performance.

Doubly stochastic variational inference Consider the analogue of the variational family $p(F_{\boldsymbol{\theta}})$ in Equation (7) with the sparse GP layers $f_{\boldsymbol{z}^l,\boldsymbol{m}^l,\mathbf{S}^l}$ replaced by

$$
f_{\boldsymbol{z}^{l},\boldsymbol{m}^{l},\mathbf{S}^{l}}(\cdot) = \exp_{(\cdot)}\left(g_{\boldsymbol{z}^{l},\boldsymbol{m}^{l},\mathbf{S}^{l}}(\cdot)\right), \tag{13}
$$

where $g_{\boldsymbol{z}^l,\boldsymbol{m}^l,\mathbf{S}^l}$ are sparse Gaussian vector fields. Again, intuitively and in practice, treating the manifold $X$ as a submanifold of $\mathbb{R}^D$, GVFs can be thought of, and worked with, as a special kind of vector-valued GPs. For $g_{\boldsymbol{z}^l,\boldsymbol{m}^l,\mathbf{S}^l}$, however, $\boldsymbol{z}^{l} = (z_{1}^{l},\dots,z_{m}^{l})$ with inducing locations $z_{j}^{l} \in X$, and

$$
\boldsymbol{m}^{l} = \left(\boldsymbol{m}_{1}^{l}, \dots, \boldsymbol{m}_{m}^{l}\right), \quad \boldsymbol{m}_{j}^{l} \in T_{z_{j}^{l}}X, \quad \operatorname{im}\left(\mathbf{S}^{l}\right) \subseteq T_{z_{1}^{l}}X \times \dots \times T_{z_{m}^{l}}X, \tag{14}
$$

where $\operatorname{im}$ denotes the image of a linear operator, and the last two constraints ensure that the random pseudo-observations $\boldsymbol{u} \sim \mathcal{N}(\boldsymbol{m}, \mathbf{S})$ are tangent vectors which lie in the appropriate tangent spaces.

To satisfy the aforementioned constraints during optimisation, one can represent

$$
\boldsymbol{m}^{l} = P_{\boldsymbol{z}^{l}} \widetilde{\boldsymbol{m}}^{l}, \quad \mathbf{S}^{l} = P_{\boldsymbol{z}^{l}} \widetilde{\mathbf{S}}^{l} P_{\boldsymbol{z}^{l}}^{\top}, \quad P_{\boldsymbol{z}^{l}} = P_{z_{1}^{l}} \oplus \dots \oplus P_{z_{m}^{l}}, \tag{15}
$$

with arbitrary $\widetilde{\boldsymbol{m}}^l \in \mathbb{R}^{mD}$, arbitrary positive semi-definite $\widetilde{\mathbf{S}}^l \in \mathbb{R}^{mD \times mD}$, and $P_{z} \colon \mathbb{R}^{D} \to T_{z}X$ denoting the projection of vectors in $\mathbb{R}^D$ onto the tangent space $T_{z}X$, as discussed in Section 3.2.
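For the unit sphere, where each block is $P_{z_j} = \mathbf{I} - z_j z_j^{\top}$, the block-diagonal reparameterisation of Equation (15) can be sketched as follows; the random unconstrained parameters are placeholders:

```python
import numpy as np

def blockdiag_tangent_projector(zs):
    # P_z = P_{z_1} (+) ... (+) P_{z_m} from Eq. (15), for the unit
    # sphere, with blocks P_{z_j} = I - z_j z_j^T.
    m, D = zs.shape
    P = np.zeros((m * D, m * D))
    for j, z in enumerate(zs):
        P[j * D:(j + 1) * D, j * D:(j + 1) * D] = np.eye(D) - np.outer(z, z)
    return P

rng = np.random.default_rng(1)
zs = rng.normal(size=(2, 3))
zs /= np.linalg.norm(zs, axis=1, keepdims=True)   # inducing locations on S^2
P = blockdiag_tangent_projector(zs)
m_tilde = rng.normal(size=6)                      # unconstrained parameter
m_vec = P @ m_tilde                               # constrained mean, Eq. (15)
# Each D-dimensional chunk of m is tangent to the corresponding z_j.
print(all(np.isclose(zs[j] @ m_vec[3*j:3*j+3], 0.0) for j in range(2)))  # → True
```

The covariance is handled the same way, $\mathbf{S} = P \widetilde{\mathbf{S}} P^{\top}$, so gradient-based optimisation can act freely on $\widetilde{\boldsymbol{m}}$ and $\widetilde{\mathbf{S}}$ while the constraints of Equation (14) hold by construction.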
Instead of using such representations, one can fix a (locally) smooth frame and optimise the coefficients of $\boldsymbol{m}^l$ and $\mathbf{S}^l$ represented in this frame. In any case, one needs to make sure the $z_j^l$ always remain on the manifold $X$ during optimisation, which can be done by using specialised libraries, such as PYMANOPT (Townsend et al., 2016) or GEOOPT (Kochurov et al., 2020). For low-dimensional manifolds, a fixed grid on $X$ or a set of cluster centroids can be an effective alternative to optimising $z_j^l$.

Finally, to approximate the ELBO, we need to sample $F(x_{i}) \sim p(F_{\boldsymbol{\theta}}(x_{i}))$. As in Salimbeni and Deisenroth (2017), this can be done sequentially. Specifically, if $\hat{F}(x_{i})$ denotes the desired sample, then

$$
\hat{F}\left(x_{i}\right) = \hat{f}_{i}^{L}, \quad \hat{f}_{i}^{l} = \exp_{\hat{f}_{i}^{l-1}}\left(g_{\boldsymbol{z}^{l},\boldsymbol{m}^{l},\mathbf{S}^{l}}\left(\hat{f}_{i}^{l-1}\right)\right) \text{ for } 1 \leq l \leq L, \quad \hat{f}_{i}^{0} = x_{i}, \tag{16}
$$

and, given $\hat{f}_i^{l-1}$, each individual $g_{\boldsymbol{z}^l,\boldsymbol{m}^l,\mathbf{S}^l}(\hat{f}_i^{l-1})$ can be sampled in the usual manner.
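The recursion of Equation (16) can be sketched as follows, with each layer represented by a hypothetical callable returning the marginal mean and covariance of its Gaussian vector field at the current point, and the sphere chosen for illustration:

```python
import numpy as np

def sphere_exp(x, v, eps=1e-12):
    # Exponential map on the unit sphere (see Section 3.1).
    n = np.linalg.norm(v)
    return x if n < eps else np.cos(n) * x + np.sin(n) * (v / n)

def sample_residual_deep_gp(x0, layers, rng):
    # Layerwise sampling of Eq. (16): each layer maps the current point
    # to the marginal mean and covariance of its GVF there; we draw a
    # vector, project it onto the tangent space, and apply exp.
    x = x0
    for layer in layers:
        mean, cov = layer(x)
        g = rng.multivariate_normal(mean, cov)
        g = g - np.dot(x, g) * x          # enforce tangency at x
        x = sphere_exp(x, g)
    return x

# With degenerate (zero-mean, zero-covariance) layers every residual
# vanishes, so the composition is the identity.
rng = np.random.default_rng(2)
trivial = [lambda x: (np.zeros(3), np.zeros((3, 3)))] * 3
x0 = np.array([0.0, 0.0, 1.0])
print(np.allclose(sample_residual_deep_gp(x0, trivial, rng), x0))   # → True
```

In a real implementation the marginals come from Equations (5) and (6) with the parameters of Equations (13) to (15), and the whole recursion is differentiated through for stochastic gradient training.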
Interdomain inducing variables on manifolds On compact manifolds, an alternative variational family can be used that speeds up inference and can often lead to better predictive performance. It is constructed by replacing the inducing locations $\boldsymbol{z}$ by inducing linear functionals $\zeta = (\zeta_{1},\dots,\zeta_{m})$. Each $\zeta_{j}$ takes in a vector field $g$ and outputs a real number. These $\zeta$ define a sparse GVF through

$$
p\left(g_{\zeta,\boldsymbol{m},\mathbf{S}}(\cdot)\right) = \mathbb{E}_{\boldsymbol{u} \sim q(\boldsymbol{u})}\, p(g(\cdot) \mid \boldsymbol{u}, \zeta), \quad q(\boldsymbol{u}) = \mathcal{N}(\boldsymbol{m}, \mathbf{S}), \tag{17}
$$

where $p(g(\cdot) \mid \boldsymbol{u}, \zeta)$ is the prior $g$ conditioned on $\zeta(g) = \boldsymbol{u}$, where $\zeta(g) = (\zeta_{1}(g),\dots,\zeta_{m}(g))$. For example, linear functionals of the form $\zeta(g) = \langle g(z), e_i(z)\rangle_{T_zX}$, where $\{e_i\}_{i=1}^d$ is a coordinate frame, can be used to recover the usual doubly stochastic variational inference considered above.

The mean and covariance of $g_{\zeta,\boldsymbol{m},\mathbf{S}}$ are given by Equation (5), with $\boldsymbol{z}$ replaced by $\zeta$ and

$$
k(\zeta, \cdot) = \operatorname{Cov}(\zeta(g), g(\cdot)), \quad k(\zeta, \zeta') = \operatorname{Cov}(\zeta(g), \zeta'(g)), \tag{18}
$$

see, for example, Lázaro-Gredilla and Figueiras-Vidal (2009) and van der Wilk et al. (2020).

Now, if the kernel of $g$ can be expressed as $\sum_j a_{j} \phi_{j}(x) \otimes \phi_{j}(x')$ where $\{\phi_j\}$ is an orthonormal basis—this is obviously so for Hodge GVFs, but is also often the case for other GVFs on compact manifolds—the inducing functionals $\zeta_j(\cdot) = \langle \cdot, \phi_j\rangle_{L^2} / a_j$ yield very simple covariance matrices

$$
k\left(\zeta_{j}, \cdot\right) = \phi_{j}(\cdot), \quad k\left(\zeta_{i}, \zeta_{j}\right) = \delta_{i,j} / a_{i}. \tag{19}
$$

In particular, $k(\zeta, \zeta)$ is diagonal, making it trivial to invert. Dutordoir et al. (2020) report that this can yield significant acceleration in practice. For residual deep GPs, this phenomenon affects every individual layer, thus making the cumulative effect even more pronounced. We refer the reader to Appendix B for further practical and theoretical considerations regarding this variational family.
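A small sketch of why Equation (19) helps: with $k(\zeta, \zeta) = \operatorname{diag}(1/a_j)$, the inverse required by the sparse GP mean of Equation (5) is a diagonal rescaling rather than an $O(m^3)$ solve; all arrays below are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
m_ind = 4
a = rng.uniform(0.5, 2.0, size=m_ind)       # spectral coefficients a_j
phi_x = rng.normal(size=(10, m_ind))        # basis values phi_j at 10 inputs
m_vec = rng.normal(size=m_ind)              # variational mean

# Fast path: k(x, zeta) k(zeta, zeta)^{-1} m with the diagonal structure
# of Eq. (19) reduces to an elementwise rescaling by a_j.
fast_mean = phi_x @ (a * m_vec)

# Reference path: form k(zeta, zeta) = diag(1/a) explicitly and solve.
K_zz = np.diag(1.0 / a)
slow_mean = phi_x @ np.linalg.solve(K_zz, m_vec)

print(np.allclose(fast_mean, slow_mean))    # → True
```

The same diagonal shortcut applies inside the covariance of Equation (6), and in a residual deep GP it is exploited once per layer.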
Posterior mean, variance, and samples Expectations $\mathbb{E}F_{\boldsymbol{\theta}}(x)$ and variances $\operatorname{Var}F_{\boldsymbol{\theta}}(x)$ of the approximate posterior $F_{\boldsymbol{\theta}}$ cannot be computed exactly. Instead, they can be estimated by appropriate Monte Carlo averages, with Equation (16) providing a way to sample $F_{\boldsymbol{\theta}}(x)$. However, since these estimates ignore the correlation between $F_{\boldsymbol{\theta}}(x)$ and $F_{\boldsymbol{\theta}}(x')$, they are not continuous as functions of $x$. When continuity or differentiability of $\mathbb{E}F_{\boldsymbol{\theta}}(x)$ and $\operatorname{Var}F_{\boldsymbol{\theta}}(x)$ are desirable, another method can be used. The key idea in this case is to draw (approximate) samples from $F_{\boldsymbol{\theta}}(\cdot)$ which happen to be actual functions, for example linear combinations of some analytic basis functions or compositions of such. This can be done by applying the pathwise conditioning of Wilson et al. (2020) and Wilson et al. (2021) in a sequential manner, akin to Equation (16). This approach is useful for visualisation, performance metric estimation, and for working with downstream quantities, such as acquisition functions in Bayesian optimisation, for which differentiability is key for efficiently finding their maxima.

![](images/62be128529305cb0f68c749e7f6bb77e723cebb42af373b5d50d577898f5c29d.jpg)
Figure 3: NLPD of different residual deep GP variants and the baseline model, on the regression problem for the synthetic benchmark function visualised in Figure 4a. Different subplots correspond to different training set sizes $N$. The solid lines represent the mean, while the shaded areas represent the $\pm 1$ standard deviation region around it. All statistics are computed over 5 randomised runs.

# 4 EXPERIMENTS

We begin this section by examining how various GVF and variational family choices impact the regression performance of residual deep GPs in synthetic experiments, as discussed in Section 4.1. Throughout, we compare our models to a baseline with Euclidean hidden layers. Next, in the robotics-inspired experiments of Section 4.2, we demonstrate that residual deep GPs can significantly enhance Bayesian optimisation on a manifold when the optimised function is irregular. Following this, in Section 4.3, we show state-of-the-art predictive and uncertainty calibration performance of residual deep GPs in wind velocity modelling on the globe, achieving interpretable patterns even at low altitudes where data is more complex and irregular. Finally, in Section 4.4, we explore potential avenues for using residual deep GPs to accelerate inference on inherently Euclidean data.

# 4.1 SYNTHETIC EXAMPLES

Setup Deep GPs bear the promise of outperforming their shallow counterparts in modelling complex, irregular functions. To test this, we construct a benchmark function $f^{*}$ on the 2-sphere $\mathbb{S}_2$ with multiple singularities, which is visualised in Figure 4a. We take $N \in \{100, 200, 400, 800, 1600\}$ training inputs $x$ on a Fibonacci lattice on $\mathbb{S}_2$ and put $y = f^{*}(x) + \varepsilon$, $\varepsilon \sim \mathcal{N}(0, 10^{-4}\mathbf{I})$. Then, we regress $f^{*}$ from $x$ and $y$. On this problem, we compare different modifications of the residual deep GPs amongst themselves and to a baseline, in terms of the negative log predictive density (NLPD) and the mean squared error (MSE) metrics on the test set of 5000 points, also on a Fibonacci lattice. All runs are conducted 5 times with different random seeds for the observation noise $\varepsilon$.

Models The baseline is a deep GP with Euclidean (rather than manifold-to-manifold) layers. It is constructed by composing a vector-valued Matérn GP whose signature is $X \to \mathbb{R}^3$ with the Euclidean deep GP of Salimbeni and Deisenroth (2017) on $\mathbb{R}^3$. For the residual deep GPs, we consider two different types of GVFs, projected and Hodge, and two types of variational families, the one based on inducing locations (IL) and the one based on interdomain variables (IV). To ensure comparability, we match the number of optimised parameters between models as closely as possible.

Results The NLPD values are presented in Figure 3. The MSE values exhibit the same trends and can be found in Figure 8 in Appendix A. We observe three key patterns. First, residual deep GPs are never worse than their shallow counterparts, recovering the single-layer solution when data is sparse. Second, as data becomes more abundant and thus captures more complexity of $f^{*}$, residual deep GPs outperform the shallow GPs. Third, the IV variational family almost always improves over the IL one, and the best model—with considerable margin—is obtained by combining Hodge GVFs with the IV variational family. The residual deep GP based on projected GVFs and using the IV variational family is the second best model, which still outperforms the baseline in most cases.

# 4.2 GEOMETRY-AWARE BAYESIAN OPTIMISATION

Motivation GPs are a widely used model class for Bayesian optimisation, a technique for optimising expensive-to-evaluate black-box functions that leverages uncertainty estimates to balance exploration and exploitation (Shahriari et al., 2016). In robotics, such problems arise, for example, when a control policy needs to be fine-tuned to a specific real-world environment. This task was shown to benefit from treating the optimisation space as a manifold and using geometry-aware Gaussian processes to drive Bayesian optimisation (Jaquier et al., 2022). The functions upon which the technique was tested are rather regular, which is not always the case in reality, especially when dealing with increasingly complex systems. Motivated by this challenge, we explore whether residual deep GPs can offer improved performance in optimising complex irregular functions on manifolds.

![](images/89d800b2987a0ed065845bfcde60900d09770c040b5a1bb9b3d4c72544bb491f.jpg)
(a) Irregular benchmark function.

![](images/443833e26aedbeb56b48d6ecb018c84f506f3f4ce9800777031131a04dd19dea.jpg)
(b) Bayesian optimisation performance.

Figure 4: The irregular benchmark function, and Bayesian optimisation performance comparison. The target functions for Bayesian optimisation are: the aforementioned benchmark function, modified to have a single global minimum ($\mathbb{S}_2$ Irregular), and the smooth Ackley function on the 3-sphere ($\mathbb{S}_3$ Ackley). In (b), the solid lines represent the median regret, while the shaded areas around them span $\pm 1$ standard deviation. The statistics are computed over 15 randomised runs.
+
270
+ Setup We consider two target functions to optimise. The first is the irregular function from Section 4.1, visualised in Figure 4a, modified to have only one global minimum. The second is the much more regular Ackley function, projected onto $\mathbb{S}_3$ , one of the benchmarks in Jaquier et al. (2022). In each Bayesian optimisation run, we perform the first 180 iterations using a shallow geometry-aware GP, followed by 20 iterations using a residual deep GP—both employing the expected improvement acquisition function (see, e.g., Frazier (2018)). In this experiment, we showcase the coordinate-frame-based GVFs, as described in Appendix A. We do not use deep GPs in the initial iterations because, as intuition suggests and Section 4.1 affirms, deep GPs start outperforming their shallow counterparts only when data becomes more abundant. Although deep GPs show comparable performance even for small datasets, training them is more computationally demanding, making their use less efficient in the early stages of optimisation. We repeat each run 15 times to account for the stochasticity of initialisation, optimisation of the acquisition function, and training of GP models.
271
+
272
+ Results The optimisation performance, measured in terms of the logarithm of regret, is reported in Figure 4b. We find that residual deep GPs significantly improve performance in the Bayesian optimisation of the irregular function. Specifically, switching to a residual deep GP in the latter stages of optimisation greatly, and often immediately, reduces the gap between the true optimum and the found optimum. This trend is consistent across most runs, with only one outlier showing no improvement due to insufficient data collection near the singularity during the shallow GP phase. In contrast, for the Ackley function, we observe no substantial difference in performance between the two methods: both approaches replicate results from Jaquier et al. (2022), with nearly identical median regret trajectories. This outcome aligns with our expectations, since the region around the minimum, explored during the initial 180 iterations, is smooth and thus modelled equally well by both deep and shallow models.
273
+
274
+ # 4.3 WIND INTERPOLATION ON THE GLOBE
275
+
276
+ Motivation Non-Euclidean geometry has a particularly pronounced effect on vector fields, such as wind velocity fields on the globe, more so than on scalar functions. For instance, the famous hairy ball theorem states that a smooth vector field must always have a zero somewhere on the 2-sphere, i.e. there must always be a location where wind does not blow. Wind interpolation is thus an attractive use case for geometry-aware Gaussian vector fields, where they have been shown to perform well (Hutchinson et al., 2021; Robert-Nicoud et al., 2024). Here, we show that residual deep GPs can improve the performance of probabilistic wind velocity models when the data contains complex irregular patterns, which naturally occur in wind fields at lower altitudes.
277
+
278
+ Setup We consider the task of interpolating the monthly average wind velocity from the ERA5 dataset (Hersbach et al., 2023), from a set of locations on the Aeolus satellite track (Reitebuch,
279
+
280
+ ![](images/47f01d1f8c945bddf175fc62b7eb5896e5833fececb9c8fd88e72c3fd66cb6fa.jpg)
281
+ (a) The ground truth wind velocities as black arrows and the training locations along the Aeolus satellite track as red points.
282
+ Figure 5: Using residual deep GPs for probabilistic wind velocity modelling on the surface of Earth.
283
+
284
+ ![](images/86ee1e02b7f3663f0b165fa86834d07623bff0a11ca77f4187b1e68dab0a4ce6.jpg)
285
+ (b) Difference between the prediction and the ground truth wind velocities, shown as black arrows, and the predictive uncertainty, shown using a colour scale from purple (lowest) to yellow (highest), for a 3-layer residual deep GP and wind velocities for July 2010, at $0.1\mathrm{km}$ altitude.
286
+
287
+ 2012), simulating a practical setting of a weather-analysing satellite. We use ERA5 data from January to December 2010, as in Robert-Nicoud et al. (2024), and sample the Aeolus track locations every minute for a 24-hour period from 9:00 am, January 1st, 2019. We choose a 24-hour period instead of a 1-hour period, as in Hutchinson et al. (2021), because in that time frame, the satellite produces a denser set of observations, crucial for capturing the complexity of wind behaviour at low altitudes. The ground truth vector field and the input locations are visualised in Figure 5a. To assess how decreasing regularity of data associated with decreasing altitude affects our model, we consider data at three altitudes: approximately $5.5\mathrm{km}$ , $2\mathrm{km}$ , and $0.1\mathrm{km}$ . In our models, we use Hodge GVFs in hidden layers and as the last layer, and interdomain inducing variables for inference.
288
+
289
+ Results We report regression performance, in terms of NLPD, in Figure 6; MSE follows similar trends and can be found in Figure 11 in Appendix A. We find that residual deep GPs improve upon the state-of-the-art shallow Hodge GVFs (1-layer models in the plots, Robert-Nicoud et al. (2024))
290
+
291
+ both in prediction quality and uncertainty calibration, as evidenced by the significantly lower NLPD and MSE. Furthermore, Figure 5b shows that the uncertainty estimates of our deep model at the lowest altitude are interpretable. Indeed, regions of high uncertainty follow regions of irregular wind currents, such as boundaries where multiple currents meet or continental boundaries, as well as areas of seasonally high winds, such as India during the peak monsoon season. At the same time, low uncertainty is assigned to regions with constant-like currents. This is unachievable for shallow GVFs since their posterior covariance depends only on the locations of the observations, which are rather uniformly dense in our setup, and not on the observations themselves. Additionally, Figure 5b shows predictive uncertainty to be well-calibrated, with areas of highest error corresponding to regions of high uncertainty.
292
+
293
+ ![](images/89407b6d60f33fbb4969c25c6b6ee97ff821aadc7021862c0e4ef9a182a67910.jpg)
294
+ Figure 6: NLPD of residual deep GPs on the wind modelling task across three altitude levels. Solid lines give the mean NLPD, while the shaded regions around it span $\pm 1$ standard deviation. Statistics are computed from 12 runs—one for each month of 2010.
295
+
296
+ # 4.4 ACCELERATING INFERENCE FOR EUCLIDEAN DATA
297
+
298
+ Motivation Inspired by their connection to infinitely wide neural networks, Dutordoir et al. (2020) showed that geometry-aware GPs on hyperspheres can be applied to inherently Euclidean data to accelerate variational inference. Specifically, they reported that approximate variational inference
299
+
300
+ ![](images/14cb562b4f9775af06dddfa90ce3aef10bd5fb0f5c9ee73326e2aac547277259.jpg)
301
+ Figure 7: Wall clock time taken by one training step of Euclidean deep GPs with inducing locations, and residual deep GPs with interdomain variables. We consider 5 UCI datasets, with dimension $d$ and batch size $B$ . Solid lines show the mean, computed by averaging over 100 training steps, while the shaded areas span $\pm 1$ standard deviation. However, they are often too narrow to be visible.
302
+
303
+ using the shallow analogue of the interdomain inducing variables applied to data mapped from $\mathbb{R}^d$ to the proxy manifold of $\mathbb{S}_d$ can be significantly faster than inducing-location-based variational inference for a Euclidean GP on the original data, while achieving competitive predictive performance. We investigate whether this result can be extended to the case of deep Gaussian processes.
304
+
305
+ Setup As Euclidean data, we use the same UCI datasets as Dutordoir et al. (2020), and we use the same mapping from $\mathbb{R}^d$ to the proxy manifold $\mathbb{S}_d$ . We use projected GVFs to accommodate arbitrarily-dimensional hyperspheres. Overall, our experimental setup follows Dutordoir et al. (2020), except that working with deep models required us to optimise ELBO directly instead of marginalising out the variational mean $m$ and covariance $\mathbf{S}$ as in Titsias (2009). Additionally, because of high memory requirements of L-BFGS (Nocedal, 1980) arising from the lack of marginalisation and depth, we switch to Adam (Kingma and Ba, 2015). We use a single Intel i7-13700H CPU.
306
+
307
+ Results We compare the variational inference speed, measured by wall-clock time for a single optimisation step, in Figure 7. The speed advantage that interdomain variables already offer for shallow 1-layer GPs increases significantly with more layers, giving a considerable edge in deep models. However, predictive performance comparisons in Figure 17 in Appendix A do not show such an optimistic picture: Euclidean deep GPs always outperform residual deep GPs with the same number of layers in terms of NLPD and MSE. This might be due to the aforementioned differences in optimisation. Also, our choice of the mapping from $\mathbb{R}^d$ to a proxy manifold, and the choice of the proxy manifold itself, might be overly simplistic and thus hinder performance. We hypothesise that better mappings or optimisation could make the tested approach a more efficient alternative to Euclidean deep GPs. Achieving this, however, will require further work, which is beyond the scope of this paper.
308
+
309
+ # 5 CONCLUSION
310
+
311
+ In this paper, we proposed a novel model class of residual deep Gaussian processes on manifolds. We reviewed practical Gaussian vector field constructions for building their hidden layers and discussed two variational inference techniques, including one tailored to the structure of Gaussian vector fields on compact manifolds and based on interdomain inducing variables. We evaluated our models in synthetic experiments, examining the impact of Gaussian vector field and variational family choices. These experiments supported favouring Hodge Gaussian vector fields and interdomain inducing variables. They also demonstrated that increasing the number of layers virtually never degrades our models' performance, though performance can quickly saturate and plateau. We hypothesise that larger datasets will slow this saturation, necessitating the additional capacity of more layers; we leave exploring this for future work. In a robotics-motivated stylised experiment, our models significantly enhanced Bayesian optimisation for an irregular function on the sphere. For probabilistic interpolation of wind velocities, we achieved state-of-the-art performance, surpassing the recently proposed shallow Hodge Gaussian vector fields. Finally, we showed interdomain inducing variables to be superior in terms of inference time, compared to doubly stochastic variational inference for Euclidean deep Gaussian processes. This indicates potential future benefits for Euclidean data if suitable mappings from it to proxy manifolds are found. We believe residual deep Gaussian processes will provide a powerful toolset for applications in climate modelling, robotics, and beyond.
312
+
313
+ # ACKNOWLEDGMENTS
314
+
315
+ VB was supported by ELSA (European Lighthouse on Secure and Safe AI) funded by the European Union under grant agreement No. 101070617. KW thanks Edoardo Ponti for his mentorship. The Blender rendering scripts we used for plotting were adapted from Terenin (2022).
316
+
317
+ # REFERENCES
318
+
319
+ M. A. Álvarez, L. Rosasco, N. D. Lawrence, et al. Kernels for Vector-Valued Functions: A Review. Foundations and Trends in Machine Learning, 4(3):195–266, 2012. Cited on page 3.
320
+ I. Azangulov, A. Smolensky, A. Terenin, and V. Borovitskiy. Stationary Kernels and Gaussian Processes on Lie Groups and their Homogeneous Spaces I: the compact case. Journal of Machine Learning Research, 25(280):1-52, 2024. Cited on pages 2, 14.
321
+ V. Borovitskiy, I. Azangulov, A. Terenin, P. Mostowsky, M. Deisenroth, and N. Durrande. Matérn Gaussian processes on graphs. In International Conference on Artificial Intelligence and Statistics, 2021. Cited on page 1.
322
+ V. Borovitskiy, M. R. Karimi, V. R. Somnath, and A. Krause. Isotropic Gaussian Processes on Finite Spaces of Graphs. In International Conference on Artificial Intelligence and Statistics, 2023. Cited on page 1.
323
+ V. Borovitskiy, A. Terenin, P. Mostowsky, and M. P. Deisenroth. Matérn Gaussian processes on Riemannian manifolds. In Advances in Neural Information Processing Systems, 2020. Cited on pages 1-3.
324
+ S. Coveney, C. Corrado, C. H. Roney, D. O'Hare, S. E. Williams, M. D. O'Neill, S. A. Niederer, R. H. Clayton, J. E. Oakley, and R. D. Wilkinson. Gaussian process manifold interpolation for probabilistic atrial activation maps and uncertain conduction velocity. *Philosophical Transactions of the Royal Society A*, 378(2173):20190345, 2020. Cited on page 3.
325
+ Z. Dai, A. C. Damianou, J. I. González, and N. D. Lawrence. Variational Auto-encoded Deep Gaussian Processes. In International Conference on Learning Representations, 2016. Cited on page 1.
326
+ A. Damianou and N. D. Lawrence. Deep Gaussian Processes. In International Conference on Artificial Intelligence and Statistics, 2013. Cited on pages 1, 3.
327
+ E. De Vito, N. Mücke, and L. Rosasco. Reproducing kernel Hilbert spaces on manifolds: Sobolev and diffusion spaces. Analysis and Applications, 19(3):363-396, 2021. Cited on page 14.
328
+ V. Dutordoir, N. Durrande, and J. Hensman. Sparse Gaussian Processes with Spherical Harmonic Features. In International Conference on Machine Learning, 2020. Cited on pages 1, 6, 9, 10, 14.
329
+ B. Fichera, V. Borovitskiy, A. Krause, and A. Billard. Implicit Manifold Gaussian Process Regression. In Advances in Neural Information Processing Systems, 2023. Cited on page 1.
330
+ P. I. Frazier. A tutorial on Bayesian optimization. arXiv preprint arXiv:1807.02811, 2018. Cited on page 8.
331
+ K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In Conference on Computer Vision and Pattern Recognition, 2016. Cited on page 4.
332
+ J. Hensman, N. Fusi, and N. D. Lawrence. Gaussian processes for big data. In Uncertainty in Artificial Intelligence, 2013. Cited on page 3.
333
+ H. Hersbach, B. Bell, P. Berrisford, G. Biavati, A. Horányi, J. Muñoz Sabater, J. Nicolas, C. Peubey, R. Radu, I. Rozum, D. Schepers, A. Simmons, C. Soci, D. Dee, and J.-N. Thépaut. ERA5 monthly averaged data on single levels from 1940 to present, 2023. Accessed on 13-Sep-2024. Cited on page 8.
334
+ M. Hutchinson, A. Terenin, V. Borovitskiy, S. Takao, Y. Teh, and M. Deisenroth. Vector-valued Gaussian Processes on Riemannian Manifolds via Gauge Independent Projected Kernels. In Advances in Neural Information Processing Systems, 2021. Cited on pages 1-4, 8, 9.
335
+
336
+ N. Jaquier, V. Borovitskiy, A. Smolensky, A. Terenin, T. Asfour, and L. Rozo. Geometry-aware Bayesian Optimization in Robotics using Riemannian Matérn Kernels. In Conference on Robot Learning, 2022. Cited on pages 1, 3, 8, 15-17.
337
+ N. Jaquier, L. Rozo, M. González-Duque, V. Borovitskiy, and T. Asfour. Bringing motion taxonomies to continuous domains via GPLVM on hyperbolic manifolds. In International Conference on Machine Learning, 2024. Cited on page 3.
338
+ S. Kamthe and M. P. Deisenroth. Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control. In International Conference on Artificial Intelligence and Statistics, 2018. Cited on page 1.
339
+ D. P. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. In International Conference for Learning Representations, 2015. Cited on pages 10, 16, 17, 19.
340
+ M. Kochurov, R. Karimov, and S. Kozlukov. Geoopt: Riemannian Optimization in PyTorch. In International Conference on Machine Learning, 2020. Cited on page 6.
341
+ A. Krause, A. Singh, and C. Guestrin. Near-Optimal Sensor Placements in Gaussian Processes: Theory, Efficient Algorithms and Empirical Studies. Journal of Machine Learning Research, 9(8):235-284, 2008. Cited on page 1.
342
+ M. Lázaro-Gredilla and A. R. Figueiras-Vidal. Inter-domain Gaussian Processes for Sparse Inference using Inducing Features. In Advances in Neural Information Processing Systems, 2009. Cited on page 6.
343
+ F. Lindgren, H. Rue, and J. Lindström. An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(4):423-498, 2011. Cited on page 3.
344
+ A. Mallasto and A. Feragen. Wrapped Gaussian process regression on Riemannian manifolds. In Conference on Computer Vision and Pattern Recognition, 2018. Cited on pages 2, 19, 22.
345
+ A. G. d. G. Matthews. Scalable Gaussian process inference using variational methods. PhD thesis, University of Cambridge, 2016. Cited on page 14.
346
+ C. L. C. Mattos, Z. Dai, A. C. Damianou, J. Forth, G. D. A. Barreto, and N. D. Lawrence. Recurrent Gaussian Processes. In International Conference on Learning Representations, 2016. Cited on page 1.
347
+ J. Nocedal. Updating Quasi-Newton Matrices With Limited Storage. Mathematics of Computation, 35(151):773-782, 1980. Cited on page 10.
348
+ C. E. Rasmussen and C. K. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006. Cited on pages 1, 2.
349
+ O. Reitebuch. The Spaceborne Wind Lidar Mission ADM-Aeolus. In Atmospheric Physics, pages 815-827. 2012. Cited on page 8.
350
+ D. Robert-Nicoud, A. Krause, and V. Borovitskiy. Intrinsic Gaussian Vector Fields on Manifolds. In International Conference on Artificial Intelligence and Statistics, 2024. Cited on pages 1-3, 5, 8, 9, 14.
351
+ P. Rosa, V. Borovitskiy, A. Terenin, and J. Rousseau. Posterior Contraction Rates for Matérn Gaussian Processes on Riemannian Manifolds. In Advances in Neural Information Processing Systems, 2023. Cited on page 3.
352
+ H. Salimbeni and M. P. Deisenroth. Doubly Stochastic Variational Inference for Deep Gaussian Processes. In Advances in Neural Information Processing Systems, 2017. Cited on pages 1-4, 6, 7, 15, 22.
353
+ B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the Human Out of the Loop: A Review of Bayesian Optimization. Proceedings of the IEEE, 104(1):148-175, 2016. Cited on page 7.
354
+
355
+ J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, 2012. Cited on page 1.
356
+ A. Terenin. Gaussian processes and statistical decision-making in non-Euclidean spaces. arXiv preprint arXiv:2202.10613, 2022. Cited on page 11.
357
+ M. K. Titsias. Variational Learning of Inducing Variables in Sparse Gaussian Processes. In International Conference on Artificial Intelligence and Statistics, 2009. Cited on pages 3, 10.
358
+ J. Townsend, N. Koep, and S. Weichwald. Pymanopt: A Python Toolbox for Optimization on Manifolds using Automatic Differentiation. Journal of Machine Learning Research, 17(137):1-5, 2016. Cited on pages 6, 17.
359
+ M. van der Wilk, V. Dutordoir, S. T. John, A. Artemev, V. Adam, and J. Hensman. A Framework for Interdomain and Multioutput Gaussian Processes. arXiv preprint arXiv:2003.01115, 2020. Cited on page 6.
360
+ P. Whittle. Stochastic processes in several dimensions. Bulletin of the International Statistical Institute, 40(2):974-994, 1963. Cited on page 3.
361
+ J. Wilson, V. Borovitskiy, A. Terenin, P. Mostowsky, and M. Deisenroth. Efficiently sampling functions from Gaussian process posteriors. In International Conference on Machine Learning, 2020. Cited on pages 1, 6.
362
+ J. T. Wilson, V. Borovitskiy, A. Terenin, P. Mostowsky, and M. P. Deisenroth. Pathwise Conditioning of Gaussian Processes. Journal of Machine Learning Research, 22(105):1-47, 2021. Cited on page 6.
363
+ M. Yang, V. Borovitskiy, and E. Isufi. Hodge-Compositional Edge Gaussian Processes. In International Conference on Artificial Intelligence and Statistics, 2023. Cited on page 5.
364
+
365
+ # A ADDITIONAL EXPERIMENTAL DETAILS
366
+
367
+ # A.1 IMPLEMENTATION
368
+
369
+ Efficient kernel evaluation with the addition theorem In our implementation of manifold Matérn kernels, we utilise the addition theorem for spherical harmonics (De Vito et al., 2021; Dutordoir et al., 2020) to accelerate kernel computation. On $\mathbb{S}_d$ , the eigenfunctions of the Laplace-Beltrami operator are known to be certain special functions called spherical harmonics. The addition theorem gives a relation between all spherical harmonics corresponding to the $(k + 1)$ -th smallest eigenvalue of the negative Laplace-Beltrami operator $-\Delta$ , denoted $\{\phi_{k,j}\}_{j = 1}^{J}$ , and Gegenbauer polynomials $C_k^{(\alpha)}$ , another family of special functions:
370
+
371
+ $$
372
+ \sum_ {j = 1} ^ {J} \phi_ {k, j} (x) \phi_ {k, j} \left(x ^ {\prime}\right) = c _ {k, d} C _ {k} ^ {(\alpha)} \left(x \cdot x ^ {\prime}\right) \tag {20}
373
+ $$
374
+
375
+ with the dot product computed after embedding $\mathbb{S}_d$ in $\mathbb{R}^{d + 1}$ as the unit sphere centred at the origin, $\alpha = \frac{d - 1}{2}$ , and $c_{k,d}$ some known absolute constants. Thus, when computing the scalar Matérn kernel on $\mathbb{S}_d$ , we truncate the infinite sum in Equation (3) to include all spherical harmonics up to the $(K + 1)$ -th eigenvalue, and apply Equation (20). This gives the following formula
376
+
377
+ $$
378
+ k _ {\nu , \kappa , \sigma^ {2}} (x, x ^ {\prime}) = \frac {\sigma^ {2}}{C _ {\nu , \kappa}} \sum_ {k = 0} ^ {K} \Phi_ {\nu , \kappa} (\lambda_ {k}) c _ {k, d} C _ {k} ^ {(\alpha)} (x \cdot x ^ {\prime}). \tag {21}
379
+ $$
380
+
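As an illustration, Equation (21) can be sketched in a few lines of NumPy for $d = 2$ , where $\alpha = 1/2$ and the Gegenbauer polynomials coincide with the Legendre polynomials. The spectral density $\Phi_{\nu,\kappa}(\lambda) \propto (2\nu/\kappa^2 + \lambda)^{-\nu - d/2}$ and the constant $c_{k,2} = 2k + 1$ are assumptions of this sketch, and we normalise so that $k(x, x) = \sigma^2$ , which absorbs $C_{\nu,\kappa}$ ; the function name and defaults are our own.

```python
import numpy as np

def sphere_matern_kernel(t, nu=1.5, kappa=1.0, sigma2=1.0, K=25):
    """Truncated Matérn kernel on S_2 evaluated at t = x . x' (cf. Eq. 21).

    For d = 2, alpha = 1/2 and C_k^{(1/2)} are the Legendre polynomials,
    evaluated here via a Legendre series. Normalised so k(x, x) = sigma2.
    """
    d = 2
    ks = np.arange(K + 1)
    lam = ks * (ks + d - 1)                       # Laplace-Beltrami eigenvalues
    phi = (2 * nu / kappa**2 + lam) ** (-(nu + d / 2))  # assumed spectral density
    c = 2 * ks + 1                                # c_{k,2} = (k + alpha) / alpha

    def series(u):
        # sum_k phi_k c_k P_k(u), evaluated as a Legendre series
        return np.polynomial.legendre.legval(u, phi * c)

    return sigma2 * series(t) / series(1.0)
```

Increasing the truncation level `K` trades compute for a closer approximation of the infinite sum.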
381
+ In the case of the Hodge Matérn kernel on $\mathbb{S}_2$ , we also apply the addition theorem, only noting that here $\phi_{k,j}$ are replaced with the normalised vector spherical harmonics: $\frac{\nabla\phi_{k,j}}{\sqrt{\lambda_k}}$ for the divergence-only kernel, and $\frac{\star\nabla\phi_{k,j}}{\sqrt{\lambda_k}}$ for the curl-only kernel, where $\star$ is the Hodge star operator (Robert-Nicoud et al., 2024). In this case, $c_{k,d} = \frac{k + \alpha}{\alpha}$ . Thus,
382
+
383
+ $$
384
+ \boldsymbol {k} _ {\nu , \kappa , \sigma^ {2}} ^ {\operatorname {d i v}} (x, x ^ {\prime}) = \frac {\sigma^ {2}}{C _ {\nu , \kappa} ^ {\operatorname {d i v}}} \sum_ {k = 0} ^ {K} \frac {\Phi_ {\nu , \kappa} (\lambda_ {k})}{\lambda_ {k}} \frac {k + \alpha}{\alpha} \left(\nabla_ {x} \otimes \nabla_ {x ^ {\prime}}\right) C _ {k} ^ {(\alpha)} \left(x \cdot x ^ {\prime}\right) \tag {22}
385
+ $$
386
+
387
+ $$
388
+ \boldsymbol {k} _ {\nu , \kappa , \sigma^ {2}} ^ {\operatorname {c u r l}} (x, x ^ {\prime}) = \frac {\sigma^ {2}}{C _ {\nu , \kappa} ^ {\operatorname {c u r l}}} \sum_ {k = 0} ^ {K} \frac {\Phi_ {\nu , \kappa} (\lambda_ {k})}{\lambda_ {k}} \frac {k + \alpha}{\alpha} \left(\star \nabla_ {x} \otimes \star \nabla_ {x ^ {\prime}}\right) C _ {k} ^ {(\alpha)} \left(x \cdot x ^ {\prime}\right). \tag {23}
389
+ $$
390
+
391
+ The Hodge-compositional kernel, which we typically use, is the sum $\pmb{k}_{\nu,\kappa_1,\sigma_1^2}^{\mathrm{div}} + \pmb{k}_{\nu,\kappa_2,\sigma_2^2}^{\mathrm{curl}}$ .
392
+
393
+ Accelerated training with whitened inducing variables In all the models we test, approximate inference requires a variational mean vector $\pmb{m}$ and a variational covariance matrix $\mathbf{S}$ that parameterise $q(\pmb{u}) = \mathcal{N}(m, \mathbf{S})$ . However, to accelerate convergence during training, instead of working with the inducing variables $\pmb{u}$ directly, we work with whitened inducing variables $\pmb{u}' = \mathbf{L}^{-1}\pmb{u}$ , where $\mathbf{L}$ is the lower Cholesky factor of $k(z, z)$ or $k(\zeta, \zeta)$ (Matthews, 2016). Thus, in practice, denoting the whitened variational mean and covariance of $q(\pmb{u}')$ as $m'$ and $\mathbf{S}'$ , we use a modified version of Equation (5):
394
+
395
+ $$
396
+ \begin{aligned} \mu_{\boldsymbol{z}, \boldsymbol{m}', \mathbf{S}'}(\cdot) &= \mu(\cdot) + k(\cdot, \boldsymbol{z}) k(\boldsymbol{z}, \boldsymbol{z})^{-1} \left( \mathbf{L} \boldsymbol{m}' - \mu(\boldsymbol{z}) \right) \qquad (24) \\ &= \mu(\cdot) + k(\cdot, \boldsymbol{z}) \mathbf{L}^{-\top} \left( \boldsymbol{m}' - \mathbf{L}^{-1} \mu(\boldsymbol{z}) \right), \qquad (25) \end{aligned}
397
+ $$
398
+
399
+ $$
400
+ \begin{aligned} k_{\boldsymbol{z}, \boldsymbol{m}', \mathbf{S}'}(\cdot, \cdot') &= k(\cdot, \cdot') - k(\cdot, \boldsymbol{z}) k(\boldsymbol{z}, \boldsymbol{z})^{-1} \left( k(\boldsymbol{z}, \boldsymbol{z}) - \mathbf{L} \mathbf{S}' \mathbf{L}^{\top} \right) k(\boldsymbol{z}, \boldsymbol{z})^{-1} k(\boldsymbol{z}, \cdot') \qquad (26) \\ &= k(\cdot, \cdot') - k(\cdot, \boldsymbol{z}) \mathbf{L}^{-\top} \left( \mathbf{I} - \mathbf{S}' \right) \mathbf{L}^{-1} k(\boldsymbol{z}, \cdot'). \qquad (27) \end{aligned}
401
+ $$
402
+
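The whitened predictive equations above can be sketched directly in NumPy. This is a minimal illustration, not the paper's implementation: the function name and the zero-prior-mean default are ours, and we use generic linear solves against the Cholesky factor for clarity.

```python
import numpy as np

def whitened_predict(Kzz, Kxz, Kxx, m_w, S_w, mu_z=None, jitter=1e-8):
    """Predictive mean and covariance under the whitened parameterisation
    u' = L^{-1} u, following Eqs. (25) and (27)."""
    M = Kzz.shape[0]
    # L: lower Cholesky factor of k(z, z), with jitter for numerical stability
    L = np.linalg.cholesky(Kzz + jitter * np.eye(M))
    if mu_z is None:
        mu_z = np.zeros(M)
    # A = L^{-1} k(z, x), so that k(x, z) L^{-T} = A^T
    A = np.linalg.solve(L, Kxz.T)
    mean = A.T @ (m_w - np.linalg.solve(L, mu_z))
    cov = Kxx - A.T @ (np.eye(M) - S_w) @ A
    return mean, cov
```

Note that with $m' = 0$ and $\mathbf{S}' = \mathbf{I}$ the prior is recovered, which is one reason whitening tends to ease optimisation.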
403
+ # A.2 MODELS
404
+
405
+ Mean and kernel In all models, we equip constituent GPs with a zero mean and an appropriate variant of the Matérn kernel, initialised with length scale $\kappa = 1$ . For kernels of output layers we
406
+
407
+ ![](images/edc950aa0e84b2b7d190dd6458ce765340b2198e0c39abfb725700163fa3cb61.jpg)
408
+ Figure 8: MSE of different residual deep GP variants and the baseline model, on the regression problem for the synthetic benchmark function in Figure 4a. Different subplots correspond to different training set sizes $N$ . The solid lines represent the mean, while the shaded areas represent the $\pm \sigma$ region around it, where $\sigma$ is the standard deviation, and all statistics were computed over 5 randomised runs.
409
+
410
+ initialise the variance to $\sigma^2 = 1.0$ , while for kernels in hidden layers of an $L$ -layer deep GP we set $\sigma^2 = \frac{10^{-4}}{L - 1}$ at the start of training. In Section 4.1, Section 4.4, and Section 4.3 we initialise the smoothness parameter to $\nu = \frac{3}{2}$ , while in Section 4.2 we set it to $\nu = \frac{5}{2}$ to replicate the setup in Jaquier et al. (2022). We optimise the smoothness of the manifold Matérn kernels during training, owing to their differentiability with respect to $\nu$ , except in Section 4.2, where we fix $\nu$ to match the setup in Jaquier et al. (2022). Wherever we employ Hodge GVFs we use the Hodge-compositional kernel, with separate $\nu, \kappa, \sigma^2$ for the curl-free and divergence-free parts. In models utilising interdomain variables, we use the same number of spherical harmonics for the kernel and inducing variables, as per our discussion in Appendix B.
411
+
412
+ Vector-valued GPs We model vector-valued GPs as a set of independent scalar-valued GPs stacked into a vector. We utilise this construction in Euclidean deep GPs and in residual deep GPs with projected GVFs.
413
+
414
+ Inducing locations Following Salimbeni and Deisenroth (2017), for all models utilising the variational family based on inducing locations $z^l$ , we initialise $z^l$ for every layer to be the centers of the clusters found via k-means clustering of training data. In residual deep GPs, we further project these locations onto the sphere, and we do not optimise them during training. In Euclidean deep GPs, we do not normalise the inducing locations and optimise them jointly with all other parameters.
415
+
416
+ Approximation in training and evaluation of deep models In all experiments, to approximate the ELBO in deep models during training, we use 3 samples from the posterior. In evaluation, we use 10 samples from the posterior to approximate the MSE and NLPD. For visualisation, we likewise use 10 samples from the posterior to approximate the predictive mean and standard deviation.
417
+
418
+ # A.3 SYNTHETIC EXPERIMENTS
419
+
420
+ Data To examine the influence of data density on model performance in a controlled manner, we generate the training sets as approximately uniform grids of points on the 2-sphere $\mathbb{S}_2$ . This is done using the Fibonacci lattice method, which, for a grid of $n$ points, gives the colatitude and longitude of the $i$ -th point as
421
+
422
+ $$
423
+ \text{colatitude} = \arccos\left(1 - \frac{2i + 1}{n}\right), \quad \text{longitude} = \frac{2\pi i}{\phi}, \quad \phi = \frac{1 + \sqrt{5}}{2}. \tag{28}
424
+ $$
425
+
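Equation (28) translates into a short NumPy routine; the helper name and the embedding of the points as unit vectors in $\mathbb{R}^3$ are our own choices for this sketch.

```python
import numpy as np

def fibonacci_lattice(n):
    """Approximately uniform grid of n points on S_2 via Eq. (28)."""
    i = np.arange(n)
    golden = (1 + np.sqrt(5)) / 2
    colat = np.arccos(1 - (2 * i + 1) / n)   # colatitude of the i-th point
    lon = 2 * np.pi * i / golden             # longitude of the i-th point
    # embed in R^3 as unit vectors
    return np.stack([np.sin(colat) * np.cos(lon),
                     np.sin(colat) * np.sin(lon),
                     np.cos(colat)], axis=1)
```

By construction the $z$ -coordinates average to zero, reflecting the even coverage of the two hemispheres.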
426
+ Because this method gives consistent coverage of $\mathbb{S}_2$ , we also use it to generate the test set of 5000 points. We define the benchmark function $f^{*}\colon \mathbb{S}_{2}\to \mathbb{R}$ by
427
+
428
+ $$
429
+ f ^ {*} (x) = \left(Y _ {2, 3} \circ \varphi\right) (x) + \left(Y _ {1, 2} \circ \varphi \circ R\right) (x), \tag {29}
430
+ $$
431
+
432
+ where
433
+
434
+ $$
435
+ Y_{2, 3}(\theta, \phi) = \sqrt{\frac{105}{32\pi}} \sin^{3}\theta \sin(3\phi), \tag{30}
436
+ $$
437
+
438
+ $$
439
+ Y_{1, 2}(\theta, \phi) = \sqrt{\frac{15}{8\pi}} \sin\theta \sin(2\phi) \tag{31}
440
+ $$
441
+
442
+ $$
443
+ R (x) = \left(x _ {1}, - x _ {3}, x _ {2}\right) \tag {32}
444
+ $$
445
+
446
+ $$
447
+ \varphi(x) = \left(\operatorname{atan2}\left(x_{2}, x_{1}\right), \arccos\left(x_{3}\right)\right), \tag{33}
448
+ $$
449
+
450
+ with $x$ being an element of $\mathbb{S}_2$ embedded into $\mathbb{R}^3$ as the unit sphere centred at the origin. $Y_{2,3}, Y_{1,2}$ are spherical harmonics, smooth functions of their parameters. Singularities in $f^*$ are caused by the composition with $\varphi$ , which converts $x$ from Cartesian to spherical coordinates, but swaps the positions of the colatitude $\theta$ and longitude $\phi$ . The function $Y_{2,3} \circ \varphi$ has singularities at the poles, while the function $Y_{1,2} \circ \varphi \circ R$ has singularities around the equator.
451
+
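Equations (29) to (33) can be checked with a direct NumPy transcription; the function names below are ours, and `np.clip` is added only to guard `arccos` against floating-point round-off.

```python
import numpy as np

def Y23(theta, phi):
    # spherical harmonic Y_{2,3}, Eq. (30)
    return np.sqrt(105 / (32 * np.pi)) * np.sin(theta)**3 * np.sin(3 * phi)

def Y12(theta, phi):
    # spherical harmonic Y_{1,2}, Eq. (31)
    return np.sqrt(15 / (8 * np.pi)) * np.sin(theta) * np.sin(2 * phi)

def varphi(x):
    # Cartesian -> spherical coordinates with colatitude/longitude swapped, Eq. (33)
    return np.arctan2(x[1], x[0]), np.arccos(np.clip(x[2], -1.0, 1.0))

def R(x):
    # rotation, Eq. (32)
    return np.array([x[0], -x[2], x[1]])

def f_star(x):
    """Benchmark function on S_2, Eq. (29)."""
    return Y23(*varphi(x)) + Y12(*varphi(R(x)))
```

Evaluating `f_star` near the poles or the equator exhibits the singular behaviour discussed above.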
452
+ Variational parameters To make the comparison between models as fair as possible, we set the number of inducing variables for each model in such a way that all models have almost the same total number of optimisable parameters. Specifically, we use the following formulae for the number of variational parameters $\alpha$ in a hidden layer, given that each GVF has $m$ inducing variables
453
+
454
+ $$
455
+ \alpha_{\text{hodge}} = \underbrace{(m^{2} + m)/2}_{\text{covariance}} + \underbrace{m}_{\text{mean}} \tag{34}
456
+ $$
457
+
458
+ $$
459
+ \alpha_{\text{euclidean}} = \alpha_{\text{projected}} = 3 \cdot \left((m^{2} + m)/2 + m\right) \tag{35}
460
+ $$
461
+
462
+ In projected GVFs of the residual deep GPs and in vector-valued GPs of the baseline models, we equip each scalar GP with 49 inducing variables, which corresponds to spherical harmonics up to the 7-th negative eigenvalue. For Hodge GVFs, we use 70 interdomain inducing variables, corresponding to vector spherical harmonics up to the 6-th negative eigenvalue of the Hodge Laplacian. Despite this, residual deep GPs with Hodge GVFs are still at a disadvantage: a single Hodge GVF has 2555 variational parameters and $2 \cdot 3$ kernel parameters, a single projected GVF or manifold vector-valued GP has 3822 variational parameters and $3 \cdot 3$ kernel parameters, while one Euclidean vector-valued GP has 3822 variational parameters and $3 \cdot 4$ kernel parameters (with automatic relevance determination (ARD), the 4 comprises 3 length scale parameters and 1 prior variance parameter). Furthermore, we optimise the inducing locations in the Euclidean vector-valued GPs of the baseline model, so that each of them has an additional $3 \cdot 49$ optimisable parameters.
463
+
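The parameter counts in Equations (34) and (35) reduce to a one-line helper; the function and argument names are our own. It reproduces the counts quoted above: 2555 variational parameters for a Hodge GVF with $m = 70$ , and 3822 for a projected GVF with $m = 49$ .

```python
def hidden_layer_var_params(m, variant):
    """Variational parameter count per hidden-layer GVF, Eqs. (34)-(35).

    m: number of inducing variables per GVF;
    variant: "hodge" for a single GVF, else three stacked scalar GPs.
    """
    per_gp = (m**2 + m) // 2 + m  # lower-triangular covariance + mean
    return per_gp if variant == "hodge" else 3 * per_gp
```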
464
+ Training and evaluation We optimise all models using the Adam optimiser (Kingma and Ba, 2015) for 1000 iterations with learning rate set to 0.01.
465
+
466
+ Additional results The MSE comparison is presented in Figure 8. Figure 9 and Figure 10 show an extended comparison between the baseline model and Hodge residual deep GP with up to 10 layers and 10 randomised runs.
467
+
468
+ # A.4 GEOMETRY-AWARE BAYESIAN OPTIMISATION
469
+
470
+ Data To obtain an irregular function $f^{*}$ on $\mathbb{S}_2$ , we modify the target function from Section 4.1 to have only one global minimum near a singularity point. Specifically, $f^{*}$ was defined by
471
+
472
+ $$
473
+ f ^ {*} (x) = \left(Y _ {2, 3} \circ \varphi\right) (x) \cdot \left(x _ {3} + 1\right) \cdot \left(1 - \operatorname {a r c c o s} \left(x _ {3}\right)\right), \tag {36}
474
+ $$
475
+
476
+ where $Y_{2,3}$ and $\varphi$ are as in Appendix A.3. The absence of $Y_{1,2}$ removes the singularities around the equator, while the added scaling factors create a minimum at the north pole and ensure it is global.
477
+
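A self-contained transcription of Equation (36) is below, reusing $Y_{2,3}$ and $\varphi$ from Appendix A.3; the function names are ours, and `np.clip` only guards `arccos` against round-off.

```python
import numpy as np

def Y23(theta, phi):
    # spherical harmonic Y_{2,3}, Eq. (30)
    return np.sqrt(105 / (32 * np.pi)) * np.sin(theta)**3 * np.sin(3 * phi)

def varphi(x):
    # Cartesian -> swapped spherical coordinates, Eq. (33)
    return np.arctan2(x[1], x[0]), np.arccos(np.clip(x[2], -1.0, 1.0))

def f_star_bo(x):
    """Irregular Bayesian optimisation target on S_2, Eq. (36)."""
    return Y23(*varphi(x)) * (x[2] + 1) * (1 - np.arccos(np.clip(x[2], -1.0, 1.0)))
```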
478
+ Models After a preliminary examination, we found that differences between GVF variants on this task were not significant. Nevertheless, in Figure 4b we present the models that performed best in this examination: in the left subplot, a 2-layer residual deep GP using coordinate-frame GVFs and inducing locations; in the right subplot, a 3-layer residual deep GP using projected GVFs and inducing locations. We set the hyper-priors for the shallow model according to Jaquier et al. (2022). We do not use hyper-priors for the deep models in this experiment.
479
+
480
+ ![](images/199522987bde1e68c2c5a9b1ab536809267164b9cef5a6678193d79460d5c802.jpg)
481
+ Figure 9: NLPD of the baseline model and Hodge residual deep GP on the regression problem for the synthetic benchmark function in Figure 4a. Different subplots correspond to different training set sizes $N$ . The solid lines represent the mean, while the shaded areas represent the $\pm \sigma$ region around it, where $\sigma$ is the standard deviation; all statistics are computed over 10 randomised runs.
482
+
483
+ ![](images/dac17a2823706569d06dc9e9cba4c84e4396a52532eb5638b6e5d7657d843ab6.jpg)
484
+ Figure 10: MSE of the baseline model and Hodge residual deep GP on the regression problem for the synthetic benchmark function in Figure 4a. Different subplots correspond to different training set sizes $N$ . The solid lines represent the mean, while the shaded areas represent the $\pm \sigma$ region around it, where $\sigma$ is the standard deviation; all statistics are computed over 10 randomised runs.
485
+
486
+ Optimisation process Replicating the setup of Jaquier et al. (2022), we begin each Bayesian optimisation by sampling 5 initial observations uniformly at random on the hypersphere. At each step, we minimise the expected improvement acquisition function using the first-order geometry-aware gradient optimisation implemented in PYMANOPT (Townsend et al., 2016). We approximate the expected improvement acquisition function for deep models using Monte Carlo averages driven by pathwise sampling, as described at the very end of Section 3.3. After each optimisation step, we reinitialise the model and fit it to the data for 500 iterations, using the Adam optimiser (Kingma and Ba, 2015) with a learning rate of 0.01 for the deep model, and the BFGS optimiser for the shallow model.
487
+
488
+ Additional results and analysis In Figure 4b, we see that the variance of the logarithm of regret increases after switching from the shallow GP to the residual deep GP. This increase can be explained by variation between independent runs in the initial 180 points acquired by the shallow GP. The locations of these points determine the quality of fit of the deep GP. Indeed, if the points cluster near the true minimum, the deep model often achieves improvement in 1 or 2 iterations. With fewer points around the optimum, the fit is poorer and more iterations are required to make an improvement. This variance in the number of steps before a better observation is acquired increases the variance of the regret.
489
+
490
+ We also examined the performance of residual deep GPs on Bayesian optimisation of the irregular target function without the initial shallow GP stage. We equip the last layer of the deep GP with the same hyper-priors as the shallow GP; however, instead of using the BFGS optimiser, we use L-BFGS due to memory constraints. Note that in the original experiment we did not use hyper-priors or a quasi-Newton optimiser for the deep model. We use them here because we found that they are important for effective exploration with shallow GPs, and they serve the same purpose for our deep GP.
491
+
492
+ ![](images/177463f63c3c1d10f44b5e0a2cf2c134d2356af18032fc447823eaec846a793e.jpg)
493
+ Figure 11: MSE of residual deep GPs on the wind modelling task across three altitude levels. Solid lines give the mean MSE, while the shaded regions around it span $\pm 1$ standard deviation. Statistics are computed from 12 runs—one for each month of 2010.
494
+
495
+ ![](images/3e1974a59e2ae53138695f642e5a71b9ea3ddbc39a95de6bcefa5bba99e30f18.jpg)
496
+ Figure 12: Comparison of Bayesian optimisation performed with a shallow GP followed by a residual deep GP vs only with residual deep GP on the irregular target function. Solid lines show median logarithm of regret while the shaded areas extend one standard deviation above and below. Blue dotted lines show three optimisation runs with the deep GP only which did not escape a local minimum—these runs contribute strongly to the large variance of its regret. Grey dotted line indicates a transition from the shallow GP to the deep GP at iteration 180.
497
+
498
+ We report the logarithm of regret achieved by using the residual deep GP model from the very first iteration in Figure 12. We find that in 12 out of 15 runs, our model improves upon the shallow GP, often even before the 100th iteration. This was expected, since we have seen that residual deep GPs recover shallow solutions when data is not abundant enough to capture target complexity. We also see that the variance of the regret is considerably larger than for the shallow GP. This is largely caused by the 3 outlier runs, indicated with dotted blue lines, where the model gets stuck in a local minimum. As this experiment is fairly sensitive to the setup, this could be due to the fact that the baseline is an exact GP, while our model recovers a sparse GP, and the fact that our model uses the L-BFGS optimiser, while the exact model uses the BFGS optimiser. Thus, using deep GPs exclusively can be more sample-efficient than employing the initial shallow GP stage; however, with the current setup this appears to pose an increased risk of getting stuck in a local minimum.
499
+
500
+ # A.5 WIND INTERPOLATION ON THE GLOBE
501
+
502
+ Data To each location sampled along the track of the Aeolus satellite, we assign the wind velocity from the closest location present in the ERA5 dataset. Our test set is a grid of 5000 points, the same as in Appendix A.3. To each location in the test set, we also assign the wind velocity from the closest location in the ERA5 dataset.
503
+
504
+ Variational parameters For each GVF within the layers of the tested models we use 198 interdomain inducing variables. They correspond to all vector spherical harmonics up to, and including, the 10th negative eigenvalue of the Hodge Laplacian. This choice is arbitrary and simply serves to balance quality of fit with training time.
505
+
506
+ ![](images/3cf389aa6d6417726bf5ccc0eb30838d8aaeb0087c00fc95761c22736719a341.jpg)
507
+ Figure 13: MSE of residual deep GPs on the wind modelling task across the 12 months of 2010.
508
+
509
+ ![](images/c686c3cb8bae93a8fb8a5a6616d6b42dbc0f57363d923be029841e2522c4daf3.jpg)
510
+ Figure 14: NLPD of residual deep GPs on the wind modelling task across the 12 months of 2010.
511
+
512
+ Training and evaluation We fit the models to data using the Adam optimiser for 1000 iterations with the learning rate set to 0.01. To evaluate the models, we compute the MSE and NLPD via Monte Carlo sampling as described in Equation (16). We visualise the predictive uncertainty at a point $x_{i}$ , computed as $\frac{1}{10}\sum_{n = 1}^{10}\| \mathbf{S}_i^n\|$ , where $\mathbf{S}_i^n$ is the posterior covariance matrix of the last layer given the $n$ -th sample from the penultimate layer, and $\| \cdot \|$ is the Frobenius norm.
513
+
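The uncertainty summary above amounts to averaging Frobenius norms of sampled covariance matrices; a minimal sketch, with a helper name of our choosing:

```python
import numpy as np

def predictive_uncertainty(S_samples):
    """Mean Frobenius norm of sampled posterior covariance matrices.

    S_samples: array of shape (n_samples, d, d), the last-layer posterior
    covariance at one point, given each sample from the penultimate layer.
    """
    return np.mean([np.linalg.norm(S, ord="fro") for S in S_samples])
```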
514
+ Additional results The MSE comparison is presented in Figure 11. The individual results for each month are shown in Figure 14 and Figure 13. A larger version of Figure 5b as well as its analogs for other altitudes are presented in Figure 18. Figures 19 to 21 present the ground truth vector field, predictive mean and uncertainty, and one posterior sample for July 2010 for the three different altitudes.
515
+
516
+ Additionally, we compare our model with the baseline model from Section 4.1. The final layer of the baseline is a coordinate-frame GVF with independent Matérn 5/2 kernels, where the frame is given by the gradients of the spherical coordinates (with the singularities taken care of); this is motivated in part by the approach of Mallasto and Feragen (2018). The results are shown in Figure 15 and Figure 16.
517
+
518
+ # A.6 REGRESSION ON UCI DATASETS
519
+
520
+ Data In the mapping from $\mathbb{R}^d$ to $\mathbb{S}_d$ , we set $b$ to 1. This is done for all datasets and the bias is kept constant during training. Indeed, in our initial examinations, we found that learning the bias often seemed to result in overfitting and worse performance.
521
+
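The embedding used here, $x \mapsto (x, b) / \|(x, b)\|$, can be written in a few lines (a sketch; the function name is ours):

```python
import numpy as np

def embed(x, b=1.0):
    """Map x in R^d onto the unit sphere S_d by appending the bias b
    and normalising: x -> (x, b) / ||(x, b)||. The bias is kept fixed."""
    xb = np.append(x, b)
    return xb / np.linalg.norm(xb)
```

With $b = 1$ and $x = 0$, the embedding lands at the "north pole" $(0, \dots, 0, 1)$, and no input is ever mapped to the antipodal point.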
522
+ Training and evaluation We train each model for 5000 iterations using the Adam optimiser (Kingma and Ba, 2015) with the learning rate set to 0.01, such that the learning curves plateau for both Euclidean deep GPs and residual deep GPs.
523
+
524
+ ![](images/590d55bfacca3e87547ab7715810150c67554074122c674afb696139f2d00d10.jpg)
525
+ Figure 15: Comparison of the NLPD of residual deep GPs and the baseline model for wind field modelling across the $0.1\mathrm{km}$ , $1.0\mathrm{km}$ , and $5.0\mathrm{km}$ altitudes.
526
+
527
+ ![](images/c98a655baae72237f9f947ba603bd655aee3e49021ae4934e0d473d842514f57.jpg)
528
+ Figure 16: Comparison of the MSE of residual deep GPs and the baseline model for wind field modelling across the $0.1\mathrm{km}$ , $1.0\mathrm{km}$ , and $5.0\mathrm{km}$ altitudes.
529
+
530
+ Each iteration consists of a gradient step using a batch of data. When the size of the training set is smaller than 1000 data points—that is, for the Yacht, Concrete, and Energy datasets—a batch is the entire dataset. For the Kin8mn and Power datasets, whose training sets are considerably larger, a batch of 1000 data points is sampled with replacement from the training set.
531
+
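This batching scheme can be sketched as follows (a hypothetical helper, not the code used in our experiments):

```python
import numpy as np

def next_batch(rng, n_train, batch_size=1000):
    """Indices for one gradient step: the full dataset when it has fewer
    than `batch_size` points, otherwise `batch_size` indices sampled with
    replacement."""
    if n_train < batch_size:
        return np.arange(n_train)                     # Yacht, Concrete, Energy
    return rng.integers(0, n_train, size=batch_size)  # Kin8mn, Power
```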
532
+ Additional results The test NLPD and MSE of all models can be seen in Figure 17.
533
+
534
+ # B MORE ON INTERDOMAIN INDUCING VARIABLES ON MANIFOLDS
535
+
536
+ Empirically, we find that when the number of eigenfunctions $K$ used to approximate the manifold Matérn kernels exceeds the number of interdomain inducing variables, performance of residual deep GPs deteriorates. This can be surprising, since a higher $K$ yields a better approximation of the true manifold Matérn kernel.
537
+
538
+ A potential reason for this phenomenon may be identified by examining the kernel of a posterior sparse Matérn GP $f \sim \mathcal{GP}(\mu, k)$, or, more specifically, its whitened reparameterisation (see Appendix A.1), as the latter makes the equations cleaner. To this end, it will be helpful to define
539
+
540
+ $$
541
+ \Psi_ {i: j} (\cdot) = \left(\sqrt {\frac {\sigma^ {2}}{C _ {\nu , \kappa}} \Phi_ {\nu , \kappa} (\lambda_ {i})} \phi_ {i} (\cdot), \dots , \sqrt {\frac {\sigma^ {2}}{C _ {\nu , \kappa}} \Phi_ {\nu , \kappa} (\lambda_ {j})} \phi_ {j} (\cdot)\right) ^ {\top}, \tag {37}
542
+ $$
543
+
544
+ which allows us to express the Matérn kernel approximated with $K + 1$ eigenfunctions as $k_{\nu, \kappa, \sigma}(\cdot, \cdot') = \Psi_{0:K}(\cdot)^{\top} \Psi_{0:K}(\cdot')$ .
545
+
546
+ ![](images/d804aac1529620e49e868a2670952caeaa88e3cc454d12bd61364fc574ad257.jpg)
547
+ Figure 17: NLPD and MSE of residual deep GPs with spherical harmonic features and Euclidean deep GPs with inducing points on five UCI datasets. Residual deep GPs had their inputs mapped from $\mathbb{R}^d$ to $\mathbb{S}_d$ via $x\mapsto (x,b) / \| (x,b)\|$ . Solid lines give the mean MSE and shaded regions around them span $\pm 1$ standard deviation. All statistics were computed from 5 randomised runs.
548
+
549
+ Now, recalling Equation (19) and denoting the number of interdomain inducing variables by $M$, we see
550
+
551
+ $$
552
+ \begin{array}{rl} k(\cdot, \boldsymbol{\zeta}) \mathbf{L}^{-\top} & = \left(\phi_{0}(\cdot), \dots, \phi_{M}(\cdot)\right)^{\top} \operatorname{diag}\left(\frac{1}{\frac{\sigma^{2}}{C_{\nu,\kappa}} \Phi_{\nu,\kappa}(\lambda_{0})}, \dots, \frac{1}{\frac{\sigma^{2}}{C_{\nu,\kappa}} \Phi_{\nu,\kappa}(\lambda_{M})}\right)^{-1/2} \quad (38) \\ & = \left(\phi_{0}(\cdot), \dots, \phi_{M}(\cdot)\right)^{\top} \operatorname{diag}\left(\sqrt{\frac{\sigma^{2}}{C_{\nu,\kappa}} \Phi_{\nu,\kappa}(\lambda_{0})}, \dots, \sqrt{\frac{\sigma^{2}}{C_{\nu,\kappa}} \Phi_{\nu,\kappa}(\lambda_{M})}\right) \quad (39) \\ & = \boldsymbol{\Psi}_{0:M}(\cdot)^{\top}, \quad (40) \end{array}
553
+ $$
554
+
555
+ which we can substitute into Equation (27)
556
+
557
+ $$
558
+ \begin{array}{rl} k_{\boldsymbol{\zeta}, \mathbf{m}', \mathbf{S}'}(\cdot, \cdot') & = k(\cdot, \cdot') - k(\cdot, \boldsymbol{\zeta}) \mathbf{L}^{-\top} (\mathbf{I} - \mathbf{S}') \mathbf{L}^{-1} k(\boldsymbol{\zeta}, \cdot') \quad (41) \\ & = \boldsymbol{\Psi}_{0:K}^{\top}(\cdot) \boldsymbol{\Psi}_{0:K}(\cdot') - \boldsymbol{\Psi}_{0:M}^{\top}(\cdot) \left(\mathbf{I} - \mathbf{S}'\right) \boldsymbol{\Psi}_{0:M}(\cdot') \quad (42) \\ & = \boldsymbol{\Psi}_{M+1:K}^{\top}(\cdot) \boldsymbol{\Psi}_{M+1:K}(\cdot') + \boldsymbol{\Psi}_{0:M}^{\top}(\cdot) \mathbf{S}' \boldsymbol{\Psi}_{0:M}(\cdot'). \quad (43) \end{array}
559
+ $$
560
+
561
+ For $K = M$, the posterior covariance reduces to the second term only, which is determined by the kernel hyperparameters and the variational covariance matrix. However, for $K > M$, the first term contributes additional variance that can only be reduced by changing the hyperparameters of the prior, such as the length scale and prior variance, rather than the variational parameters $\mathbf{m}'$ and $\mathbf{S}'$.
562
+
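The identity between Equations (42) and (43) can be checked numerically with random stand-ins for the spectral weights $\frac{\sigma^2}{C_{\nu,\kappa}}\Phi_{\nu,\kappa}(\lambda_i)$ and eigenfunctions $\phi_i$ (a sketch; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, P = 12, 5, 7  # K+1 basis functions, whitened coords 0..M, P test points

# Psi[i, p] = sqrt(spectrum_i) * phi_i(x_p); Psi[: M + 1] plays Psi_{0:M}.
spectrum = rng.uniform(0.1, 1.0, size=K + 1)
phi = rng.normal(size=(K + 1, P))
Psi = np.sqrt(spectrum)[:, None] * phi

# Variational covariance S' over the first M + 1 coefficients.
A = rng.normal(size=(M + 1, M + 1))
S = A @ A.T

k_full = Psi.T @ Psi  # truncated prior kernel, eq. with K + 1 eigenfunctions
posterior = k_full - Psi[: M + 1].T @ (np.eye(M + 1) - S) @ Psi[: M + 1]  # (42)
decomposed = (Psi[M + 1 :].T @ Psi[M + 1 :]
              + Psi[: M + 1].T @ S @ Psi[: M + 1])                        # (43)

assert np.allclose(posterior, decomposed)
```

The first term of the decomposition is untouched by `S`, which is exactly the irreducible variance discussed above.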
563
+ With this particular setup, there are two forces at play during optimisation: one which lowers the prior variance to match the posterior variance, and another which modifies $\mathbf{S}'$ to approximate the true posterior covariance by introducing dependencies between basis coefficients. In practice, we observe that this can lead to difficulty when the term $\Psi_{0:M}^{\top}(\cdot)\mathbf{S}'\Psi_{0:M}(\cdot')$ is already at the desired variance but $\mathbf{S}'$ must still be adjusted to approximate the true covariance, while $\Psi_{M+1:K}^{\top}(\cdot)\Psi_{M+1:K}(\cdot')$ is still too large. In this case, the first mechanism pushes $\Psi_{M+1:K}^{\top}(\cdot)\Psi_{M+1:K}(\cdot')$ downwards by lowering the prior variance $\sigma^2$, which necessarily also reduces $\Psi_{0:M}^{\top}(\cdot)\mathbf{S}'\Psi_{0:M}(\cdot')$; consequently, $\mathbf{S}'$ must increase to compensate. The result is a tug-of-war between the variational parameters and the kernel hyperparameters, which appears to make optimisation difficult, and is thus a possible reason for the drop in performance when $K > M$.
564
+
565
+ One remedy is to set $K = M$, which is what we do in our experiments. However, since this comes at some cost to the kernel approximation, we propose that future work consider an extended variational family which, in our preliminary tests, helped mitigate this issue at minimal cost. Our extension expands the $\mathbf{S}'$ matrix with a parametrised diagonal $\mathbf{D}'$ corresponding to the $K - M$ eigenfunctions used in the kernel but not previously included in the variational family, giving
566
+
567
+ $$
568
+ k _ {\zeta , \mathbf {m} ^ {\prime}, \mathbf {S} ^ {\prime}} (\cdot , \cdot^ {\prime}) \tag {44}
569
+ $$
570
+
571
+ $$
572
+ = \boldsymbol {\Psi} _ {M + 1: K} ^ {\top} (\cdot) \boldsymbol {\Psi} _ {M + 1: K} (\cdot^ {\prime}) + \boldsymbol {\Psi} _ {0: M} ^ {\top} (\cdot) \mathbf {S} ^ {\prime} \boldsymbol {\Psi} _ {0: M} (\cdot^ {\prime}) + \boldsymbol {\Psi} _ {M + 1: K} ^ {\top} (\cdot) \left(\mathbf {D} ^ {\prime} - \mathbf {I}\right) \boldsymbol {\Psi} _ {M + 1: K} (\cdot^ {\prime}) \tag {45}
573
+ $$
574
+
575
+ $$
576
+ = \boldsymbol {\Psi} _ {0: M} ^ {\top} (\cdot) \mathbf {S} ^ {\prime} \boldsymbol {\Psi} _ {0: M} (\cdot^ {\prime}) + \boldsymbol {\Psi} _ {M + 1: K} ^ {\top} (\cdot) \mathbf {D} ^ {\prime} \boldsymbol {\Psi} _ {M + 1: K} (\cdot^ {\prime}). \tag {46}
577
+ $$
578
+
579
+ This eliminates the aforementioned conflict, allowing the variational parameters to affect both terms in a similar way.
580
+
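The equality of Equations (45) and (46) admits the same kind of numerical check (random stand-ins; names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
K, M, P = 12, 5, 7
spectrum = rng.uniform(0.1, 1.0, size=K + 1)
Psi = np.sqrt(spectrum)[:, None] * rng.normal(size=(K + 1, P))

A = rng.normal(size=(M + 1, M + 1))
S = A @ A.T                                      # variational covariance S'
D = np.diag(rng.uniform(0.5, 2.0, size=K - M))   # parametrised diagonal D'

lead = Psi[M + 1 :].T @ Psi[M + 1 :]
extended = (lead + Psi[: M + 1].T @ S @ Psi[: M + 1]
            + Psi[M + 1 :].T @ (D - np.eye(K - M)) @ Psi[M + 1 :])        # (45)
compact = (Psi[: M + 1].T @ S @ Psi[: M + 1]
           + Psi[M + 1 :].T @ D @ Psi[M + 1 :])                           # (46)

assert np.allclose(extended, compact)
```

Shrinking the entries of `D` now reduces the tail term without touching the prior variance, which is the point of the extension.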
581
+ Ostensibly, when $K - M$ is large, this extension significantly increases the computational cost. However, as we have seen in Appendix A.1, this can be avoided by applying the addition theorem, which is possible if the parameters of $\mathbf{D}'$ corresponding to eigenfunctions with the same eigenvalue are kept equal. With this method, the number of additional variational parameters is minimal and, comparing our extended variational family with its original variant, there is practically no increase in computation time, as $\Psi_{M+1:K}^{\top}(\cdot)\mathbf{D}'\Psi_{M+1:K}(\cdot')$ simply replaces $\Psi_{M+1:K}^{\top}(\cdot)\Psi_{M+1:K}(\cdot')$.
582
+
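The tying of $\mathbf{D}'$ across eigenfunctions sharing an eigenvalue can be sketched as follows (a hypothetical fragment; for scalar harmonics on $\mathbb{S}^2$, the degree-$l$ eigenvalue has multiplicity $2l + 1$):

```python
import numpy as np

# One free parameter per distinct eigenvalue, expanded to the full diagonal
# of D' according to each eigenvalue's multiplicity.
degrees = np.array([3, 4, 5])                 # degrees in the M+1..K tail
multiplicities = 2 * degrees + 1              # 7, 9, 11 on the 2-sphere
d_per_eigenvalue = np.array([0.9, 0.7, 0.5])  # tied variational parameters

D_diag = np.repeat(d_per_eigenvalue, multiplicities)
```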
583
+ # C RELATION TO WRAPPED GAUSSIAN PROCESSES
584
+
585
+ The wrapped GPs of Mallasto and Feragen (2018) model a function of Euclidean data taking values on a Riemannian manifold. They are constructed by choosing a base point function, which assigns a point on the manifold to each Euclidean input point, and a coordinate-frame GVF prior (although the latter is implicit in the original paper). Inference is done by lifting the training labels from the manifold to the tangent spaces at the base points assigned to their corresponding inputs, performing inference in the tangent space, and projecting the posterior GVF onto the manifold using the exponential map. The base point function is chosen either as a constant mapping to a point minimising the squared distance from the training points (i.e. the empirical Fréchet mean) or as an auxiliary regression function.
586
+
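The lift-and-project steps can be illustrated with the exponential and logarithmic maps of the unit sphere (a sketch; function names are ours):

```python
import numpy as np

def exp_map(p, v):
    """Exponential map on the unit sphere: shoot from p along tangent v."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return p
    return np.cos(t) * p + np.sin(t) * v / t

def log_map(p, q):
    """Logarithmic map: lift q to the tangent space at p (inverse of exp)."""
    c = np.clip(p @ q, -1.0, 1.0)
    t = np.arccos(c)                 # geodesic distance between p and q
    if t < 1e-12:
        return np.zeros_like(p)
    return t * (q - c * p) / np.sin(t)
```

A wrapped GP lifts each label $q$ to $\log_{b(x)}(q)$ at its base point $b(x)$, regresses these tangent vectors, and pushes the posterior back through $\exp$.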
587
+ We derived the manifold-to-manifold layers of our model as a generalisation of the linear mean construction of Salimbeni and Deisenroth (2017); however, we may also build them based on the ideas of wrapped GPs. The key difference is that our construction is manifold-input—whereas wrapped GPs are Euclidean-input—and the input and output manifolds are identical, allowing layers to be composed. Thus, the first non-trivial modification is to replace the Euclidean domain with the manifold domain. This requires adapting the GVF from Euclidean kernels to manifold kernels. Furthermore, instead of using an auxiliary regression function or Fréchet mean, the natural choice yielding a generalisation of the linear mean is the identity map for the base point function. This yields a manifold-to-manifold GP; however, to enable doubly stochastic variational inference, the next modification is to replace the exact GVFs with sparse GVFs using inducing points or interdomain inducing variables. With these modifications, we obtain our manifold-to-manifold GPs, which can be composed sequentially to yield residual deep GPs.
588
+
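A single manifold-to-manifold layer on the sphere with the identity base point, $x_{l+1} = \exp_{x_l}(f_l(x_l))$, can then be sketched as follows (a deterministic stand-in replaces the sampled sparse GVF, so this only illustrates the composition, not inference):

```python
import numpy as np

def residual_layer(x, tangent_field):
    """One manifold-to-manifold layer with the identity base point:
    x_{l+1} = exp_{x_l}(f_l(x_l)), with f_l a tangent vector field."""
    v = tangent_field(x)
    v = v - (v @ x) * x                  # project onto the tangent space at x
    t = np.linalg.norm(v)
    if t < 1e-12:
        return x
    return np.cos(t) * x + np.sin(t) * v / t  # exponential map on the sphere

# A hypothetical deterministic stand-in for a (sampled) sparse GVF layer.
field = lambda x: 0.1 * np.array([x[1], -x[0], 0.0])

x = np.array([1.0, 0.0, 0.0])
for _ in range(3):                       # compose three layers
    x = residual_layer(x, field)
```

Because each layer's output is again a point on the sphere, layers compose freely, which is exactly what enables the residual deep GP construction.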
589
+ ![](images/6844a7533f4246270e84f736015ca745ce48ebb7792c64de621b31f72d633dcf.jpg)
590
+ (a) $5.0\mathrm{km}$ altitude.
591
+
592
+ ![](images/43db2ae83e68a3b1d134d23c1db20c353db9aa40d6417e02c4162bc78dfa9627.jpg)
593
+ (b) $2.0\mathrm{km}$ altitude.
594
+
595
+ ![](images/43f95006087cf6a15c5bddeacfc868ea87f4dba37d295183d01997d50a980f25.jpg)
596
+ (c) $0.1\mathrm{km}$ altitude.
597
+ Figure 18: Difference between the prediction and the ground truth wind velocities, shown as black arrows, and the predictive uncertainty, shown using a colour scale from purple (lowest) to yellow (highest), for a 3-layer residual deep GP and wind velocities for July 2010, at three altitude levels.
598
+
599
+ ![](images/e01a71aa244bc20b4084eab96552c4ba21f802a9075d8d78a12906cb5e5e1d53.jpg)
600
+ (a) Ground truth.
601
+
602
+ ![](images/a553fbef0dd114624f0df1db6217cea50309d04d249fa9dd3273652dcc733f36.jpg)
603
+ (b) Predictive mean and uncertainty.
604
+
605
+ ![](images/36c95504417f78895a68fa4c6e6d6eb368f0bbe2439f9ad2adb945369220a4dc.jpg)
606
+ (c) Posterior sample.
607
+ Figure 19: Ground truth wind velocity data at an altitude of $5.0\mathrm{km}$ from July 2010, and the corresponding posterior mean, uncertainty, and sample from a 3-layer residual deep GP. The mean and sample are shown as black arrows, while the predictive uncertainty is shown using a colour scale from purple (lowest) to yellow (highest).
608
+
609
+ ![](images/fe3dc047b5856cb9eb4b9a6eaef48cc0ebb76ffde65f106744620eb1e160f61c.jpg)
610
+ (a) Ground truth.
611
+
612
+ ![](images/33c9f59f9268680e7ab29373e5a988c67fcc9635ceb142f4d4da95de85fc0dea.jpg)
613
+ (b) Predictive mean and uncertainty.
614
+
615
+ ![](images/b5cf63570c21f8548e507c790c127250abb079bae5a35d259635893e45b69787.jpg)
616
+ (c) Posterior sample.
617
+ Figure 20: Ground truth wind velocity data at an altitude of $2.0\mathrm{km}$ from July 2010, and the corresponding posterior mean, uncertainty, and sample from a 3-layer residual deep GP. The mean and sample are shown as black arrows, while the predictive uncertainty is shown using a colour scale from purple (lowest) to yellow (highest).
618
+
619
+ ![](images/89828e28511f5f69f2e898e99fdfa12fccf96035e7c46cd9c2ab02ded42e0c34.jpg)
620
+ (a) Ground truth.
621
+
622
+ ![](images/69bb0b9e3a07b0c876921c26cda2bfe5d33f3e4908aab92918499e7fb413f4eb.jpg)
623
+ (b) Predictive mean and uncertainty.
624
+
625
+ ![](images/0a8b7882edfa4b7dccdf652e89e0cfae42a1279fa121380cebe54fc8544aee31.jpg)
626
+ (c) Posterior sample.
627
+ Figure 21: Ground truth wind velocity data at an altitude of $0.1\mathrm{km}$ from July 2010, and the corresponding posterior mean, uncertainty, and sample from a 3-layer residual deep GP. The mean and sample are shown as black arrows, while the predictive uncertainty is shown using a colour scale from purple (lowest) to yellow (highest).
ICLR/2025/Residual Deep Gaussian Processes on Manifolds/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dfe476f87411c29df746896f598cccf4efb28ad23b72599df3947d974f6791f7
3
+ size 2135861
ICLR/2025/Residual Deep Gaussian Processes on Manifolds/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:471f50d18f788bf895ec7a41d9728a4fe536c7da99eadaeaad4b4c3fc323d469
3
+ size 892802
ICLR/2025/Restructuring Vector Quantization with the Rotation Trick/44dd8cbd-5744-4d91-9f29-85768a064ef2_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:be559c34d7a7e1db497da1827ef4ce6ff53ab3b7674aa4c901fefc8675161bb8
3
+ size 225829
ICLR/2025/Restructuring Vector Quantization with the Rotation Trick/44dd8cbd-5744-4d91-9f29-85768a064ef2_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f69058ca34dbb23311eb182fa357e3c8c342624426a2ff97cb34a2a127face59
3
+ size 259763
ICLR/2025/Restructuring Vector Quantization with the Rotation Trick/44dd8cbd-5744-4d91-9f29-85768a064ef2_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fbd8f06d17624ece4169198271fc84f999735ebd11898213e82cb4fbe196e1b4
3
+ size 12782590
ICLR/2025/Restructuring Vector Quantization with the Rotation Trick/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ICLR/2025/Restructuring Vector Quantization with the Rotation Trick/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3dd2f08be17d0c0984d4abd2c50e06e5e2b7e5c97f790d3f1bdc48eea2d8b3dc
3
+ size 2380221
ICLR/2025/Restructuring Vector Quantization with the Rotation Trick/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e4f48237d73e52a7023f8fc76316716a011706b2757282ffdcc3a12a3cca7c45
3
+ size 1423832
ICLR/2025/Rethinking Reward Modeling in Preference-based Large Language Model Alignment/5b2ee4cc-09f5-403a-a3c3-3f13f822cf03_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bd31da4cd0f9ed4afc9652a1a9a6a6f6f8ee42ce2c8e782a87d3f52783e6d326
3
+ size 251942
ICLR/2025/Rethinking Reward Modeling in Preference-based Large Language Model Alignment/5b2ee4cc-09f5-403a-a3c3-3f13f822cf03_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:87a888eddaba84f856a3ac5171316d54cdf7b69b15a9719e64d8ff446748fb4f
3
+ size 283598
ICLR/2025/Rethinking Reward Modeling in Preference-based Large Language Model Alignment/5b2ee4cc-09f5-403a-a3c3-3f13f822cf03_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b121eb54ec4b1f23881bcd6ffc043587049cc45fc3ed1fc08a50ac5fea986126
3
+ size 10240254
ICLR/2025/Rethinking Reward Modeling in Preference-based Large Language Model Alignment/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ICLR/2025/Rethinking Reward Modeling in Preference-based Large Language Model Alignment/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:38d6c170cc638d61b196cbac737f9371dbf2c629f9b9c267bb94b8fd55a5f882
3
+ size 3204349
ICLR/2025/Rethinking Reward Modeling in Preference-based Large Language Model Alignment/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0add34c64643f5da62cd6592ad3fe1f3f6b00a9e2c41b241ccf6158c2a6a6a82
3
+ size 1459408
ICLR/2025/Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation/050aa0c8-0272-4b38-a8cd-bc15e39bbc4b_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:996b32491b3e270452b84ddfe7bcf3e8822a8055bada8180d9a92253fcc01ad6
3
+ size 145581
ICLR/2025/Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation/050aa0c8-0272-4b38-a8cd-bc15e39bbc4b_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:befba0e37e18ec1d3e55be129a28fafd2880a72a718a9d33bdf33b841092e13d
3
+ size 175363
ICLR/2025/Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation/050aa0c8-0272-4b38-a8cd-bc15e39bbc4b_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:77b57eebb813beeed86e8c856b95b6fff91c00f77ab209d50ab1f1966e2aadd9
3
+ size 8081128
ICLR/2025/Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ICLR/2025/Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a0a32e987760abd84b1e2380e05851a8c62df28d4e22de893137e428b083f6df
3
+ size 2599879
ICLR/2025/Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0be4dd385a8274d9e7e351899675a7187f025e8d0580782930995100230c0696
3
+ size 654793
ICLR/2025/Retrieval Head Mechanistically Explains Long-Context Factuality/b88a8af7-9bc9-4014-ad52-3e112ce4bc09_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b0f99ef872ab69b760bfba652688190cf3ea9237a35e28faf3f4a1e5839dc24b
3
+ size 76231
ICLR/2025/Retrieval Head Mechanistically Explains Long-Context Factuality/b88a8af7-9bc9-4014-ad52-3e112ce4bc09_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:30c71c2fbc834488c8b66526ef91a102f9e8cdfdbfd63717b34cebc044c614e9
3
+ size 87393