SlowGuess committed
Commit 248f34c · verified · 1 parent: 804fc69

Add Batch 11d10bbb-9fcb-4cd2-8031-e49867b9e9f6

This view is limited to 50 files because the commit contains too many changes.

Files changed (50)
  1. abenchmarkofcategoricalencodersforbinaryclassification/8e505528-5365-4b20-a56f-aa8837553833_content_list.json +3 -0
  2. abenchmarkofcategoricalencodersforbinaryclassification/8e505528-5365-4b20-a56f-aa8837553833_model.json +3 -0
  3. abenchmarkofcategoricalencodersforbinaryclassification/8e505528-5365-4b20-a56f-aa8837553833_origin.pdf +3 -0
  4. abenchmarkofcategoricalencodersforbinaryclassification/full.md +475 -0
  5. abenchmarkofcategoricalencodersforbinaryclassification/images.zip +3 -0
  6. abenchmarkofcategoricalencodersforbinaryclassification/layout.json +3 -0
  7. acaseforreframingautomatedmedicalimageclassificationassegmentation/5bd74e27-4907-4270-b043-70a2a6f2f565_content_list.json +3 -0
  8. acaseforreframingautomatedmedicalimageclassificationassegmentation/5bd74e27-4907-4270-b043-70a2a6f2f565_model.json +3 -0
  9. acaseforreframingautomatedmedicalimageclassificationassegmentation/5bd74e27-4907-4270-b043-70a2a6f2f565_origin.pdf +3 -0
  10. acaseforreframingautomatedmedicalimageclassificationassegmentation/full.md +0 -0
  11. acaseforreframingautomatedmedicalimageclassificationassegmentation/images.zip +3 -0
  12. acaseforreframingautomatedmedicalimageclassificationassegmentation/layout.json +3 -0
  13. afastheuristictooptimizetimespacetradeoffforlargemodels/993d6f42-1b27-49de-bca5-48652f44f02e_content_list.json +3 -0
  14. afastheuristictooptimizetimespacetradeoffforlargemodels/993d6f42-1b27-49de-bca5-48652f44f02e_model.json +3 -0
  15. afastheuristictooptimizetimespacetradeoffforlargemodels/993d6f42-1b27-49de-bca5-48652f44f02e_origin.pdf +3 -0
  16. afastheuristictooptimizetimespacetradeoffforlargemodels/full.md +482 -0
  17. afastheuristictooptimizetimespacetradeoffforlargemodels/images.zip +3 -0
  18. afastheuristictooptimizetimespacetradeoffforlargemodels/layout.json +3 -0
  19. agenerativemodelofthehippocampalformationtrainedwiththetadrivenlocallearningrules/60d9acb9-0b15-470c-8ed1-f8a10c11a7f3_content_list.json +3 -0
  20. agenerativemodelofthehippocampalformationtrainedwiththetadrivenlocallearningrules/60d9acb9-0b15-470c-8ed1-f8a10c11a7f3_model.json +3 -0
  21. agenerativemodelofthehippocampalformationtrainedwiththetadrivenlocallearningrules/60d9acb9-0b15-470c-8ed1-f8a10c11a7f3_origin.pdf +3 -0
  22. agenerativemodelofthehippocampalformationtrainedwiththetadrivenlocallearningrules/full.md +278 -0
  23. agenerativemodelofthehippocampalformationtrainedwiththetadrivenlocallearningrules/images.zip +3 -0
  24. agenerativemodelofthehippocampalformationtrainedwiththetadrivenlocallearningrules/layout.json +3 -0
  25. amassivescalesemanticsimilaritydatasetofhistoricalenglish/8cee2e8f-6f6e-4d50-898a-1be35c7324f7_content_list.json +3 -0
  26. amassivescalesemanticsimilaritydatasetofhistoricalenglish/8cee2e8f-6f6e-4d50-898a-1be35c7324f7_model.json +3 -0
  27. amassivescalesemanticsimilaritydatasetofhistoricalenglish/8cee2e8f-6f6e-4d50-898a-1be35c7324f7_origin.pdf +3 -0
  28. amassivescalesemanticsimilaritydatasetofhistoricalenglish/full.md +307 -0
  29. amassivescalesemanticsimilaritydatasetofhistoricalenglish/images.zip +3 -0
  30. amassivescalesemanticsimilaritydatasetofhistoricalenglish/layout.json +3 -0
  31. ameasuretheoreticaxiomatisationofcausality/7f33c9b5-94b0-4aba-b49e-f07f33b2a4ac_content_list.json +3 -0
  32. ameasuretheoreticaxiomatisationofcausality/7f33c9b5-94b0-4aba-b49e-f07f33b2a4ac_model.json +3 -0
  33. ameasuretheoreticaxiomatisationofcausality/7f33c9b5-94b0-4aba-b49e-f07f33b2a4ac_origin.pdf +3 -0
  34. ameasuretheoreticaxiomatisationofcausality/full.md +0 -0
  35. ameasuretheoreticaxiomatisationofcausality/images.zip +3 -0
  36. ameasuretheoreticaxiomatisationofcausality/layout.json +3 -0
  37. ametadatadrivenapproachtounderstandgraphneuralnetworks/aec2d363-b0ba-4098-9d3d-7d04bf3f82e5_content_list.json +3 -0
  38. ametadatadrivenapproachtounderstandgraphneuralnetworks/aec2d363-b0ba-4098-9d3d-7d04bf3f82e5_model.json +3 -0
  39. ametadatadrivenapproachtounderstandgraphneuralnetworks/aec2d363-b0ba-4098-9d3d-7d04bf3f82e5_origin.pdf +3 -0
  40. ametadatadrivenapproachtounderstandgraphneuralnetworks/full.md +663 -0
  41. ametadatadrivenapproachtounderstandgraphneuralnetworks/images.zip +3 -0
  42. ametadatadrivenapproachtounderstandgraphneuralnetworks/layout.json +3 -0
  43. amultimodalglobalinstancetrackingbenchmarkmgitbetterlocatingtargetincomplexspatiotemporalandcausalrelationship/3923ddea-b059-47c8-8f3f-111e24127343_content_list.json +3 -0
  44. amultimodalglobalinstancetrackingbenchmarkmgitbetterlocatingtargetincomplexspatiotemporalandcausalrelationship/3923ddea-b059-47c8-8f3f-111e24127343_model.json +3 -0
  45. amultimodalglobalinstancetrackingbenchmarkmgitbetterlocatingtargetincomplexspatiotemporalandcausalrelationship/3923ddea-b059-47c8-8f3f-111e24127343_origin.pdf +3 -0
  46. amultimodalglobalinstancetrackingbenchmarkmgitbetterlocatingtargetincomplexspatiotemporalandcausalrelationship/full.md +639 -0
  47. amultimodalglobalinstancetrackingbenchmarkmgitbetterlocatingtargetincomplexspatiotemporalandcausalrelationship/images.zip +3 -0
  48. amultimodalglobalinstancetrackingbenchmarkmgitbetterlocatingtargetincomplexspatiotemporalandcausalrelationship/layout.json +3 -0
  49. aneuralcollapseperspectiveonfeatureevolutioningraphneuralnetworks/854e4251-0979-4ea7-b7a7-1c013dffffcd_content_list.json +3 -0
  50. aneuralcollapseperspectiveonfeatureevolutioningraphneuralnetworks/854e4251-0979-4ea7-b7a7-1c013dffffcd_model.json +3 -0
abenchmarkofcategoricalencodersforbinaryclassification/8e505528-5365-4b20-a56f-aa8837553833_content_list.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:21317d7c9e966f76ed8ef42f0c7c3eddda71064ca92131a34cb915ce3415ac05
size 105895
abenchmarkofcategoricalencodersforbinaryclassification/8e505528-5365-4b20-a56f-aa8837553833_model.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e0f8f4e232de4db77946a96e0a103aa7f1b827423cba53b7366d977441a2d076
size 129335
abenchmarkofcategoricalencodersforbinaryclassification/8e505528-5365-4b20-a56f-aa8837553833_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:31fed49e9aaf2251d270ad7042cf84db6ac7ae62b404b27322358f4ed5143888
size 1879799
abenchmarkofcategoricalencodersforbinaryclassification/full.md ADDED
@@ -0,0 +1,475 @@
# A benchmark of categorical encoders for binary classification

Federico Matteucci<sup>1</sup>, Vadim Arzamasov<sup>1</sup>, and Klemens Böhm<sup>1</sup>

$^{1}$ Karlsruhe Institute of Technology {federico.matteucci, vadim.arzamasov, klemens.boehm}@kit.edu

# Abstract

Categorical encoders transform categorical features into numerical representations that are indispensable for a wide range of machine learning models. Existing encoder benchmark studies lack generalizability because of their limited choice of 1. encoders, 2. experimental factors, and 3. datasets. Additionally, inconsistencies arise from the adoption of varying aggregation strategies. This paper is the most comprehensive benchmark of categorical encoders to date, including an extensive evaluation of 32 configurations of encoders from diverse families, with 48 combinations of experimental factors, and on 50 datasets. The study shows the profound influence of dataset selection, experimental factors, and aggregation strategies on the benchmark's conclusions, aspects disregarded in previous encoder benchmarks. Our code is available at https://github.com/DrCohomology/EncoderBenchmarking.

# 1 Introduction

Learning from categorical data poses additional challenges compared to numerical data, due to a lack of inherent structure such as order, distance, or kernel. The conventional solution is to transform categorical attributes into a numerical form, i.e., encode them, before feeding them to a downstream Machine Learning (ML) model. Various encoders have been proposed, followed by several benchmark studies. However, their combined results remain inconclusive, as we now describe.

Many factors impact the generalizability [26] of a benchmark of encoders, including: 1. the compared encoders, 2. the number of datasets, 3. the quality metrics, 4. the ML models used, and 5. the tuning strategy. We also hypothesize that 6. the aggregation strategy used to summarize the results of multiple experiments may affect the conclusions of a study. Existing encoder benchmarks, reviewed in Section 2, only partially control for these factors. First, none of these studies uses more than 15 datasets of a given type (regression or classification). Second, despite these studies collectively covering a substantial number of encoders, they often focus on specific encoder families, resulting in comparison gaps between the best encoders. For instance, the best-performing encoders from [28] (Cross-Validated GLMM) and [44] (Mean-Target) have not been studied together yet. Third, the results of existing studies are often not comparable due to variations in the selected quality metrics. For instance, [28] measures quality with ROC AUC, [4] with average precision, and [41] with accuracy. Fourth, existing studies tune ML models in different ways, yielding incompatible evaluations. For instance, [28, 41] do not tune, while [4, 5, 8, 44] tune but do not specify if they tune the ML model on encoded data or if they tune the entire ML pipeline. Last, no benchmark study of categorical encoders explores the impact of aggregation strategies, which is substantial according to our experiments. For instance, [5] ranks the encoders by average ranking across all datasets, while [28] computes the median ranking with Kemeny-Young aggregation [46].

This study offers a taxonomy and a comprehensive experimental comparison of encoders for binary classification, taking into account the factors just mentioned. In particular, we consider: 1. 32 encoder configurations, including all of the best-performing ones from the literature and three novel encoders; 2. 50 datasets for binary classification; 3. four quality metrics; 4. five widely used ML models; 5. three tuning strategies; 6. 10 aggregation strategies gathered from existing categorical encoder benchmarks and from benchmarking methodology studies [27, 9]. This allows us to provide novel insights into the sensitivity of experimental results to experimental factors. In particular, we demonstrate how replicability [26] may not be ensured even for studies conducted on up to 25 datasets. For those combinations of experimental factors that show reproducible results, we isolate and recommend the best encoders.

Paper outline: Section 2 reviews existing works, Section 3 presents a taxonomy of encoder families, Section 4 describes the experimental setup, and Section 5 features the results.

Table 1: Related work on categorical encoders for binary classification.

<table><tr><td rowspan="2" colspan="2"># Binary classification datasets</td><td>Ours</td><td>[28]</td><td>[4]</td><td>[8]</td><td>[5]</td><td>[44]</td><td>[41]</td></tr><tr><td>50</td><td>10</td><td>5</td><td>3</td><td>2</td><td>2</td><td>6</td></tr><tr><td colspan="2"># ML models</td><td>5</td><td>5</td><td>1</td><td>4</td><td>2</td><td>1</td><td>5</td></tr><tr><td rowspan="8">Encoder family</td><td>Identifier</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr><tr><td>Frequency-based</td><td>✓</td><td>✓</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Contrast</td><td>✓</td><td></td><td></td><td></td><td></td><td></td><td>✓</td></tr><tr><td>Similarity</td><td>✓</td><td></td><td>✓</td><td></td><td>✓</td><td></td><td></td></tr><tr><td>Simple target</td><td>✓</td><td>✓</td><td></td><td>✓</td><td></td><td></td><td>✓</td></tr><tr><td>Binning</td><td>✓</td><td>✓</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Smoothing</td><td>✓</td><td>✓</td><td></td><td></td><td>✓</td><td></td><td>✓</td></tr><tr><td>Data-constraining</td><td>✓</td><td>✓</td><td></td><td></td><td></td><td></td><td>✓</td></tr><tr><td rowspan="3">Quality metric</td><td>Precision-recall based</td><td>✓</td><td></td><td>✓</td><td>✓</td><td>✓</td><td></td><td></td></tr><tr><td>Balanced accuracy</td><td>✓</td><td>✓</td><td></td><td>✓</td><td></td><td></td><td></td></tr><tr><td>Accuracy</td><td>✓</td><td></td><td></td><td>✓</td><td></td><td>✓</td><td>✓</td></tr><tr><td rowspan="3">Tuning strategy</td><td>Full pipeline tuning</td><td>✓</td><td></td><td></td><td>?</td><td>✓</td><td>✓*</td><td></td></tr><tr><td>Model tuning</td><td>✓</td><td></td><td>✓</td><td></td><td></td><td></td><td></td></tr><tr><td>No tuning</td><td>✓</td><td>✓</td><td></td><td></td><td></td><td></td><td>✓</td></tr><tr><td rowspan="3">Aggregation strategy</td><td>Heuristic</td><td>✓</td><td></td><td></td><td></td><td>✓</td><td></td><td></td></tr><tr><td>Friedman-Nemenyi</td><td>✓</td><td></td><td>✓</td><td></td><td></td><td>✓</td><td></td></tr><tr><td>Kemeny-Young</td><td>✓</td><td>✓</td><td></td><td></td><td></td><td></td><td></td></tr></table>

# 2 Related work

**Benchmarks of encoders.** We focus on binary classification tasks, as they offer a wider range of compatible encoders; this also allowed us to conduct a deeper replicability analysis while keeping the computation feasible. Table 1 summarizes the related work. The other benchmarks often consider few datasets and either do not tune the ML model or do not describe the tuning procedure. This limits their applicability and generalizability. Additionally, there are substantial differences in the experimental settings across articles, including the encoders considered, quality metrics employed, and aggregation strategies used to interpret results. Hence, the comparability of these findings is limited. For instance, [28] recommends a data-constraining encoder, [41] both data-constraining and contrast encoders, [5, 4] similarity encoders, [8] an identifier encoder, and [44] a simple target encoder. Other benchmarks of encoders are [36], which focuses on regression tasks and faces similar issues, and [30, 14, 20], which use only a single dataset.

**Analysis of benchmarks.** When designing our benchmark, we adhered to the best practices discussed in the literature on benchmark design and analysis. In particular, [27] studies how choices of experimental factors impact the experimental results and advocates for benchmarks that consider a large variety of factors. Similarly, [9] suggests guidelines to mitigate the inconsistencies in the choices of data and evaluation metric. Finally, [2] proposes a methodology to account for variance in the design choices (randomization of sources of variation) and post-processing of the experimental results (significant and meaningful improvements).

# 3 Taxonomy of encoders

This section presents the essential terminology and discusses the considered encoders and their corresponding families. Appendix 7.1 provides formal and detailed descriptions of the encoders.

# 3.1 Notation and terminology

Consider a tabular dataset with target $y$ taking values in $\{0,1\}$, and let $\mathbf{A}$ be one of its attributes (columns). $\mathbf{A}$ is categorical if it represents qualitative properties and takes values in a finite domain $\Omega_A$. Each $\omega \in \Omega_A$ is a level of $\mathbf{A}$. Categorical attributes do not support arithmetic operations like addition or multiplication, and their comparison is not based on arithmetic relations. An encoder $E$ replaces a categorical attribute $\mathbf{A}$ with a set of numerical attributes, $E(\mathbf{A})$. We write $E(\Omega_A)$ to indicate the domain of $E(\mathbf{A})$. Encoders may encode different levels of $\mathbf{A}$ in the same way, or encode different occurrences of the same level in different ways. Encoders are either supervised or unsupervised: supervised encoders require a target column, while unsupervised encoders rely solely on $\mathbf{A}$. In what follows, $\mathbf{A}$ always denotes the categorical attribute to be encoded.

# 3.2 Unsupervised encoders

Identifier encoders assign a unique vector identifier to each level. The most recognized encoder is One-Hot (OH), the default encoder in most machine learning pipelines [11, 15]. One-Hot is both space-inefficient and ineffective [28, 4, 5]. Alternatives include Ordinal (Ord), which assigns a unique consecutive identifier to each level, and Binary (Bin), which splits the base-2 representation of Ord(A) into its digits.
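To make the identifier family concrete, here is a minimal sketch on a toy column; `ordinal`, `binary`, and `one_hot` are our own illustrative functions, not the implementations benchmarked below:

```python
def ordinal(column):
    """Map each level to a unique consecutive integer identifier."""
    levels = {level: idx for idx, level in enumerate(sorted(set(column)), start=1)}
    return [levels[v] for v in column]

def binary(column):
    """Split the base-2 representation of the ordinal code into its digits."""
    codes = ordinal(column)
    width = max(codes).bit_length()  # number of binary output columns
    return [[(c >> k) & 1 for k in reversed(range(width))] for c in codes]

def one_hot(column):
    """One indicator column per level."""
    uniq = sorted(set(column))
    return [[int(v == u) for u in uniq] for v in column]

col = ["red", "green", "blue", "green"]
print(ordinal(col))  # [3, 2, 1, 2]
print(binary(col))   # [[1, 1], [1, 0], [0, 1], [1, 0]]
print(one_hot(col))  # [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 0]]
```

Note the space trade-off: Binary needs only about $\log_2$ as many columns as One-Hot.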
Frequency-based encoders replace levels with some function of their frequency in the dataset. We use Count, which relies on absolute frequencies [28].

Contrast encoders encode levels into $(L - 1)$-dimensional vectors, where $L$ is the number of levels, so that the encodings of all levels sum up to $(0, \dots, 0)$ [41]. A constant intercept term, 1, is usually appended to the encoding of each level. Contrast encoders encode levels such that their coefficients represent the level's effect contrasted against a reference value. A common example is Sum, which contrasts against the target's average value.

Similarity encoders treat $\omega \in \Omega_A$ as strings and map them into a numeric space taking their similarity into account [5, 4]. These encoders are particularly useful for handling "dirty" categorical datasets that may contain typos and redundancies. One example is Min-Hash (MH), which decomposes each level into a set of $n$-grams, sequences of $n$ consecutive letters, and encodes to preserve the Jaccard similarity of the decompositions.

# 3.3 Supervised encoders

Simple target encoders encode levels with a function of the target. Prime examples are Mean-Target (MT) [7], which encodes with the conditional average of $y$ given $\mathbf{A}$, and Weight of Evidence (WoE) [39], which encodes with the logit of MT(A). As Mean-Target can lead to severe overfitting [28, 31], it may benefit from regularization. The following families of encoders are regularized variants of Mean-Target.
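The following sketch illustrates Mean-Target and Weight of Evidence on a toy pandas column; it is illustrative only and, unlike the benchmarked implementations, applies no regularization (the clipping constant `eps` is our own addition to keep the logit finite for pure levels):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": ["a", "a", "b", "b", "b", "c"],
                   "y": [1, 0, 1, 1, 0, 1]})

# Mean-Target: conditional average of y given A
mt = df.groupby("A")["y"].mean()
df["MT"] = df["A"].map(mt)

# Weight of Evidence: logit of MT; clip so that pure levels (like "c") stay finite
eps = 1e-6
p = df["MT"].clip(eps, 1 - eps)
df["WoE"] = np.log(p / (1 - p))
print(df)
```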
We propose Binning encoders, which regularize MT by partitioning either $\Omega_A$ or $\mathrm{MT}(\Omega_A)$ into bins. Pre-Binned MT (PBMT) partitions $\Omega_A$ to maximize the number of bins such that each bin's relative frequency exceeds a specified threshold, then encodes the binned attribute with MT. Discretized MT (DMT) partitions $\mathrm{MT}(\Omega_A)$ into intervals of equal length, then encodes each level with the lower bound of the interval in which its MT encoding falls.

Smoothing encoders blend $\mathrm{MT}(\Omega_A)$ with the overall average target. Notable examples are Mean-Estimate (ME) [22], which uses a weighted average of the two, and Generalized Linear Mixed Model encoder (GLMM) [28], which encodes with the coefficients of a generalized linear mixed model fitted on the data.

Data-constraining encoders regularize $\mathrm{MT}(\mathbf{A})$ by restricting the amount of data used to encode each occurrence of a level in the dataset. CatBoost (CB) [31] first randomly permutes the dataset's rows, then maps each occurrence of a level $\omega$ to the average target of its previous occurrences. Cross-Validated MT (CVMT) [28] splits the dataset into folds of equal size, then encodes each fold with an MT trained on the other folds. We propose the BlowUp variant of CVMT, BUMT, which trains an MT on each fold and uses them to encode the whole dataset. Related variants are Cross-Validated GLMM (CVGLMM) [28] and its BlowUp version (BUGLMM).
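A minimal sketch of the data-constraining idea behind CatBoost follows; `catboost_encode` and `prior` are our own illustrative names, and the real encoder additionally smooths the statistic with a prior weight:

```python
import random

def catboost_encode(column, target, prior=0.5, seed=0):
    order = list(range(len(column)))
    random.Random(seed).shuffle(order)  # random permutation of the rows
    sums, counts, encoded = {}, {}, [0.0] * len(column)
    for i in order:  # scan the rows in permuted order
        level = column[i]
        s, c = sums.get(level, 0.0), counts.get(level, 0)
        # average target of the *previous* occurrences of this level
        encoded[i] = s / c if c else prior
        sums[level], counts[level] = s + target[i], c + 1
    return encoded

print(catboost_encode(["a", "a", "b", "a"], [1, 0, 1, 1]))
```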
# 4 Experimental design

As there is no intrinsic measure of an encoder's quality, we proxy it with the quality of an ML model trained on encoded data. This procedure is in line with the current literature on the topic, discussed in Section 2. Each experiment thus consists of the following steps. First, we fix a combination of factors: a dataset, an ML model, a quality metric, and a tuning strategy. Then, we partition the dataset using a 5-fold stratified cross-validation and pre-process the training folds by:

- imputing missing values with the median for numerical and the mode for categorical attributes;
- scaling the numerical attributes;
- encoding the categorical attributes.

If tuning is to be applied, we fine-tune the pipeline with nested cross-validation and output the average performance over the outer test folds. We used standard scikit-learn [29] procedures for scaling and missing-value imputation.
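For concreteness, a sketch of a single untuned evaluation run under these choices might look as follows; the toy data, the encoder, the model, and the metric are placeholders for one factor combination:

```python
import pandas as pd
import category_encoders as ce
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.preprocessing import StandardScaler

X = pd.DataFrame({"age": [25, 32, 47, 51, 19, 38, 29, 44, 61, 23],
                  "city": list("aabbccaabb")})
y = pd.Series([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])

pre = ColumnTransformer([
    ("num", make_pipeline(SimpleImputer(strategy="median"), StandardScaler()), ["age"]),
    ("cat", make_pipeline(SimpleImputer(strategy="most_frequent"),
                          ce.CatBoostEncoder()), ["city"]),
])
pipe = Pipeline([("pre", pre), ("model", LogisticRegression())])
scores = cross_val_score(pipe, X, y, scoring="balanced_accuracy",
                         cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print(scores.mean())
```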
We conducted experiments using Python 3.8 on an AMD EPYC 7551 machine with 32 cores and 128 GB RAM. We limit each evaluation to 100 minutes to handle the extensive workload. As described in Appendix 7.3.1, out of the 64000 cross-validated evaluations, 61812 finished on time without throwing errors. For the sensitivity, replicability, and encoder comparison analyses, we ignored the missing evaluations. We did so 1. since there is no clearly superior imputation method, and 2. to avoid introducing unnecessary variability into the analysis. Our preliminary experiments confirm that imputing the small number of missing evaluations does not significantly impact our analysis.

In what follows, we describe the datasets, ML models, quality metrics, and tuning strategies we use in our experiments. Then, we outline the different aggregation strategies. Appendix 7.2 provides further details about datasets and aggregation strategies.

# 4.1 Encoders

We used the category_encoders<sup>1</sup> implementations of Bin, CB, Count, Ord, OH, Sum, and WoE. We sourced MH from the authors' implementation [4, 5].<sup>2</sup> We implemented DMT, GLMM, ME, MT, PBMT, CVMT, BUMT, CVGLMM, and BUGLMM. We also added a baseline encoder, Drop, which encodes every level with 1. For DMT, we experimented with the number of bins: $\{2, 5, 10\}$, for ME, with the regularization strength: $\{0.1, 1, 10\}$, for PBMT, with the minimum frequency: $\{0.001, 0.01, 0.1\}$, and for cross-validated encoders, such as CVMT, with the number of folds: $\{2, 5, 10\}$. We display hyperparameter values with subscripts, e.g., $\mathrm{CV}_2\mathrm{MT}$.
# 4.2 Datasets

We used binary classification datasets. This allows us to conduct an in-depth analysis using the same ML models and quality metrics. Additionally, certain supervised encoders, e.g., WoE, are specifically designed for binary classification tasks. We chose 50 datasets with categorical attributes from OpenML [42], including the suitable ones from the related work.

# 4.3 ML models

We experimented with diverse ML models that process data in different ways: decision trees (DT) and boosted trees (LGBM) exploit orderings, support vector machines (SVM) use kernels, k-nearest neighbors (k-NN) relies on distances, and logistic regression (LogReg) is a "pseudo-linear" model. The LGBM implementation we used is from the LightGBM module, while the other models' implementations are from scikit-learn. Table 2 compares our model choices with related work. We excluded neural models due to their inferior performance on tabular data [15] and the absence of a recommended architecture. We also did not use Naïve Bayes due to its lack of popularity.

Table 2: ML models used in related studies.

<table><tr><td></td><td></td><td>Ours</td><td>[28]</td><td>[4]</td><td>[8]</td><td>[5]</td><td>[44]</td><td>[41]</td></tr><tr><td rowspan="7">Model family</td><td>Tree ensembles</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td></td><td>✓</td></tr><tr><td>Linear</td><td>✓</td><td>✓</td><td></td><td>✓</td><td></td><td></td><td>✓</td></tr><tr><td>SVM</td><td>✓</td><td>✓</td><td></td><td></td><td>✓</td><td></td><td>✓</td></tr><tr><td>k-NN</td><td>✓</td><td>✓</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>DT</td><td>✓</td><td>✓</td><td></td><td>✓</td><td></td><td>✓</td><td></td></tr><tr><td>Neural</td><td></td><td></td><td></td><td>✓</td><td></td><td></td><td>✓</td></tr><tr><td>Naïve Bayes</td><td></td><td></td><td></td><td></td><td></td><td></td><td>✓</td></tr></table>

# 4.4 Quality metrics and tuning strategies

We assessed an encoder's quality by evaluating an ML model trained on the encoded data. We use four quality metrics: balanced accuracy (BAcc), F1-score (F1), accuracy (Acc), and Area Under the ROC Curve (AUC). We compared three tuning strategies:

- no tuning;
- model tuning: the entire training set is pre-processed before tuning the model;
- full tuning: the entire pipeline is tuned on the training set, with each training fold of the nested cross-validation pre-processed independently.

We used Bayesian search from scikit-optimize<sup>4</sup> for full tuning and grid search from scikit-learn for model tuning. Table 4 summarizes the tuning search space for different ML models. To mitigate excessive runtime, we chose not to tune certain ML models and limited the dataset selection to the smallest 30 for full tuning, as Table 3 illustrates.
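A sketch of the two tuned settings under these assumptions (the pipeline, its step name "model", and the search spaces mirroring Table 4 are illustrative):

```python
from skopt import BayesSearchCV
from skopt.space import Real
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Model tuning: the data are encoded once, then only the model is grid-searched.
grid = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [2, 5, 10]}, cv=5)

# Full tuning: the whole pipeline is refit for every candidate, so each inner
# training fold of the nested cross-validation is pre-processed independently.
pipe = Pipeline([("scale", StandardScaler()), ("model", LogisticRegression())])
bayes = BayesSearchCV(pipe, {"model__C": Real(0.2, 5.0)}, n_iter=20, cv=5)
```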
Table 3: Factors for different tuning strategies.

<table><tr><td></td><td>Models</td><td># Datasets</td></tr><tr><td>No tuning</td><td>DT, k-NN, LogReg, SVM, LGBM</td><td>50</td></tr><tr><td>Model tuning</td><td>DT, k-NN, LogReg</td><td>50</td></tr><tr><td>Full tuning</td><td>DT, k-NN, LogReg, SVM</td><td>30</td></tr></table>

Table 4: Tuning search space.

<table><tr><td></td><td>Hyperparameter</td><td>Interval</td><td>Grid</td></tr><tr><td>DT</td><td>max_depth</td><td>[2, ..., 5]</td><td>{2, 5, None}</td></tr><tr><td>k-NN</td><td>n_neighbors</td><td>[2, ..., 10]</td><td>{2, 5, 10}</td></tr><tr><td>LogReg</td><td>C</td><td>[0.2, 5]</td><td>{0.1, 1, 10}</td></tr><tr><td rowspan="2">SVM</td><td>C</td><td>[0.1, 2]</td><td></td></tr><tr><td>gamma</td><td>[0.1, 100]</td><td></td></tr></table>

# 4.5 Aggregating into a consensus ranking

A common practice for summarizing and interpreting the results of benchmark experiments is to aggregate them into a consensus ranking of alternatives (encoders in our case) [10, 27, 15]. To obtain a dataset-independent ranking of encoders, we aggregate the results across different datasets while keeping all other factors fixed. We now present well-known aggregation strategies used in benchmarks.

Heuristics rank alternatives based on an aggregate score. Common aggregation heuristics include mean rank (R-M) [5], median rank (R-Md), mean quality (Q-M), median quality (Q-Md), rescaled mean quality [36, 15] (Q-RM), the number of times the alternative was ranked the best (R-B) or the worst (R-W) [41], and the number of times the alternative's quality is better than the best quality multiplied by a threshold $\theta \leq 1$ ($Q\text{-}Th_{\theta}$).
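A compact sketch of several heuristics on a toy quality matrix (rows are datasets, columns are encoders; names and numbers are invented):

```python
import pandas as pd

q = pd.DataFrame({"OH": [0.81, 0.74, 0.90],
                  "MT": [0.79, 0.77, 0.88],
                  "Drop": [0.60, 0.55, 0.70]})
ranks = q.rank(axis=1, ascending=False)  # per-dataset ranks, 1 = best

print(ranks.mean())          # R-M: mean rank
print(ranks.median())        # R-Md: median rank
print(q.mean())              # Q-M: mean quality
print((ranks == 1).sum())    # R-B: number of times ranked best
theta = 0.95                 # Q-Th: quality within theta of the dataset's best
print(q.ge(q.max(axis=1) * theta, axis=0).sum())
```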
Friedman-Nemenyi tests [10] (R-Nem$_{p\text{-value}}$). First, one ranks alternatives separately for each dataset and then applies a Friedman test to reject the hypothesis that all encoders have the same average rank. If the hypothesis is rejected, pairwise Nemenyi post-hoc tests are conducted to compare pairs of alternatives. Finally, one uses the results of these post-hoc tests to construct the consensus ranking. This aggregation strategy requires the user to choose a p-value.

Kemeny-Young aggregation [21, 47] (R-Kem) first ranks alternatives separately for each dataset. Then, it determines the consensus ranking that minimizes the sum of distances to the datasets' rankings. We adopt the approach described in [45], with a distance measure that accommodates ties and missing values in the rankings. We then formulate the optimization problem as a mixed integer linear program and solve it with GUROBI under an academic license. Kemeny-Young aggregation is much slower than the other aggregation strategies, taking minutes for each aggregation.
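For intuition, here is a brute-force sketch of Kemeny-Young over complete rankings without ties; it is feasible only for a handful of alternatives, whereas the MILP formulation used in the paper handles ties, missing values, and all 32 encoders:

```python
from itertools import combinations, permutations

def n_discordant(r1, r2):
    """Number of encoder pairs that the two rankings order differently."""
    return sum((r1[a] - r1[b]) * (r2[a] - r2[b]) < 0 for a, b in combinations(r1, 2))

def kemeny(rankings):
    """Consensus order minimizing the total distance to all input rankings."""
    encoders = list(rankings[0])
    return min(permutations(encoders),
               key=lambda p: sum(n_discordant(dict(zip(p, range(len(p)))), r)
                                 for r in rankings))

votes = [{"OH": 1, "MT": 2, "Drop": 3},
         {"MT": 1, "OH": 2, "Drop": 3},
         {"OH": 1, "MT": 2, "Drop": 3}]
print(kemeny(votes))  # ('OH', 'MT', 'Drop')
```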
# 5 Results

This section summarizes the main results of our study. Appendix 7.3 further discusses the missing evaluations, run time, replicability, and the ranks of the encoders, and studies the effect of tuning on pipeline quality.

# 5.1 Sensitivity analysis

The relative performance of encoders, i.e., the ranking, can depend on the choice of ML model, quality metric, and tuning strategy. Moreover, the choice of an aggregation strategy impacts the consensus ranking. To quantify the influence of these choices, we calculate the similarity between rankings using the Jaccard index $J$ for the sets of best encoders and the Spearman correlation coefficient $\rho$. Intuitively, $J$ measures whether two experiments with different factor combinations agree on the best encoders, while $\rho$ takes the entire ranking into account. For both measures, values close to 1 indicate high agreement and low sensitivity. Conversely, values near 0 (or, for $\rho$, negative) suggest low consistency and high sensitivity.
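A sketch of both similarity measures on two hypothetical rankings (mapping encoder to rank, lower is better; ties are allowed in the best-set computation):

```python
from scipy.stats import spearmanr

def jaccard_best(r1, r2):
    """Jaccard index of the sets of top-ranked encoders."""
    best = lambda r: {e for e, rank in r.items() if rank == min(r.values())}
    a, b = best(r1), best(r2)
    return len(a & b) / len(a | b)

r1 = {"OH": 1, "Sum": 1, "MT": 3, "Drop": 4}
r2 = {"OH": 1, "Sum": 2, "MT": 3, "Drop": 4}
encoders = sorted(r1)
rho, _ = spearmanr([r1[e] for e in encoders], [r2[e] for e in encoders])
print(rho, jaccard_best(r1, r2))  # high rho, but J = 0.5: best sets {OH, Sum} vs {OH}
```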
# 5.1.1 Sensitivity to experimental factors

We evaluate the sensitivity of encoder rankings on individual datasets with respect to an experimental factor (ML model, quality metric, or tuning strategy) by varying the factor of interest and keeping the other factors fixed, then calculating the similarity between pairs of rankings. After that, we average the result across all combinations of the other factors. Figures 1a, 1b, and 1c show the resulting values, with Spearman's $\rho$ in the upper triangle and Jaccard index $J$ in the lower triangle. For example, Spearman's $\rho$ between encoder rankings for DT and SVM, averaged across all datasets, tuning strategies, and quality metrics, is 0.3.

Our findings highlight the high sensitivity of results to experimental factors, for both the full rankings and the best encoders. They also explain why results from other studies are so inconsistent, as choosing different values for any factor will lead to different results.

# 5.1.2 Sensitivity to aggregation strategy

To evaluate the impact of the aggregation strategy on the consensus ranking, we apply the same procedure as above to consensus rankings instead of rankings on individual datasets. Figure 1d presents the results with the notation from Section 4.5. For example, Spearman's $\rho$ between consensus rankings obtained with Q-M and Q-Md, averaged across all ML models, tuning strategies, and quality metrics, is 0.8.

While some aggregation strategies show strong similarities, different strategies yield very different consensus rankings in general. This is particularly evident for the Jaccard index $J$, indicating the high sensitivity of the best encoders to the rank aggregation strategy.

![](images/f9882af69c7121bc9a4f886a1ccfdacd8b08fc6ea6621bc92ed034bb1c3c13f1.jpg)
(a)

![](images/ddf8f2526d438dd0ca3d1f78eb328f3e5f25f01e0916f2708b02cb20185a12aa.jpg)
(b)

![](images/1dab6b7f9e14b8b287e144f422b36d97ed80822be95fe6e7496bc358c918a091.jpg)
(c)

![](images/91d984dd8fc3872c45f4f767572f92b3e017f1c93299b7517ba865c4f5771aef.jpg)
(d)

Figure 1: Sensitivity as the average similarity between rankings, measured with $\rho$ (upper triangle) and $J$ (lower triangle), computed between individual rankings for varying: (a) ML model, (b) quality metric, (c) tuning strategy, and between consensus rankings for varying (d) aggregation strategy.

# 5.2 Replicability

Replicability is defined as the property of a benchmark to produce consistent results from different data [26]. This definition does not, however, provide a quantifiable notion of replicability. To overcome this, we made the following modeling decisions. First, we fix a factor combination: ML model, quality metric, tuning strategy, and aggregation strategy. We excluded the R-Nem and R-Kem aggregation strategies due to their slower run time. Second, we model the result of a benchmark on a dataset sample $S$ with the consensus ranking aggregated across $S$. Third, we quantify replicability as the similarity between consensus rankings averaged over all factor combinations and 100 pairs of equal-sized disjoint sets of datasets. As discussed in Section 5.1, we measure the similarity with $\rho$ and $J$ to capture the similarity between both the rankings and the best encoders. We refer to them as $\rho$-replicability and $J$-replicability, respectively.
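A sketch of this procedure, with the mean-rank heuristic standing in for the aggregation step; `ranks` is a hypothetical datasets-by-encoders rank matrix:

```python
import numpy as np
import pandas as pd

def rho_replicability(ranks: pd.DataFrame, sample_size: int, n_pairs=100, seed=0):
    rng = np.random.default_rng(seed)
    sims = []
    for _ in range(n_pairs):
        idx = rng.permutation(len(ranks))
        s1, s2 = idx[:sample_size], idx[sample_size:2 * sample_size]  # disjoint samples
        c1 = ranks.iloc[s1].mean().rank()  # consensus ranking of the first sample
        c2 = ranks.iloc[s2].mean().rank()  # consensus ranking of the second sample
        sims.append(c1.corr(c2, method="spearman"))
    return float(np.mean(sims))

ranks = pd.DataFrame(np.random.default_rng(1).integers(1, 6, size=(50, 5)),
                     columns=["OH", "Sum", "Bin", "WoE", "Drop"])
print(rho_replicability(ranks, sample_size=25))
```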
Figure 2 shows the outcome for different tuning strategies, conditional on the ML model and the size of the dataset samples. We study additional factors in Appendix 7.3.3. The shaded areas represent a bootstrapped $95\%$ confidence interval. Our findings show an upward trend of $\rho$-replicability as the size of the dataset samples increases. This observation confirms that, in general, considering a larger number of datasets yields more reliable experimental outcomes. It is, however, important to note that this pattern does not always hold for $J$-replicability. This suggests that, for some models, the best encoders might vary significantly even with a relatively large number of datasets. To conclude, the replicability of our results strongly depends on the ML model, with logistic regression exhibiting the highest replicability and decision trees the lowest.

![](images/cb8d741b4895bd8e7ab582a3b5b87ff9639907b2fcd1c86e2b2ca1a530a65463.jpg)

![](images/ac4e5820a1bf4db76c946fa7d34409fa1d37fd7ce6f85a28662447bb2ed4f4f6.jpg)

![](images/b36ac5a82ba6935670b3d030c69de8afee27e2ee5a32aec0b46de08cd9ec880c.jpg)

Figure 2: Replicability as the average similarity of consensus rankings from disjoint subsets of datasets.

# 5.3 Comparing encoders

Based on the outcome of Section 5.2, we now examine the ranks of encoders for decision trees, for logistic regression, and across all ML models.

Figure 3a shows the rank of encoders from the experiments with decision trees across all datasets, quality metrics, and tuning strategies. One-Hot is the best-performing encoder; however, Nemenyi tests at a significance level of 0.05 fail to reject that the average rank of One-Hot is the same as that of the other encoders.
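The testing procedure can be sketched as follows, assuming the scipy and scikit-posthocs packages (toy quality matrix; in the paper, rows are datasets and columns are encoders):

```python
import numpy as np
import scikit_posthocs as sp
from scipy.stats import friedmanchisquare

quality = np.random.default_rng(0).random((50, 3))  # 50 datasets x 3 encoders

# Friedman test: H0 = all encoders have the same average rank
stat, p = friedmanchisquare(*quality.T)
if p < 0.05:
    # pairwise Nemenyi post-hoc tests on the same blocks
    print(sp.posthoc_nemenyi_friedman(quality))
```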
Figure 3b features the encoder ranks for logistic regression, where four encoders, namely One-Hot, Sum, Binary, and Weight of Evidence, consistently achieve higher ranks compared to the others. Nemenyi tests confirm that this difference in ranks is significant. These results are in line with the ones from Section 5.2, which indicate low replicability of the results for decision trees and higher replicability for logistic regression.

Figure 3c presents the ranks of encoders across all datasets, ML models, quality metrics, and tuning strategies. Similarly to logistic regression, One-Hot, Sum, Binary, and Weight of Evidence consistently achieve significantly higher average ranks compared to the other encoders, again confirmed by Nemenyi tests. We recommend these four encoders as the preferred choices in practical applications. This conclusion contradicts other studies reporting a suboptimal performance of One-Hot [5, 28].

Our findings also reveal that Drop performs significantly worse than all other encoders, i.e., encoding categorical attributes generally yields better results than dropping them.

![](images/38b87727640e3214e05c5df9414d766b3a68085fff4ae7eb036d73abb7ddf404.jpg)
(a) Decision tree

![](images/fffeb8a427e7761778f259638672a1d6d721b13063019d15ef8bc476ce497056.jpg)
(b) Logistic regression

![](images/f15b7af46502f379277126d9789c219d068b8d2a4b8398804c8410b33c3d4b91.jpg)
(c) All models

Figure 3: Ranks of encoders.

# 5.4 Comparing to related work

In this section, we compare our results with the findings of other studies. To do so, we select subsets of our results that mimic the experimental settings in related work. In [28], $\mathrm{CV}_5\mathrm{GLMM}$ outperformed every competitor for boosted trees and k-NN, while GLMM was recommended for SVMs. However, in our experiments, Sum outperformed GLMM for SVMs, One-Hot did better than $\mathrm{CV}_5\mathrm{GLMM}$ for boosted trees, and $\mathrm{CV}_{10}\mathrm{GLMM}$ was better than $\mathrm{CV}_5\mathrm{GLMM}$ for k-NN. Next, while in [5] similarity encoders are better than One-Hot for boosted trees, subsequent research reported no significant difference between Min-Hash and One-Hot on medium-sized tabular datasets [4]. Our findings are in line with this latter result, as we could not find a performance difference between the two encoders with a t-test at a significance level of 0.05. In [41], Sum is reported as the best encoder on the Adult dataset for boosted trees, while a data-constraining encoder is reported as the worst. With the same setting, we did not find a significant performance difference for any encoder except for Drop, which performed the worst. On the Bank marketing dataset, [8] showed that One-Hot and Mean-Target outperformed Binary with logistic regression. In our experiments, Binary was slightly worse than One-Hot and Mean-Target. In [44], Dummy, an identifier encoder similar to One-Hot, was better than Mean-Target on the Tic-tac-toe dataset with boosted trees. We, instead, did not observe any significant difference between One-Hot and Mean-Target for these factors.
# 6 Limitations and conclusions

**Limitations.** First, we treated encoders as part of the pre-processing, but certain encoders can be an integral component of specific ML models. For instance, CatBoost is derived from the homonymous boosted-trees algorithm, which re-encodes the data multiple times during training. Second, we applied a single encoder to all categorical attributes. Using different encoders based on the cardinality of the attribute may sometimes yield favorable results [28, 4]. However, the selection of the optimal encoder for each attribute requires either domain knowledge of the attribute or purpose-built tools, which falls outside the scope of our benchmark and is therefore left as future work. We also did not include neural networks, due to the absence of a recommended architecture and their reported inferior performance to tree-based models on tabular data [15].

**Conclusions.** In this study, we conducted an extensive evaluation of encoder performance across various experimental factors, including ML models, quality metrics, and tuning strategies. Our results demonstrate a high sensitivity of encoder rankings to these factors, both for the full rankings and the best-performing encoders. This sensitivity explains the inconsistent results among related studies, as different choices in any of these factors can lead to different outcomes. We also assessed the impact of aggregation strategies on consensus rankings, revealing significant variations in rankings depending on the chosen strategy. This emphasizes the importance of carefully considering the aggregation method when post-processing and interpreting results. Regarding replicability, we defined and quantified it using $\rho$-replicability and $J$-replicability. Our findings indicate that replicability is influenced by factors such as the ML model, with logistic regression exhibiting the highest replicability and decision trees the lowest. Additionally, larger dataset samples tend to yield more reliable experimental outcomes, although this trend does not always hold for $J$-replicability. Based on our results, we recommend specific encoders for practical applications. For decision trees, Weight of Evidence performed the best, although statistical tests did not show a significant difference from other encoders. For logistic regression, Sum, One-Hot, Binary, and Weight of Evidence consistently achieved higher ranks, with statistically significant differences from other encoders. These findings contradict previous studies, highlighting the importance of considering a broad range of experimental factors. Finally, our comparative analysis with related work revealed discrepancies in encoder performance, suggesting that the breadth of our study may contribute to these differences. This emphasizes the need for caution when interpreting results from studies with more limited experimental settings. Overall, our study provides valuable insights into the sensitivity of encoder performance to experimental factors, as well as recommendations for practical encoder selection across different scenarios.

# Acknowledgments

We thank Dmitriy Simakov for valuable discussions and Natalia Arzamasova for her algorithm and implementation of the PreBinnedEncoder. This work was supported in part by the German Research Foundation (Deutsche Forschungsgemeinschaft), project Charakterisierung, Modellierung und Homogenisierung von Vernetzungswerken mit Hilfe interpretierbarer Datenanalysemethoden, and by the State of Baden-Württemberg, project Algorithm Engineering für die Scalability Challenge.
# References

[1] David Aha. Tic-Tac-Toe Endgame. UCI Machine Learning Repository. 1991.
[2] Xavier Bouthillier et al. "Accounting for Variance in Machine Learning Benchmarks". In: MLSys. mlsys.org, 2021.
[3] Laurent Candillier and Vincent Lemaire. Nomao. UCI Machine Learning Repository. 2012.
[4] Patricio Cerda and Gaël Varoquaux. "Encoding High-Cardinality String Categorical Variables". In: IEEE Trans. Knowl. Data Eng. 34.3 (2022), pp. 1164–1176.
[5] Patricio Cerda, Gaël Varoquaux, and Balázs Kégl. "Similarity encoding for learning with dirty categorical variables". In: CoRR abs/1806.00979 (2018).
[6] Congressional Voting Records. UCI Machine Learning Repository. 1987.
[7] Don Coppersmith, Se June Hong, and Jonathan R. M. Hosking. "Partitioning Nominal Attributes in Decision Trees". In: Data Min. Knowl. Discov. 3.2 (1999), pp. 197–217.
[8] Mwamba Kasongo Dahouda and Inwhee Joe. "A Deep-Learned Embedding Technique for Categorical Features Encoding". In: IEEE Access 9 (2021), pp. 114381–114391.
[9] Mostafa Dehghani et al. "The Benchmark Lottery". In: CoRR abs/2107.07002 (2021).
[10] Janez Demšar. "Statistical Comparisons of Classifiers over Multiple Data Sets". In: J. Mach. Learn. Res. 7 (2006), pp. 1–30.
[11] Keyu Duan et al. "A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking". In: NeurIPS. 2022.
[12] Bob Evans. Cylinder Bands. UCI Machine Learning Repository. 1995.
[13] Farhad Soleimanian Gharehchopogh and Seyyed Reza Khaze. "Data mining application for cyber space users tendency in blog writing: a case study". In: CoRR abs/1307.7432 (2013).
[14] Sebastian Gnat. "Impact of Categorical Variables Encoding on Property Mass Valuation". In: KES. Vol. 192. Procedia Computer Science. Elsevier, 2021, pp. 3542–3550.
[15] Léo Grinsztajn, Edouard Oyallon, and Gaël Varoquaux. "Why do tree-based models still outperform deep learning on typical tabular data?" In: NeurIPS. 2022.
[16] C. Harley, R. Reynolds, and M. Noordewier. Molecular Biology (Promoter Gene Sequences). UCI Machine Learning Repository. 1990.
[17] Hans Hofmann. Statlog (German Credit Data). UCI Machine Learning Repository. 1994.
[18] Ronald Iman and James Davenport. "Approximations of the critical region of the Friedman statistic". In: Communications in Statistics-Theory and Methods 9 (Jan. 1980), pp. 571–595.
[19] Andras Janosi et al. Heart Disease. UCI Machine Learning Repository. 1988.
[20] Justin M. Johnson and Taghi M. Khoshgoftaar. "Encoding Techniques for High-Cardinality Features and Ensemble Learners". In: IRI. IEEE, 2021, pp. 355–361.
[21] John G Kemeny. "Mathematics without numbers". In: Daedalus 88.4 (1959), pp. 577–591.
[22] Daniele Micci-Barreca. "A Preprocessing Scheme for High-Cardinality Categorical Attributes in Classification and Prediction Problems". In: SIGKDD Explor. 3.1 (2001), pp. 27–32.
[23] Erick Moreno-Centeno and Adolfo R Escobedo. "Axiomatic aggregation of incomplete rankings". In: IIE Transactions 48.6 (2016), pp. 475–488.
[24] S Moro, P Rita, and P Cortez. Bank Marketing. UCI Machine Learning Repository. 2012.
[25] Mushroom. UCI Machine Learning Repository. 1987.
[26] National Academies of Sciences, Engineering, and Medicine. Reproducibility and Replicability in Science. National Academies Press, 2019.
[27] Christina Nießl et al. "Over-optimism in benchmark studies and the multiplicity of design and analysis options when interpreting their results". In: WIREs Data Mining Knowl. Discov. 12.2 (2022).
[28] Florian Pargent et al. "Regularized target encoding outperforms traditional methods in supervised machine learning with high cardinality features". In: Comput. Stat. 37.5 (2022), pp. 2671–2692.
[29] F. Pedregosa et al. "Scikit-learn: Machine Learning in Python". In: Journal of Machine Learning Research 12 (2011), pp. 2825–2830.
[30] Kedar Potdar, Taher S Pardawala, and Chinmay D Pai. "A comparative study of categorical variable encoding techniques for neural network classifiers". In: International Journal of Computer Applications 175.4 (2017), pp. 7–9.
[31] Liudmila Ostroumova Prokhorenkova et al. "CatBoost: unbiased boosting with categorical features". In: NeurIPS. 2018, pp. 6639–6649.
[32] Ross Quinlan. Credit Approval. UCI Machine Learning Repository.
[33] Ross Quinlan. Statlog (Australian Credit Approval). UCI Machine Learning Repository.
[34] Ross Quinlan. Thyroid Disease. UCI Machine Learning Repository. 1987.
[35] Jan N van Rijn and Jonathan K Vis. "Endgame Analysis of Dou Shou Qi". In: ICGA Journal 37.2 (2014), pp. 120–124.
[36] Diogo Seca and João Mendes-Moreira. "Benchmark of Encoders of Nominal Features for Regression". In: WorldCIST (1). Vol. 1365. Advances in Intelligent Systems and Computing. Springer, 2021, pp. 146–155.
[37] Alen Shapiro. Chess (King-Rook vs. King-Pawn). UCI Machine Learning Repository. 1989.
[38] Peter Sprent and Nigel C Smeeton. Applied Nonparametric Statistical Methods. CRC Press, 2016.
[39] Gero Szepannek. "On the practical relevance of modern machine learning algorithms for credit scoring applications". In: WIAS Report Series 29 (2017), pp. 88–96.
[40] Muhammad Usman and Adeel Ahmed. Dresses_Attribute_Sales. UCI Machine Learning Repository. 2014.
[41] Eric Valdez-Valenzuela, Angel Kuri-Morales, and Helena Gómez-Adorno. "Measuring the Effect of Categorical Encoders in Machine Learning Tasks Using Synthetic Data". In: MICAI (1). Vol. 13067. Lecture Notes in Computer Science. Springer, 2021, pp. 92–107.
[42] Joaquin Vanschoren et al. "OpenML: networked science in machine learning". In: SIGKDD Explor. 15.2 (2013), pp. 49–60.
[43] J. Wnek. MONK's Problems. UCI Machine Learning Repository. 1992.
[44] Marvin N Wright and Inke R König. "Splitting on categorical predictors in random forests". In: PeerJ 7 (2019), e6339.
[45] Yeawon Yoo and Adolfo R. Escobedo. "A New Binary Programming Formulation and Social Choice Property for Kemeny Rank Aggregation". In: Decis. Anal. 18.4 (2021), pp. 296–320.
[46] H Peyton Young. "Condorcet's theory of voting". In: American Political Science Review 82.4 (1988), pp. 1231–1244.
[47] H Peyton Young and Arthur Levenglick. "A consistent extension of Condorcet's election principle". In: SIAM Journal on Applied Mathematics 35.2 (1978), pp. 285–300.
[48] Maciej Zieba et al. "Boosted SVM for extracting rules from imbalanced data in application to prediction of the post-operative life expectancy in the lung cancer patients". In: Appl. Soft Comput. 14 (2014), pp. 99–108.
# 7 Appendix

# 7.1 Encoders

This section presents a reproducible description of the encoders discussed in Section 3, following the structure outlined below. We discuss identifier, frequency-based, contrast, and simple target encoders together in Appendix 7.1.4, as all of these encoders can be explicitly represented as functions. Similarity, binning, smoothing, and data-constraining encoders have dedicated sections. Table 5 contains the notation used in this section.

Table 5: Notation for Section 7.1.

<table><tr><td>Symbol</td><td>Meaning</td></tr><tr><td>N0</td><td>natural numbers including 0</td></tr><tr><td>(x)2</td><td>base-2 representation of x ∈ N0</td></tr><tr><td>Xn×d</td><td>set of matrices with entries in X, n rows and d columns</td></tr><tr><td>1</td><td>indicator function</td></tr><tr><td>D</td><td>binary classification dataset</td></tr><tr><td>n</td><td>number of rows of D</td></tr><tr><td>y ∈ {0,1}n</td><td>target attribute of D</td></tr><tr><td>ΩA = {ω1, ..., ωL}</td><td>categorical domain (strings)</td></tr><tr><td>A ∈ ΩAn</td><td>categorical attribute of D to be encoded</td></tr><tr><td>li ∈ {1,...,L}</td><td>index such that Ai = ωli</td></tr><tr><td>E: A → M ∈ Rn×d</td><td>encoder</td></tr><tr><td>M ∈ Rn×d</td><td>encoding of A, compact notation</td></tr><tr><td>E(A) ∈ Rn×d</td><td>encoding of A with explicit encoder</td></tr><tr><td>E(ΩA)</td><td>unique values of rows of E(A)</td></tr><tr><td>d = d(E, A)</td><td>number of columns of M</td></tr><tr><td>Mi</td><td>i-th row of M if d &gt; 1, i-th component of M if d = 1</td></tr><tr><td>l ∈ {1,...,L}</td><td>index of levels</td></tr><tr><td>i, h ∈ {1,...,n}</td><td>row indices of M, A, or y</td></tr><tr><td>j ∈ {1,...,d}</td><td>column index of M</td></tr></table>

# 7.1.1 Similarity encoders [5, 4]

Min-Hash treats $\omega \in \Omega_A$ as a string, splits it into its set of character-level n-grams (substrings of $n$ consecutive characters), uses a hash function to encode each n-gram into an integer, and finally encodes $\omega$ with the minimum value of the hash function on the set of n-grams. The process is repeated for $d$ hash functions, yielding $M \in \mathbb{R}^{n \times d}$. The default value of $d$ is 30; the authors report good performance with 300.
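A minimal sketch of this construction, with Python's built-in (per-process salted) `hash` standing in for the $d$ hash functions of the authors' implementation:

```python
def minhash(level, d=30, n=3):
    """Encode a string level into d per-hash minima over its n-grams."""
    grams = {level[i:i + n] for i in range(len(level) - n + 1)}
    # salting the tuple (k, gram) simulates d independent hash functions
    return [min(hash((k, g)) for g in grams) for k in range(d)]

# Similar strings share n-grams and hence tend to share per-hash minima,
# preserving the Jaccard similarity of the n-gram sets in expectation.
print(minhash("category")[:3])
print(minhash("categories")[:3])
```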
# 7.1.2 Binning encoders

Pre-Binned Mean-Target partitions $\Omega_A$ into $B$ buckets $\{P_b\}_{b=1}^B$ by solving the optimization problem

$$
\text{Maximize } B \quad \text{subject to} \quad \frac{1}{n} \sum_{\omega \in P_b} \sum_{i=1}^{n} \mathbb{1}(\mathbf{A}_i = \omega) \geq \vartheta \quad \forall b \leq B
$$

where $\vartheta \in [0,1]$ is a user-defined threshold. Each bucket is then treated as a new level and encoded with Mean-Target, yielding an encoding $M \in \mathbb{R}^n$.

Discretized Mean-Target partitions $\mathrm{MT}(\Omega_A)$ into intervals $\{I_1, \dots, I_B\}$ of equal length. Letting $I(l)$ be the interval that contains $\mathrm{MT}(\omega_l)$ (that is, the average target associated with $\omega_l$), the encoding is $\mathbf{M} \in \mathbb{R}^n : \mathbf{M}_i = \inf I(l_i)$. We experimented with $B = 2, 5, 10$.
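A small numeric sketch of the DMT discretization step (toy MT values, $B = 2$; `edges` and `dmt` are our own names):

```python
import numpy as np

mt = {"a": 0.10, "b": 0.45, "c": 0.50, "d": 0.90}  # MT(omega) for each level
B = 2
edges = np.linspace(min(mt.values()), max(mt.values()), B + 1)  # [0.1, 0.5, 0.9]
# each level is encoded with the lower bound of its interval
dmt = {w: edges[min(np.digitize(v, edges) - 1, B - 1)] for w, v in mt.items()}
print(dmt)  # {'a': 0.1, 'b': 0.1, 'c': 0.5, 'd': 0.5}
```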
Table 6: Identifier encoders

<table><tr><td></td><td>$E(\Omega_A)$</td><td>$E(\mathbf{A})$</td></tr><tr><td>Binary [8]</td><td>$\{0,1\}^{\lfloor \log_2(L) \rfloor + 1}$</td><td>$M_i = (l_i)_2$</td></tr><tr><td>Dummy [28, 44]</td><td>$\{0,1\}^{L-1}$</td><td>$M_{ij} = \mathbb{1}(A_i = \omega_j)$ for $j \leq L-1$ (level $\omega_L$ maps to the zero vector)</td></tr><tr><td>One-Hot [28, 4, 8, 44, 41]</td><td>$\{0,1\}^{L}$</td><td>$M_{ij} = \mathbb{1}(A_i = \omega_j)$</td></tr><tr><td>Ordinal [28, 44, 41]</td><td>$\mathbb{N}_0$</td><td>$M_i = l_i$</td></tr></table>

Table 7: Frequency-based encoders

<table><tr><td>Count [36]</td><td>$\mathbb{N}_0$</td><td>$M_i = \sum_{h} \mathbb{1}(A_h = \omega_{l_i})$</td></tr><tr><td>Frequency [28]</td><td>$\mathbb{R}$</td><td>$M_i = \frac{1}{n} \sum_{h} \mathbb{1}(A_h = \omega_{l_i})$</td></tr></table>

Table 8: Contrast encoders (without intercept)

<table><tr><td>Sum [41]</td><td>$\mathbb{R}^{L-1}$</td><td>$M_{ij} = \begin{cases} \mathbb{1}(l_i = j) &amp; l_i \neq L \\ -1 &amp; l_i = L \end{cases}$</td></tr><tr><td>Backward difference [41]</td><td>$\mathbb{R}^{L-1}$</td><td>$M_{ij} = \begin{cases} -\frac{L-j}{L} &amp; l_i \leq j \\ \frac{j}{L} &amp; l_i &gt; j \end{cases}$</td></tr><tr><td>Helmert [41]</td><td>$\mathbb{R}^{L-1}$</td><td>$M_{ij} = \begin{cases} -\frac{1}{j+1} &amp; l_i \leq j \\ \frac{j}{j+1} &amp; l_i = j+1 \\ 0 &amp; l_i \geq j+2 \end{cases}$</td></tr></table>

Table 9: Simple target encoders

<table><tr><td>Mean-Target [28, 8]</td><td>$\mathbb{R}$</td><td>$M_i = \frac{1}{n_{l_i}} \sum_{h=1}^{n} y_h \mathbb{1}(A_h = \omega_{l_i})$, with $n_l$ the number of occurrences of $\omega_l$</td></tr><tr><td>Weight of Evidence [39, 28]</td><td>$\mathbb{R}$</td><td>$M_i = \log\left(\frac{\mathrm{MT}(\mathbf{A})_i}{1 - \mathrm{MT}(\mathbf{A})_i}\right)$</td></tr></table>
# 7.1.3 Smoothing target encoders

Mean-Estimate [22]. Let $n_l = \sum_{i=1}^n \mathbb{1}(A_i = \omega_l)$ be the number of occurrences of $\omega_l$ in $\mathbf{A}$. Then

$$
\mathbf{M}_{i} = \frac{n_{l_i}\, \mathrm{MT}(\omega_{l_i}) + \frac{w}{n} \sum_{h=1}^{n} y_h}{w + n_{l_i}}
$$

where $w$ is a user-defined weight. Common choices are 1 and 10.
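A toy sketch of the formula; note how the rare level "c" is pulled towards the global target mean:

```python
import pandas as pd

df = pd.DataFrame({"A": ["a", "a", "b", "b", "b", "c"],
                   "y": [1, 0, 1, 1, 0, 1]})
w = 10
prior = df["y"].mean()  # (1/n) * sum of y
stats = df.groupby("A")["y"].agg(["mean", "count"])  # MT(omega) and n_l
me = (stats["count"] * stats["mean"] + w * prior) / (w + stats["count"])
print(me)
```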
GLMM [28] fits, for every $\omega_l \in \Omega_A$, a random intercept model

$$
y_i = \beta_{l_i} + u_{l_i} + \varepsilon_i
$$

where $u_l \sim N(0, \tau^2)$ and $\varepsilon_i \sim N(0, \sigma^2)$. The encoding is $\mathbf{M} \in \mathbb{R}^n : \mathbf{M}_i = \beta_{l_i}$.

# 7.1.4 Identifier, frequency-based, contrast, simple target encoders

The descriptions are divided as follows: Table 6 is for identifier encoders, Table 7 is for frequency-based encoders, Table 8 is for contrast encoders, and Table 9 is for simple target encoders.

# 7.1.5 Data-constraining encoders

CatBoost [31] uses a random permutation $\pi$ of $\{1, \dots, n\}$ and encodes with $\mathbf{M} \in \mathbb{R}^n$, mapping each occurrence of a level to the average target of its previous occurrences in the permuted order:

$$
\mathbf{M}_{i} = \frac{\sum_{h:\, \pi(h) < \pi(i)} y_h\, \mathbb{1}\left(\mathbf{A}_h = \omega_{l_i}\right)}{\sum_{h:\, \pi(h) < \pi(i)} \mathbb{1}\left(\mathbf{A}_h = \omega_{l_i}\right)}
$$
Cross-Validated MT [28] randomly partitions $\{1, \dots, n\}$ into $k$ folds of equal size. Let $D_{a_i}$ be the fold that contains $i$. Then, every fold is encoded with a Mean-Target trained on the other $k - 1$ folds:

$$
\mathbf{M}_{i} = \frac{\sum_{h=1}^{n} \mathbb{1}(h \notin D_{a_i})\, \mathbb{1}(\mathbf{A}_h = \omega_{l_i})\, y_h}{\sum_{h=1}^{n} \mathbb{1}(h \notin D_{a_i})\, \mathbb{1}(\mathbf{A}_h = \omega_{l_i})}
$$

Common values for $k$ are 2, 5, 10.
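A sketch of CVMT with pandas and scikit-learn folds; the fallback to the global mean for levels unseen in the training folds is our own choice, as the definition above does not specify one:

```python
import pandas as pd
from sklearn.model_selection import KFold

def cvmt(A: pd.Series, y: pd.Series, k: int = 5, seed: int = 0) -> pd.Series:
    encoded = pd.Series(index=A.index, dtype=float)
    for train, test in KFold(n_splits=k, shuffle=True, random_state=seed).split(A):
        mt = y.iloc[train].groupby(A.iloc[train]).mean()  # MT on the other folds
        fallback = y.iloc[train].mean()  # for levels absent from the training folds
        encoded.iloc[test] = A.iloc[test].map(mt).fillna(fallback).values
    return encoded

df = pd.DataFrame({"A": list("aabbbc"), "y": [1, 0, 1, 1, 0, 1]})
print(cvmt(df["A"], df["y"], k=2))
```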
346
+
347
+ Cross-Validated GLMM [28] works in a similar fashion as CVMT: it encodes each fold with GLMM trained on the other $k - 1$ folds.
348
+
349
+ BlowUp Cross-Validated MT randomly partitions $\{1, \ldots, n\}$ in $k$ folds $D_{1}, \ldots, D_{k}$ of roughly equal size. Then, it encodes with $\mathbf{M} \in \mathbb{R}^{n \times k}$ so that the $j$ -th column is $\mathbf{A}$ encoded with Mean-Target trained on the $j$ -th fold, yielding
350
+
351
+ $$
352
+ \mathbf {M} _ {i j} = \sum_ {h = 1} ^ {n} \mathbb {1} (h \in D _ {j}) \mathbb {1} (\mathbf {A} _ {h} = \omega_ {l _ {i}}) y _ {h}
353
+ $$
354
+
355
+ We experimented with $k = 2,5,10$
356
+
357
+ Blowup Cross-Validated GLMM is analogous to BUMT: it encodes with $\mathbf{M} \in \mathbb{R}^{n \times k}$ , where the $j$ -th column is $\mathbf{A}$ encoded with GLMM trained on the $j$ -th fold.
358
+
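+ The BlowUp variants can be sketched the same way; the only change is that every fold produces a full column (again with the global-mean fallback as an assumption):
+
+ ```python
+ import numpy as np
+ from sklearn.model_selection import KFold
+
+ def blowup_cv_mean_target(A, y, k: int = 5, seed: int = 0) -> np.ndarray:
+     """n x k encoding: column j is all of A encoded with means fitted on fold j."""
+     A, y = np.asarray(A), np.asarray(y)
+     M, prior = np.zeros((len(A), k)), y.mean()
+     for j, (_, fold) in enumerate(KFold(k, shuffle=True, random_state=seed).split(A)):
+         means = {lvl: y[fold][A[fold] == lvl].mean() for lvl in np.unique(A[fold])}
+         M[:, j] = [means.get(lvl, prior) for lvl in A]
+     return M
+ ```
+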
359
+ # 7.2 Experimental design
360
+
361
+ This section provides additional details about the datasets and aggregation strategies we discussed in Section 4. The notation we use in this section is summarized in Table 10.
362
+
363
+ # 7.2.1 Datasets
364
+
365
+ Table 11 lists the datasets used in our experiments. The columns are as follows: ID is the OpenML identifier; $n$ is the number of rows; $d$ is the number of attributes; ${d}_{\text{cat}}$ is the number of categorical attributes; $\max \left| {\Omega }_{A}\right|$ is the maximum categorical attribute cardinality; the "ft" flag denotes datasets used for full tuning (cf. Section 4.4).
366
+
367
+ Table 11: Datasets used in the study.
368
+
369
+ <table><tr><td>Name</td><td>Ref.</td><td>ID</td><td>n</td><td>d</td><td>\( d_{cat} \)</td><td>max \( |\Omega_A| \)</td><td>ft</td></tr><tr><td>ada_prior</td><td></td><td>1037</td><td>4562</td><td>14</td><td>7</td><td>40</td><td>✓</td></tr><tr><td>adult</td><td></td><td>1590</td><td>48842</td><td>14</td><td>7</td><td>42</td><td>✓</td></tr><tr><td>airlines</td><td></td><td>1169</td><td>539383</td><td>7</td><td>4</td><td>293</td><td></td></tr><tr><td>amazon_employee_access</td><td></td><td>4135</td><td>32769</td><td>9</td><td>9</td><td>7518</td><td></td></tr><tr><td>Agrawal1</td><td></td><td>1235</td><td>1000000</td><td>9</td><td>3</td><td>20</td><td></td></tr><tr><td>Australian</td><td>[33]</td><td>40981</td><td>690</td><td>14</td><td>4</td><td>14</td><td>✓</td></tr><tr><td>bank-marketing</td><td>[24]</td><td>1461</td><td>45211</td><td>16</td><td>6</td><td>12</td><td></td></tr><tr><td>blogger</td><td>[13]</td><td>1463</td><td>100</td><td>5</td><td>3</td><td>5</td><td>✓</td></tr><tr><td>Census-Income-KDD</td><td></td><td>42750</td><td>199523</td><td>41</td><td>27</td><td>51</td><td></td></tr><tr><td>credit-approval</td><td>[32]</td><td>29</td><td>690</td><td>15</td><td>6</td><td>15</td><td>✓</td></tr><tr><td>credit-g</td><td>[17]</td><td>31</td><td>1000</td><td>20</td><td>11</td><td>10</td><td>✓</td></tr><tr><td>cylinder-bands</td><td>[12]</td><td>6332</td><td>540</td><td>37</td><td>17</td><td>71</td><td>✓</td></tr><tr><td>dresses-sales</td><td>[40]</td><td>23381</td><td>500</td><td>12</td><td>11</td><td>25</td><td>✓</td></tr><tr><td>heart-h</td><td>[19]</td><td>51</td><td>294</td><td>13</td><td>6</td><td>4</td><td></td></tr><tr><td>ibm-employee-attrition</td><td></td><td>43896</td><td>1470</td><td>34</td><td>5</td><td>9</td><td>✓</td></tr><tr><td>ibm-employee-performance</td><td></td><td>43897</td><td>1470</td><td>33</td><td>5</td><td>9</td><td>✓</td></tr><tr><td>irish</td><td></td><td>451</td><td>500</td><td>5</td><td>2</td><td>11</td><td>✓</td></tr><tr><td>jungle_chess_2pcs...elephant</td><td>[35]</td><td>40999</td><td>2351</td><td>46</td><td>2</td><td>3</td><td>✓</td></tr><tr><td>jungle_chess_2pcs...lion</td><td>[35]</td><td>41007</td><td>2352</td><td>46</td><td>2</td><td>3</td><td>✓</td></tr><tr><td>jungle_chess_2pcs...rat</td><td>[35]</td><td>41005</td><td>3660</td><td>46</td><td>2</td><td>3</td><td>✓</td></tr><tr><td>kdd_internet_usage</td><td></td><td>981</td><td>10108</td><td>68</td><td>20</td><td>129</td><td>✓</td></tr><tr><td>KDDCup09_appetency</td><td></td><td>1111</td><td>50000</td><td>230</td><td>33</td><td>15416</td><td></td></tr><tr><td>KDDCup09_churn</td><td></td><td>1112</td><td>50000</td><td>230</td><td>33</td><td>15416</td><td></td></tr><tr><td>KDDCup09_upselling</td><td></td><td>1114</td><td>50000</td><td>230</td><td>33</td><td>15416</td><td></td></tr><tr><td>KDD98</td><td></td><td>42343</td><td>82318</td><td>477</td><td>107</td><td>18543</td><td></td></tr><tr><td>kr-vs-kp</td><td>[37]</td><td>3</td><td>3196</td><td>36</td><td>1</td><td>3</td><td>✓</td></tr><tr><td>kick</td><td></td><td>41162</td><td>72983</td><td>32</td><td>17</td><td>1063</td><td></td></tr><tr><td>law-school-admission-binary</td><td></td><td>43890</td><td>20800</td><td>11</td><td>1</td><td>6</td><td></td></tr><tr><td>molecular-biology_promoters</td><td>[16]</td><td>956</td><td>106</td><td>57</td><td>56</td><td>4</td><td>✓</td></tr><tr><td>monks-problems-1</td><td>[43]</td><td>333</td><td>556</td><td>6</td><td>4</td><td>4</td><td>✓</td></tr><tr><td>monks-problems-2</td><td>[43]</td><td>334</td><td>601</td><td>6</td><td>4</td><td>4</td><td>✓</td></tr><tr><td>mv</td><td></td><td>881</td><td>40768</td><td>10</td><td>1</td><td>3</td><td>✓</td></tr><tr><td>mushroom</td><td>[25]</td><td>43922</td><td>8124</td><td>22</td><td>16</td><td>12</td><td>✓</td></tr><tr><td>national-longitudinal-survey-binary</td><td>[6]</td><td>43892</td><td>4908</td><td>16</td><td>4</td><td>29</td><td>✓</td></tr><tr><td>nomao</td><td>[3]</td><td>1486</td><td>34465</td><td>118</td><td>27</td><td>3</td><td></td></tr><tr><td>nursery</td><td></td><td>959</td><td>12960</td><td>8</td><td>7</td><td>5</td><td></td></tr><tr><td>openpayments</td><td></td><td>42738</td><td>73558</td><td>5</td><td>4</td><td>4374</td><td></td></tr><tr><td>porto-seguro</td><td></td><td>41224</td><td>595212</td><td>57</td><td>13</td><td>104</td><td></td></tr><tr><td>profb</td><td></td><td>470</td><td>672</td><td>9</td><td>3</td><td>28</td><td>✓</td></tr><tr><td>sick</td><td>[34]</td><td>38</td><td>3772</td><td>29</td><td>2</td><td>5</td><td>✓</td></tr><tr><td>sf-police-incidents</td><td></td><td>42344</td><td>538638</td><td>6</td><td>5</td><td>21838</td><td></td></tr><tr><td>SpeedDating</td><td></td><td>40536</td><td>8378</td><td>120</td><td>58</td><td>260</td><td>✓</td></tr><tr><td>students Scores</td><td></td><td>43098</td><td>1000</td><td>7</td><td>2</td><td>6</td><td>✓</td></tr><tr><td>telco-customer-churn</td><td></td><td>42178</td><td>7043</td><td>19</td><td>11</td><td>6531</td><td></td></tr><tr><td>thoracic-surgery</td><td>[48]</td><td>1506</td><td>470</td><td>16</td><td>3</td><td>7</td><td>✓</td></tr><tr><td>tic-tac-toe</td><td>[1]</td><td>50</td><td>958</td><td>9</td><td>9</td><td>3</td><td>✓</td></tr><tr><td>Titanic</td><td></td><td>40945</td><td>1309</td><td>13</td><td>6</td><td>1307</td><td></td></tr><tr><td>vote</td><td>[6]</td><td>56</td><td>435</td><td>16</td><td>16</td><td>3</td><td>✓</td></tr><tr><td>wholesale-customers</td><td></td><td>1511</td><td>440</td><td>8</td><td>1</td><td>3</td><td>✓</td></tr><tr><td>WMO-Hurricane-Survival-Dataset</td><td></td><td>43607</td><td>5021</td><td>22</td><td>21</td><td>4173</td><td></td></tr></table>
370
+
371
+ # 7.2.2 Aggregation strategies
372
+
373
+ This section presents the mathematical formulations of the aggregation strategies. As Section 4.5 explains, the results are aggregated across datasets, while keeping the other factors — ML model, tuning strategy, and quality metric — fixed.
374
+
375
+ Heuristics. Heuristics aggregate by ranking encoders according to some score. Increasing heuristics assign the best rank to the encoder with the highest score, while the non-increasing ones assign the best rank to the encoder with the lowest score. Table 12 contains the respective formulas. Any missing evaluations $(\perp)$ are ignored during the computation.
376
+
377
+ Friedman-Nemenyi tests. The Friedman test is used to test the null hypothesis that all encoders have, on average, the same rank. The Friedman statistic adjusted for ties [38, 18] is
378
+
379
+ $$
380
+ T = \frac {(m - 1) (S _ {t} - C)}{S _ {r} - C}
381
+ $$
382
+
383
+ where $S_{r} = \sum_{i = 1}^{n}\sum_{j = 1}^{m}r_{j}(E_{i})^{2}, S_{t} = \frac{1}{m}\sum_{i = 1}^{n}\left(\sum_{j = 1}^{m}r_{j}(E_{i})\right)^{2}$ , and $C = \frac{1}{4} mn(n + 1)^2$
384
+
385
+ Under the null hypothesis that all encoders have the same rank, $T$ is approximately distributed as an $F$ -distribution with $n - 1$ and $(m - 1)(n - 1)$ degrees of freedom.
386
+
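+ The statistic is a few lines of NumPy (a direct transcription of the formulas above; `ranks[j, i]` holds $r_j(E_i)$ with ties already averaged):
+
+ ```python
+ import numpy as np
+ from scipy.stats import f as f_dist
+
+ def friedman_p_value(ranks: np.ndarray) -> float:
+     """ranks: (m datasets) x (n encoders) matrix of within-dataset ranks."""
+     m, n = ranks.shape
+     S_r = (ranks ** 2).sum()                   # sum of squared ranks
+     S_t = (ranks.sum(axis=0) ** 2).sum() / m   # squared per-encoder rank sums
+     C = m * n * (n + 1) ** 2 / 4
+     T = (m - 1) * (S_t - C) / (S_r - C)
+     return f_dist.sf(T, n - 1, (m - 1) * (n - 1))
+ ```
+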
387
+ If the Friedman hypothesis is rejected, one can compare all pairs of encoders with $n(n - 1) / 2$ Nemenyi post-hoc tests [10]. Nemenyi tests apply a correction to control the error from testing multiple hypotheses. Two encoders $E_{1}$ and $E_{2}$ are significantly different if
388
+
389
+ $$
390
+ \frac{1}{m} \sum_{j = 1}^{m} \left(r_{j}(E_{1}) - r_{j}(E_{2})\right) \geq q_{\alpha} \sqrt{\frac{n (n + 1)}{6 m}}
391
+ $$
392
+
393
+ where $q_{\alpha}$ is the critical value of the Studentized range statistic divided by $\sqrt{2}$ [10].
394
+
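+ The critical difference on the right-hand side can be computed with SciPy's Studentized range distribution (a sketch; `studentized_range` is available from SciPy 1.7, and the large degrees-of-freedom value approximates the infinite-df critical value):
+
+ ```python
+ import numpy as np
+ from scipy.stats import studentized_range
+
+ def nemenyi_critical_difference(n: int, m: int, alpha: float = 0.05) -> float:
+     """Two encoders differ if their mean ranks differ by at least this amount."""
+     q_alpha = studentized_range.ppf(1 - alpha, n, 1e6) / np.sqrt(2)  # df ~ infinity
+     return q_alpha * np.sqrt(n * (n + 1) / (6 * m))
+ ```
+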
395
+ Kemeny-Young aggregation [21, 47, 45, 23].
396
+
397
+ The consensus's adjacency matrix $\mathbf{C}$ is a solution to the mixed-integer linear problem
398
+
399
+ $$
400
+ \text{Maximize} \quad \sum_{i, h} \mathbf{S}_{ih} \left(2 \mathbf{C}_{ih} - 1\right)
401
+ $$
402
+
403
+ subject to
404
+
405
+ $$
406
+ \mathbf{C}_{ih} - \mathbf{C}_{kh} - \mathbf{C}_{ik} \geq -1 \quad \forall i, h, k; \qquad \mathbf{C}_{ih} + \mathbf{C}_{hi} \geq 1 \quad \forall i < h; \qquad \mathbf{C}_{ih} \in \{0, 1\} \quad \forall i, h
407
+ $$
408
+
409
+ where $\mathbf{S} = \left(\sum_{j}\frac{\mathbf{R}_{ih}^{j}}{n_{j}(n_{j} - 1)}\right)_{i,h = 1}^{n}$ is a cost matrix and $n_j = \sum_i\mathbb{1}(r_j(E_i)\neq \bot)$ is the number of encoders with evaluation on dataset $j$ . This formulation accounts for ties and missing ranks.
410
+
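+ The mixed-integer program translates almost line by line into PuLP (a sketch; CBC is PuLP's bundled default solver, and for the few dozen encoders considered here the $O(n^3)$ transitivity constraints are manageable):
+
+ ```python
+ import pulp
+
+ def kemeny_young(S):
+     """Return the consensus adjacency matrix C maximizing sum S_ih * (2 C_ih - 1)."""
+     n = len(S)
+     prob = pulp.LpProblem("kemeny_young", pulp.LpMaximize)
+     C = [[pulp.LpVariable(f"C_{i}_{h}", cat="Binary") for h in range(n)] for i in range(n)]
+     prob += pulp.lpSum(S[i][h] * (2 * C[i][h] - 1) for i in range(n) for h in range(n))
+     for i in range(n):
+         for h in range(n):
+             for k in range(n):
+                 prob += C[i][h] - C[k][h] - C[i][k] >= -1   # transitivity
+     for i in range(n):
+         for h in range(i + 1, n):
+             prob += C[i][h] + C[h][i] >= 1                  # totality (ties allowed)
+     prob.solve(pulp.PULP_CBC_CMD(msg=False))
+     return [[int(C[i][h].value()) for h in range(n)] for i in range(n)]
+ ```
+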
411
+ # 7.3 Results
412
+
413
+ This section complements Section 5.
414
+
415
+ # 7.3.1 Missing evaluations
416
+
417
+ We successfully completed 61812 runs out of 64000, one per combination of encoder, dataset, ML model, tuning strategy, and quality metric. The 2188 failed evaluations are evenly distributed among the encoders, while the tuning strategy was the most influential factor: there were 4303 missing evaluations with no tuning, 1152 with model tuning, and 32 with full tuning. This is likely due to the bigger datasets used in no tuning and model tuning, cf. Section 4.4 and Table 11. The total runtime of the successful evaluations was 108 days.
418
+
419
+ Table 12: Scores of heuristics.
420
+
421
+ <table><tr><td></td><td>Score of $E$</td><td>Increasing</td></tr><tr><td>Mean rank</td><td>$\frac{1}{m}\sum_{j} r_j(E)$</td><td></td></tr><tr><td>Median rank</td><td>$\mathrm{median}\{r_j(E)\}_{j=1}^{m}$</td><td></td></tr><tr><td>Rank best</td><td>$\sum_{j=1}^{m} \mathbb{1}(r_j(E) = 1)$</td><td>✓</td></tr><tr><td>Rank worst</td><td>$\sum_{j=1}^{m} \mathbb{1}(r_j(E) \neq \max_{i=1,\dots,n} r_j(E_i))$</td><td></td></tr><tr><td>Mean quality</td><td>$\frac{1}{m}\sum_{j} \Phi_j(E)$</td><td>✓</td></tr><tr><td>Median quality</td><td>$\mathrm{median}\{\Phi_j(E)\}_{j=1}^{m}$</td><td>✓</td></tr><tr><td>Rescaled mean quality</td><td>$\frac{1}{m}\sum_{j=1}^{m} \frac{\Phi_j(E) - \Phi_j^{\min}}{\Phi_j^{\max} - \Phi_j^{\min}}$</td><td>✓</td></tr><tr><td>$\theta$-best quality</td><td>$\sum_{j=1}^{m} \mathbb{1}(\Phi_j(E) \geq \theta \cdot \Phi_j^{\max})$</td><td>✓</td></tr></table>
422
+
423
+ # 7.3.2 Run time
424
+
425
+ We computed two scores for the runtimes of encoders. First is the time necessary to encode the dataset. The outcome, displayed in Figure 4a, is that GLMM-based encoders are the slowest. This happens because the bottleneck of these encoders is fitting the random intercept model, a problem that we could only partially alleviate with our custom implementation. As expected, the ML model has no influence on the encoding runtime.
426
+
427
+ Second is the time necessary to tune the encoder-model pipeline, where each tuning step requires encoding the dataset and then fitting a model. Figure 4b tells a similar story to encoding, with GLMM-based encoders being the slowest. The other encoders, apart from Drop and Mean-Target, all show similar runtimes.
428
+
429
+ ![](images/41e16f9545e367575fbe1d3b31161932d2240cfbb18e30d7db423d69d6ce1ce8.jpg)
430
+
431
+ ![](images/451280812143485a74e9d1a20fd532daee0354122687354c44d6520837363dcd.jpg)
432
+ (a) Encoding
433
+ (b) Tuning
434
+ Figure 4: Runtime of encoders (a) and full tuning pipelines (b).
435
+
436
+ # 7.3.3 Replicability
437
+
438
+ This section extends the replicability analysis of Section 5.2, showing the behavior of different quality metrics in Figure 5a and aggregation strategies in Figure 5b.
439
+
440
+ The quality metrics behave similarly for $\rho$ -replicability. The notable exception is the AUC in model tuning, which is significantly better than the other metrics. Regarding $J$ -replicability, instead, accuracy is clearly the poorest choice. This hints that accuracy cannot discern the best encoder as well as the other metrics do, and that it is more sensitive to the choice of dataset.
441
+
442
+ Among the aggregation strategies, rank best (R-B) shows higher replicability. A possible explanation is that R-B produces consensus rankings with many encoders tied as the best ones and few tiers in general.
443
+
444
+ ![](images/b387083eab9f4a9dbaba1b7508cdc94bffd007a68e7d7f0107a2cd49ade2504b.jpg)
445
+ (a) Quality metric
446
+ ![](images/6296e4e43e9e32df8a434919911e7124fc05e9ae51e1cccfe5b9f67b906ef660.jpg)
447
+ (b) Aggregation strategy
448
+ Figure 5: Average similarity of consensus rankings from disjoint subsets of datasets, conditional on (a) quality metric and (b) aggregation strategy.
449
+
450
+ # 7.3.4 Comparing encoders
451
+
452
+ This section expands on Section 5.3 and portrays in Figure 6 the distribution of ranks of encoders. The best encoders are evident for LogReg (Sum, OH, WoE, Bin) and k-NN (WoE), confirmed by Nemenyi tests at 0.05 significance.
453
+
454
+ ![](images/4cfe7ed40de6fe350b79525e891f697365a8a85a584406b125c7f0380b91762b.jpg)
455
+
456
+ ![](images/8ad18abafcbe3bb6bd9fd040d3a018ed87b65f25af6898d6ceeab98508f90a4e.jpg)
457
+ (a) DT
458
+ Figure 6: Ranks of encoders.
459
+
460
+ # 7.3.5 Effect of tuning
461
+
462
+ This section investigates whether tuning leads to improvements in pipeline performance. The tuning strategies are described in Section 4.4. For a pair of tuning strategies, we consider the factors they share and compute the difference in performance between the corresponding pipelines. Figure 7 shows that full tuning is, in general, advantageous over no tuning and slightly better than model tuning.
463
+
464
+ ![](images/3d58d2b84b99d69f946277cce2d97057a3a0da05f24d94f48c203f9024c75bad.jpg)
465
+ (a) Model
466
+
467
+ ![](images/50cc9406c6080370ef377355ed2a4d38eacdb76311d551f2df89c545c755b8c6.jpg)
468
+
469
+ ![](images/43f998541103cdcb97c8adcea8e1d0c0c4a9d8b0de548a83377b2b166ae8bcb8.jpg)
470
+ (b) Scoring
471
+
472
+ ![](images/9c59e90f99c3b6189f027f8522cfdfa3ec456c3ee813b5cf748fa3da6613e08e.jpg)
473
+ (c) Encoder — full tuning VS no tuning
474
+ (d) Encoder — full tuning VS model tuning
475
+ Figure 7: Performance gain of full tuning over no tuning and model tuning.
abenchmarkofcategoricalencodersforbinaryclassification/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:97f39e8160d79b5f83003fc3ff27cb975a57d9b88987ec4baa3e238237092fab
3
+ size 1439976
abenchmarkofcategoricalencodersforbinaryclassification/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:002ae1b1c7210e8b6a51ae7552cbfd70fe3bb58b2eefa9d14702080b1cc5a2d0
3
+ size 562779
acaseforreframingautomatedmedicalimageclassificationassegmentation/5bd74e27-4907-4270-b043-70a2a6f2f565_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f985c92cfa83aadf78846b245b8e1a30fdba98bb4046d5f5d23123f237e4d5b
3
+ size 156048
acaseforreframingautomatedmedicalimageclassificationassegmentation/5bd74e27-4907-4270-b043-70a2a6f2f565_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dcb695dcbd2e1d53f452d91998f91a72df3253d1fb921fa8e4868afc45f15e0b
3
+ size 186765
acaseforreframingautomatedmedicalimageclassificationassegmentation/5bd74e27-4907-4270-b043-70a2a6f2f565_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ba47b9bf03a1c195fb7976286023f1e762b13d8bd28418c2b51372a936357977
3
+ size 5675051
acaseforreframingautomatedmedicalimageclassificationassegmentation/full.md ADDED
The diff for this file is too large to render. See raw diff
 
acaseforreframingautomatedmedicalimageclassificationassegmentation/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c24bd7fad78b9de7d09b0ee78ed27c20e7ea735f1cb22e85e3e852ae8711a5e6
3
+ size 872287
acaseforreframingautomatedmedicalimageclassificationassegmentation/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a00ffc793d02c9ad1ef06c7135f3063cff938357437e8ba4c79e679078f33cb2
3
+ size 689306
afastheuristictooptimizetimespacetradeoffforlargemodels/993d6f42-1b27-49de-bca5-48652f44f02e_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f3d5fcfefb32771d75beff1ddcc6c7d3dfe8b9672da477c89f58f9d852deb831
3
+ size 130049
afastheuristictooptimizetimespacetradeoffforlargemodels/993d6f42-1b27-49de-bca5-48652f44f02e_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0dc2b959d61238f5f9c2b1cebb47405476565e2f9796ac2ea080de013c81c630
3
+ size 154122
afastheuristictooptimizetimespacetradeoffforlargemodels/993d6f42-1b27-49de-bca5-48652f44f02e_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:356c92d9caf7d0c8b3a25b351b64404663214936b8c192b6d3abe8ac28056663
3
+ size 1161835
afastheuristictooptimizetimespacetradeoffforlargemodels/full.md ADDED
@@ -0,0 +1,482 @@
1
+ # A fast heuristic to optimize time-space tradeoff for large models
2
+
3
+ Akifumi Imanishi *
4
+
5
+ Preferred Networks imanishi@preferred.jp
6
+
7
+ Zijian Xu*
8
+ Preferred Networks
9
+ joe@preferred.jp
10
+
11
+ Masayuki Takagi
12
+ Preferred Networks
13
+ mtakagi@preferred.jp
14
+
15
+ Sixue Wang
16
+
17
+ Preferred Networks
18
+ cecilwang@preferred.jp
19
+
20
+ Emilio Castillo
21
+ Preferred Networks
22
+ ecastill@preferred.jp
23
+
24
+ # Abstract
25
+
26
+ Training large-scale neural networks is heavily constrained by GPU memory. In order to circumvent this limitation, gradient checkpointing, or recomputation, is a powerful technique. There is active research in this area with methods such as Checkmate [19] or Moccasin [3]. However, both Checkmate and Moccasin rely on mixed integer linear programming or constraint programming, resulting in limited scalability due to their exponentially large search space.
27
+
28
+ This paper proposes a novel algorithm for recomputation (FastSA) based on a simulated annealing heuristic that achieves comparable or even better solutions than state-of-the-art alternatives. FastSA can optimize computational graphs with thousands of nodes within 3 to 30 seconds, several orders of magnitude faster than current solutions.
29
+
30
+ We applied FastSA to PyTorch models and verified its effectiveness through popular large vision and text models, including recent language models with the transformer architecture. The results demonstrate significant memory reductions of $73\%$ at the cost of an extra $18\%$ computation on average. Our experiments demonstrate the practicality and effectiveness of our recomputation algorithm, further highlighting its potential for wide application in various deep learning domains.
31
+
32
+ # 1 Introduction
33
+
34
+ The memory requirements for deep neural networks continue to grow together with model complexity. For example, training a state-of-the-art Large Language Model (LLM) such as LLaMA [40] with 65 billion parameters needs 1024 NVIDIA A100 GPU devices for 21 days and different techniques to overcome the immense memory consumption of the process. While the model parameters may fit in the device memory, the results of several operations of the forward pass are saved and remain in the device memory to compute the gradients during the backpropagation step. This limits the size of trainable models. One of the most widely used solutions is to split the model parameters across multiple devices in vertical or horizontal ways [34, 26], in what is called Pipeline or Model parallelism. However, these approaches may have the following problems. (1) It may be difficult to split the model equally across the workers when the model architecture is complex. (2) We need to modify the model for the parallelism. (3) Hiding communication behind computation needs further tuning. Heavy inter-device communication may result in a GPU utilization of only $5\%$ of the hardware peak [30].
35
+
36
+ ![](images/c8415d86f214a048c3772af166bc575ac5cf1498c29a46bda3475ed9861ed692.jpg)
37
+ Figure 1: Optimization time for all 23 models used in our experiments with different budgets. Checkmate LP could not find a feasible solution within 6 hours for models with more than 2300 nodes. For most instances, FastSA found a recomputation plan in around 3 seconds.
38
+
39
+ Numerous techniques exist to lessen memory requirements while training deep neural networks. Gradient checkpointing, or recomputation, is a widely-known approach that recomputes some of the activations during the backward pass, rather than saving them in memory, consequently reducing memory requirements. Recomputation plans can be specified by the user or derived heuristically. Originally, division-based methods were used for automatic recomputation, where the forward computation graph was split into stages, and only tensors active across stages were stored [7, 21, 20]. However, these methods are less effective for complex large networks that are far from sequential. Checkmate is one of the state-of-the-art methods in automatic recomputation. It can be applied to any computational graph, considering the costs of operators and the sizes of values. However, Checkmate requires solving a large mixed integer linear programming problem, whose search space is exponential in the square of the size of the computational graph, and thus requires immense computational time and large RAM resources. Moreover, Checkmate cannot efficiently reduce the memory usage if the initial computation order is not memory efficient enough (discussed in Appendix A.1). A recent work, Moccasin [3], uses constraint programming to build a set of restrictions and introduces a hyperparameter that limits the number of times a variable can be recomputed. Although the search space remains exponential as in Checkmate, this hyperparameter lessens the number of integer variables from quadratic to linear with respect to the graph size, resulting in a faster execution.
40
+
41
+ This paper proposes a novel technique to recompute the activations that need to be saved for the backpropagation pass by using a fast heuristic algorithm that operates on the joint computational graph of the forward and backward passes and determines which operations to recompute to reduce the amount of temporary memory used. We formalize the problem of recomputation as finding a candidate sequence of operators that minimizes an objective function. We apply a simulated annealing algorithm where each step can be an operator addition or deletion from the computational sequence, or an alteration of the order in which operators are sequentially applied, while maintaining the dependencies between them. The performance of the simulated annealing is significantly improved by a segment tree data structure [9], which allows the memory usage of each candidate sequence to be evaluated lazily, enabling nearly 1 million mutations of the candidate sequence per second.
42
+
43
+ The main contributions of our paper are:
44
+
45
+ - We present a novel approach to recompute intermediate results in computational graphs using a heuristic algorithm based on simulated annealing that works for any computational graph.
46
+ - We further improve the algorithm's performance using a data structure that allows efficient computation of peak memory usage when the sequence of operators is changed. This can optimize the recomputation within 3 to 30 seconds even for large computational graphs.
47
+ - We evaluate our proposal using a representative set of models obtained from Hugging Face, including vision models, text models, and recent language models such as LLaMA [40]. We show reductions in memory consumption by $73\%$ with an average computational overhead of $18\%$ .
48
+
49
+ ![](images/e651b9993c42149a1b0250122da8d4725e4cae6d0659801186df43860bea6cfc.jpg)
50
+ Figure 2: Lifetime of recomputed values. The left side shows the full lifetime of a value without recomputation, while the right side demonstrates that recomputation results in the value having two distinct lifetimes.
51
+
52
+ ![](images/d2e1f6492ed482dd69fca70d23f6f921a88107a3edef6731a0adc89285530847.jpg)
53
+
54
+ # 2 Fast Simulated Annealing for Recomputation
55
+
56
+ # 2.1 Problem Description
57
+
58
+ To apply the Simulated Annealing optimization strategy to the recomputation problem, we present a series of formal concepts that will be used throughout the algorithm description.
59
+
60
+ Computational Graph and Sequence We consider computational graphs in which each node represents an operator. A computational graph consists of a finite set of nodes $\mathcal{N}$ and a finite set of values $\mathcal{V}$ . Each node takes some values as inputs and produces some values as outputs; therefore, we introduce notations $\mathrm{input}(n)\subseteq \mathcal{V}$ and $\mathrm{output}(n)\subseteq \mathcal{V}$ for the values associated to a node $n\in \mathcal{N}$ . In most cases, $|\mathrm{input}(n)|$ and $|\mathrm{output}(n)|$ are small. The computational graph is often visualized as a directed acyclic graph (DAG) with directed edges from $n_i\in \mathcal{N}$ to $n_j\in \mathcal{N}$ if an output of $n_i$ is used as an input of $n_j$ , i.e., $\mathrm{output}(n_i)\cap \mathrm{input}(n_j)\neq \emptyset$ . A value that is not the output of any node is a model input, which often represents the arguments passed to the model and the model parameters. We also have model outputs, the target of the computation. Both nodes and values are weighted: the computational cost of node $n$ is represented by $c(n)\geq 0$ , and the size of value $v$ is represented by $s(v)\geq 0$ .
61
+
62
+ Rather than performing the optimization over the DAG, the proposed method is easier to understand if the computation is treated as a sequence of nodes $(n_{1},\ldots ,n_{T})$ , which means that $n_t$ is operated at integer time $t$ . We say that a sequence of nodes has a valid dependency if we can execute the operators in the order of the sequence so that all model outputs are correctly computed. Formally, $\forall i.\forall v\in \mathrm{input}(n_i).\exists j < i$ such that $v\in \mathrm{output}(n_j)$ or $v$ is a model input. It is worth noting that node duplication is allowed in the above node sequence $(n_1,\dots,n_T)$ , and the recomputation of a value can be represented by multiple appearances of the same node.
63
+
64
+ Memory usage In order to optimize memory usage, we must keep track of when values are computed and used as inputs. Specifically, we must keep value $v$ in memory from the time it is computed as an output until the last time it is used as an input. However, when $v$ is recomputed at time $t$ , we can temporarily free $v$ from memory if it is not used until $t$ . When recomputation is considered, the lifetime of $v$ can be represented as a set of independent time intervals, as shown in Figure 2. To determine whether $v$ is kept in memory at a given time $t$ , we use a function $L_{S}(v,t) \in \{0,1\}$ . This function outputs 1 if $v$ is kept in memory at time $t$ , and 0 otherwise.
65
+
66
+ $$
67
+ L_{S}(v, t) := \begin{cases} 1 & v \text{ is a model input} \\ 1 & v \text{ is a model output} \wedge \forall t' > t,\; v \notin \mathrm{output}(n_{t'}) \\ 1 & \exists t'' \geq t,\; v \in \mathrm{input}(n_{t''}) \wedge \forall t' \in [t, t''),\; v \notin \mathrm{output}(n_{t'}) \\ 0 & \text{otherwise} \end{cases}
68
+ $$
69
+
70
+ Let $M(S)$ be the minimum memory size required to compute a valid sequence $S$ . Then, we have $M(S) = \max_{1 \leq t \leq T} \sum_{v \in V} L_S(v, t) \cdot s(v)$ . Also, we define the cost of the sequence, $C(S)$ , by $C(S) = \sum_{1 \leq t \leq T} c(n_t)$ .
71
+
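+ The definitions above translate into a naive reference evaluator (a sketch with illustrative names, useful for checking an optimized implementation; FastSA itself never evaluates $M(S)$ from scratch like this):
+
+ ```python
+ from dataclasses import dataclass
+
+ @dataclass
+ class Node:
+     cost: float = 0.0
+     inputs: frozenset = frozenset()     # names of consumed values
+     outputs: frozenset = frozenset()    # names of produced values
+
+ def lives(v, t, seq, model_inputs, model_outputs):
+     """L_S(v, t): is value v resident in memory at time t?"""
+     if v in model_inputs:
+         return True
+     if v in model_outputs and all(v not in seq[u].outputs for u in range(t + 1, len(seq))):
+         return True
+     for t2 in range(t, len(seq)):       # a use at t2 with no re-production in [t, t2)
+         if v in seq[t2].inputs and all(v not in seq[u].outputs for u in range(t, t2)):
+             return True
+     return False
+
+ def peak_memory(seq, size, model_inputs, model_outputs):
+     """M(S) = max_t sum_v L_S(v, t) * s(v)."""
+     values = set(model_inputs) | {v for n in seq for v in n.outputs}
+     return max(sum(size[v] for v in values
+                    if lives(v, t, seq, model_inputs, model_outputs))
+                for t in range(len(seq)))
+
+ def total_cost(seq):
+     """C(S) = sum_t c(n_t)."""
+     return sum(n.cost for n in seq)
+ ```
+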
72
+ ![](images/810e15002c138a65aa4f1121fcc0ccba9acb1acb3402ca51e46a3f3c3b387439.jpg)
73
+ Figure 3: Overview of the simulated annealing.
74
+
75
+ Recomputation Let $f$ be an objective function which maps sequences to a real number. The recomputation problem is defined as the construction of a valid sequence $(n_1, \ldots, n_T)$ that minimizes $f(S)$ .
76
+
77
+ When minimizing memory usage, we set $f(S) \coloneqq M(S)$ . The objective function can depend not only on $M(S)$ but also on computational costs. In our experiments, we utilized Equation 1:
78
+
79
+ $$
80
+ f(S) := \max(\text{budget}, M(S)) \times C(S) \tag{1}
81
+ $$
82
+
83
+ This function minimizes the total computational cost when the memory budget is met. Otherwise, it tries to minimize both the peak memory and the cost.
84
+
85
+ # 2.1.1 Add-max segment tree
86
+
87
+ The range-add/range-max segment tree [5] is a data structure that can evaluate add and max queries over intervals lazily. The add-max segment tree, utilized in this paper, holds an array $A[T]$ of length $T$ and can carry out the following operations in $O(\log T)$ time:
88
+
89
+ - Add $x$ to $A[i]$ for $i \in [l, r)$
90
+ - Obtain the maximum value of $A[i]$ for $i \in [l, r)$
91
+
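+ A compact sketch of such a tree with lazy addition tags (illustrative; the array size is padded to a power of two, and each node's stored maximum excludes the pending tags of its ancestors):
+
+ ```python
+ class AddMaxSegmentTree:
+     """Range add and range max over A[0..T) in O(log T) per operation."""
+     def __init__(self, T: int):
+         self.n = 1
+         while self.n < T:
+             self.n *= 2                    # pad to a power of two; padding cells stay 0
+         self.mx = [0.0] * (2 * self.n)     # subtree maxima (excluding ancestor tags)
+         self.add = [0.0] * (2 * self.n)    # pending addition for the whole subtree
+
+     def _update(self, l, r, x, node, nl, nr):
+         if r <= nl or nr <= l:             # disjoint
+             return
+         if l <= nl and nr <= r:            # node interval fully inside [l, r)
+             self.mx[node] += x
+             self.add[node] += x
+             return
+         mid = (nl + nr) // 2
+         self._update(l, r, x, 2 * node, nl, mid)
+         self._update(l, r, x, 2 * node + 1, mid, nr)
+         self.mx[node] = max(self.mx[2 * node], self.mx[2 * node + 1]) + self.add[node]
+
+     def range_add(self, l, r, x):          # A[i] += x for i in [l, r)
+         self._update(l, r, x, 1, 0, self.n)
+
+     def _query(self, l, r, node, nl, nr):
+         if r <= nl or nr <= l:
+             return float("-inf")
+         if l <= nl and nr <= r:
+             return self.mx[node]
+         mid = (nl + nr) // 2
+         res = max(self._query(l, r, 2 * node, nl, mid),
+                   self._query(l, r, 2 * node + 1, mid, nr))
+         return res + self.add[node]        # apply this node's pending tag on the way up
+
+     def range_max(self, l, r):             # max(A[i] for i in [l, r))
+         return self._query(l, r, 1, 0, self.n)
+ ```
+
+ In the annealer, each live-interval update becomes one or two `range_add` calls over the affected time range, and the peak memory $M(S)$ is simply `range_max(0, T)`.
+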
92
+ # 2.2 Fast Simulated Annealing Algorithm
93
+
94
+ The proposed algorithm is based on simulated annealing (SA) with a straightforward outline. We make small modifications to the sequence and accept them if they decrease the objective function $f(S)$ . Please see Figure 3 for an overview of the algorithm. Two operations must be performed in sublinear time for the algorithm to be efficient: (1) slightly modifying $S$ to create $S'$ , and (2) computing $f(S')$ .
95
+
96
+ To generate $S'$ from $S$ , we either add or remove a node. We represent the sequence as a fixed-sized vector of nodes, where each position in the sequence corresponds to a specific time step. This representation makes it easy to manage the lifetime of each value. The vector contains a special node, nop, which has zero computational cost and no inputs or outputs associated with it. When adding or removing a node, its value is exchanged with that of nop.
97
+
98
+ We start with a valid sequence $S = (n_{1}, \dots, n_{T})$ and for all $1 \leq i \leq T$ , $n_i \in \{\mathcal{N} \cup \mathrm{nop}\}$ . To enable the insertion of a long computation between nodes, most elements of $S$ should be initialized with nop. In practice, the initial $S$ is constructed from the sequence representing the initial execution order by inserting nop nodes between each element.
99
+
100
+ In each iteration, we perform one of the following three mutations:
101
+
102
+ 1. (Add computation) Select a random node $n \in \mathcal{N}$ and a random time $t$ where $n_t = \mathrm{nop}$ . We attempt to update $n$ , i.e., $n_t \gets n$ .
103
+ 2. (Remove computation) Select a random time $t$ such that $n_t \neq \mathrm{nop}$ and attempt to update $n_t \gets \mathrm{nop}$ .
104
+ 3. (Rotating sequence) Select two random times $t_1$ and $t_2$ such that $n_{t_1} \neq \mathrm{nop}$ and $n_{t_2} = \mathrm{nop}$ . Try swapping $n_{t_1}$ and $n_{t_2}$ . This modification can be implemented with the combination of the first two modifications.
105
+
106
+ To ensure that the mutated sequence $S'$ has a valid dependency, we validate the following conditions and update the sequence only if they are satisfied. Note that we can efficiently check these conditions by maintaining a set of produced and used times for each value $v$ .
107
+
108
+ 1. (Add computation) For each input $v$ of $n_t$ , $v$ must be produced before time $t$ .
109
+ 2. (Remove computation) For each output $v$ of $n_t$ , either $v$ must already be produced before time $t$ , or there must not be any user of $v$ before the next production of $n_t$ .
110
+
111
+ An important point to note is that the mutations 1 and 2 are inverses of each other, meaning that to undo a change, one simply needs to reapply the inverse mutation.
112
+
113
+ # 2.2.1 Updating peak memory and objective
114
+
115
+ In the following discussion, we consider the update of $f(S)$ by mutation 1 and 2. Since the objective function $f(S)$ often depends on both memory usage $M(S)$ and the overall cost $C(S) = \sum_{t} c(n_{t})$ , we assume this is still the case here. We can easily update $C(S)$ by adding or subtracting the corresponding cost of a newly added or removed node from $S$ . Updating $M(S)$ is more challenging; to calculate the maximum peak memory consumption when a node is added or removed from $S$ , we have to calculate the aggregated memory consumption by the values with overlapping lifetimes.
116
+
117
+ The lifetimes of inputs and outputs change when adding or removing node $n$ (see Figure 5 in Appendix for illustration). However, the insertion and deletion of a node only slightly modify the life intervals for each node's input and output values. Consider a value $v$ with life intervals $L(v) \coloneqq \{(t_i, t_j)\}$ determined by $L_S(v, t)$ . Suppose node $n$ is inserted at time $t$ , and $v$ is an input of $n$ . We update the life intervals of $v$ as follows: If there is $(t_i, t_j) \in L(v)$ such that $t_i \leq t \leq t_j$ , do nothing. Otherwise, take a $(t_i, t_j) \in L(v)$ such that $t_j \leq t$ is maximum (such $t_j$ exists if the sequence after insertion is valid). Update $L(v) \gets (L(v) \setminus \{(t_i, t_j)\}) \cup \{(t_i, t)\}$ . Similar rules apply for updating the life intervals of the outputs of the inserted node and the inputs and outputs of the node to remove. Specifically, on inserting or deleting a node, the update of life intervals for an input or output value applies to one of the following four cases:
118
+
119
+ 1. No update.
120
+ 2. Extend or shrink a range $(t_L, t_R)$ to $(t_L, t_{R'})$ or $(t_{L'}, t_R)$ .
121
+ 3. Split a range $(t_L, t_R)$ into $(t_L, t_{R'})$ and $(t_{L'}, t_R)$ .
122
+ 4. Merge two ranges $(t_L, t_{R'})$ and $(t_{L'}, t_R)$ into $(t_L, t_R)$ .
123
+
124
+ Because the update of live intervals involves adding or subtracting the value size $s(v)$ from a certain range for each input or output value $v$ (as depicted in Figure 5 in Appendix), we can efficiently maintain memory usage and calculate the peak memory using a segment tree. This data structure allows range-add updates and range-max queries to be performed in $O(\log T)$ time, as introduced in Section 2.1.1. The segment tree maintains memory usage for time $t$ in an internal array $A[t]$ , and peak memory can be determined efficiently by taking the maximum value in the range $[0, T)$ in $O(\log T)$ time. To be specific, we add $s(v)$ for the extended range and subtract $s(v)$ for the removed range when updating the live intervals. The segment tree significantly contributes to the simulated annealing as it reduces the time for the differential update of a lifetime interval from the naive $O(T)$ to $O(\log T)$ .
125
+
126
+ # 2.2.2 Improving memory reduction by grouping
127
+
128
+ To determine an optimal recomputation sequence, it may be necessary to replicate specific node patterns to rematerialize a larger value. However, achieving this using random insertion and deletion of nodes is challenging. To address this issue, we introduce the concept of grouped nodes, which concatenates multiple nodes in a series to enable the recomputation of a series of nodes. A grouped node, represented by $g$ , is formed by concatenating two nodes, $n_1$ and $n_2$ , and has the following properties:
129
+
130
+ - $c(g) = c(n_1) + c(n_2)$
131
+ - input $(g) = \mathrm{input}(n_1)\cup (\mathrm{input}(n_2)\backslash \mathrm{output}(n_1))$
132
+ - output $(g) = (\mathrm{output}(n_1)\backslash \mathrm{input}(n_2))\cup \mathrm{output}(n_2)$
133
+
134
+ Grouped nodes can be especially useful for sequenced patterns where $n_1$ must be computed immediately before $n_2$ , as grouping these nodes together can make the sequence more likely to converge during simulated annealing. Additionally, we observed that after conducting annealing with grouped nodes, further improvement can be achieved by decomposing the grouped node and performing another round of annealing with lower temperatures. In Appendix C.1, we provide further discussion on the benefits of using node grouping for optimization.
135
+
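+ With the Node sketch from Section 2.1, grouping is one line of set algebra per property (a sketch; it assumes, as the definition above does, that values both produced by $n_1$ and consumed by $n_2$ are not needed elsewhere):
+
+ ```python
+ def group(n1: Node, n2: Node) -> Node:
+     """Concatenate n1 followed immediately by n2 into one recomputable unit."""
+     return Node(
+         cost=n1.cost + n2.cost,
+         inputs=n1.inputs | (n2.inputs - n1.outputs),
+         outputs=(n1.outputs - n2.inputs) | n2.outputs,
+     )
+ ```
+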
136
+ # 2.3 Other Considerations
137
+
138
+ Conceptually, recomputation makes it possible to optimize the time-space tradeoff without changing the graph semantics, i.e., the output values of the neural network. However, in real models, certain operators with internal states or side effects may produce different output values if recomputed or if their computation order is changed. Our algorithm can handle these cases by simply prohibiting node addition or removal for these nodes.
139
+
140
+ An additional way to reduce memory usage that can be easily incorporated into our recomputation algorithm is offloading, which involves moving tensors from GPU memory to host memory when they are not immediately needed, and then moving them back before performing the dependent operation. Our algorithm can be extended to support offloading, as outlined in Appendix B.3. Supporting offloading together with recomputation offers the potential for even greater reductions in memory usage, particularly for larger computational graphs.
141
+
142
+ # 3 Experiments
143
+
144
+ In this section, we present the results of our experiments with the recomputation algorithm. Our algorithm was integrated into the PyTorch framework, and we used it to optimize the internal computational graph of various popular models, including vision and text architectures.
145
+
146
+ # 3.1 Configuration
147
+
148
+ Model and input data Our experiments involved the latest vision models and vision transformers obtained from timm (PyTorch Image Models), as well as text models (including language models) from Hugging Face transformers. To obtain the full computation graph with backpropagation, the vision models were set up for image classification, and the text models used variants with sequence classification heads. The computational graphs were obtained by PyTorch's symbolic tracing. The value sizes are estimated from shapes and data types. Since node cost estimations were not available in symbolic tracing, all nodes were assumed to have unit cost. For memory budgets, we used the $50\%$ and $25\%$ values of the simulated initial peak memory. For additional details on the environment, PyTorch integration, hyperparameters of the models (e.g., batch sizes, sequence length in transformers), and the effect of using simulated costs instead of actual ones, please refer to Appendix D.
149
+
150
+ Objective function and hyperparameters of SA As outlined in Section 2, we utilized an objective function of the form $f(S) = \max (\mathrm{budget}, M(S)) \times C(S)$ for our SA algorithm unless mentioned otherwise. This objective function seeks to minimize $C(S)$ once the memory budget is met, and until that point, it minimizes both memory and cost. We included cost in the memory minimization objective to counteract growth in the number of nodes, which can result in slower convergence. This objective function is continuous and independent of the cost or size units, and it works effectively for a broad range of practical models.
151
+
152
+ To ensure efficient convergence, we ran the first SA on grouped nodes for at most 20 million iterations, or until the memory budget was met, whichever occurred first. The second SA ran for a fixed 2 million iterations for the purpose of cost reduction, as detailed in Section 2.2.2.
153
+
154
+ Checkmate To the best of our knowledge, the most powerful recomputation planner currently available is Checkmate [19]. We re-implemented Checkmate for PyTorch. However, Checkmate
155
+
156
+ ![](images/1a640890afc3f94168015767548bd8b90f45a07cc589ca6333d8bcedef76da94.jpg)
157
+ Figure 4: Comparison of simulated memory decrease and cost increase by recomputation. For each model, the memory budget is set to $50\%$ and $25\%$ of the simulated initial peak memory (darker bars represent the $25\%$ budget and light bars represent the $50\%$ budget). FastSA AVE represents the geometric mean for all 23 models by FastSA. Checkmate LP's average bar is not shown since it failed to solve 11 out of 23 model instances.
158
+
159
+ ![](images/a4d9e066b961025c8518c970f3913aa631505956edc0308c0dcdc7b24216482c.jpg)
160
+
161
+ could not find any feasible solution within 6 hours for all models with more than 500 nodes due to the size of the mixed integer linear programming (MILP) problem to be solved. Therefore, for the comparison with large models, we resort to Checkmate LP, where the MILP was relaxed to linear programming (LP). For clarification, the original version is referred to as Checkmate MILP. Gurobi [15] is used as the internal solver for Checkmate MILP, and the PDLP solver provided by OR-Tools [28] is used for Checkmate LP.
162
+
163
+ Moccasin Another alternative for recomputation is Moccasin [3]. Similarly to Checkmate, Moccasin uses constraint programming (CP) to solve the recomputation problem. It introduces a new hyperparameter that acts as an upper bound on how many times a value can be recomputed. Thanks to this limitation, the number of integer variables involved in the CP formulation is linear in the number of nodes in the graph, allowing a faster execution time than Checkmate. However, the possible solutions that Moccasin can converge to are heavily limited by this hyperparameter value, and the achievable memory reductions can be sub-optimal.
164
+
165
+ # 3.2 Memory reduction on large models
166
+
167
+ Figure 4 provides a comparison between FastSA and Checkmate LP in terms of simulated memory reduction and cost increase. We selected the most representative models for this figure, in addition to the geometric mean of all 23 models tested in Appendix D.7. Checkmate LP failed to find solutions for 11 models due to time limits or out-of-memory errors at the 100 GiB limit, so the geometric mean is not reported. Moreover, memory usage was not fully reduced to the budget due to LP relaxation and randomized rounding, which resulted in larger overhead for some models, such as ViT (small); we discuss this behavior in Appendix D.6.
168
+
169
+ For the $50\%$ memory budgets, the FastSA algorithm was successful in meeting the target budget for all models, with an average increase in model execution time of $7\%$ . However, for the $25\%$ budgets, FastSA could not reduce memory to the budget for some models, either because of limitations of the model itself or because the SA heuristic converged to a suboptimal solution. Despite this, the algorithm reduced memory usage by an average of $73\%$ with an average increase in overhead of $18\%$ . Overall, these results demonstrate the superior performance of FastSA compared to Checkmate LP in reducing memory usage, particularly for larger neural networks.
170
+
171
+ A comparison with Moccasin was also conducted by applying FastSA to the publicly available data in [3]. Table 1 presents the results from the original Moccasin study, extended to include outcomes for FastSA. The experiments pertain to two graph variants, namely random-layered (RL) graphs and the graphs mentioned in [19] (CM). For the RL graphs, FastSA achieved the lowest recomputation overhead
172
+
173
+ <table><tr><td rowspan="2">Graph</td><td rowspan="2">(n,m)</td><td rowspan="2">B</td><td colspan="3">CHECKMATE MILP</td><td colspan="3">MOCCASIN</td><td colspan="3">FASTSA</td></tr><tr><td>CI</td><td>Mem</td><td>Time</td><td>CI</td><td>Mem</td><td>Time</td><td>CI</td><td>Mem</td><td>Time</td></tr><tr><td rowspan="3">RL 1</td><td>100</td><td>90</td><td>0.8</td><td>89.2</td><td>18.5</td><td>0.8</td><td>88.1</td><td>9.3</td><td>0.0</td><td>79.4</td><td>11.7</td></tr><tr><td>236</td><td>80</td><td>2.3</td><td>79.5</td><td>22.7</td><td>2.3</td><td>79.5</td><td>9.5</td><td>0.3</td><td>79.4</td><td>11.5</td></tr><tr><td></td><td>70</td><td colspan="3">no experiment</td><td colspan="3">no experiment</td><td>2.2</td><td>77.3</td><td>12.0</td></tr><tr><td rowspan="3">RL 2</td><td>250</td><td>90</td><td>0.9</td><td>90.0</td><td>685.1</td><td>0.9</td><td>89.8</td><td>55.0</td><td>0.0</td><td>77.3</td><td>15.1</td></tr><tr><td>944</td><td>80</td><td colspan="3">time limit exceeded</td><td>4.9</td><td>80.0</td><td>639.5</td><td>0.0</td><td>72.2</td><td>15.0</td></tr><tr><td></td><td>70</td><td colspan="3">no experiment</td><td colspan="3">no experiment</td><td>2.6</td><td>68.9</td><td>14.5</td></tr><tr><td rowspan="3">RL 3</td><td>500</td><td>90</td><td colspan="3">time limit exceeded</td><td>0.7</td><td>90.0</td><td>1803.3</td><td>0.03</td><td>87.4</td><td>21.0</td></tr><tr><td>2461</td><td>80</td><td colspan="3">time limit exceeded</td><td>3.4</td><td>80.0</td><td>1804.8</td><td>2.3</td><td>78.6</td><td>20.9</td></tr><tr><td></td><td>70</td><td colspan="3">no experiment</td><td colspan="3">no experiment</td><td>4.8</td><td>(73.5)</td><td>21.2</td></tr><tr><td rowspan="3">RL 4</td><td>1000</td><td>90</td><td colspan="3">time limit exceeded</td><td>0.7</td><td>90.0</td><td>3612.9</td><td>0.4</td><td>87.5</td><td>36.0</td></tr><tr><td>5857</td><td>80</td><td colspan="3">time limit exceeded</td><td>3.4</td><td>80.0</td><td>3611.8</td><td>2.5</td><td>78.4</td><td>36.3</td></tr><tr><td></td><td>70</td><td colspan="3">no experiment</td><td colspan="3">no experiment</td><td>7.4</td><td>70.0</td><td>36.5</td></tr><tr><td rowspan="3">CM 1</td><td>73</td><td>90</td><td>0.0</td><td>88.4</td><td>6.3</td><td>0.0</td><td>88.4</td><td>3.1</td><td>0.1</td><td>75.3</td><td>9.4</td></tr><tr><td>149</td><td>80</td><td>0.1</td><td>76.9</td><td>5.6</td><td>0.1</td><td>78.9</td><td>3.1</td><td>0.1</td><td>75.1</td><td>9.7</td></tr><tr><td></td><td>70</td><td colspan="3">no experiment</td><td colspan="3">no experiment</td><td>3.0</td><td>62.3</td><td>9.7</td></tr><tr><td rowspan="3">CM 2</td><td>353</td><td>90</td><td>0.1</td><td>89.0</td><td>434.1</td><td>0.2</td><td>89.9</td><td>65.2</td><td>0.2</td><td>86.2</td><td>10.9</td></tr><tr><td>751</td><td>80</td><td>0.3</td><td>79.7</td><td>485.3</td><td>0.3</td><td>80.0</td><td>69.3</td><td>0.4</td><td>76.6</td><td>10.9</td></tr><tr><td></td><td>70</td><td colspan="3">no experiment</td><td colspan="3">no experiment</td><td>0.8</td><td>66.8</td><td>11.0</td></tr></table>
174
+
175
+ in all instances. Remarkably, for RL1 and RL2 cases, FastSA managed to decrease memory usage without adding new recomputation nodes, through the optimization of the execution's topological ordering.
176
+
177
+ # 3.3 Solution optimality
178
+
179
+ Table 1: Comparison with Moccasin. The table includes results of Checkmate MILP and Moccasin from [3], extended with FastSA data. The columns B and CI represent memory budget and cost increase percentage, respectively. Alongside original results for $90\%$ and $80\%$ budgets, a $70\%$ budget row only demonstrating FastSA results is also inserted. For all random-layered (RL) cases, FastSA exhibited the smallest CI, as it optimizes topological ordering, reducing memory without materialization.
180
+
181
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">(n,m)</td><td rowspan="2">B</td><td colspan="4">CHECKMATE MILP</td><td colspan="2">FASTSA</td></tr><tr><td>CI</td><td>Mem</td><td>Time</td><td>CI</td><td>Mem</td><td>Time</td></tr><tr><td rowspan="3">VGG11</td><td rowspan="3">(69, 119)</td><td>90</td><td>1.4</td><td>87.6</td><td>5.1</td><td>2.9</td><td>87.5</td><td>2.4</td></tr><tr><td>80</td><td>2.9</td><td>79.8</td><td>1.5</td><td>2.9</td><td>79.8</td><td>2.4</td></tr><tr><td>70</td><td colspan="3">infeasible</td><td>2.9</td><td>(79.8)</td><td>2.4</td></tr><tr><td rowspan="5">ResNet18</td><td rowspan="5">(171, 437)</td><td>90</td><td>0.6</td><td>85.7</td><td>47.6</td><td>0.6</td><td>85.7</td><td>2.1</td></tr><tr><td>80</td><td>1.8</td><td>78.6</td><td>45.3</td><td>2.9</td><td>75.0</td><td>2.1</td></tr><tr><td>70</td><td>2.3</td><td>67.9</td><td>367.3</td><td>4.7</td><td>66.1</td><td>2.1</td></tr><tr><td>60</td><td>4.1</td><td>57.1</td><td>1846.0</td><td>5.3</td><td>55.4</td><td>2.2</td></tr><tr><td>50</td><td>(4.7)</td><td>(50.0)</td><td>&gt;3600</td><td>6.4</td><td>(51.8)</td><td>2.3</td></tr></table>
182
+
183
+ Table 2: Comparison between FastSA and Checkmate MILP on small models. For the resnet18 case with $50\%$ budget, the best feasible solution was used because the MILP solver could not find the optimal solution within the time limit.
184
+
185
+ We conducted a study on how closely FastSA solutions match the optimal solution by comparing its results with those of Checkmate MILP on small models. The results, compiled in Table 2, show that for VGG11 under the strictest feasible budget ( $80\%$ ), FastSA found the same recomputation plan as Checkmate MILP. For ResNet18, except for the $50\%$ budget, FastSA managed to cut memory use down to the budget in all cases, but with up to $2.4\%$ more recomputation overhead than Checkmate MILP. Even though FastSA fell short of finding the optimal solutions for these cases, it was up to 1000x faster than Checkmate MILP, particularly
186
+
187
+ under strict memory budget scenarios. In certain situations where the computational graph has high topological freedom, Checkmate might produce suboptimal recomputation plans (discussed in Appendix A.1). This can be seen in the results for RL graphs in Table 1, where FastSA could find better solutions than Checkmate MILP or Moccasin.
188
+
189
+ # 4 Related Work
190
+
191
+ # 4.1 Model Parallelism
192
+
193
+ One common approach to scaling a single neural network that is limited by memory is to partition it across multiple devices using either Model Parallelism or Pipeline Parallelism. Model Parallelism [34] involves horizontally splitting the neural network by distributing the parameters of each layer across multiple GPU devices, while Pipeline Parallelism [26] proposes to vertically split the network by assigning each device several contiguous layers of the model. While these approaches enable deep neural networks to be trained at scale, their performance can be limited by the communication overhead required for each iteration of the model. Rajbhandari et al. [30] found that these approaches can achieve only $5\%$ of the performance peak of a V100 device, highlighting the limitations of these techniques. These limitations have spurred further research into improving neural network scaling, which is discussed in more detail below.
194
+
195
+ # 4.2 Recomputation
196
+
197
+ Recomputation is a technique that was first introduced in classical compiler research to minimize the number of required registers, and later on was adapted for use in deep neural networks (DNNs) by Chen et al. [7] as a means of reducing memory consumption during training of sequential models. However, this method is limited to sequential graphs and disregards node costs. Kusumoto et al. [21] proposed dynamic programming algorithms for more general computational graphs. Also, Kumar et al. [20] leverage tree decomposition to handle more general graph structures. However, these methods still require a large computational overhead, making them impractical for larger networks.
198
+
199
+ Jain et al. [19] formulated recomputation as a mixed integer linear programming (MILP) problem and proposed Checkmate, a solver to find an optimal recomputation plan. Checkmate tries to minimize the total computational cost under memory budget and dependency constraints. Although Checkmate has been shown to significantly outperform existing methods in terms of solution quality, it requires substantial computational resources to solve the MILP. The number of decision variables in the MILP scales quadratically with the graph size, and the solution time scales exponentially with it. To address this limitation of Checkmate, Bartan et al. [3] proposed Moccasin, which formulates recomputation using constraint programming (CP). The number of integer variables is reduced from quadratic to linear in Moccasin, by setting a hyperparameter for the maximum number of times a value can be recomputed, thus expediting execution. However, this formulation also narrows the search space and may impact the overall quality of the recomputation plans when compared to Checkmate or FastSA. Also, in a parallel development, Rockmate [44] was proposed for models with repeated layers. It decomposes the problem of recomputation into intra-layer and inter-layer recomputation. It applies Checkmate to a single layer, which forms a smaller graph than the entire model, and then finds the recomputation plan across the layers by Rotor [17], a dynamic programming based recomputation algorithm that works for sequential models.
200
+
201
+ In contrast, our proposed method converges to solutions that are comparable to or even better than Checkmate and Moccasin with a single CPU core, taking less than four seconds on average. This significant reduction in computational time is achieved by leveraging efficient heuristics and optimization techniques. Our results demonstrate that our approach has great potential in the context of real-world applications, especially for cases where large computational resources are not available.
202
+
203
+ # 4.3 Other techniques for memory reduction
204
+
205
+ There are several other techniques that have been proposed to reduce device memory consumption in addition to recomputation, including offloading, which involves transferring some of the model parameters to a system's CPU or an external memory device when they are not immediately required. Beaumont et al. [4] discuss the combination of offloading and recomputation as a means of further reducing memory utilization. Our proposed method can easily be extended to support
206
+
207
+ offloading, as detailed in appendix B.3. Another approach that leverages offloading is the ZeRO technique proposed by Rajbhandari et al. [30], which partitions the model parameters, optimizer states, and activations among several devices to increase parallelism and reduce the memory used by each device. This approach enables the training of exceptionally large models with hundreds of billions of parameters, making it a powerful tool for advanced natural language processing and computer vision applications. Other techniques focus on reducing the size of the parameters and intermediate results, such as quantization [23], which reduces the floating point precision of computations and weight storage up to 2-bits in extreme cases. Sparsification [25] exploits the sparsity patterns that arise during computation to reduce the total needed memory while keeping the same precision. There are also more exotic approaches, such as the reversible residual network [14], which is a memory-efficient architecture that can perform backward computation without saving the activations. However, its applicability is limited to residual networks only. It is worth noting that these techniques are largely orthogonal to our proposed method and can be combined to further improve memory savings in neural network optimization.
208
+
209
+ # 5 Conclusion
210
+
211
+ In this paper, we present a novel method for recomputation that offers several key advantages over existing approaches. Our method is applicable to general graphs and can support any objective function that depends on peak memory usage and the total cost of computation. Moreover, it can find near-optimal recomputation plans within a remarkably short computational time of 3 to 30 seconds, even for large computational graphs.
212
+
213
+ Another major advantage of our method is its efficiency, as it is single-threaded and uses memory resources in a highly efficient manner. This makes it ideal for integration into neural network compilers, where it can further streamline the optimization process. Additionally, our algorithm is highly flexible and can handle a wide range of problem settings, including recomputation with offloading and consideration of node-intermediate memory usage.
214
+
215
+ We explored the tradeoff between computation time and memory usage and evaluated the effectiveness of our algorithm in terms of reducing peak memory usage. Our experiments demonstrate that our approach achieves significant reductions in peak memory usage, with limited impact on computation time, across a wide range of neural network architectures. Overall, our experiments validate the effectiveness and practicality of our recomputation algorithm, highlighting its potential for widespread application in many areas of deep learning research and development.
216
+
217
+ Limitations While our algorithm offers a powerful solution for complex neural network optimization problems, it has certain limitations that must be considered. As demonstrated in Figure 4, our approach reduced memory usage more than Checkmate LP in most cases, but the results were not ideal for some models due to suboptimal node grouping.
218
+
219
+ When planning recomputation in distributed training, the computational graph may need to satisfy extra requirements, such as (1) the optimized graph must split evenly across workers in pipeline parallelism, or (2) communication operators must be overlapped with arithmetic computation. Currently, it is difficult to handle these constraints with the standard FastSA.
220
+
221
+ Additionally, the peak memory estimation may not always be precise due to memory aliasing, caused by operators that change memory views or perform in-place updates. This is one of the reasons for the gap between the simulated and actual GPU memory usage, as shown in Appendix D.8.
222
+
223
+ Overall, our proposed approach offers a fast and efficient method for neural network optimization, providing significant improvements over existing techniques. However, future research may be required to support a wider variety of computations.
224
+
225
+ # References
226
+
227
+ [1] Alaaeldin Ali, Hugo Touvron, Mathilde Caron, Piotr Bojanowski, Matthijs Douze, Armand Joulin, Ivan Laptev, Natalia Neverova, Gabriel Synnaeve, Jakob Verbeek, et al. Xcit: Cross-covariance image transformers. Advances in neural information processing systems, 34: 20014-20027, 2021.
228
+
229
+ [2] Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254, 2021.
230
+ [3] Burak Bartan, Haoming Li, Harris Teague, Christopher Lott, and Bistra Dilkina. Moccasin: Efficient tensor rematerialization for neural networks. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 1826-1837. PMLR, 23-29 Jul 2023. URL https://proceedings.mlr.press/v202/bartan23a.html.
231
+ [4] Olivier Beaumont, Lionel Eyraud-Dubois, and Alena Shilova. Efficient combination of rematerialization and offloading for training dnns. Advances in Neural Information Processing Systems, 34:23844-23857, 2021.
232
+ [5] Jon Louis Bentley. Algorithms for Klee's rectangle problems. Technical report, Dept. of Computer Science, Carnegie-Mellon University, 1977.
233
+ [6] Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow, March 2021. URL https://doi.org/10.5281/zenodo.5297715. If you use this software, please cite it using these metadata.
234
+ [7] Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174, 2016.
235
+ [8] Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555, 2020.
236
+ [9] Thomas H Cormen, Charles E Leiserson, Ronald L Rivest, and Clifford Stein. Introduction to algorithms. MIT press, 2022.
237
+ [10] Zihang Dai, Hanxiao Liu, Quoc V Le, and Mingxing Tan. Coatnet: Marrying convolution and attention for all data sizes. Advances in Neural Information Processing Systems, 34:3965-3977, 2021.
238
+ [11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
239
+ [12] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
240
+ [13] Yuxin Fang, Quan Sun, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. Eva-02: A visual representation for neon genesis. arXiv preprint arXiv:2303.11331, 2023.
241
+ [14] Aidan N Gomez, Mengye Ren, Raquel Urtasun, and Roger B Grosse. The reversible residual network: Backpropagation without storing activations. Advances in neural information processing systems, 30, 2017.
242
+ [15] Gurobi Optimization, LLC. Gurobi Optimizer Reference Manual, 2023. URL https://www.gurobi.com.
243
+ [16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
244
+ [17] Julien Herrmann, Olivier Beaumont, Lionel Eyraud-Dubois, Julien Hermann, Alexis Joly, and Alena Shilova. Optimal checkpointing for heterogeneous chains: how to train deep neural networks with limited memory. arXiv preprint arXiv:1911.13214, 2019.
245
+
246
+ [18] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1314-1324, 2019.
247
+ [19] Paras Jain, Ajay Jain, Aniruddha Nrusimha, Amir Gholami, Pieter Abbeel, Joseph Gonzalez, Kurt Keutzer, and Ion Stoica. Checkmate: Breaking the memory wall with optimal tensor rematerialization. Proceedings of Machine Learning and Systems, 2:497-511, 2020.
248
+ [20] Ravi Kumar, Manish Purohit, Zoya Svitkina, Erik Vee, and Joshua Wang. Efficient rematerialization for deep networks. Advances in Neural Information Processing Systems, 32, 2019.
249
+ [21] Mitsuru Kusumoto, Takuya Inoue, Gentaro Watanabe, Takuya Akiba, and Masanori Koyama. A graph theoretic framework of recomputation algorithms for memory-efficient backpropagation. Advances in Neural Information Processing Systems, 32, 2019.
250
+ [22] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019.
251
+ [23] Hao Li, Soham De, Zheng Xu, Christoph Studer, Hanan Samet, and Tom Goldstein. Training quantized nets: A deeper understanding. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/1c303b0eed3133200cf715285011b4e4-Paper.pdf.
252
+ [24] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11976-11986, 2022.
253
+ [25] Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, pages 2498-2507. JMLR.org, 2017.
254
+ [26] Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Gregory R. Ganger, Phillip B. Gibbons, and Matei Zaharia. Pipedream: Generalized pipeline parallelism for dnn training. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, SOSP '19, page 1-15, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450368735. doi: 10.1145/3341301.3359646. URL https://doi.org/10.1145/3341301.3359646.
255
+ [27] Wolfgang J Paul and Robert Endre Tarjan. Time-space trade-offs in a pebble game. Acta Informatica, 10(2):111-115, 1978.
256
+ [28] Laurent Perron and Vincent Furnon. Or-tools. URL https://developers.google.com/optimization/.
257
+ [29] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
258
+ [30] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC '20. IEEE Press, 2020. ISBN 9781728199986.
259
+ [31] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.
260
+ [32] Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Galle, et al. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022.
261
+
262
+ [33] Ravi Sethi. Complete register allocation problems. In Proceedings of the fifth annual ACM symposium on Theory of computing, pages 182-195, 1973.
263
+ [34] Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, et al. Meshtensorflow: Deep learning for supercomputers. In Advances in Neural Information Processing Systems, pages 10435-10444, 2018.
264
+ [35] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
265
+ [36] Andreas Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross Wightman, Jakob Uszkoreit, and Lucas Beyer. How to train your vit? data, augmentation, and regularization in vision transformers. arXiv preprint arXiv:2106.10270, 2021.
266
+ [37] Quan Sun, Yuxin Fang, Ledell Wu, Xinlong Wang, and Yue Cao. Eva-clip: Improved training techniques for clip at scale. arXiv preprint arXiv:2303.15389, 2023.
267
+ [38] Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning, pages 6105-6114. PMLR, 2019.
268
+ [39] Hugo Touvron, Matthieu Cord, and Hervé Jégou. Deit iii: Revenge of the vit. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXIV, pages 516-533. Springer, 2022.
269
+ [40] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
270
+ [41] Ross Wightman, Hugo Touvron, and Hervé Jégou. Resnet strikes back: An improved training procedure in timm. arXiv preprint arXiv:2110.00476, 2021.
271
+ [42] Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, and Saining Xie. Convnext v2: Co-designing and scaling convnets with masked autoencoders. arXiv preprint arXiv:2301.00808, 2023.
272
+ [43] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
273
+ [44] Xunyi Zhao, Théotime Le Hellard, Lionel Eyraud-Dubois, Julia Gusak, and Olivier Beaumont. Rockmate: an efficient, fast, automatic and generic tool for re-materialization in pytorch. In International Conference on Machine Learning, 2023.
274
+
275
+ # A Checkmate
276
+
277
+ As previously introduced in the main section, Checkmate is a recomputation planner that operates using mixed integer linear programming. This appendix provides additional details on the internal workings of Checkmate.
278
+
279
+ Algorithm Let $(n_{1},\ldots ,n_{T})$ , where $n\in \mathcal{N}$ , be the original computation order. To define the $i$ -th stage of computation ( $1\leq i\leq T$ ), Checkmate uses a $T\times T$ matrix $\{R_{i,j}\}$ , where $R_{i,j}\in \{0,1\}$ . When given a matrix $R$ , the $i$ -th stage of computation is a subsequence of the original computation order, defined by $(n_j\mid R_{i,j} = 1)$ . The recomputation plan is then determined as the concatenation of $T$ stages. The maximum length of the recomputation sequence is $O(T^{2})$ as a result of this formulation.
280
+
281
+ The objective of Checkmate is to minimize $\sum_{i=1}^{T}\sum_{j=1}^{T}R_{i,j}c(n_j)$ , where $c(n_j)$ represents the cost of recomputing node $n_j$ . Additionally, Checkmate uses binary variables to determine whether to preserve the outputs of $n_j$ from stage $i$ to $i+1$ and whether to free the outputs of $n_j$ in stage $i$ after evaluating $n_k$ . The number of binary variables is $O(|\mathcal{N}|^2 + |\mathcal{N}|M)$ , where $M$ is the number of arcs in the DAG. Checkmate also introduces variables to represent the memory usage during the computation of node $n_j$ at each stage. Lastly, the memory budget constraint and the dependency constraints among the variables are added to formulate the recomputation problem as a mixed-integer linear program (MILP).
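+
+ For concreteness, the following is a minimal sketch of a Checkmate-style MILP written with the PuLP modeling library. It is a toy version under simplifying assumptions: the memory constraint here is a coarse per-stage bound, whereas the actual Checkmate formulation tracks memory per evaluation and uses additional binary variables for freeing outputs. The `checkmate_sketch` helper is our hypothetical illustration, not the authors' implementation.
+
+ ```python
+ import pulp
+
+ def checkmate_sketch(T, cost, size, edges, budget):
+     # R[t][j] = 1 if node j is (re)computed in stage t
+     # S[t][j] = 1 if the output of j is carried over from stage t-1
+     prob = pulp.LpProblem("checkmate_sketch", pulp.LpMinimize)
+     R = [[pulp.LpVariable(f"R_{t}_{j}", cat="Binary") for j in range(T)] for t in range(T)]
+     S = [[pulp.LpVariable(f"S_{t}_{j}", cat="Binary") for j in range(T)] for t in range(T)]
+     # Objective: total recomputation cost, sum_t sum_j R[t][j] * c(n_j)
+     prob += pulp.lpSum(R[t][j] * cost[j] for t in range(T) for j in range(T))
+     for t in range(T):
+         prob += R[t][t] == 1  # stage t must produce node t
+         for (j, i) in edges:  # arc (j, i): node i consumes the output of j
+             prob += R[t][i] <= R[t][j] + S[t][j]  # inputs must be available
+         for j in range(T):
+             # a value can only be carried over if it existed in stage t-1
+             prob += (S[t][j] == 0) if t == 0 else (S[t][j] <= R[t - 1][j] + S[t - 1][j])
+         # Coarse per-stage memory budget (Checkmate is finer-grained)
+         prob += pulp.lpSum((R[t][j] + S[t][j]) * size[j] for j in range(T)) <= budget
+     prob.solve(pulp.PULP_CBC_CMD(msg=False))
+     return [[int(v.value()) for v in row] for row in R]
+ ```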
282
+
283
+ # A.1 Limitations of Checkmate
284
+
285
+ Despite the ability of Checkmate to consider node and value weights for any computation graph, it has several limitations.
286
+
287
+ Fixed topological order One such limitation is the requirement for a fixed topological order. When a computational graph has multiple possible computation orders (i.e., the topological order is not unique), the memory usage can differ significantly based on the computation order selected. Identifying the topological order that minimizes memory usage is a difficult problem [33], and Checkmate may not be able to reduce memory usage significantly if the initial computation order is not memory efficient. As feasible recomputation sequences can become exponentially long for some graphs and budgets [27], it is challenging to increase the number of stages to enable Checkmate to accommodate any feasible computation order.
288
+
289
+ LP-related limitations There are several additional limitations to Checkmate when the MILP is relaxed to linear programming (LP). Checkmate may fail to provide optimal solutions due to MILP/LP gaps, randomized rounding, and solver tolerances. In most cases, the LP relaxation achieves better objective values than the original MILP by allowing non-binary assignments to the binary variables. However, Checkmate then applies randomized rounding to the binary variables that determine whether the outputs of $n_j$ at stage $i$ are kept for stage $i + 1$, which can increase the objective value. The original paper suggests setting $0.9 \times$ budget for the LP-relaxed problem to account for the memory usage increase after randomized rounding. While there is often no significant degradation in memory consumption, the execution time may unexpectedly increase due to this process. We have observed that, since the number of elements in $\{R_{i,j}\}$ is $O(|\mathcal{N}|^2)$, randomized rounding is less likely to produce short recomputation sequences when $|\mathcal{N}|$ is large. The degradation in recomputation cost is more likely to occur when the LP solver's tolerances are not small enough; however, tuning the solver tolerances becomes a trade-off between the convergence criterion and the optimization time.
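+
+ As an illustration of the rounding step, the sketch below rounds a fractional solution $\{R_{i,j}\}$ with many random thresholds and keeps the cheapest plan, mirroring the procedure described in Appendix D.2. The feasibility and memory checks that Checkmate must additionally perform are omitted here.
+
+ ```python
+ import numpy as np
+
+ def randomized_rounding(R_frac, cost, n_thresholds=100, seed=0):
+     """Round a fractional (T x T) LP solution with random thresholds."""
+     rng = np.random.default_rng(seed)
+     thresholds = np.concatenate(([0.5], rng.uniform(0.0, 1.0, n_thresholds)))
+     best_plan, best_cost = None, float("inf")
+     for thr in thresholds:
+         plan = (np.asarray(R_frac) >= thr).astype(np.int8)
+         np.fill_diagonal(plan, 1)  # node t is always computed in stage t
+         total = float((plan * np.asarray(cost)).sum())  # sum_{t,j} R[t,j] c(n_j)
+         if total < best_cost:  # NOTE: a real check must also verify feasibility
+             best_plan, best_cost = plan, total
+     return best_plan, best_cost
+ ```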
290
+
291
+ # B Application to extended problem settings
292
+
293
+ # B.1 Computational graphs with node-intermediate memory usage
294
+
295
+ In this section, we explain some of the extended problem settings where our algorithm can be applied.
296
+
297
+ The original description in Section 2 considers only the memory usage of input and output values. This setting is reasonable for two reasons: first, deep learning frameworks and neural network compilers have shape propagation, making it easy to infer the sizes of intermediate tensors. Second, in most cases, peak memory usage can be simulated with high precision even if node-intermediate memory usages are ignored.
298
+
299
+ However, precise peak memory estimation is necessary if the recomputation sequence must be highly optimized. For instance, if only a few tensors are alive when memory usage reaches its peak, we must consider node-intermediate memory usage; otherwise, the estimation error will be significant. Additionally, for certain operators, such as matrix multiplication, temporary values may consume a large amount of memory. Considering node-intermediate memory usage is especially important when dealing with grouped nodes: a grouped node may have substantially more node-intermediate memory usage when it involves a lot of computation. In our implementation, we estimate the node-intermediate memory of each grouped node from the temporary values produced and used within it.
300
+
301
+ Extensions in algorithm In the algorithm stated in Section 2.2, when a node $n$ is added or removed from time $t$ , we need to update the lifetime intervals for its input values and output values. To consider node intermediate values, we need an additional update: add (or subtract for removal) $s'(n)$ at time $t$ , where $s'(n)$ denotes the intermediate memory usage of $n$ .
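+
+ The sketch below illustrates this extra update with an array-backed stand-in for the segment tree (a real implementation performs both operations in $O(\log T)$; the class and function names are ours for illustration). The intermediate usage $s'(n)$ only occupies memory while $n$ executes, so it is added on the single-slot range $[t, t+1)$.
+
+ ```python
+ import numpy as np
+
+ class RangeAddMax:
+     """O(T)-per-query stand-in for a range-add/range-max segment tree."""
+     def __init__(self, T):
+         self.mem = np.zeros(T)
+     def range_add(self, l, r, x):  # add x to memory usage on [l, r)
+         self.mem[l:r] += x
+     def range_max(self, l, r):     # peak memory usage on [l, r)
+         return self.mem[l:r].max()
+
+ def add_node(tree, t, s_prime_n):
+     # ... lifetime updates for input/output values go here (Section 2.2) ...
+     tree.range_add(t, t + 1, s_prime_n)   # node-intermediate spike at time t
+
+ def remove_node(tree, t, s_prime_n):
+     tree.range_add(t, t + 1, -s_prime_n)
+ ```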
302
+
303
+ # B.2 Memory minimizing topological ordering
304
+
305
+ Our algorithm can also be used to find a memory-minimizing computation order without recomputation. As mentioned in Appendix A.1, the problem of finding a topological order of a DAG that minimizes peak memory is known to be NP-hard. We can use our algorithm to heuristically solve this problem by prohibiting the addition and removal of nodes and allowing only node rotation during mutations in the SA.
306
+
307
+ # B.3 Offloading
308
+
309
+ Offloading, along with recomputation, is a popular method for reducing GPU memory usage. Offloading involves moving data to secondary memory and back when it is needed. Our algorithm is capable of optimizing offloading and recomputation simultaneously (although it could be applied to offloading alone).
310
+
311
+ The extended problem setting is as follows:
312
+
313
+ Given a computational graph and objective function $f$ , the task is to construct a valid sequence $S = (n_{1},\ldots ,n_{T})$ and fetching sequence $(F_{1},\dots ,F_{T})$ , where $F_{i}\subseteq \mathrm{input}(n_{i})$ , that minimize the objective function. Here, $F_{i}$ denotes which input values are fetched (instead of using previously produced ones) when computing $n_i$ . If all $F_{i}$ are set to empty, the problem is equivalent to the original recomputation. Note that the objective function $f$ is now dependent on both $S$ and the fetching sequence. For example, we can define the computational cost as $\sum_{i = 1}^{T}(c(n_{i}) + \sum_{v\in F_{i}}c^{\prime}(v)) + \sum_{v\in \{v|\exists i.v\in F_{i}\}}c^{\prime \prime}(v)$ , where $c^{\prime}(v)$ and $c''(v)$ denote the cost of fetching and offloading, respectively. This objective function assumes that offloading and fetching are done sequentially during computation. If background offloading and fetching are allowed, another suitable objective function may be defined.
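+
+ A small sketch of this example objective, following the formula above under the sequential-transfer assumption (the function name and container types are ours for illustration):
+
+ ```python
+ def offload_objective(seq, fetch, c, c_fetch, c_offload):
+     """seq: nodes n_1..n_T; fetch: list of sets F_i; c* are cost maps."""
+     compute = sum(c[n] for n in seq)
+     fetching = sum(c_fetch[v] for F in fetch for v in F)
+     fetched_at_least_once = set().union(*fetch)  # {v | exists i. v in F_i}
+     offloading = sum(c_offload[v] for v in fetched_at_least_once)
+     return compute + fetching + offloading
+ ```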
314
+
315
+ Note that we only consider the timings of fetching in this problem setting, as the optimal timings of offloading can be determined when they are given.
316
+
317
+ Extensions in algorithm We initialize all $F_{i}$ to be empty and allow an additional mutation in each SA iteration.
318
+
319
+ 4. (Flip fetch) Choose a random time $t$ such that $n_t \neq \mathrm{nop}$ and random value $v \in \mathrm{input}(n_t)$ . If $v \in F_t$ , update $F_t \gets F_t \setminus \{v\}$ . Otherwise, update $F_t \gets F_t \cup \{v\}$ .
320
+
321
+ Unlike the existing three mutations, this mutation is always valid, i.e., it does not break the dependencies of the sequence. Also, this mutation is its own inverse. The update of the lifetime intervals of $v$ can be done in the same way as for the other mutations, by treating fetching and offloading as the addition or removal of a special node that outputs $v$ from empty inputs.
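+
+ A minimal sketch of this mutation (the sequence and fetch containers are our illustrative choices):
+
+ ```python
+ import random
+
+ def flip_fetch(seq, fetch, inputs):
+     """Mutation 4: toggle fetching of one input of a random non-nop node."""
+     candidates = [t for t, n in enumerate(seq) if n != "nop" and inputs[n]]
+     if not candidates:
+         return
+     t = random.choice(candidates)
+     v = random.choice(list(inputs[seq[t]]))
+     fetch[t] ^= {v}  # symmetric difference: remove v if present, else add it
+ ```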
322
+
323
+ ![](images/012ebc6c2fd50d3bce635a0167b3734523fe26e09134749f1e2ce0271207482a.jpg)
324
+ Figure 5: The update of lifetime by node addition or node removal. By adding a recomputation node, the life intervals of X and Z change. For either case, the update of memory usage can be easily managed by the range-add / range-max segment tree.
325
+
326
+ ![](images/21918d1ca7690fd41b71cdf9db41b07329c06d5ae2c8a5b0a72245fd9f1aa7e1.jpg)
327
+ Figure 6: Example of node grouping. The blue node in the left graph is fused together with its successors. The red nodes in the right graph are the fused nodes taking $X$ as the input. Even without the SA, node grouping itself can optimize recomputation since the last value will be always recomputed from $X$ .
328
+
329
+ ![](images/94c9205868d568024f3dcfeff1a74ca9d90e435b33ba4ef90ef0f8f2c47a9ede.jpg)
330
+
331
332
+
333
+ # C Technique for fast and better convergence
334
+
335
+ This section presents various techniques to expedite the convergence of SA or obtain better solutions.
336
+
337
+ # C.1 Node grouping
338
+
339
+ As mentioned in Section 2.2, node grouping is a crucial pre-processing step for recomputing a sequence of nodes. Without node grouping, the SA has little chance of reaching a solution that effectively reduces the cost of the target function.
340
+
341
+ Figure 6 provides an example of node grouping. Even without the SA, node grouping can be used as a simple recomputation strategy to lower peak memory usage. In practice, our implementation uses the following algorithm. The $\text{do\_fuse}(n_i)$ function in Algorithm 1 determines whether to fuse a node $n_i$ with its succeeding operators: we perform the fusion if the outputs of $n_i$ can be computed from inputs with a smaller memory footprint, i.e., $\sum_{v \in \mathrm{input}(n_i)} s(v) \leq \sum_{v \in \mathrm{output}(n_i)} s(v)$.
342
+
343
+ ![](images/b810ea4c0cb3f5725b29901a30983b6e53ce156e746c07196129611d58598dcc.jpg)
344
+ Figure 7: Memory reduction using node grouping for the LLaMA model. The blue line is the optimization with node grouping, where the last 2M iterations are run on decomposed nodes. The orange line is the SA without node grouping. The objective function is defined by equation (1) with the memory budget $6.7\mathrm{e} + 10$, which is $10\%$ of the original memory consumption.
345
+
346
347
+
348
+ Algorithm 1 Node grouping
349
+ Require: Valid computational sequence $S = (n_{1},\ldots ,n_{T})$
350
+ Ensure: $S$ is a valid sequence with grouped nodes.
+ 1: for $i = 1,\dots ,T$ do
+ 2:   if $\mathrm{do\_fuse}(n_i)$ then
+ 3:     for $j = i + 1,\dots ,T$ do
+ 4:       if $\mathrm{output}(n_i)\subseteq \mathrm{input}(n_j)$ then
+ 5:         $n_j\gets$ grouped node of $n_i$ and $n_j$.
+ 6:       end if
+ 7:     end for
+ 8:     $n_i\gets$ nop
+ 9:   end if
+ 10: end for
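+
+ A compact Python rendering of Algorithm 1, under our own toy graph representation (the Node dataclass and fuse helper are illustrative, not our actual implementation):
+
+ ```python
+ from dataclasses import dataclass
+
+ @dataclass(frozen=True)
+ class Node:
+     name: str
+     inputs: frozenset
+     outputs: frozenset
+
+ def fuse(a, b):
+     """Grouped node: b's outputs, computed from a's inputs plus b's
+     external inputs (a's outputs become internal to the group)."""
+     return Node(f"{a.name}+{b.name}", a.inputs | (b.inputs - a.outputs), b.outputs)
+
+ def group_nodes(seq, size):
+     for i, ni in enumerate(seq):
+         if ni == "nop":
+             continue
+         # do_fuse: outputs recomputable from a smaller-or-equal footprint
+         if sum(size[v] for v in ni.inputs) > sum(size[v] for v in ni.outputs):
+             continue
+         for j in range(i + 1, len(seq)):
+             if seq[j] != "nop" and ni.outputs <= seq[j].inputs:
+                 seq[j] = fuse(ni, seq[j])
+         seq[i] = "nop"
+     return seq
+ ```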
351
+
352
+ The node grouping algorithm with the do_fuse function mentioned above has several characteristics. First, Algorithm 1 does not depend on the initial topological order of the computational graph and therefore works even if the original sequence is not memory-efficient. Second, the node grouping algorithm can optimally reduce memory usage in certain simple cases, including the training graphs of sequential models, which were discussed in early studies of recomputation techniques by Chen et al. [7]. In fact, the node grouping algorithm can reduce the peak memory of such graphs to a constant size, provided that all the values have the same size.
353
+
354
+ Figure 7 demonstrates the importance of node grouping in recomputation for LLaMA. When node grouping was not used, the SA converged to a final objective value 4.4 times larger than the result obtained by the SA with node grouping. On the other hand, using only node grouping (without any SA), we were able to reduce memory usage for LLaMA to $7.8\mathrm{e} + 10$ , which is close to the $6.7\mathrm{e} + 10$ budget, and lower than the optimized memory usage of $3.5\mathrm{e} + 11$ achieved without node grouping.
355
+
356
+ # C.2 LogAddExp segment tree
357
+
358
+ In our optimization approach, we assume that the objective function is a function of the peak memory $M(S)$ and the computational cost $C(S)$. For each mutation in the SA, the cost may be updated, but the peak memory $M(S)$ changes only when the mutation affects the peak timings.
359
+
360
+ ![](images/c728c5cabed19ac7a891ce9352667372ab6c891b259da77887b368c269e838b9.jpg)
361
+ Figure 8: SA convergence for ViT model. We ran the first SA on grouped nodes for 20M iterations and the second SA on decomposed nodes for 2M iterations.
362
+
363
+ Suppose there is a computational graph with two memory peaks. In this case, $M(S)$ is determined by the higher of the two peaks, and mutations that reduce the memory usage around the second peak do not change $M(S)$. In general, multiple local memory peaks make it challenging to effectively reduce the objective value.
364
+
365
+ We can overcome this issue by employing range-add/range-logaddexp segment trees instead of range-add/range-max segment trees, introduced in Section 2. The range-add/range-max segment tree recursively calculates $\max (a,b)$ for a given interval and determines the maximum value within a range $[l,r)$ . On the other hand, the range-add/range-logaddexp segment tree uses the recursive calculation of $\log \mathrm{addexp}(a,b)\coloneqq \log (\exp (a) + \exp (b))$ instead of taking the maximum. The logaddexp segment tree takes into account multiple memory peaks, enabling faster convergence of the SA.
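+
+ A tiny numpy illustration of the two reductions, assuming value sizes re-scaled so the maximum is 10 and base 2 (our configuration described below):
+
+ ```python
+ import numpy as np
+ from functools import reduce
+
+ mem = np.array([3.0, 10.0, 4.0, 9.5, 2.0])   # simulated memory over time
+ peak_max = mem.max()                         # range-max reduction: 10.0
+ peak_lse = reduce(np.logaddexp2, mem)        # base-2 logaddexp: ~10.79
+ # Lowering the secondary 9.5 peak reduces peak_lse but leaves peak_max
+ # unchanged, so the SA still gets signal from non-global peaks.
+ ```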
366
+
367
+ Advantages and disadvantages While the logaddexp-based segment tree is expected to converge better when memory consumption is multimodal, it is essential to be wary of errors in the memory usage simulation and of numerical overflow. To avoid overflow in floating-point operations, it is crucial to select an appropriate base for log and exp. In the implementation used for Figure 8, we re-scaled the value sizes so that the maximum value is 10 and set the base to 2. However, this configuration may still cause overflow, particularly when $|\mathcal{N}|$ is large. There are also important considerations regarding the values held by the segment tree. In a range-add/range-max segment tree, the $t$-th element of the segment tree represents the memory consumption at time $t$. However, in a logaddexp-based segment tree, this approach cannot be used directly, since it leads to significant errors in calculating the peak memory. Specifically, unlike the max operator, $\log \text{addexp}(x,x) = \log 2 + x > x$, meaning that repeatedly calculating $\log \text{addexp}(a,b)$ for values $a$ and $b$ close to zero results in increasingly large values. In our implementation, we utilized the fact that the peak memory is only achieved at times $t$ where $n_t \neq \mathrm{nop}$: we initialized the elements of the segment tree with a large negative constant and add a large constant back when a node is placed at that time, ensuring a more precise estimate of the peak memory.
368
+
369
+ Figure 8 shows how convergence differs depending on the kind of segment tree used for the SA state. The segment tree using logaddexp had slightly better convergence for the ViT model.
370
+
371
+ # C.3 Implementation of segment tree
372
+
373
+ The role of the segment tree used in our algorithm is to efficiently calculate the peak memory, which can be obtained by the range-max query. Here, we introduce another implementation variant of the segment tree.
374
+
375
+ Dynamic segment tree The dynamic segment tree is a data structure that can efficiently handle the following operations: (1) Insert a value at index $i$ . (2) Remove the value at index $i$ . (3) Add value $x$ for range $[l, r)$ . (4) Calculate the maximum value for range $[l, r)$ . This kind of data structure can be created by extending binary search trees (for details, refer to textbooks such as [9]). By using this segment tree, we do not need nop, introduced in Section 2.2, and can optimize a computation sequence consisting of only $\mathcal{N}$ . However, the dynamic segment tree often involves heavy implementation, so the actual optimization time could increase. On the other hand, it is more efficient in terms of memory as it requires $O(T)$ space, where $T$ is the maximum length of the computational sequence during optimization.
376
+
377
+ # D Experiments
378
+
379
+ In this section, we provide additional information about the experiments conducted throughout the paper, together with details on how the computational graphs were obtained and the hyperparameters used to configure both FastSA and Checkmate.
380
+
381
+ # D.1 PyTorch Computational Graph
382
+
383
+ Optimizing PyTorch computational graph We extracted the computational graph using PyTorch's symbolic tracing mechanism, which is available for PyTorch 2.0 or later. For our experiments, we used PyTorch 2.1.0.dev20230404, as it had necessary tracing features that were unavailable in the latest stable release (v2.0.0).
384
+
385
+ Our recomputation algorithm was integrated by extending aot_module. This module acts as a wrapper around a model (a torch.nn.Module) and optimizes the model's computational graph before the actual computation is done. Using symbolic tracing without real data, aot_module generates the computational graph for the model's forward and backward passes. It then combines them into a joint graph before optimizing and partitioning it into a new model with optimized forward and backward computations. Users can pass a custom partition function (an optional argument) to aot_module to take over the optimization process.
386
+
387
+ We implemented a custom partition function that follows these steps: it receives the joint graph and applies the recomputation algorithm to create a new computation graph that produces the same outputs from the same inputs. It returns the entire new computation graph as the forward module and configures the backward module to return only the pre-computed outputs assuming that all the tangents (i.e., gradient inputs) are one. The full graph is executed instead of separating it into forward and backward passes as it is more memory-efficient. When the graph is partitioned, some intermediate values for the backward pass need to be saved, and they remain in memory until the backward pass is complete.
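+
+ Schematically, the integration looks like the sketch below, assuming the functorch.compile.aot_module interface of the PyTorch 2.x nightlies we used; fast_sa_optimize, split_forward_backward, and model are hypothetical placeholders for our planner, the partitioning step, and a user model.
+
+ ```python
+ from functorch.compile import aot_module
+
+ def recompute_partition(joint_module, joint_inputs, *, num_fwd_outputs):
+     optimized = fast_sa_optimize(joint_module)                 # hypothetical
+     return split_forward_backward(optimized, num_fwd_outputs)  # hypothetical
+
+ # Identity "compilers": we only want the graph rewrite, not codegen.
+ optimized_model = aot_module(model,
+                              fw_compiler=lambda gm, inputs: gm,
+                              bw_compiler=lambda gm, inputs: gm,
+                              partition_fn=recompute_partition)
+ ```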
388
+
389
+ Preprocess for PyTorch intermediate graph representation We perform Dead Code Elimination (DCE) before applying the recomputation algorithms. Although our algorithm automatically performs DCE during the SA by removing nodes, we perform explicit DCE in advance for two reasons: dead code can affect node grouping performance, and Checkmate does not support DCE.
390
+
391
+ Next, we convert the PyTorch intermediate graph representation (torch.fx.Graph) to an internal graph format with explicit nodes $\mathcal{N}$ and values $\mathcal{V}$. In torch.fx.Graph, the computational graph is represented as a list of nodes, and each value corresponds to a single node. If an operator has multiple output values in torch.fx.Graph, it is represented as a single node that outputs a tuple, followed by get-item nodes that extract specific values from the tuple using an index. We could apply the recomputation algorithm directly to torch.fx.Graph intermediate representations, but it is easier to convert torch.fx.Graph to a more general graph format so that operators with multiple outputs are handled efficiently. After the recomputation is done, we convert our format back to torch.fx.Graph.
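+
+ The DCE step uses torch.fx's built-in pass; the get-item bookkeeping can be sketched as follows (preprocess is our illustrative name):
+
+ ```python
+ import operator
+ import torch.fx as fx
+
+ def preprocess(gm: fx.GraphModule):
+     gm.graph.eliminate_dead_code()   # explicit DCE before recomputation
+     gm.recompile()
+     # Multi-output ops appear as one tuple-producing node followed by
+     # operator.getitem nodes; collect them so each extracted value can
+     # become a separate output in the internal graph format.
+     multi_out = {}
+     for node in gm.graph.nodes:
+         if node.op == "call_function" and node.target is operator.getitem:
+             src, idx = node.args
+             multi_out.setdefault(src, {})[idx] = node
+     return multi_out
+ ```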
392
+
393
+ For Checkmate, we follow the original paper's implementation and perform optimization directly on the torch.fx.Graph-style computational graph.
394
+
395
+ # D.2 Checkmate settings
396
+
397
+ We re-implemented Checkmate [19] for the PyTorch intermediate representation. For Checkmate MILP, we used Gurobi as the internal MILP solver. For Checkmate LP, we employed the PDLP solver provided by OR-Tools [28] to solve large-scale LP problems in a multithreaded manner. PDLP was the fastest among the available solvers (CLP, GLOP, and SCIP) in OR-Tools.
398
+
399
+ The execution time and solution quality of Checkmate LP are highly dependent on the hyperparameters of the LP solver, as discussed in Appendix A.1. We adopted the default values of 1e-4 for primal tolerance and dual tolerance in Checkmate's open-source implementation. We randomly generated 100 different thresholds in addition to the default threshold 0.5 for randomized rounding and selected the best solution as the final output after testing these thresholds.
400
+
401
+ # D.3 Environment
402
+
403
+ The proposed algorithm (FastSA) and Checkmate LP were evaluated using a cluster system, with each instance configured with 8 CPU cores (Intel(R) Xeon(R) Platinum 8380 @ 2.30GHz), 100 GiB of RAM, and an NVIDIA A100 80GB GPU. Although the PDLP solver fully uses the allocated resources, FastSA only requires a single CPU core. Due to Gurobi license constraints, Checkmate MILP was executed on another machine with 36 CPU cores (Intel(R) Xeon(R) Gold 6154 CPU @ 3.00 GHz) and 376 GiB of RAM. The implementations of both FastSA and Checkmate were written in C++ and compiled using GCC 10.4.0 with the -O3 option. The compiled module was integrated with PyTorch 2.1.0.dev20230404, and the experiments were run using Python 3.9.12 with CUDA 11.8.
404
+
405
+ # D.4 Settings of our algorithm
406
+
407
+ Initialization of SA state We initialized the default computation sequence $S$ used in our SA as follows. First, we fix the length of $S$ to $T = 2^{20}$ and initialize all entries to nop. Then, for each node $n_i$ of the default computation sequence $(n_1, \ldots, n_k)$, we set $S[t] \gets n_i$, where $t = \frac{i}{k + 1} T$.
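+
+ Equivalently, in a short sketch (init_state is our illustrative name):
+
+ ```python
+ def init_state(default_order, T=1 << 20):
+     """Spread the k nodes of the default order evenly over T slots."""
+     S = ["nop"] * T
+     k = len(default_order)
+     for i, n in enumerate(default_order, start=1):
+         S[i * T // (k + 1)] = n   # t = (i / (k + 1)) * T, rounded down
+     return S
+ ```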
408
+
409
+ First-stage SA The SA algorithm in the first stage is applied to grouped nodes for efficient memory usage reduction. The number of SA iterations is set to 20 million in this stage, but if the memory budget is met, the algorithm stops even if the temperature is still high.
410
+
411
+ The initial temperature is set to $0.1\%$ of the initial objective value, and the final temperature is set to $0.01\%$ of the initial temperature. The temperature in the $i$-th iteration is calculated as $h_0 \exp(\log(h_f / h_0) \cdot i / N)$, where $h_0$ and $h_f$ denote the initial and final temperatures, respectively, and $N = 2 \times 10^7$ is the maximum number of iterations.
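+
+ The cooling schedule in code form (a direct transcription of the formula above):
+
+ ```python
+ import math
+
+ def temperature(i, h0, hf, N=20_000_000):
+     """Geometric cooling from h0 to hf over N iterations."""
+     return h0 * math.exp(math.log(hf / h0) * i / N)
+
+ # First stage: h0 = 1e-3 * initial_objective and hf = 1e-4 * h0.
+ ```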
412
+
413
+ Second-stage SA The second stage of the SA algorithm is applied to decomposed nodes and aims to remove redundant computations. The number of iterations in this stage is 2 million. The initial temperature is set to the final temperature of the first-stage SA. The final temperature is set to $0.01\%$ of the initial temperature, as in the first stage. The temperature for each iteration is calculated in the same way.
414
+
415
+ # D.5 Models
416
+
417
+ The models are obtained from timm (PyTorch Image Models) and transformers provided by Hugging Face. The versions of timm and transformers are 0.9.1 and 4.28.1, respectively. All the models are set to training mode, and random tensors are used for the inputs. For the vision models shown in Table 3, we set the batch size to 512. For the GPU benchmarks, we reduced the batch size to 256 to compare the performance against the baseline without recomputation. We summarize all the models used in our experiments with their configurations below.
418
+
419
+ For the text models shown in Table 4, we used batch size 128 and context length 512 to obtain the simulated memory and computational cost after applying recomputation. For the real GPU benchmarking, we reduced these values to 64 and 256, respectively.
420
+
421
+ For the language models shown in Table 5, we used batch size 8 and context length 2048 (except for GPT2, whose maximum context length is 1024) to obtain the simulated values. For the real GPU benchmarking, we reduced these values to 4 and 1024 for all of them.
422
+
423
+ <table><tr><td>name</td><td>full model name</td><td>resolution</td><td>|N|</td><td>|V|</td></tr><tr><td>ConvNeXt [24]</td><td>convnext_tiny</td><td>224</td><td>932</td><td>1253</td></tr><tr><td>ConvNeXt V2 [42]</td><td>convnextv2_large</td><td>224</td><td>2840</td><td>3467</td></tr><tr><td>EVA-02 [13, 37]</td><td>eva02_large_patch14_224</td><td>224</td><td>6205</td><td>6967</td></tr><tr><td>ViT [36, 12]</td><td>vit_large_patch16_224</td><td>224</td><td>2817</td><td>3362</td></tr><tr><td>ViT (small)</td><td>vit_small_patch16_224</td><td>224</td><td>1425</td><td>1706</td></tr><tr><td>MobileNetV3 [18, 41]</td><td>mobilenetv3_large_100</td><td>224</td><td>484</td><td>1155</td></tr><tr><td>EfficientNet [38, 41]</td><td>efficientnet_b0</td><td>224</td><td>996</td><td>1766</td></tr><tr><td>DeiT III [39]</td><td>deit3_base_patch16_224</td><td>224</td><td>1545</td><td>1850</td></tr><tr><td>XCiT [1]</td><td>xcit_tiny_12_p16_224</td><td>224</td><td>2818</td><td>3550</td></tr><tr><td>BEiT [2, 12]</td><td>beit_base_patch16_224</td><td>224</td><td>1835</td><td>2187</td></tr><tr><td>CoAtNet [10]</td><td>coatnet_2_rw_224</td><td>224</td><td>2204</td><td>3128</td></tr><tr><td>VGG11 [35]</td><td>vgg11</td><td>224</td><td>69</td><td>119</td></tr><tr><td>ResNet18 [16]</td><td>resnet18</td><td>224</td><td>171</td><td>437</td></tr></table>
424
+
425
+ Table 3: List of timm models (PyTorch Image Models). $|\mathcal{N}|$ and $|\mathcal{V}|$ denote the numbers of the nodes and values of the computational graph.
426
+
427
+ <table><tr><td>name</td><td>full model name</td><td>|N|</td><td>|V|</td></tr><tr><td>ALBERT [22]</td><td>albert-base-v2</td><td>2285</td><td>2419</td></tr><tr><td>BERT [11]</td><td>bert-base-uncased</td><td>2078</td><td>2386</td></tr><tr><td>DistilBERT [31]</td><td>distilbert-base-uncased</td><td>1060</td><td>1228</td></tr><tr><td>ELECTRA [8]</td><td>google/electra-small-discriminator</td><td>2099</td><td>2409</td></tr></table>
428
+
429
+ Table 4: List of transformers text models.
430
+
431
+ <table><tr><td>name</td><td>full model name</td><td>|N|</td><td>|V|</td></tr><tr><td>GPT2 [29]</td><td>gpt2</td><td>1889</td><td>2179</td></tr><tr><td>GPT Neo [6] 125M</td><td>EleutherAI/gpt-neo-125m</td><td>2088</td><td>2378</td></tr><tr><td>GPT Neo 2.7B</td><td>EleutherAI/gpt-neo-2.7B</td><td>5488</td><td>6238</td></tr><tr><td>BLOOM [32] 560M</td><td>bigscience/bloom-560m</td><td>3667</td><td>4167</td></tr><tr><td>BLOOM 3B</td><td>bigscience/bloom-3b</td><td>4567</td><td>5187</td></tr><tr><td>OPT [43] 350M</td><td>facebook/opt-350m</td><td>3971</td><td>4581</td></tr><tr><td>OPT 6.7B</td><td>facebook/opt-6.7b</td><td>5245</td><td>6059</td></tr><tr><td>LLaMA [40] 7B</td><td>default config*</td><td>8330</td><td>8722</td></tr></table>
432
+
433
+ Table 5: List of large language models. *For LLaMA, we constructed a model using the default configuration (7B parameters) provided by transformers.LlamaConfig.
434
+
435
+ # D.6 Solution stability of FastSA and Checkmate LP
436
+
437
+ Since both FastSA and Checkmate LP are randomized algorithms, the recomputation plans depend on the random seed. We executed FastSA and Checkmate LP for various memory budgets and examined how smoothly the solutions change. Figure 9 shows the time-space tradeoff curves for different memory budgets. Since Checkmate LP could not solve instances with more than 2300 nodes, we compared the performance on relatively small models. For each model, we set the memory budgets to 0.15, 0.20, ..., 0.95, 1.00 and plotted the results. In most cases, FastSA found even better solutions than Checkmate LP. Although FastSA is a randomized heuristic, its performance is stable and does not depend heavily on random seeds. On the other hand, Checkmate LP's performance is unstable, mainly due to the LP-related issues discussed in Appendix A.1. For instance, in ViT (small), one of the solutions found by Checkmate LP had almost 10 times the cost overhead. Although running Checkmate LP with different seeds or hyperparameters may still yield good solutions, it requires tremendous computational resources.
438
+
439
+ ![](images/d2ba53815b899ec05310f5b98eb3583a3616fdcd322b198889b785375a6f89e2.jpg)
440
+ Figure 9: The time-space tradeoff curves for Checkmate LP and FastSA. For most settings, FastSA could find solutions with lower cost overhead than Checkmate LP for the same relative memory reduction. Checkmate LP is unstable due to LP relaxation. For MobileNetV3, FastSA could not find solutions for small memory budgets; this is likely due to limitations of the node grouping.
441
+
442
+ ![](images/34d9b5b7bacbef0134a5af09a4e00264a0e3073a44348b2e326cd17102a48614.jpg)
443
+
444
+ ![](images/b32ec2a40f3a98a8b52c4305bd91507033fa883f66bb4f874f668cff7ca690cf.jpg)
445
+
446
+ ![](images/431dffd77296c38cc7a1b532732ad9dd46266ca5ecf07156450916719269f311.jpg)
447
+
448
+ ![](images/8db83cf7295eae38960977986723924cbd56c1e6ddd68e78ecd7104afc395950.jpg)
449
+
450
+ ![](images/276b1b72452c3b740edd6ff0ffe43050e32b466ddcac9666c1a0143e168abcd0.jpg)
451
+
452
+ ![](images/11202b6cf995a7947557a2d1f627f97e3a0579febbb7bd0dd8eb21472db53a53.jpg)
453
+ Figure 10: Time-space tradeoff for vision models with $25\%$ (darker) and $50\%$ (lighter) memory budgets. The batch size is 512. Checkmate LP was unable to find solutions within a 6-hour time limit for ViT and CoAtNet. The other models without Checkmate LP bars suffered from out-of-memory errors exceeding 100 GiB. The truncated results of the relative cost of Checkmate LP for DeiT III and BEiT were 2.82 and 3.52, respectively.
454
+
455
+ # D.7 Full experimental results for comparison with Checkmate LP
456
+
457
+ The graph in Figure 10 displays the recomputation results of our FastSA algorithm and Checkmate LP for all the vision models in Table 3. For ViT and CoAtNet, although Checkmate LP could run within 100 GiB of RAM, it failed to find a feasible solution within the 6-hour time limit. Furthermore, the recomputation overhead of Checkmate LP for DeiT III and BEiT was more than 2.8 times, possibly due to randomized rounding of non-binary LP solutions. While FastSA obtained better solutions in both time and memory for ViT (small), DeiT III, and BEiT, it could not reduce memory as much as Checkmate LP for MobileNetV3 and EfficientNet, mainly due to suboptimal node grouping.
458
+
459
+ Figure 11 displays the recomputation results for all the text models in Table 4 and the recent language models in Table 5. For some text models, such as BERT and ELECTRA, our algorithm was unable to reduce memory usage sufficiently for a $25\%$ budget, either due to limitations of the models themselves or convergence to suboptimal solutions. However, overall, our algorithm was successful in finding good recomputation plans with only a small increase in the recomputation overhead.
460
+
461
+ ![](images/52dcec6bb49bec3e415367a663273f2ee407381fb56a8b47c07d5a895f1993f4.jpg)
462
+
463
+ ![](images/387c70fe66d8bc23d86cb3c337060b0db9ecd60a644709f0d6195b06ec4aafc3.jpg)
464
+ Figure 11: Time-space tradeoff for text models, including language models, with $25\%$ (darker) and $50\%$ (lighter) memory budgets. Checkmate LP failed for larger models due to out-of-memory errors exceeding 100 GiB.
465
+
466
467
+
468
+ # D.8 Actual GPU memory reduction and time increase
469
+
470
+ Throughout the above experiments, we discussed the memory decrease and cost increase in terms of estimated values. This is because it is hard to assign exact computational costs to each node without execution (note that our computational graphs were traced without execution), and values may share the same physical storage in actual GPU memory. In this section, we summarize the actual memory usage and execution time measured on an NVIDIA A100 80GB GPU and discuss the differences between the actual metrics and the simulated ones.
471
+
472
+ Figure 12 displays the difference of simulated and actual CUDA memory usages for various models for three cases: no optimization, $50\%$ budget, and $25\%$ budget. On average, the actual GPU memory usages were less than the simulated values by $10\%$ for no optimization cases, but could be up to $5\%$ more for $25\%$ budget cases. In most cases, the simulated memory usages were higher than the allocated usages, mainly because the memory simulation is not precise enough. Notably, operations like reshape, expand, and slicing may produce new views of the original tensor and do not allocate new GPU memory. Our memory simulation does not consider this information, leading to overestimation of the memory usage. Memory aliasing, tensor views, and node-intermediate memory usage can account for these differences. For the $25\%$ budgets, the simulated and allocated memory usages are similar for all models. However, in some instances, the allocated memory exceeds the simulated memory, which may be due to an inaccurate memory simulation. One of the reasons is that we do not consider the node-intermediate memory usage for simulation, which underestimates the memory usage. We can address this issue in our problem setting by simulating this value (please refer to Appendix B.1).
473
+
474
+ Figure 13 shows the difference between the actual increases in execution time and those simulated using the unit cost. We observed that the simulated cost is relatively close to the actual measured overhead. The difference between the simulated and actual time increases was at most $7\%$ and $12\%$ for the $50\%$ and $25\%$ memory budgets, respectively. We observe that practical recomputation plans can be successfully obtained for many models using the unit cost. The reason behind the similar simulated and actual overheads is a balanced mix of operators with high costs, such as matrix multiplication, and operators with negligible costs, such as reshape, which effectively averages the cost. By accurately profiling the operators, it would be possible to obtain even more efficient recomputation plans.
475
+
476
+ ![](images/4d3dada35e4b40eb5188b4b25c75164bfd83ff3a18df5f1f0da5a635bacd5c4e.jpg)
477
+ Figure 12: The actual memory usage of models optimized by FastSA. For each model, the three bars indicate the actual memory usage of the model when executed without optimization, and with $50\%$ and $25\%$ budget constraints. The simulated memory is indicated by the red line. The actual memory is the total size of CUDA tensors allocated by the PyTorch CUDA allocator. As detailed in Appendix D.5, the batch sizes and context lengths are modified so that the original model can run without recomputation for comparison.
478
+
479
+ ![](images/6b867e17d5226d5481db7676d4fc386931825d709cc3eaf6dac1c379fb4a7fe3.jpg)
480
+ Figure 13: Overhead of execution time due to recomputation. For each model, the two bars correspond to the $50\%$ and $25\%$ memory budget settings, respectively. The simulated increase in computational cost is indicated by the blue line.
481
+
482
afastheuristictooptimizetimespacetradeoffforlargemodels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:14a08241e96bc2f23abd26552435f1eb3283e6505db464bf570b34f6c87f2a80
3
+ size 764194
afastheuristictooptimizetimespacetradeoffforlargemodels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:584d1fd56b79706941fbbf0b47c065f1e21c479cfb9592f014f458d79a206a56
3
+ size 747062
agenerativemodelofthehippocampalformationtrainedwiththetadrivenlocallearningrules/60d9acb9-0b15-470c-8ed1-f8a10c11a7f3_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ec3f884f8ffcd58e8958bf635043a426eac7c1e02b8e7eb28d96944116cf4736
3
+ size 79939
agenerativemodelofthehippocampalformationtrainedwiththetadrivenlocallearningrules/60d9acb9-0b15-470c-8ed1-f8a10c11a7f3_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ee243cc3cebf246b567f73c1b18ee5fa264012151fd45e2dc5291b6f3723c8ac
3
+ size 98885
agenerativemodelofthehippocampalformationtrainedwiththetadrivenlocallearningrules/60d9acb9-0b15-470c-8ed1-f8a10c11a7f3_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c16d18dc4919bbb01b2b3c8cc6844518baf87000892021b63c566a98fea54a98
3
+ size 4757984
agenerativemodelofthehippocampalformationtrainedwiththetadrivenlocallearningrules/full.md ADDED
@@ -0,0 +1,278 @@
1
+ # A generative model of the hippocampal formation trained with theta driven local learning rules
2
+
3
+ Tom M George $^{1}$
4
+ Caswell Barry $^{2}$ Kimberly Stachenfeld $^{3,4}$ Claudia Clopath $^{5,1}$ Tomoki Fukai $^{6}$
5
+
6
+ $^{1}$ Sainsbury Wellcome Centre, UCL, UK $^{2}$ Dept. of Cell and Developmental Biology, UCL, UK
7
+
8
+ <sup>3</sup>Google DeepMind, London, UK
9
+ <sup>4</sup>Columbia University, New York, NY
10
+
11
+ <sup>5</sup>Bioengineering Dept., Imperial College, UK <sup>6</sup>Okinawa Institute of Science and Technology, Japan
12
+
13
+ tom.george.20@ucl.ac.uk
14
+
15
+ # Abstract
16
+
17
+ Advances in generative models have recently revolutionised machine learning. Meanwhile, in neuroscience, generative models have long been thought fundamental to animal intelligence. Understanding the biological mechanisms that support these processes promises to shed light on the relationship between biological and artificial intelligence. In animals, the hippocampal formation is thought to learn and use a generative model to support its role in spatial and non-spatial memory. Here we introduce a biologically plausible model of the hippocampal formation tantamount to a Helmholtz machine that we apply to a temporal stream of inputs. A novel component of our model is that fast theta-band oscillations (5-10 Hz) gate the direction of information flow throughout the network, training it akin to a high-frequency wake-sleep algorithm. Our model accurately infers the latent state of high-dimensional sensory environments and generates realistic sensory predictions. Furthermore, it can learn to path integrate by developing a ring attractor connectivity structure matching previous theoretical proposals and flexibly transfer this structure between environments. Whereas many models trade-off biological plausibility with generality, our model captures a variety of hippocampal cognitive functions under one biologically plausible local learning rule.
18
+
19
+ # 1 Introduction
20
+
21
+ Generative models seek to create new data samples which are similar to those from the training set. To do so they must learn the probability distribution of the training data, comprising a rich, generalisable and accurate model of the world. Many of the recent advances in AI have involved types of generative models: VAEs [1], GANs [2], diffusion models [3] and autoregressive models [4] have seeded improvements in AI capabilities ranging from data compression [5] to image generation [6] and natural language [7]. In neuroscience, the animal brain has long been known to exploit generative models [8, 9]. The ability to generate representative sensory data samples can be used directly, for example during offline planning or memory recall. It can also be used indirectly to aid training of inference networks with the goal of processing rich, noisy and high dimensional streams of incoming sensory stimuli, as discussed in the predictive coding literature [10]. In a sentence: "What I cannot create [generate], I do not understand [inference]" (R. Feynman).
22
+
23
+ ![](images/4a16b48a9ce3d445f398554a9acfaa0408c8b0ded3f3729548d443e76aa84f26.jpg)
24
+ Figure 1: A biologically plausible generative model is trained with theta frequency wake-sleep cycles and a local learning rule. a Network schematic: high-D stimuli from an underlying environmental latent, $z$ , arrive at the basal dendrites of the sensory layer, $p$ , and map to the hidden layer, $g$ (this is the inference model, weights in green). Simultaneously, top-down predictions from the hidden layer $g$ arrive at the apical dendrites of $p$ (this is the generative model, weights in blue). b Neurons in layers $p$ and $g$ have three compartments. A fast oscillation, $\theta(t)$ , gates which dendritic compartment - basal $(p_B, g_B)$ or apical $(p_A, g_A)$ - drives the soma. A local learning rule adjusts input weights to minimise the prediction error between dendritic compartments and the soma. c This equates to rapidly switching "wake" and "sleep" cycles which train the generative and inference models. Panel c displays just two updates per theta-cycle, in reality there are many $(\delta t << T_\theta)$ .
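+
+ As a deliberately simplified toy sketch of the gating-plus-local-learning idea in panel b (a minimal illustration with our own variable names, not the paper's full update):
+
+ ```python
+ import numpy as np
+
+ def theta(t, f=8.0):
+     return float(np.sin(2 * np.pi * f * t) > 0)   # 1 = basal/"wake", 0 = apical/"sleep"
+
+ def neuron_step(W_b, W_a, x_b, x_a, t, lr=1e-3):
+     b, a = W_b @ x_b, W_a @ x_a                   # basal / apical dendritic drives
+     soma = theta(t) * b + (1 - theta(t)) * a      # theta gates which compartment drives the soma
+     W_b += lr * np.outer(soma - b, x_b)           # local rule: pull basal prediction toward soma
+     W_a += lr * np.outer(soma - a, x_a)           # local rule: pull apical prediction toward soma
+     return soma
+ ```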
25
+
26
+ The hippocampal-entorhinal system (aka. hippocampal formation) – a brain structure implicated in spatial [11] and non-spatial [12] memory – provides a pertinent example. Its primary role seems to be inference [13]: mapping sensory inputs into a robust and decodable representation of state (grid cells [14], place cells [11] etc. [15]). A generative model is thought to have a dual role in learning: supporting offline tasks such as route planning [16] and memory consolidation [17], and online during behaviour with path integration [18]. Path integration enables the hippocampal network to maintain an up-to-date and accurate estimate of its position in the absence of reliable sensory data by integrating self-motion cues. A recent flurry of computational [19, 20, 21] and theoretical [22, 21] work has highlighted the importance of path integration as a key objective explaining hippocampal function and representations.
27
+
28
+ Existing computational generative models of the hippocampal formation [23, 24] account for many of its cognitive functions and internal representations but require non-trivial learning rules and message passing protocols which don't connect with known aspects of biology. Computational models of path integration [25, 26, 27] have mostly focussed on continuous attractor networks which, although experimentally supported [28], alone lack the complexity or expressivity required of a fully general model of the hippocampal memory system.
29
+
30
+ The primary contribution of this paper is to introduce a biologically plausible model of sequence learning in the hippocampus which unifies its capacities as a generative model of sensory stimuli and path integration under one schema. To do this we propose modeling the hippocampal formation as a Helmholtz machine [29] which learns to predict sensory stimuli given the current hidden state and action (e.g. velocity). We propose a deep connection between the hippocampal theta oscillation [30] and the unsupervised wake-sleep algorithm [31] for training Helmholtz machines. Though this class of generative models isn't widely used, and lacks the scalability of the fastest transformer-based sequence learners, it excels in this context since it has many natural points of contact with biology (both in terms of architecture and neural dynamics) yet still maintains the expressiveness afforded to models of the brain by deep neural networks.
31
+
32
+ # In this paper we:
33
+
34
+ - introduce a new model of the hippocampal formation which learns the latent structure of an incoming stream of sensory stimuli analogous to a Helmholtz machine.
35
+ - describe a biologically plausible learning regime: Theta-oscillations gate information flow through multi-compartmental neurons which rapidly switches the system between "wake" and "sleep" phases. All plasticity is local.
36
+
37
+ - train our model on stimuli from a biologically relevant spatial exploration task and show it learns to path integrate by developing a ring attractor connectivity structure (comparable to theoretical predictions and empirical results in deep recurrent neural networks trained with gradient descent). Learning generalises: when the agent moves to a new environment, path integration capabilities recover without needing to relearn the path integration weights.
38
+
39
+ Our model of the hippocampal formation simultaneously (i) accounts for its role as a generative model of sensory stimuli, (ii) can learn to path integrate and (iii) can transfer structural knowledge between environments. The model, though here applied to the hippocampus, can be viewed as a step towards a general solution for how biological neural networks in many brain regions (for example visual cortex [10]) can learn generative models of the world.
40
+
41
+ # 1.1 Related work
42
+
43
+ A recent generative model of the hippocampus, the Tolman-Eichenbaum Machine [23], proposed that the hippocampal formation be thought of as a hierarchical network performing latent state inference. Medial entorhinal cortex (MEC) sits atop the hierarchy and learns an abstract representation of space which is mapped to the hippocampus (HPC) where it is bound onto incoming sensory stimuli. Once trained, the system can act in a generative fashion by updating the hidden representation with idiothetic action signals and then predicting the upcoming sensory experience. The drawback of this model, and others which share a similar philosophical approach [32, 24], is that it requires training via backpropagation through time (or equivalent end-to-end optimisation schemes, as in [24]) without clear biological correlates. Related hierarchical network architectures have also been studied in the context of reinforcement learning [33] and hippocampal associative memory [34].
44
+
45
+ Historically, hippocampal models of path integration have focused on continuous attractor networks (CANs) [25, 26, 27, 21] in entorhinal cortex. A bump of activity representing location is pushed around the CAN by speed and/or head-direction selective inputs, thus integrating self-motion. CANs have received substantial experimental support [28] but few studies adequately account for how this structure is learned by the brain in the first place. One exception exists outside the hippocampal literature: Vafidis et al. [35] built a model of path integration in the fly head-direction system which uses local learning rules. Here we go further by embedding our path integrator inside a hierarchical generative model. Doing so additionally relaxes the assumption (made by Vafidis et al. [35] and others [36]) that sensory inputs into the path integrator are predefined and fixed. Instead, by allowing all incoming and outgoing synapses to be learned from random initialisations, we achieve a more generalisable model capable of transferring structure between environments (see section 3.3).
46
+
47
+ Hippocampal theta oscillations have been linked to predictive sequence learning before [37, 38, 39] where research has focused on the compressive effects of theta sequences and how these interplay with short timescale synaptic plasticity. Instead of compression, here we hypothesize the role of theta is to control the direction information flows through the hierarchical network.
48
+
49
+ Finally, a recent theoretical work by Bredenberg et al. [41] derived, starting from principles of Bayesian variational inference, a biologically plausible algorithm for approximate Bayesian inference in a hierarchical network built from multi-compartmental neurons and trained with local learning rules using wake-sleep cycles. Here we build a similar network to theirs (i) extending it to a spatial exploration task and mapping the hidden layers onto those in the hippocampal formation, (ii) simplifying the learning rules and relaxing a discrete-time assumption – instead, opting for a temporally continuous formulation more applicable to biological tasks such as navigation – and (iii) adapting the hidden layer to allow idiothetic action signals to guide updates (aka. path integration). Their work provides a theoretical foundation for our own, helping to explain why learning converges on accurate generative models.
50
+
51
+ # 2 A biologically plausible generative model trained with rapidly switching wake-sleep cycles and local learning rules
52
+
53
+ In sections 2 and 3 we give concise, intuitive descriptions of the model and experiments; expanded details can be found in the supplementary material.
54
+
55
+ # 2.1 Basic model summary
56
+
57
+ We consider learning in an environment defined by a latent state, $z(t)$ , which updates according to stochastic dynamics initially unknown to the network,
58
+
59
+ $$
60
+ \frac{dz}{dt} = f_z(t). \tag{1}
61
+ $$
62
+
63
+ These dynamics depend on the task; first we consider $z(t)$ to be a set of mutually independent random variables and later we consider the more realistic task of an agent moving on a 1D track.
64
+
65
+ The network receives sensory input, a function of the latent state, into a sensory layer, $\mathbf{p}(t)$ , and communicates this to a hidden layer (aka "internal state"), $\mathbf{g}(t)$ . The network contains both an inference (aka. recognition) model which infers the hidden state from the sensory input (green arrows, Fig. 1a) and a generative model which updates the hidden state with recurrent synapses and maps this back to the sensory layer (blue arrows). As we will soon identify these processes with Basal and Apical dendritic compartments of pyramidal neurons, we label activations sampled from the inference model with the subscript $B$ and those from the generative model with the subscript $A$ . In summary:
66
+
67
+ $$
68
+ \left. \begin{array}{l} \mathbf{p}_B(t + \delta t) = \bar{\mathbf{p}}(z(t)) \\ \mathbf{g}_B(t + \delta t) = \sigma_{g_B}\left(\mathbf{w}_{g_B}\mathbf{p}(t)\right) \end{array} \right\} \quad \text{Inference model} \tag{2}
69
+ $$
70
+
71
+ $$
72
+ \left. \begin{array}{l} \mathbf{g}_A(t + \delta t) = \sigma_{g_A}\left(\mathbf{w}_{g_A}\mathbf{g}(t)\right) \\ \mathbf{p}_A(t + \delta t) = \sigma_{p_A}\left(\mathbf{w}_{p_A}\mathbf{g}(t)\right) \end{array} \right\} \quad \text{Generative model.} \tag{3}
73
+ $$
74
+
75
+ $\mathbf{w}_{g_B}, \mathbf{w}_{p_A}, \mathbf{w}_{g_A}$ are matrices of randomly initialised and plastic synaptic weights. $\bar{\mathbf{p}}$ maps the environmental latent into a vector of neural inputs. $\sigma$ 's denote activation functions applied to the dendritic pre-activations - either the identity $(\sigma(x) = x)$ or rectified tanh functions $(\sigma(x) = \max(0, \tanh(x)))$ . A small amount of noise is added to the dendritic activations to simulate realistic biological learning.
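+
+ To make Eqns. (2) and (3) concrete, the following minimal NumPy sketch samples the dendritic activations under the assumptions above. It is an illustration, not the reference implementation: layer sizes are taken from the artificial task of section 3.1, the noise magnitude and all names are our own choices, and $\mathbf{p}_B$ is omitted since it is supplied directly by the environment.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ N_p, N_g = 50, 5   # sensory and hidden layer sizes (artificial task, section 3.1)
+ noise = 0.05       # small dendritic noise; magnitude is our assumption
+
+ # randomly initialised plastic synaptic weight matrices
+ w_gB = rng.normal(0, 1 / np.sqrt(N_p), (N_g, N_p))  # inference:  p -> g_B
+ w_gA = rng.normal(0, 1 / np.sqrt(N_g), (N_g, N_g))  # generative: g -> g_A (recurrent)
+ w_pA = rng.normal(0, 1 / np.sqrt(N_g), (N_p, N_g))  # generative: g -> p_A
+
+ def rectified_tanh(x):
+     return np.maximum(0.0, np.tanh(x))
+
+ def dendritic_activations(p, g, sigma_g=lambda x: x, sigma_p=lambda x: x):
+     """Sample the basal (inference, Eqn. 2) and apical (generative,
+     Eqn. 3) dendritic activations from the current somatic activity."""
+     g_B = sigma_g(w_gB @ p) + noise * rng.normal(size=N_g)  # inference
+     g_A = sigma_g(w_gA @ g) + noise * rng.normal(size=N_g)  # generative
+     p_A = sigma_p(w_pA @ g) + noise * rng.normal(size=N_p)  # generative
+     return g_B, g_A, p_A
+ ```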
76
+
77
+ We believe that the widely adopted convention of modelling neurons as single-compartment perceptrons is limiting. By considering, in a minimal extension, the distributed dendritic structure of real neurons we can tap into significant potential for explaining hippocampal learning. Theoretical [42, 43, 44, 45] and experimental [46, 47, 48] research into credit assignment in biological neurons has identified different roles for basal and apical dendrites: basal dendrites are thought to receive bottom-up drive from sensory inputs whereas apical dendrites receive top-down drive from higher layers in the sensory hierarchy [49]. Following this line of research — and matching an equivalent theoretical model of latent state inference described by [41] — we identify the inference process with synaptic inputs into a basal dendritic compartment of pyramidal neurons and the generative process with synaptic inputs into an apical dendritic compartment. In summary, each $\mathbf{p}$ and $\mathbf{g}$ neuron in our model has three compartments: a somatic compartment, a basal dendritic compartment and an apical dendritic compartment (Fig. 1b). Only the somatic activation is used for communication between layers (right hand side of Eqns. (2) and (3)) while dendritic compartment activations are variables affecting internal neuronal dynamics and learning as described below (Eqns. (4) and (6)).
78
+
79
+ # 2.2 Theta oscillations gate the direction of information flow through the network
80
+
81
+ The dynamics of the somatic activations $\mathbf{p}(t)$ and $\mathbf{g}(t)$ are as follows: the voltage in each soma is either equal to the voltage in the basal compartment or the voltage in the apical compartment depending on the phase of an underlying theta oscillation. This is achieved by a simple theta-gating mechanism (Fig. 1b):
82
+
83
+ $$
84
+ \mathbf{p}(t) = \theta(t)\,\mathbf{p}_B(t) + (1 - \theta(t))\,\mathbf{p}_A(t)
85
+ $$
86
+
87
+ $$
88
+ \mathbf{g}(t) = \theta(t)\,\mathbf{g}_B(t) + (1 - \theta(t))\,\mathbf{g}_A(t). \tag{4}
89
+ $$
90
+
91
+ where $\theta (t)$ is a $5\mathrm{Hz}$ global theta oscillation variable defined by the square wave function:
92
+
93
+ $$
94
+ \theta(t) = \left\{ \begin{array}{ll} 1, & \text{if } t/T \bmod 1 \leq 0.5 \\ 0, & \text{if } t/T \bmod 1 > 0.5 \end{array} \right. \tag{5}
95
+ $$
96
+
97
+ for $T = 1 / f_{\theta}$ and $f_{\theta} = 5 \mathrm{~Hz}$ , matching the hippocampal theta frequency (5-10 Hz) [50]. According to this model, theta-band oscillations in the hippocampal local field potential gate which dendritic compartment drives the soma. Experimental [47, 51, 52] and modelling [53] work give provisional support for this assumption.
98
+
99
+ These local theta-dynamics have global consequences: the early phase $(\theta(t) = 1)$ of each theta cycle can be thought of as a "wake" phase where information flows upwards through the network from the environment to the hidden layer, sampling the inference model. The latter phase $(\theta(t) = 0)$ of each theta cycle is a "sleep" phase where information flows down from the hidden layer to the sensory units, sampling the generative model. These dynamics are displayed in Fig. 1c.
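+
+ A minimal sketch of this gating mechanism, following Eqns. (4) and (5), is given below; the 5 Hz frequency is taken from the text and the function names are ours.
+
+ ```python
+ f_theta = 5.0        # Hz, theta frequency used in the paper
+ T = 1.0 / f_theta
+
+ def theta(t):
+     """Square-wave theta variable, Eqn. (5): 1 in the first (wake)
+     half of each cycle, 0 in the second (sleep) half."""
+     return 1.0 if (t / T) % 1.0 <= 0.5 else 0.0
+
+ def gate_soma(t, x_B, x_A):
+     """Theta-gated somatic activation, Eqn. (4): the basal compartment
+     drives the soma during wake, the apical compartment during sleep."""
+     th = theta(t)
+     return th * x_B + (1.0 - th) * x_A
+ ```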
100
+
101
+ # 2.3 Hebbian-style learning rules train synapses to minimise local prediction errors
102
+
103
+ In contrast to comparable models which are optimised end-to-end using backpropagation through time, our model learns synaptic weights according to a local plasticity rule which is a simplified variant of a rule proposed by Urbanczik and Senn [43]. Incoming synaptic projections are continually adjusted in order to minimise the discrepancy between the somatic activation and the dendritic activation. The full learning rules are described in the supplement but simplified versions are given here:
104
+
105
+ $$
106
+ \frac{d\mathbf{w}_{g_B}}{dt} \propto \left(\mathbf{g}(t) - \mathbf{g}_B(t)\right)\mathbf{p}(t)^\top
107
+ $$
108
+
109
+ $$
110
+ \frac{d\mathbf{w}_{p_A}}{dt} \propto \left(\mathbf{p}(t) - \mathbf{p}_A(t)\right)\mathbf{g}(t)^\top
111
+ $$
112
+
113
+ $$
114
+ \frac{d\mathbf{w}_{g_A}}{dt} \propto \left(\mathbf{g}(t) - \mathbf{g}_A(t)\right)\mathbf{g}(t)^\top \tag{6}
115
+ $$
116
+
117
+ Notably this learning rule takes the same form for all plastic synapses in the model: $\mathbf{p}$ to $\mathbf{g}$ , $\mathbf{g}$ to $\mathbf{p}$ and the recurrent $\mathbf{g}$ to $\mathbf{g}$ synapses (see Fig. 1b). If a local prediction error is detected, for example the somatic activation is larger than the dendritic activation, then the synaptic strengths of positive/negative inputs into that dendritic compartment are strengthened/weakened to reduce the error. This rule can equivalently be viewed as a type of Hebbian learning – weight change is proportional to the correlation of pre- and post-synaptic activity (the first term) – regularised (by the second term) to prevent unbounded growth.
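+
+ Because the rule takes the same form everywhere, it can be sketched as a single function applied to each plastic projection; the learning rate and time step below are placeholder values of ours.
+
+ ```python
+ import numpy as np
+
+ def plasticity_step(w, pre, post_soma, post_dend, lr=1e-3, dt=0.01):
+     """One Euler step of the simplified local rule, Eqn. (6): the weight
+     matrix w (post x pre) moves to shrink the dendrite-soma prediction
+     error, using only the locally available post-synaptic error and
+     pre-synaptic activity."""
+     return w + lr * dt * np.outer(post_soma - post_dend, pre)
+
+ # the same rule trains all three plastic projections:
+ # w_gB = plasticity_step(w_gB, pre=p, post_soma=g, post_dend=g_B)
+ # w_pA = plasticity_step(w_pA, pre=g, post_soma=p, post_dend=p_A)
+ # w_gA = plasticity_step(w_gA, pre=g, post_soma=g, post_dend=g_A)
+ ```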
118
+
119
+ During the wake phase the weights of the generative model $(\mathbf{w}_{p_A}$ and $\mathbf{w}_{g_A}$ ) are trained and plasticity on the inference weights $(\mathbf{w}_{g_B})$ falls to zero. This occurs naturally because $\mathbf{p} = \mathbf{p}_B$ so there will be no basal prediction errors to correct. During sleep the reverse occurs; the weights of the inference model are trained and plasticity on the generative model falls to zero. Experimentally, apical activity is known to guide plasticity at basal synapses in CA1 [46]. This alternating, coordinated regime of sampling and learning (sample-inference-train-generative, then sample-generative-train-inference) is a hallmark of the wake-sleep algorithm. It fundamentally differs from the forward and backward sweeps of backpropagation since neurons remain provisionally active at all times so the process of learning minimally perturbs perception. Also, whereas backpropagation sends error signals down through the network to train synaptic weights, here only predictions are sent between layers and error signals are calculated locally at each dendrite.
120
+
121
+ As discussed in section 1, Bredenberg et al. [41] mathematically derive learning rules similar to these starting from a loss function closely related to the evidence lower bound (ELBO). As such, our identification of early- and late-theta phases as "wake" and "sleep" cycles can be considered precise: from a Bayesian perspective our hippocampal model minimises a modified ELBO loss (see supplement) and thus learns approximately optimal inference and generative models accounting for the temporally varying stimulus stream with which it is presented.
122
+
123
+ # 2.4 Velocity inputs into the hidden layer
124
+
125
+ For path integration, the hidden state needs access to an idiothetic (internally generated) velocity signal. To satisfy this we endow the hidden layer, $\mathbf{g}$ , with conjunctive velocity inputs, henceforth "conjunctive cells", as shown in Fig. 3a & b. Conjunctive cells are organised into two groups: $\mathbf{g}_{v_L}$ is responsible for leftward motion and $\mathbf{g}_{v_R}$ for rightward motion. Each conjunctive cell receives input from the hidden units and either the leftward $(v_{L} = \max(0, -\dot{x}))$ or rightward $(v_{R} = \max(0, \dot{x}))$ component of the velocity. For the results shown this connectivity is one-to-one, $[\mathbf{w}_{g_{v_L}}]_{ij} = [\mathbf{w}_{g_{v_R}}]_{ij} = \delta_{ij}$ , but random connectivity works too (see supplement).
126
+
127
+ ![](images/0fa92ab1feb6b35b57d69d351b7f81cb2e9d5112a855bc26551ed295fc7379e4.jpg)
128
+
129
+ ![](images/f09a8e03ea82495ccf9a71762f5d18ac2d2c06e08926f92f428ead4dabc9dd74.jpg)
130
+ Figure 2: Learning in an environment of temporally varying latents. a In this artificial task the latent space comprises $N_z = 5$ independent random variables with an autocorrelation decay timescale of 1 s. b Prediction errors (difference between apical and basal activations) in sensory and hidden layers reduce over training time. c Tested in wake mode $(\theta = 1)$ after training, the ground truth stimulus matches the apical prediction for all stimulus dimensions (one shown), implying the network is efficiently "autoencoding" the sensory inputs into and back out of the compressed hidden layer. d Tested in sleep mode $(\theta = 0$ , no environmental inputs), generated data from the hidden units, $g$ , have an autocorrelation curve which matches that of the true latents, implying a statistically accurate generative model has been learned. More extensive samples from this model, before and after training, can be found in Fig. S1.
131
+
132
+ ![](images/a09d8e18278869beb321c0969733da86c60038fd990cdd6b9e575b4140604d36.jpg)
133
+
134
+ ![](images/c91015a56cbad93dd0984461ddae3f5d82d900e97eccffcac43cc3ae53a1d335.jpg)
135
+
136
+ Finally, conjunctive cells send return connections back to the apical dendritic compartment of the hidden units via a randomly initialised plastic synaptic weight matrix. These inputs are what drive the hidden units to path integrate.
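+
+ The velocity decomposition feeding the conjunctive cells can be sketched as follows. We assume a multiplicative combination of hidden activity and rectified velocity for illustration; the exact form used is described in the supplement.
+
+ ```python
+ import numpy as np
+
+ def conjunctive_inputs(g, x_dot):
+     """Split velocity into rectified leftward/rightward components and
+     combine each one-to-one with the hidden-unit activity g."""
+     v_L = max(0.0, -x_dot)   # leftward speed component
+     v_R = max(0.0,  x_dot)   # rightward speed component
+     return g * v_L, g * v_R  # g_vL and g_vR conjunctive populations
+ ```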
137
+
138
+ This model takes inspiration from so-called conjunctive grid cells [54] found in the medial entorhinal cortex (MEC). These cells, thought to be an integral component of the mammalian path integration system [27], are jointly tuned to head direction and location, much like the conjunctive cells in our model. An important and novel aspect of our model is that synaptic weights between or into the hidden units are learned. This deviates from other models, for example that of Burak and Fiete [27] (where all connectivity is predefined and fixed) or those of Vafidis et al. [35] and Widloski and Fiete [36] (where sensory inputs to the hidden units are predefined and fixed). This is not only more realistic but affords the model flexibility to translate path integration abilities between environments without having to relearn them, a form of transfer learning which we demonstrate in section 3.3.
139
+
140
+ # 3 Results
141
+
142
+ # 3.1 Validation on an artificial latent learning task
143
+
144
+ We begin by testing the basic model (i.e. without conjunctive inputs, Fig. 1a) on an artificial task. $N_{z} = 5$ latents, $z_{i}(t)$ , are independently sampled from a smooth, random process with an autocorrelation timescale of 1 second (Fig. 2a). The sensory layer, $N_{p} = 50$ , then receives a high-dimensional random linear mixture of the latents into the basal compartments:
145
+
146
+ $$
147
+ \mathbf{p}_B(t) = \mathbf{A}\mathbf{z}(t), \tag{7}
148
+ $$
149
+
150
+ where $\mathbf{A} \in \mathbb{R}^{50 \times 5}$ and $[\mathbf{A}]_{ij} \sim \mathcal{N}(0, \frac{1}{\sqrt{N_z}})$ . The hidden layer, $\mathbf{g}(t)$ , is matched in size to the latent process, $N_g = N_z = 5$ , and all dendritic activation functions are linear. We train the model for 30 minutes of simulated time and track prediction errors, the difference between the basal and apical activations in the sensory and hidden layers, which reliably decreased throughout training (Fig. 2b). We then perform two tests designed to confirm whether the model has learnt accurate inference and generative models.
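+
+ A sketch of the stimulus generation for this task is shown below. The 1 s autocorrelation timescale is implemented here with an Ornstein-Uhlenbeck process (one choice among several; the paper does not prescribe the exact process), mixed into the sensory layer as in Eqn. (7).
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(1)
+ N_z, N_p, dt, tau = 5, 50, 0.01, 1.0             # tau = 1 s autocorrelation timescale
+ A = rng.normal(0, 1 / np.sqrt(N_z), (N_p, N_z))  # random mixing matrix, Eqn. (7)
+
+ z = np.zeros(N_z)
+ for step in range(3000):                         # 30 s of simulated latents
+     # Ornstein-Uhlenbeck update: unit stationary variance, timescale tau
+     z += (-z / tau) * dt + np.sqrt(2 * dt / tau) * rng.normal(size=N_z)
+     p_B = A @ z                                  # basal input to the sensory layer
+ ```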
151
+
152
+ ![](images/5610ffe6e0a0218d27d92f0223d60f5232027c0d6ffcb064f81d7634f89d83af.jpg)
153
+
154
+ ![](images/483d819b57e7c0c9f98ec59784ba778207827b8b9133647118ff46a1562f9651.jpg)
155
+
156
+ ![](images/6b1d3aea8e83580cd468a46da3d8752cf0a7c372e75f7a2c16ff823437963a03.jpg)
157
+ Figure 3: The hippocampal model learns to path integrate on a 1D track using a ring attractor. a Position selective (place cell) inputs drive basal dendrites of the sensory layer p (HPC). b Hidden units (MEC) are connected to two sets of "conjunctive cells" which each connect back to one of the hidden neurons (g) and receive either the leftward (for $\mathbf{g}_{v_L}$ ) or rightward (for $\mathbf{g}_{v_R}$ ) velocity of the agent, allowing velocity information to enter the network. Synaptic strengths of the return connections from the conjunctive cells to the MEC hidden units, as well as those for the MEC recurrent connectivity (collectively denoted $\mathbf{w}_{g_A}$ ), are randomly initialised and plastic. c After training, reordering the hidden units by the position of peak activity reveals a ring attractor in the synaptic weight matrices. Centre-surround recurrent connectivity stabilises an activity bump which is then "pushed" around the attractor manifold by asymmetric connections from the conjunctive cells, integrating velocity. Bands of zero weights show MEC neurons which have become perpetually inactive (aka "died"). The bottom panel displays the matrix row-averages, utilising the circular symmetry of the environment to align rows before averaging. d Learning plateaus after 15 mins of simulated time. e Path integration ability is demonstrated in a lesion study: after 10 seconds in the normal oscillatory mode the network is placed into sleep mode (aka generative mode), lesioning the position-dependent sensory inputs. Despite this, HPC continues to accurately encode position, evidence that the MEC ring attractor is path integrating the velocity inputs and sending predictions back to HPC. Lower panel shows the accumulated decoding error as well as the mean±SEM over 50 trials.
158
+
159
+ ![](images/1d8c03d75bbba6eddf05799571ef70c07a3947667d0ded92b9a312745dc4f3a1.jpg)
160
+
161
+ First, we set the dynamics of the model to "wake" mode ( $\theta = 1$ ) and measure the basal and apical activations of one of the sensory neurons for 60 seconds. Close correspondence (Fig. 2c) confirms that the network accurately "autoencodes" the high-dimensional sensory inputs through the compressed hidden layer. Since all activation functions are linear this implies that $\mathbf{w}_{g_B}$ and $\mathbf{w}_{p_A}$ are pseudoinverses. Next, we place the network in "sleep" mode ( $\theta = 0$ ) and allow the generative model to run freely. The autocorrelation of the generated hidden states ( $\mathbf{g}(t|\theta = 0)$ , displayed fully in the supplement) matches that of the true environmental latents ( $\mathbf{z}(t)$ ), Fig. 2d, implying the generative model has statistics closely matching those of the true underlying generative process.
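+
+ The autocorrelation comparison in the sleep-mode test can be computed with a standard estimator such as the one sketched here (our helper, not from the paper):
+
+ ```python
+ import numpy as np
+
+ def autocorrelation(x, max_lag):
+     """Normalised autocorrelation of a 1D signal at lags 0..max_lag-1."""
+     x = x - x.mean()
+     var = (x * x).mean()
+     return np.array([(x[:len(x) - k] * x[k:]).mean() / var
+                      for k in range(max_lag)])
+ ```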
162
+
163
+ # 3.2 Learnable path integration with a hidden ring attractor
164
+
165
+ Next we turn our attention to the hippocampal formation's role in spatial navigation, and our central result. The environment consists of an agent randomly moving around a 1D circular track of length 1 m (motion and cell data are generated using the RatInABox package [55]). The basal compartment of each HPC neuron is spatially tuned to a single, distinct Gaussian input, although non-Gaussian, randomly spatially tuned inputs work as well (see supplement, Fig. S2b):
168
+
169
+ $$
170
+ \left[\mathbf{p}_B(t)\right]_i = \exp\left[-\frac{\left(x(t) - x_i\right)^2}{2\sigma^2}\right]. \tag{8}
171
+ $$
172
+
173
+ $x(t)$ is the position of the agent and $\{x_i\}_{i=1}^{N_p}$ are the centres of the Gaussian inputs ( $\sigma = 6\mathrm{cm}$ ), intended to simulate hippocampal place fields, evenly spaced at $1\mathrm{cm}$ intervals along the track. MEC (i.e. the hidden layer, $\mathbf{g}(t)$ ) is matched in size $N_g = N_p = 100$ with rectified tanh activation functions on both dendritic compartments ( $\sigma_{g_B}(x) = \sigma_{g_A}(x) = \max(0, \tanh(x))$ ) and HPC (the sensory layer $\mathbf{p}(t)$ ) is linear ( $\sigma_{p_A}(x) = x$ ). Two populations of conjunctive cells (Fig. 3a & b) feed into the apical compartments of the MEC recurrent units. Random initialisation of $\mathbf{w}_{g_B}$ means that MEC neurons start off with random non-Gaussian spatial tunings. $\mathbf{w}_{g_A}$ and $\mathbf{w}_{p_A}$ are also randomly initialised.
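+
+ A sketch of these place-tuned basal inputs, Eqn. (8), is given below; wrapping distances around the circle is our handling of the track's circular geometry.
+
+ ```python
+ import numpy as np
+
+ N_p, L, sigma = 100, 1.0, 0.06         # 100 fields, 1 m track, 6 cm field width
+ centres = np.arange(N_p) * (L / N_p)   # evenly spaced at 1 cm intervals
+
+ def place_inputs(x):
+     """Gaussian place-field inputs, Eqn. (8), on a 1 m circular track."""
+     d = np.abs(x - centres)
+     d = np.minimum(d, L - d)           # wrap distances around the circle
+     return np.exp(-d**2 / (2 * sigma**2))
+ ```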
174
+
175
+ The network is trained for 30 minutes with learning plateauing after 15 (Fig. 3d). A lesion study, designed to test path integration, is then performed as follows: first, the network is run for 10 seconds normally (i.e. with theta-oscillating periods of wake and sleep). Since the simulated HPC neurons receive place-tuned inputs uniformly ordered along the track (i.e. $x_j > x_i \ \forall \ j > i$ ), an activity heatmap of HPC reveals a bump of activity accurately tracking the agent's position (Fig. 3e, left). The network is then placed into a sleep phase ( $\theta = 0$ ) for 20 seconds. This amounts to a full sensory lesion since top-down MEC inputs, not bottom-up place-tuned sensory inputs, drive HPC. Despite the full sensory lesion, hippocampal activity remains approximately unperturbed and the activity bump continues to accurately track position, slowly accumulating errors (Fig. 3e right). Since our HPC layer has no recurrent connectivity it cannot support this post-lesion activity on its own. Instead, feed-forward drive from an MEC ring attractor, to which we turn our attention now, is responsible for maintaining the HPC code.
176
+
177
+ To find the ring attractor we must first reorder the MEC cells. We do this according to the position of the peak of their receptive fields (defined in the supplement). After reordering, the recurrent connectivity matrix can be seen to have acquired a centre-surround connectivity profile: nearby MEC cells are, on average, strongly and positively recurrently connected to one another, while those far apart weakly inhibit one another (Fig. 3c, left; band of strong positive weights along the diagonal flanked by weak negative weights). This profile matches that of a quasi-continuous ring attractor: local excitatory and long-range inhibitory connections stabilise a bump of activity on the attractor manifold in the absence of sensory input [56]. Weights from the conjunctive cells acquired asymmetric connectivity (Fig. 3c, middle & right) skewed towards the velocity direction for which they are selective. These asymmetric connections enable conjunctive cells to "push" the activity bump around the manifold, integrating velocity (see supplement for a visualisation of the MEC bump attractor). Theoretical work on ring attractors has demonstrated that for accurate path integration the asymmetric weights must be proportional to the derivative of the symmetric weights [56], approximately observed here. A noteworthy observation is that some MEC neurons become perpetually inactive; this is a consequence of the fact that both top-down and bottom-up synapses into the hidden layer are plastic and can fall to zero (Fig. 3c, bands of zero-weights), satisfying a trivial $g_{A} = g_{B} = 0$ solution for minimising the prediction error. Despite this, not all MEC neurons die and the surviving subset are sufficient for path integration. In supplementary section 5.4.2 we discuss additional results showing that the network learns robust path integration under a variety of plasticity, initialisation and noise manipulations.
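+
+ The reordering and row-averaging used in this analysis can be sketched as follows (names are ours; the receptive-field definition is in the supplement):
+
+ ```python
+ import numpy as np
+
+ def reorder_and_average(W, rate_maps):
+     """Reorder units by the position of their receptive-field peak, then
+     circularly shift each row of the reordered weight matrix so its
+     diagonal element sits at the centre, and average over rows."""
+     order = np.argsort(np.argmax(rate_maps, axis=1))  # sort by peak position
+     W = W[np.ix_(order, order)]                       # reorder rows and columns
+     n = W.shape[0]
+     aligned = np.array([np.roll(W[i], n // 2 - i) for i in range(n)])
+     return W, aligned.mean(axis=0)  # reordered matrix and row-average profile
+ ```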
178
+
179
+ Crucially, what sets this model apart from others [19, 20, 21, 22] is that the network is not optimised using a conventional path-integration objective and backpropagation. Instead, we have demonstrated how path integration can arise naturally in a biologically constrained network subject to a much simpler (yet more broadly applicable) local objective, in cases where idiothetic velocity signals are available to the hidden layers.
180
+
181
+ # 3.3 Remapping: transfer of structural knowledge between environments
182
+
183
+ Finally, we demonstrate how our trained network can transfer structural knowledge – which here means the ring attractor and thereby path integration – between environments. We start by training the network as in section 3.2; the only difference is that for simplicity we choose to fix $[\mathbf{w}_{g_B}]_{ij} = \delta_{ij}$ , giving rise to MEC representations which, like HPC, are unimodal (this constraint can be relaxed and, in the more general case, MEC units typically have multiple receptive fields, Fig. S4d, reminiscent of grid cells). We then simulate a hippocampal "remapping" event by shuffling the sensory inputs to the HPC layer (Fig. 4a & b, top panel) and retraining the network for a further 30 minutes, but this time
184
+
185
+ ![](images/5f79c8f792d6bc39d55f32b2e997f756ecfde58846df9d30fb44d717c43cf57e.jpg)
186
+ Figure 4: Remapping and transfer of structural knowledge between environments. a After training (as in Fig. 3) place cell inputs are shuffled to simulate a "remapping" event observed when an agent moves to a new environment. The agent then retrains for an additional 30 minutes: during this period internal MEC weights, and weights from the conjunctive cells to MEC, are held fixed while MEC $\leftrightarrow$ HPC weights remain plastic. b Receptive fields of the HPC and MEC neuronal populations at different stages in the experiment: initially after remapping, HPC and MEC inputs are randomised. MEC then relearns rate maps as they were before remapping but with a constant phase shift. Note: neurons are ordered by the position of their peak activity on the track before remapping and this ordering is maintained in subsequent panels. c The error ( $\pm$ SEM over 50 trials) after 1 second of path integration is shown at different stages of the experiment. Although path integration is initially disrupted after remapping, it recovers despite no relearning of the MEC synapses where the ring attractor is stored.
187
+
188
+ ![](images/59d9a889749142b0384f0377b179f221044d21190f88e85a7bf7cdaf1df10f1d.jpg)
189
+
190
+ holding fixed the weights in the hidden layer, $\mathbf{w}_{g_A}$ . Only the HPC $\leftrightarrow$ MEC synapses ( $\mathbf{w}_{g_B}$ & $\mathbf{w}_{p_A}$ ) remain plastic during retraining. Biologically this may be accounted for by the observation that cortical plasticity is substantially slower than hippocampal plasticity [57].
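+
+ The retraining protocol can be summarised in a few lines (flag names and the shuffling mechanism are our illustration, not the paper's code):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(2)
+ N_p = 100
+ centres = np.arange(N_p) / N_p   # place-field centres on the 1 m track
+
+ # simulate a remapping event: shuffle which place field drives which HPC cell
+ centres = rng.permutation(centres)
+
+ # during retraining only the HPC <-> MEC synapses stay plastic; w_gA (which
+ # stores the ring attractor and conjunctive-cell return weights) is frozen
+ plastic = {"w_gB": True, "w_pA": True, "w_gA": False}
+ ```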
191
+
192
+ During biological remapping events place cells remap independently whereas grid cells remap en masse, with entire modules shifting by the same constant phase [58]. This observation is reproduced in our model: after retraining, MEC units regroup with receptive fields as they were before remapping but with a constant phase shift along the track. This re-emergence of structure occurs because the ring attractor seeds a bump of activity on the attractor manifold (during the "sleep" phases of retraining) onto which the shuffled HPC inputs then bind. Since nothing constrains where on the circularly symmetric attractor manifold this regrouping initiates, only relative correlations, modulo a phase shift, are preserved.
193
+
194
+ Decoding error one second after a sensory lesion is tested just before remapping, just after remapping and after retraining (Fig. 4c). Just after remapping, path integration abilities temporarily disappear because the MEC ring attractor is still tuned to the old and now-invalid HPC receptive fields. After relearning – and despite no adjustments to the MEC weights, $\mathbf{w}_{g_A}$ , where the ring attractor is stored – path integration recovers to almost the level before remapping. This differs substantially from other local models of path integration learning [35, 36] which don't consider plasticity on the ring attractor inputs. In these models, adaptation to a new environment necessarily requires complete relearning of the ring attractor. Instead, our model exploits the basic fact that movement (path integration) in one environment is fundamentally the same as in another; one must simply learn a new mapping to/from the ring attractor, "translating" it to fit the new sensory stimuli.
195
+
196
+ # 4 Discussion
197
+
198
+ We propose that the hippocampal formation resembles a Helmholtz machine, simultaneously learning an inference and generative model of sensory stimuli. Like previous models [23], medial entorhinal cortex (MEC) sits hierarchically above the hippocampus (HPC), to which it sends generative predictions. Our model differs in the learning rules and neural dynamics: local prediction errors are minimised between distinct dendritic compartments receiving bottom-up and top-down signals. Theta oscillations regulate internal neural dynamics, switching the network between wake and sleep phases. In a navigation task our MEC model forms a ring attractor capable of path integration. Despite simple learning rules and dynamics our model retains key cognitive capabilities of the hippocampal formation including the ability to transfer knowledge across different sensory environments.
201
+
202
+ Local learning rules are commonly recognised as essential in biologically plausible learning algorithms [43]. However, the importance of learning scheduling – how neural systems coordinate or multiplex distinct phases of forward and backward information flow – is often overlooked [59]. Neural oscillations such as theta, hypothesised to temporally coordinate communication between neuronal populations [60], likely play an underexplored role in this regard (neural "bursting" has also been pointed out as a potential solution to multiplexing [61]). One advantage of the wake-sleep algorithm (which this study suggests neural oscillations can support) over forward and backward sweeps is that, during convergence, the two phases become highly similar, allowing learning to proceed without affecting perception.
203
+
204
+ While our discussion has primarily focused on theta oscillations as a mechanism for learning, they have also been proposed as a mechanism for short-range future prediction via so-called "mind-travel" [62]. During the latter phase of each theta cycle (i.e. the sleep phase), gain-amplified velocity signals might rapidly drive the MEC activity bump along the manifold, allowing the agent to assess nearby upcoming locations. This complementary proposition could neatly integrate into the framework proposed here and emphasises the need for further investigation into the multifaceted functions of neural rhythms within the hippocampal/entorhinal system.
205
+
206
+ Beyond theta oscillations, both faster gamma cycles [63] and the slower physiological states of sleep and wake [64] have been associated with learning. Based on our model we suggest a tentative hypothesis that theta oscillations may be favoured due to an optimality criterion: faster oscillations could prevent extreme drift during sleep that might disrupt learning, but their frequency may be upper-bounded by the neural time constants of the biophysical processes supporting dendritic gating of the soma. These ideas, their relevance to other brain regions involved in generative learning, 2D spatial dynamics, and offline memory consolidation/replay remain exciting questions for future theoretical and experimental investigation.
207
+
208
+ # References
209
+
210
+ [1] Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2022.
211
+ [2] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks, 2014.
212
+ [3] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Francis Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2256-2265, Lille, France, 07-09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/sohl-dickstein15.html.
213
+ [4] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017. URL http://arxiv.org/abs/1706.03762
214
+ [5] Tom M George and Pietro Lio. Unsupervised machine learning for data encoding applied to ovarian cancer transcriptomes. November 2019. doi: 10.1101/855593. URL https://doi.org/10.1101/855593.
215
+ [6] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. CoRR, abs/2102.12092, 2021. URL https://arxiv.org/abs/2102.12092
216
+ [7] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with gpt-4, 2023.
217
+ [8] Karl Friston. The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2):127-138, January 2010. doi: 10.1038/nrn2787. URL https://doi.org/10.1038/nrn2787
218
+ [9] Samuel J. Gershman. The generative adversarial brain. Frontiers in Artificial Intelligence, 2, September 2019. doi: 10.3389/frai.2019.00018. URL https://doi.org/10.3389/frai.2019.00018.
219
+ [10] Rajesh P. N. Rao and Dana H. Ballard. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1):79-87, January 1999. doi: 10.1038/4580. URL https://doi.org/10.1038/4580
220
+ [11] John O'Keefe. Place units in the hippocampus of the freely moving rat. Experimental Neurology, 51(1):78-109, January 1976. doi: 10.1016/0014-4886(76)90055-8. URL https://doi.org/10.1016/0014-4886(76)90055-8.
221
+ [12] Larry R. Squire. "memory and the hippocampus: A synthesis from findings with rats, monkeys, and humans": Correction. Psychological Review, 99(3):582-582, July 1992. doi: 10.1037/0033-295x.99.3.582. URL https://doi.org/10.1037/0033-295x.99.3.582
222
+ [13] Honi Sanders, Matthew A Wilson, and Samuel J Gershman. Hippocampal remapping as hidden state inference. eLife, 9, June 2020. doi: 10.7554/elife.51140. URL https://doi.org/10.7554/elife.51140
223
+ [14] Torkel Hafting, Marianne Fyhn, Sturla Molden, May-Britt Moser, and Edvard I. Moser. Microstructure of a spatial map in the entorhinal cortex. Nature, 436(7052):801-806, June 2005. doi: 10.1038/nature03721. URL https://doi.org/10.1038/nature03721.
224
+ [15] Edvard I Moser, May-Britt Moser, and Bruce L McNaughton. Spatial representation in the hippocampal formation: a history. Nature Neuroscience, 20(11):1448-1464, November 2017. doi: 10.1038/nn.4653. URL https://doi.org/10.1038/nn.4653
225
+ [16] Hugo J. Spiers and Eleanor A. Maguire. Thoughts, behaviour, and brain dynamics during navigation in the real world. NeuroImage, 31(4):1826-1840, July 2006. doi: 10.1016/j.neuroimage.2006.01.037. URL https://doi.org/10.1016/j.neuroimage.2006.01.037.
226
+ [17] Margaret F Carr, Shantanu P Jadhav, and Loren M Frank. Hippocampal replay in the awake state: a potential substrate for memory consolidation and retrieval. Nature Neuroscience, 14(2): 147-153, January 2011. doi: 10.1038/nn.2732. URL https://doi.org/10.1038/nn.2732.
227
+
228
+ [18] B. L. McNaughton, C. A. Barnes, J. L. Gerrard, K. Gothard, M. W. Jung, J. J. Knierim, H. Kudrimoti, Y. Qin, W. E. Skaggs, M. Suster, and K. L. Weaver. Deciphering the hippocampal polyglot: the hippocampus as a path integration system. Journal of Experimental Biology, 199 (1):173-185, January 1996. doi: 10.1242/jeb.199.1.173. URL https://doi.org/10.1242/jeb.199.1.173
229
+ [19] Christopher J. Cueva and Xue-Xin Wei. Emergence of grid-like representations by training recurrent neural networks to perform spatial localization, 2018.
230
+ [20] Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel, Martin J. Chadwick, Thomas Degris, Joseph Modayil, Greg Wayne, Hubert Soyer, Fabio Viola, Brian Zhang, Ross Goroshin, Neil Rabinowitz, Razvan Pascanu, Charlie Beattie, Stig Petersen, Amir Sadik, Stephen Gaffney, Helen King, Koray Kavukcuoglu, Demis Hassabis, Raia Hadsell, and Dharshan Kumaran. Vector-based navigation using grid-like representations in artificial agents. Nature, 557(7705):429-433, May 2018. doi: 10.1038/s41586-018-0102-6. URL https://doi.org/10.1038/s41586-018-0102-6
231
+ [21] Ben Sorscher, Gabriel C. Mel, Samuel A. Ocko, Lisa M. Giocomo, and Surya Ganguli. A unified theory for the computational and mechanistic origins of grid cells. Neuron, 111(1): 121-137.e13, 2023. ISSN 0896-6273. doi: https://doi.org/10.1016/j.neuron.2022.10.003. URL https://www.sciencedirect.com/science/article/pii/S0896627322009072.
232
+ [22] William Dorrell, Peter E. Latham, Timothy E. J. Behrens, and James C. R. Whittington. Actionable neural representations: Grid cells from minimal constraints, 2023.
233
+ [23] James C.R. Whittington, Timothy H. Muller, Shirley Mark, Guifen Chen, Caswell Barry, Neil Burgess, and Timothy E.J. Behrens. The tolman-eichenbaum machine: Unifying space and relational memory through generalization in the hippocampal formation. Cell, 183(5):1249-1263.e23, November 2020. doi: 10.1016/j.cell.2020.10.024. URL https://doi.org/10.1016/j.cell.2020.10.024
234
+ [24] Dileep George, Rajeev V. Rikhye, Nishad Gothoskar, J. Swaroop Guntupalli, Antoine Dedieu, and Miguel Lázaro-Gredilla. Clone-structured graph representations enable flexible learning and vicarious evaluation of cognitive maps. Nature Communications, 12(1), April 2021. doi: 10.1038/s41467-021-22559-5. URL https://doi.org/10.1038/s41467-021-22559-5.
235
+ [25] W. E. Skaggs, J. J. Knierim, H. S. Kudrimoti, and B. L. McNaughton. A model of the neural basis of the rat's sense of direction. Advances in neural information processing systems, 7: 173-180, 1995. ISSN 1049-5258.
236
+ [26] Alexei Samsonovich and Bruce L. McNaughton. Path integration and cognitive mapping in a continuous attractor neural network model. Journal of Neuroscience, 17(15):5900-5920, 1997. ISSN 0270-6474. doi: 10.1523/JNEUROSCI.17-15-05900.1997. URL https://www.jneurosci.org/content/17/15/5900
237
+ [27] Yoram Burak and Ila R. Fiete. Accurate path integration in continuous attractor network models of grid cells. PLoS Computational Biology, 5(2):e1000291, February 2009. doi: 10.1371/journal.pcbi.1000291. URL https://doi.org/10.1371/journal.pcbi.1000291.
238
+ [28] Mikail Khona and Ila R Fiete. Attractor and integrator networks in the brain. arXiv, 2021. doi: 10.48550/arXiv.2112.03978.
239
+ [29] Peter Dayan, Geoffrey E. Hinton, Radford M. Neal, and Richard S. Zemel. The helmholtz machine. Neural Computation, 7(5):889-904, September 1995. doi: 10.1162/neco.1995.7.5.889. URL https://doi.org/10.1162/neco.1995.7.5.889.
240
+ [30] György Buzsáki. Theta oscillations in the hippocampus. Neuron, 33(3):325-340, January 2002. doi: 10.1016/s0896-6273(02)00586-x. URL https://doi.org/10.1016/s0896-6273(02)00586-x.
241
+ [31] Geoffrey E. Hinton, Peter Dayan, Brendan J. Frey, and Radford M. Neal. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214):1158-1161, May 1995. doi: 10.1126/science.7761831. URL https://doi.org/10.1126/science.7761831.
242
+ [32] Benigno Uria, Borja Ibarz, Andrea Banino, Vinicius Zambaldi, Dharshan Kumaran, Demis Hassabis, Caswell Barry, and Charles Blundell. A model of egocentric to allocentric understanding in mammalian brains. November 2020. doi: 10.1101/2020.11.11.378141. URL https://doi.org/10.1101/2020.11.11.378141
243
+
244
+ [33] Dongqi Han, Kenji Doya, and Jun Tani. Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent neural networks. Neural Networks, 129:149-162, September 2020. doi: 10.1016/j.neunet.2020.06.002. URL https://doi.org/10.1016/j.neunet.2020.06.002
245
+ [34] Sugandha Sharma, Sarthak Chandra, and Ila Fiete. Content addressable memory without catastrophic forgetting by heteroassociation with a fixed scaffold. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 19658-19682. PMLR, 17-23 Jul 2022. URL https://proceedings.mlr.press/v162/sharma22b.html
246
+ [35] Pantelis Vafidis, David Oswald, Tiziano D'Albis, and Richard Kempter. Learning accurate path integration in ring attractor models of the head direction system. eLife, 11:e69841, jun 2022. ISSN 2050-084X. doi: 10.7554/eLife.69841. URL https://doi.org/10.7554/eLife.69841
247
+ [36] John Widloski and Ila R. Fiete. A Model of Grid Cell Development through Spatial Exploration and Spike Time-Dependent Plasticity. Neuron, 83(2):481–495, 2014. ISSN 0896-6273. doi: 10.1016/j.neuron.2014.06.018.
248
+ [37] William E. Skaggs, Bruce L. McNaughton, Matthew A. Wilson, and Carol A. Barnes. Theta phase precession in hippocampal neuronal populations and the compression of temporal sequences. Hippocampus, 6(2):149-172, 1996. doi: 10.1002/(sici)1098-1063(1996)6:2<149::aid-hipo6>3.0.co;2-k. URL https://doi.org/10.1002/(sici)1098-1063(1996)6:2<149::aid-hipo6>3.0.co;2-k.
249
+ [38] M.R. Mehta, M.C. Quirk, and M.A. Wilson. Experience-Dependent Asymmetric Shape of Hippocampal Receptive Fields. Neuron, 25:707-715, 2000.
250
+ [39] Tom M George, William de Cothi, Kimberly L Stachenfeld, and Caswell Barry. Rapid learning of predictive maps with STDP and theta phase precession. eLife, 12, March 2023. doi: 10.7554/elife.80663. URL https://doi.org/10.7554/elife.80663
251
+ [40] Tom M George. Theta sequences as eligibility traces: A biological solution to credit assignment. In International Conference on Learning Representations 2023 (TinyPapers track), 2023. doi: https://doi.org/10.48550/arXiv.2305.08124. URL https://openreview.net/forum?id=vd16AYbem3Z
252
+ [41] Colin Bredenberg, Eero P Simoncelli, Benjamin S H Lyo, and Cristina Savin. Impression learning: Online representation learning with synaptic plasticity. In Advances in Neural Information Processing Systems, 2021.
253
+ [42] Konrad P. Kording and Peter Konig. Supervised and unsupervised learning with two sites of synaptic integration. Journal of Computational Neuroscience, 11(3):207-215, 2001. doi: 10.1023/a:1013776130161. URL https://doi.org/10.1023/a:1013776130161.
254
+ [43] Robert Urbanczik and Walter Senn. Learning by the Dendritic Prediction of Somatic Spiking. Neuron, 81(3):521-528, 2014. ISSN 08966273. doi: 10.1016/j.neuron.2013.11.030. URL https://linkinghub.elsevier.com/retrieve/pii/S0896627313011276
255
+ [44] Joao Sacramento, Rui Ponte Costa, Yoshua Bengio, and Walter Senn. Dendritic cortical microcircuits approximate the backpropagation algorithm. In Advances in Neural Information Processing Systems, pages 8721-8732, 2018.
256
+ [45] Blake A Richards and Timothy P Lillicrap. Dendritic solutions to the credit assignment problem. Current Opinion in Neurobiology, 54:28-36, February 2019. doi: 10.1016/j.conb.2018.08.003. URL https://doi.org/10.1016/j.conb.2018.08.003
257
+ [46] Katie C Bittner, Christine Grienberger, Sachin P Vaidya, Aaron D Milstein, John J Macklin, Junghyup Suh, Susumu Tonegawa, and Jeffrey C Magee. Conjunctive input processing drives feature selectivity in hippocampal CA1 neurons. Nature Neuroscience, 18(8):1133-1142, July 2015. doi: 10.1038/nn.4062. URL https://doi.org/10.1038/nn.4062
258
+ [47] Jurij Brankack, Mark Stewart, and Steven E. Fox. Current source density analysis of the hippocampal theta rhythm: associated sustained potentials and candidate synaptic generators. Brain Research, 615(2):310-327, July 1993. doi: 10.1016/0006-8993(93)90043-m. URL https://doi.org/10.1016/0006-8993(93)90043-m.
259
+
260
+ [48] Kenji Mizuseki, Anton Sirota, Eva Pastalkova, and György Buzsáki. Theta oscillations provide temporal windows for local circuit computation in the entorhinal-hippocampal loop. Neuron, 64(2):267–280, October 2009. doi: 10.1016/j.neuron.2009.08.037. URL https://doi.org/10.1016/j.neuron.2009.08.037.
261
+ [49] Matthew E. Larkum. Are dendrites conceptually useful? Neuroscience, 489:4-14, May 2022. doi: 10.1016/j.neuroscience.2022.03.008. URL https://doi.org/10.1016/j.neuroscience.2022.03.008
262
+ [50] David J. Foster and Matthew A. Wilson. Hippocampal theta sequences. Hippocampus, 17 (11):1093-1099, 2007. doi: 10.1002/hipo.20345. URL https://doi.org/10.1002/hipo.20345
263
+ [51] Christian Holscher, Roger Anwyl, and Michael J. Rowan. Stimulation on the positive phase of hippocampal theta rhythm induces long-term potentiation that can be depotentiated by stimulation on the negative phase in area CA1. The Journal of Neuroscience, 17(16):6470-6477, August 1997. doi: 10.1523/jneurosci.17-16-06470.1997. URL https://doi.org/10.1523/jneurosci.17-16-06470.1997.
264
+ [52] Yoko Yamaguchi, Yoshito Aota, Bruce L. McNaughton, and Peter Lipa. Bimodality of theta phase precession in hippocampal place cells in freely running rats. Journal of Neurophysiology, 87(6):2629-2642, June 2002. doi: 10.1152/jn.2002.87.6.2629. URL https://doi.org/10.1152/jn.2002.87.6.2629
265
+ [53] Michael E. Hasselmo, Clara Bodelón, and Bradley P. Wyble. A proposed function for hippocampal theta rhythm: Separate phases of encoding and retrieval enhance reversal of prior learning. Neural Computation, 14(4):793-817, April 2002. doi: 10.1162/089976602317318965. URL https://doi.org/10.1162/089976602317318965
266
+ [54] Francesca Sargolini, Marianne Fyhn, Torkel Hafting, Bruce L. McNaughton, Menno P. Witter, May-Britt Moser, and Edvard I. Moser. Conjunctive representation of position, direction, and velocity in entorhinal cortex. Science, 312(5774):758-762, May 2006. doi: 10.1126/science.1125572. URL https://doi.org/10.1126/science.1125572.
267
+ [55] Tom M George, William de Cothi, Claudia Clopath, Kimberly Stachenfeld, and Caswell Barry. RatInABox: A toolkit for modelling locomotion and neuronal activity in continuous environments. August 2022. doi: 10.1101/2022.08.10.503541. URL https://doi.org/10.1101/2022.08.10.503541.
268
+ [56] K Zhang. Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. The Journal of Neuroscience, 16(6):2112-2126, March 1996. doi: 10.1523/jneurosci.16-06-02112.1996. URL https://doi.org/10.1523/jneurosci.16-06-02112.1996
269
+ [57] Ceren Ergorul and Howard Eichenbaum. Essential role of the hippocampal formation in rapid learning of higher-order sequential associations. The Journal of Neuroscience, 26(15):4111-4117, April 2006. doi: 10.1523/jneurosci.0441-06.2006. URL https://doi.org/10.1523/jneurosci.0441-06.2006.
270
+ [58] Marianne Fyhn, Torkel Hafting, Alessandro Treves, May-Britt Moser, and Edvard I. Moser. Hippocampal remapping and grid realignment in entorhinal cortex. Nature, 446(7132):190-194, February 2007. doi: 10.1038/nature05601. URL https://doi.org/10.1038/nature05601.
271
+ [59] Jordan Guerguiev, Timothy P Lillicrap, and Blake A Richards. Towards deep learning with segregated dendrites. eLife, 6, December 2017. doi: 10.7554/elife.22901. URL https://doi.org/10.7554/elife.22901
272
+ [60] Pascal Fries. Rhythms for cognition: Communication through coherence. Neuron, 88(1): 220-235, October 2015. doi: 10.1016/j.neuron.2015.09.034. URL https://doi.org/10.1016/j.neuron.2015.09.034
273
+ [61] Alexandre Payeur, Jordan Guerguiev, Friedemann Zenke, Blake A. Richards, and Richard Naud. Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits. Nature Neuroscience, 24(7):1010-1019, May 2021. doi: 10.1038/s41593-021-00857-x. URL https://doi.org/10.1038/s41593-021-00857-x.
274
+ [62] Honi Sanders, César Rennó-Costa, Marco Idiart, and John Lisman. Grid cells and place cells: An integrated view of their navigational and memory function. Trends in Neurosciences, 38
275
+
276
+ (12):763-775, December 2015. doi: 10.1016/j.tins.2015.10.004. URL https://doi.org/10.1016/j.tins.2015.10.004.
277
+ [63] Kwan Tung Li, Junhao Liang, and Changsong Zhou. Gamma oscillations facilitate effective learning in excitatory-inhibitory balanced neural circuits. Neural Plasticity, 2021:1-18, January 2021. doi: 10.1155/2021/6668175. URL https://doi.org/10.1155/2021/6668175
278
+ [64] William E. Skaggs and Bruce L. McNaughton. Replay of neuronal firing sequences in rat hippocampus during sleep following spatial experience. Science, 271(5257):1870-1873, March 1996. doi: 10.1126/science.271.5257.1870. URL https://doi.org/10.1126/science.271.5257.1870.
agenerativemodelofthehippocampalformationtrainedwiththetadrivenlocallearningrules/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:28074331c5bc07f32ab62b17a43e14f2ecb49062b106502d1841d9b0a20b900f
3
+ size 336586
agenerativemodelofthehippocampalformationtrainedwiththetadrivenlocallearningrules/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b96c09cef3020d10db5a9add69cf0f5de84ec26eb3dc08fe6a7027dcf152fc53
3
+ size 379711
amassivescalesemanticsimilaritydatasetofhistoricalenglish/8cee2e8f-6f6e-4d50-898a-1be35c7324f7_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:888c6f37f58530545de100fb8ab86ecb761fb7545540f7420e45114be38846bb
3
+ size 77079
amassivescalesemanticsimilaritydatasetofhistoricalenglish/8cee2e8f-6f6e-4d50-898a-1be35c7324f7_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3923848f5d94e30d844578ec634a9eaeb079f7d83da82017df5b3efde0f7819a
3
+ size 91066
amassivescalesemanticsimilaritydatasetofhistoricalenglish/8cee2e8f-6f6e-4d50-898a-1be35c7324f7_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e34746dd67d9b3562c098899508d75221eee2eebd9705c176a3d4b664a3d561f
3
+ size 12874609
amassivescalesemanticsimilaritydatasetofhistoricalenglish/full.md ADDED
@@ -0,0 +1,307 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # A Massive Scale Semantic Similarity Dataset of Historical English
2
+
3
+ Emily Silcock<sup>1</sup>, Abhishek Arora<sup>1</sup>, Melissa Dell<sup>1,2*</sup>
4
+
5
+ <sup>1</sup>Harvard University; Cambridge, MA, USA.
6
+
7
+ $^{2}$ National Bureau of Economic Research; Cambridge, MA, USA.
8
+
9
+ *Corresponding author: melissadell@fas.harvard.edu.
10
+
11
+ # Abstract
12
+
13
+ A diversity of tasks use language models trained on semantic similarity data. While there are a variety of datasets that capture semantic similarity, they are either constructed from modern web data or are relatively small datasets created in the past decade by human annotators. This study utilizes a novel source, newly digitized articles from off-copyright, local U.S. newspapers, to assemble a massive-scale semantic similarity dataset spanning 70 years from 1920 to 1989 and containing nearly 400M positive semantic similarity pairs. Historically, around half of articles in U.S. local newspapers came from newswires like the Associated Press. While local papers reproduced articles from the newswire, they wrote their own headlines, which form abstractive summaries of the associated articles. We associate articles and their headlines by exploiting document layouts and language understanding. We then use deep neural methods to detect which articles are from the same underlying source, in the presence of substantial noise and abridgement. The headlines of reproduced articles form positive semantic similarity pairs. The resulting publicly available HEADLINES dataset is significantly larger than most existing semantic similarity datasets and covers a much longer span of time. It will facilitate the application of contrastively trained semantic similarity models to a variety of tasks, including the study of semantic change across space and time.
14
+
15
+ # 1 Introduction
16
+
17
+ Transformer language models contrastively trained on large-scale semantic similarity datasets are integral to a variety of applications in natural language processing (NLP). Contrastive training is often motivated by the anisotropic geometry of pre-trained transformer models like BERT [5], which complicates working with their hidden representations. Representations of low frequency words are pushed outwards on the hypersphere, the sparsity of low frequency words violates convexity, and the distance between embeddings is correlated with lexical similarity. This leads to poor alignment between semantically similar texts and poor performance when individual term representations are pooled to create a representation for longer texts [21]. Contrastive training reduces anisotropy [25].
18
+
19
+ A variety of semantic similarity datasets have been used for contrastive training [18]. Many of these datasets are relatively small, and the bulk of the larger datasets are created from recent web texts; e.g., positive pairs are drawn from the texts in an online comment thread or from questions marked as duplicates in a forum. To provide a semantic similarity dataset that spans a much longer length of time and a vast diversity of topics, this study develops HEADLINES (Historical Enormous-Scale Abstractive DupLIcate News Summaries), a massive dataset containing nearly 400 million high quality semantic similarity pairs drawn from 70 years of off copyright U.S. newspapers. Historically, around half of content in the many thousands of local newspapers across the U.S. was taken from centralized sources such as the Associated Press wire [8]. Local newspapers reprinted wire articles
20
+
21
+ but wrote their own headlines, which form abstractive summaries of the articles. Headlines written by different papers to describe the same wire article form positive semantic similarity pairs.
22
+
23
+ To construct HEADLINES, we digitize front pages of off-copyright local newspapers, localizing and OCRing individual content regions like headlines and articles. The headlines, bylines, and article texts that form full articles span multiple bounding boxes - often arranged with complex layouts - and we associate them using a model that combines layout information and language understanding [14]. Then, we use neural methods from [23] to accurately predict which articles come from the same underlying source, in the presence of noise and abridgement. HEADLINES allows us to leverage the collective writings of many thousands of local editors across the U.S., spanning much of the 20th century, to create a massive, high-quality semantic similarity dataset. HEADLINES captures semantic similarity with minimal noise, as positive pairs summarize the same underlying texts.
24
+
25
+ This study is organized as follows. Section 2 describes HEADLINES, and Section 3 relates it to existing datasets. Section 4 describes and evaluates the methods used for dataset construction, Section 5 benchmarks the dataset, and Section 6 discusses limitations and intended usage.
26
+
27
+ # 2 Dataset Description
28
+
29
+ HEADLINES contains 393,635,650 positive headline pairs from off-copyright newspapers. Figure 1 plots the distribution of content by state.
30
+
31
+ ![](images/23b5a5b4f005af957773ce772b5e64ed38be3fbf7c16cb18604e7a9ccd17a418.jpg)
32
+ Figure 1: Geographic variation in source of headlines
33
+
34
+ Dataset statistics by decade are provided in Table 1. Content declines sharply in the late 1970s, due to a major copyright law change effective January 1, 1978.
35
+
36
+ <table><tr><td>Decade</td><td>Headline Count</td><td>Cluster Count</td><td>Positive Pair Count</td><td>Word Count</td><td>Words Per Headline</td><td>Line Count</td><td>Lines Per Headline</td><td>Character Error Rate</td></tr><tr><td>1920s</td><td>4,889,942</td><td>1,032,108</td><td>28,928,226</td><td>68,486,589</td><td>14.0</td><td>18,983,014</td><td>3.9</td><td>4.3%</td></tr><tr><td>1930s</td><td>5,519,472</td><td>1,126,566</td><td>37,529,084</td><td>75,210,423</td><td>13.6</td><td>21,905,153</td><td>4.0</td><td>3.7%</td></tr><tr><td>1940s</td><td>6,026,940</td><td>1,005,342</td><td>62,397,004</td><td>61,629,003</td><td>10.2</td><td>19,538,729</td><td>3.2</td><td>2.4%</td></tr><tr><td>1950s</td><td>7,530,810</td><td>1,192,858</td><td>100,527,238</td><td>61,127,313</td><td>8.1</td><td>20,823,786</td><td>2.8</td><td>2.3%</td></tr><tr><td>1960s</td><td>6,533,071</td><td>926,819</td><td>108,415,279</td><td>46,640,311</td><td>7.1</td><td>16,408,148</td><td>2.5</td><td>3.7%</td></tr><tr><td>1970s</td><td>3,664,201</td><td>585,782</td><td>52,981,097</td><td>24,472,831</td><td>6.7</td><td>7,829,510</td><td>2.1</td><td>3.2%</td></tr><tr><td>1980s</td><td>703,052</td><td>170,507</td><td>2,857,722</td><td>5,161,537</td><td>7.3</td><td>1,502,893</td><td>2.1</td><td>1.5%</td></tr><tr><td>Total</td><td>34,867,488</td><td>6,039,982</td><td>393,635,650</td><td>342,728,007</td><td>9.8</td><td>106,991,233</td><td>3.1</td><td></td></tr></table>
37
+
38
+ Table 1: Descriptive statistics of HEADLINES.
39
+
40
+ The supplementary materials summarize copyright law for works first published in the United States. The newspapers in HEADLINES are off-copyright because they were published without a copyright notice or did not renew their copyright, both required formalities at the time. Far from being an oversight, it was historically rare to copyright news outside the nation's most widely circulated papers. The headlines in our dataset were written by editors at these local papers; they are hence in the public domain, and anyone can legally use or reference them without permission.
41
+
42
+ It is possible that a newspaper not itself under copyright could reproduce copyrighted content from some third party - the most prevalent example of this is comics - but this does not pose a problem for HEADLINES, since the dataset is built around the locally written headlines that describe the same
43
+
44
+ wire articles. If we were to accidentally include a syndicated headline, it would be dropped by our post-processing, since we drop headline pairs within a Levenshtein edit distance threshold of each other. It is also worth noting that a detailed search of U.S. copyright catalogs by [19] did not turn up a single instance of a wire service copyrighting their articles. (Even if they had, however, it would not pose a problem for headlines, since they were written locally.)
45
+
46
+ Figure 2 shows examples of semantic similarity pairs.
47
+
48
+ ![](images/f19a03863abc54415135136374285dc634ef4af686648f0271f97ed823370064.jpg)
49
+ Figure 2: Semantic similarity examples, showing article image crops and OCR'ed headlines.
50
+
51
+ We quantify variation in HEADLINES across years, using a measure reminiscent of Earth Mover distance. This measure computes how much each text in a query dataset (e.g., 1920 headlines) would have to change (in embedding space) to have the same representation as the closest text in a key dataset (e.g., 1930 headlines).
52
+
53
+ Specifically, we first take a random sample of 10,000 texts per year. For year $j$, we embed texts $t_{1j} \ldots t_{10,000j}$ using all-mpnet-base-v2. We choose MPNet because it has been shown to perform well across a variety of embedding datasets and tasks [18]. For each of these $t_{ij}$, we compute the most similar embedding in year $k$, measured by cosine similarity. This gives us a vector of similarity measures $s_{1jk} \ldots s_{10,000jk}$, which for each text in year $j$ measure proximity to the most similar text in year $k$. We average these similarities to calculate $SIM_{jk}$.<sup>1</sup> Figure 3, which plots the $SIM_{jk}$, shows that similarity increases with temporal proximity. The dark square towards the upper left is World War II, during which newspaper coverage was more homogeneous due to the centrality of the war.
54
+
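+ The computation is simple enough to state in a few lines. Below is a minimal sketch of the $SIM_{jk}$ computation, assuming the sentence-transformers package; `texts_j` and `texts_k` are hypothetical lists holding the 10,000 sampled headlines for years $j$ and $k$.
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer("all-mpnet-base-v2")
+
+ def sim_jk(texts_j, texts_k):
+     """Average, over texts in year j, of the max cosine similarity to year k."""
+     # Normalized embeddings make the dot product equal to cosine similarity.
+     emb_j = model.encode(texts_j, normalize_embeddings=True)
+     emb_k = model.encode(texts_k, normalize_embeddings=True)
+     sims = emb_j @ emb_k.T  # (n_j, n_k) cosine similarity matrix
+     return float(sims.max(axis=1).mean())
+ ```
+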
55
+ ![](images/076242314886c22982d9358522196e954fc65e5cfe7ee7b024d14aa0d222acfb.jpg)
56
+ Figure 3: Average similarity between different years of HEADLINES.
57
+
58
+ HEADLINES is useful for training and evaluating models that aim to capture abstractive similarity, whether using the embeddings for tasks like clustering, nearest neighbor retrieval, or semantic search [18]. Because it contains chronological content over a long span of time, it can be used to evaluate dynamic language models for processing continuously evolving content [1, 15], as well as how large language models can be adapted to process historical content [17, 4, 16]. Likewise, it can be used to train or evaluate models that predict the region or year a text was written [20]. In addition, it is useful for training models and developing benchmarks for a number of downstream tasks, such as topic classification of vast collections of historical and archival documents, which have traditionally been classified by hand. This is an extremely labor-intensive process, and as a result many historical archives and news collections remain largely unclassified. Similarly, it could facilitate creating a large-scale dataset to measure term-level semantic change, complementing existing smaller-scale SemEval tasks.
59
+
60
+ HEADLINES has a Creative Commons CC-BY license, to encourage widespread use, and is available on Hugging Face.<sup>2</sup>
61
+
62
+ # 3 Existing Semantic Similarity Datasets
63
+
64
+ There is a dense literature on semantic similarity, with datasets covering diverse types of textual similarity and varying greatly in size. The focus of HEADLINES on semantic similarity in historical texts sets it apart from other widely used datasets. It also dwarfs the size of most existing datasets, aggregating the collective work of 20th-century newspaper editors from towns across the U.S. Its paired headlines summarize the same text, rather than being related by other forms of similarity frequently captured by datasets, such as being in the same conversation thread or answering a corresponding question.
65
+
66
+ One related class of semantic similarity datasets consists of duplicate questions from web platforms, e.g., questions tagged by users as duplicates from WikiAnswers (77.4 million positive pairs) [6], duplicate Stack Exchange questions (around 304,000 duplicate title pairs) [2], and duplicate Quora questions (around 400,000 duplicate pairs) [12]. Alternatively, MS COCO [3] used Amazon's Mechanical Turk to collect five captions for each image in the dataset, resulting in around 828,000 positive caption pairs. In Flickr [26], 317,695 positive semantic similarity pairs describe around 32,000 underlying images. Like HEADLINES, positive pairs in these datasets refer to the same underlying content, but are describing an image rather than providing an abstractive summary of a longer text. In future work, HEADLINES could be expanded to include caption pairs describing the same underlying photo wire image, as local papers frequently wrote their own captions.
67
+
68
+ Online comment threads have also been used to train semantic similarity models. For example, the massive scale Reddit Comments [11] draws positive semantic similarity pairs from Reddit conversation threads between 2016 and 2018, providing 726.5 million positive pairs. Semantic similarity between comments in an online thread reflects conversational similarity, to the extent the thread stays on topic, rather than abstractive similarity. Likewise, question-answer and natural-language inference datasets are widely used for semantic similarity training. While other datasets exploit abstractive summaries - e.g., Semantic Scholar (S2ORC) has been used to create semantic similarity pairs of the titles and abstracts of papers that cite each other - to our knowledge there are not large-scale datasets with abstractive summaries of the same underlying texts.
69
+
70
+ A wide variety of text embedding datasets have been combined into the Massive Text Embedding Benchmark (MTEB) [18], which evaluates 8 embedding tasks on 58 datasets covering 112 languages. We measure the similarity between HEADLINES and the English datasets in MTEB, using the Earth Mover-style distance described in Section 2. As above, we first take a random sample of (up to) 10,000 texts from each decade of HEADLINES, as well as each of the English datasets in MTEB (if the dataset contains fewer than 10K texts, we use the full dataset and limit the comparison dataset to the same number of randomly selected texts). For dataset $j$, we embed texts $t_{1j} \ldots t_{10,000j}$. For each of these $t_{ij}$, we compute the most similar embedding in dataset $k$, averaging these across all texts in $j$ to compute $SIM_{jk}$. $SIM_{jk}$ need not be symmetric. Suppose dataset $j$ is highly homogeneous, whereas dataset $k$ is heterogeneous. $SIM_{jk}$ may be high, because the embeddings in homogeneous dataset $j$ are close to a subset of embeddings in dataset $k$. On the other hand, $SIM_{kj}$ may be low, because most texts in dataset $k$ are dissimilar from texts in homogeneous dataset $j$.
71
+
72
+ ![](images/7f11321fc4be7550313aa6fed803ab0913ec4e06d4a94d6afa2265020733c0de.jpg)
73
+ Figure 4: Average similarity between HEADLINES (by decade) and the English datasets in the Massive Text Embedding Benchmark. The similarity measure is described in the text.
74
+
75
+ Figure 4 shows the entire similarity matrix between HEADLINES and the English datasets in MTEB; rows are the query dataset and columns are the key. The style of the figure was adapted from [18]. The average similarity between HEADLINES and MTEB datasets is 31.4, whereas the average similarity between MTEB datasets and HEADLINES is 33.5, supporting our supposition that HEADLINES is diverse on average relative to other benchmarks. This average max similarity shows that there is ample information in HEADLINES not contained in existing datasets.
76
+
77
+ Table 2 shows examples where the nearest text to a headline in an MTEB dataset is highly similar, as well as examples where it is only of average similarity. In some cases, highly similar texts in fact have a different meaning, but the limited context in the MTEB datasets makes this difficult to capture.
78
+
79
+ <table><tr><td>Headline</td><td>Highly similar texts</td><td>Similarity</td></tr><tr><td>&quot;Inflation Cuts are Questioned&quot;</td><td>Reddit: “Today FOMC Resumed Meeting Inflation getting out of hand... Maybe... Maybe Not”</td><td>0.55</td></tr><tr><td>“Bear Bites Off Arm of Child”</td><td>StackExchange: “How to handle animal AI biting and holding onto the character...”</td><td>0.46</td></tr><tr><td>“British Cruiser Reported Sunk”</td><td>Twitter: “That&#x27;s It, Britain is Sunk”</td><td>0.61</td></tr><tr><td>“Will Free Press Dance to Government Tune”</td><td>Twitter: “Donald Trump v. a free press”</td><td>0.60</td></tr><tr><td>“Partitioning Plan Unsatisfactory, Ike Declares”</td><td>Ubuntu Questions: “Partitioning Issues”</td><td>0.51</td></tr><tr><td>Headline</td><td>Average similarity texts</td><td>Similarity</td></tr><tr><td>“Reds Knot Strong Tie”</td><td>ArXiv: “Knots and Polytopes”</td><td>0.34</td></tr><tr><td>“SHOWERS PROMISED TO END HEAT WAVE Two Deaths From Heat Over Weekend”</td><td>StackOverflow: “how to annotate heatmap with text in matplotlib?”</td><td>0.27</td></tr><tr><td>“Salary Boost Due For Some On Labor Day”</td><td>Twitter: “Glassdoor will now tell you if you&#x27;re being underpaid”</td><td>0.31</td></tr><tr><td>“40 Old Ladies Now In Senate Says Rogers”</td><td>Quora: “Is 19 young?”</td><td>0.28</td></tr></table>
80
+
81
+ Table 2: Example headlines paired with their nearest MTEB texts and the corresponding cosine similarities.
82
+
83
+ # 4 Dataset Construction and Evaluation
84
+
85
+ # 4.1 Digitization
86
+
87
+ We digitize front pages from off-copyright newspapers spanning 1920-1989, recognizing layouts using Mask R-CNN [10] and OCRing the texts; headlines are transcribed using Tesseract. The digitization was performed using Azure F-Series CPU nodes.
88
+
89
+ We evaluate this OCR on a hand annotated sample of 300 headlines per decade. Table 1 reports the character error rate, defined as the Levenshtein distance between the transcribed text and the ground truth, normalized by the length of the ground truth. As expected, OCR quality improves over time, as there is less damage from aging and fewer unusual fonts.
90
+
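+ As a concrete reference, the character error rate can be computed as below; this is a plain-Python sketch (any edit-distance library would work equally well).
+
+ ```python
+ def levenshtein(a: str, b: str) -> int:
+     """Edit distance via the standard dynamic-programming recurrence."""
+     prev = list(range(len(b) + 1))
+     for i, ch_a in enumerate(a, start=1):
+         curr = [i]
+         for j, ch_b in enumerate(b, start=1):
+             curr.append(min(prev[j] + 1,                    # deletion
+                             curr[j - 1] + 1,                # insertion
+                             prev[j - 1] + (ch_a != ch_b)))  # substitution
+         prev = curr
+     return prev[-1]
+
+ def character_error_rate(transcription: str, ground_truth: str) -> float:
+     return levenshtein(transcription, ground_truth) / len(ground_truth)
+ ```
+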
91
+ # 4.2 Article Association
92
+
93
+ Newspaper articles have complex and irregular layouts that can span multiple columns (Figure 5).
94
+
95
+ ![](images/97e241a43a2d045084cacf1e88d10752ecd23ea99373c408032437a5ec1c37a7.jpg)
96
+ (a) The green article is not contiguous, and it is unclear whether its second part continues the purple, green, or even blue article if the text is not used.
97
+
98
+ ![](images/20f6b8fab56cb2365b4180a6bc85431e0f6600dddf98c44f6729571c252efa04.jpg)
99
+ (b) The green article is nested inside the blue article, which causes rule-based and image-based models to either associate the green and blue articles together, or miss the second half of the blue article.
100
+
101
+ ![](images/ebff855c51959797698fb5b92968809d187f27a4ce759fc323ce8d17d9f136eb.jpg)
102
+ (c) The smallest rectangle that contains the green article also contains the purple and red articles.
103
+ Figure 5: Articles that are misassociated with rule-based or image-based methods
104
+
105
+ We associate the (potentially multiple) headline bounding boxes with the (potentially multiple) article bounding boxes and byline boxes that comprise a single article using a combination of layout information and language understanding. A rule-based approach using the document layouts gets many of the associations correct, but misses some difficult cases where article bounding boxes are arranged in complex layouts. Language understanding can be used to associate such articles but must be robust to noise, from errors in layout detection (e.g. from cropping part of a content bounding box or adding part of the line below) and from OCR character recognition errors.
106
+
107
+ Hand annotating a sufficiently large training dataset would have been infeasibly costly. Instead, we devise a set of rules that - while recall is relatively low - have precision above 0.99, as measured on an evaluation set of 3,803 labeled bounding boxes. The algorithm exploits the positioning of article bounding boxes relative to headline boxes (as in Figure 6), first grouping an article bounding box with a headline bounding box if the rules are met and then associating all article bounding boxes grouped with the same headline together. Since precision is above 0.99, the rule generates nearly perfect silver-quality training data.
108
+
109
+ ![](images/ebaa21f7902d144672fc7886651e7fede5d046235c97bf9efc8b1024a2e60cf3.jpg)
110
+ (a) Article bounding boxes that are under the same headline and at the bottom of the page are used to create training data.
111
+
112
+ ![](images/3fc3f1318a6d254b9be45b80a27553894b6506beb41ae78194c88a169192de5f.jpg)
113
+ (b) At inference time, all bounding boxes that are directly under the same headline are associated with each other.
114
+
115
+ ![](images/ed98308b6ecaf1a8ab0eb83bd17610b38cd22ffbd1e22d5c573adb99450a8033.jpg)
116
+
117
+ Figure 6: Illustration of article association pipeline
118
+ ![](images/68619ad38df9f741dff8ec1c7e5e3fb559cdbb0991f345708f3bea7708e65490.jpg)
119
+ (d) This orphan (green bounding box) is compared with the bounding box above it. In this case, it is a separate article without a headline so it is not associated.
120
+
121
+ ![](images/457c96fe7e30b2c0fa2a9e53044b55c4b032e9f50b48b03aa5dd8abd80efcb72.jpg)
122
+ (e) Orphans are also compared with bounding boxes to the right. This orphan (green bounding box) is associated with the bounding box directly above.
123
+
124
+ ![](images/e80a81560ba154385cea266907245707c0a5e0f62aba0dafc05abd92e1163128.jpg)
125
+ (c) Any other article with a headline directly above it is not compared, leaving only a few orphans to be associated.
126
+ (f) The final orphan is compared with two columns to the right, as sometimes articles skip columns. If further columns to the right exist, they are not compared.
127
+
128
+ To train a language model to predict whether article box B follows box A, we embed the first and last 64 tokens of the texts in boxes B and A, respectively, with a RoBERTa base model [14]. The pair is positive when B follows A. The training set includes 12,769 positive associated pairs, with training details described in the supplementary materials. At inference time, we first associate texts using the rule-based approach, described in Figure 6, which has extremely high precision. To improve recall, we then apply the RoBERTa cross-encoder to remaining article boxes that could plausibly be associated, given their coordinates. Texts cannot be followed by a text that appears to the left, as layouts always proceed from left to right, so these combinations are not considered.
129
+
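+ To make the input construction concrete, the following is an illustrative sketch using Hugging Face transformers; `text_a` and `text_b` are hypothetical OCR'ed texts from two article bounding boxes, and the classification head predicts whether B follows A. The exact head and pooling here are assumptions for illustration, not the released implementation.
+
+ ```python
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("roberta-base")
+ model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
+
+ def encode_pair(text_a: str, text_b: str):
+     # Keep the last 64 tokens of A and the first 64 tokens of B: the two
+     # spans that border a potential A -> B transition.
+     tail_a = tokenizer.decode(tokenizer.encode(text_a, add_special_tokens=False)[-64:])
+     head_b = tokenizer.decode(tokenizer.encode(text_b, add_special_tokens=False)[:64])
+     return tokenizer(tail_a, head_b, truncation=True, return_tensors="pt")
+
+ logits = model(**encode_pair("...end of box A", "start of box B...")).logits
+ ```
+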
130
+ <table><tr><td></td><td>(1) F1</td><td>(2) Recall</td><td>(3) Precision</td></tr><tr><td>Full Article Association</td><td>93.7</td><td>88.3</td><td>99.7</td></tr></table>
131
+
132
+ Table 3: This table evaluates the full article association model.
133
+
134
+ We evaluate this method on a hand-labeled dataset of 214 scans. Full details of this dataset are given in the appendix. Table 3 evaluates recall, precision and F1 for associated articles. The F1 of 93.7
135
+
136
+ is high, and precision is extremely high. Errors typically occur when there is an error in the layout analysis or when contents are very similar, e.g., grouping multiple obituaries into a single article.
137
+
138
+ # 4.3 Detecting Reproduced Content
139
+
140
+ Accurately detecting reproduced content can be challenging, as articles were often heavily abridged by local papers to fit within their space constraints and errors in OCR or article association can add significant noise. Table 4 shows examples of reproduced articles.
141
+
142
+ ![](images/34c5bf8edbd10c7d4ca38cedcf7d9da848f6af7577b6ed3a846b33a87d446ebc.jpg)
143
+
144
+ SAN FRANCISCO (#—Presi-dent Eisenhower was reported ready to open the U. N.'s 10th anniversary session today with an important policy declaration.
145
+
146
+ This word came from informed sources as the Big Four foreign ministers prepared for a private huddle tonight to plan the meeting of their chiefs of government in Geneva July 18.
147
+
148
+ The President was scheduled to speak at 6 p.m., EDT. Some diplomats believed his speech would have special significance, coming as it does just a month before the top level talks.
149
+
150
+ The first direct contact between Soviet Foreign Minister V. M. Molotov and the Western leaders was established last night at a dinner given by Colombian Ambassador Eduardo Zuleta Angel There was no information as tc whether any serious discussion: were held, but the two sides dic have a chance to sense the at-]mosphere which will prevail to-]night.. Diplomatic sources reported th Russian likely would press for : Declaration of some sort by the ! 10th anniversary meeting
151
+
152
+ ![](images/662e1b23fe2a86d5e4f529b6b4e24309437c3276a644972421e81859c2b8d865.jpg)
153
+
154
+ SAN FRANCISCO #President Eisenhower was reported ready to open the U_N's 10th anniversary session today with an important policy declaration.
155
+
156
+ This word came from informed sources as the Big Four foreign ministers prepared for a private huddle tonight to plan the meeting of their chiefs of government in Geneva July 18.
157
+
158
+ Some diplomats believed Eisenhower's speech would have special significance, coming as it does just a month before the top-level talks! The first direct contact between (Soviet Foreign Minister V. M.) Molotov and the Western leaders! was established last night "at a dinner given by Colombian Ami bassador Eduardo Zulcta Angel.
159
+
160
+ ! Diplomatic sources reported the Russians likely would press for a declaration of some sort by the 11th anniversary meeting.
161
+
162
+ ![](images/b8f4e0a3f5cdde2d59296d94818a796832de39d0acbc3e3eed73fb4a79e16d92.jpg)
163
+
164
+ "SY MAA
165
+ MARKKRKRELSUN
166
+ SAN FRANCISCO
167
+ #8—President Eisenhower was reported ready to open the U.N.'s 10th anniversary session today with an important! policy declaration,
168
+
169
+ This word came from informed sources as the Big Four foreign ministers prepared for a private huddle tonight to plan the meeting of their chiefs of government in Geneva July 18.
170
+
171
+ Some diplomats believed Eisen.hower's speech would have special! significance, coming as it does just a month before the top-level talks.
172
+
173
+ Mulles to Bo Host The first direct contact between Soviet Foreign Minister V.M. Molotov and the Western leaders was established last night at a dinner given by Colombian Ambassador Eduardo Zuleta Angel.
174
+
175
+ Diplomatic sources reported the Russians likely would press for a declaration of some sort by the 10th anniversary meeting.
176
+
177
+ Table 4: Examples of reproduced articles. Additions are highlighted, and OCR errors are underlined.
178
+
179
+ We use the model developed by [23], who show that a contrastively trained neural MPNet bi-encoder - combined with single linkage clustering of article representations - accurately and cheaply detects reproduced content. This bi-encoder is contrastively trained on a hand-labeled dataset (detailed in the appendix) to create similar representations of articles from the same wire source and dissimilar representations of articles from different underlying sources, using S-BERT's online contrastive loss [9] implementation.
180
+
181
+ We run clustering on the article embeddings by year over all years in our sample. In post-processing, we use a simple set of rules exploiting the dates of articles within clusters to remove content like weather forecasts and legal notices, which are highly formulaic and sometimes cluster together when they contain very similar content (e.g. a similar 5-day forecast) but did not actually come from the same underlying source. We also remove all headline pairs whose Levenshtein edit distance, normalized by the length of the shorter headline in the pair, is below 0.1, as such pairs are exact duplicates up to OCR noise. Training and inference were performed on an A6000 GPU card. More details are provided in the supplementary materials.
182
+
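+ As an illustration, the headline-level near-duplicate filter could look like the sketch below, here using the rapidfuzz package for the edit distance (an assumption; any Levenshtein implementation would do).
+
+ ```python
+ from rapidfuzz.distance import Levenshtein
+
+ def keep_pair(headline_a: str, headline_b: str, threshold: float = 0.1) -> bool:
+     """Drop pairs that are exact duplicates up to OCR noise."""
+     norm_dist = Levenshtein.distance(headline_a, headline_b) / min(
+         len(headline_a), len(headline_b))
+     return norm_dist >= threshold
+ ```
+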
183
+ To evaluate how well the model detects reproduced content, we use a labeled sample of all front page articles appearing in the newspaper corpus for three days in the 1930s and 1970s, taken from [23]. This sample consists of 54,996 positive reproduced article pairs and 100,914,159 negative pairs. The
184
+
185
+ large-scale labeled evaluation dataset was generated using the above pipeline, so the evaluation is inclusive of any errors that result from upstream layout detection, OCR, or article association errors.
186
+
187
+ The neural bi-encoder methods achieve a high adjusted rand index (ARI) of 91.5, compared to 73.7 for an optimal locality-sensitive hashing specification, chosen on the validation set. This shows that our neural methods substantially outperform commonly used sparse methods for detecting reproduced content. The neural bi-encoder is slightly outperformed by a pipeline that adds a re-ranking step, applying a neural cross-encoder to the best bi-encoder matches (ARI of 93.7). We do not implement this method because the cross-encoder does not scale well. In contrast, the bi-encoder pipeline can be scaled to 10 million articles on a single GPU in a matter of hours, using a FAISS [13] backend.
188
+
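+ A minimal sketch of the FAISS-backed retrieval step is shown below; `embeddings` is a hypothetical (n, d) float32 array of L2-normalised article embeddings from the bi-encoder.
+
+ ```python
+ import faiss
+
+ d = embeddings.shape[1]
+ index = faiss.IndexFlatIP(d)  # inner product equals cosine on unit vectors
+ index.add(embeddings)
+ # Each article is queried against the index; neighbours above the clustering
+ # threshold become candidate edges for single-linkage clustering.
+ sims, neighbors = index.search(embeddings, 11)  # self + 10 nearest neighbours
+ ```
+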
189
+ <table><tr><td></td><td>Neural</td><td>Non-Neural</td></tr><tr><td>Most scalable</td><td>Bi-encoder (91.5)</td><td>LSH (73.7)</td></tr><tr><td>Less scalable</td><td>Re-ranking (93.7)</td><td>N-gram overlap (75.0)</td></tr></table>
190
+
191
+ Table 5: The numbers in parentheses are the Adjusted Rand Index for four different models - a bi-encoder, a "re-ranking" strategy that combines a bi- and cross-encoder, locality-sensitive hashing (LSH), and $N$ -gram overlap. Hyperparameters were chosen on the NEWS-COPY validation set, and all models were evaluated on the NEWS-COPY test set.
192
+
193
+ An error analysis is provided in [23]. Errors typically consist of articles about the same story from different wire services (e.g. the Associated Press and the United Press) or updates to a story as new events unfolded. Both types of errors will plausibly still lead to informative semantic similarity pairs.
194
+
195
+ # 5 Benchmarking
196
+
197
+ We benchmark HEADLINES using a variety of different language models and the MTEB clustering task. This task embeds texts using different base language models and then uses $k$ - the number of clusters in the ground truth data - for k-means clustering. Following MTEB, we score the model using the v-measure [22]. We should note that real-world problems are often framed as clustering tasks - rather than as classification tasks - precisely because $k$ is unknown; using $k$ from the ground truth therefore makes the task easier. Nevertheless, we examine this task to allow for comparison with the rest of the literature.
198
+
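+ The evaluation loop is straightforward; below is a sketch using scikit-learn, where `embeddings` and `labels` are hypothetical arrays holding one year's embedded headlines and their ground-truth cluster ids.
+
+ ```python
+ from sklearn.cluster import KMeans
+ from sklearn.metrics import v_measure_score
+
+ k = len(set(labels))  # number of clusters in the ground truth
+ predicted = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings)
+ print(v_measure_score(labels, predicted))
+ ```
+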
199
+ ![](images/cba20a5cab198c5e7cd7ac243954d5f03ac255f1002eba10c364a6dcf22e4942.jpg)
200
+ Figure 7: This figure benchmarks HEADLINES on the MTEB clustering task. The x-axis shows the year that the sample was taken from and the y-axis gives the v-measure.
201
+
202
+ Figure 7 plots the results of this benchmarking exercise. MTEB benchmarks clustering on ArXiv, BioRxiv, MedRxiv, Reddit, StackExchange, and Twenty Newsgroups. Texts are labeled with their classification (e.g., fields like ComputerVision for the ArXiv datasets; the subreddit for Reddit). The
203
+
204
+ best average v-score across these datasets, from MPNet, is 43.69. The best average v-score across decades for HEADLINES, from ST5-XXL, is around 78. This difference is likely to reflect, at least in part, that our cluster labels are less noisy, since texts in the same cluster summarize the same content. In contrast, titles of Reddit posts in the same subreddit may be only loosely linked to each other, and many could be within the domain of another subreddit cluster. While a user happened to post in one subreddit, another user could have reasonably made the same titled post in a different subreddit. The under-identification of the clustering tasks for some texts in the MTEB datasets is suggested by the very low v-scores across state-of-the-art language models. Overall, this suggests the high quality of clusters in HEADLINES relative to many web text datasets. Yet there is still ample scope for improvement to the state-of-the-art model.
205
+
206
+ # 6 Limitations and Recommended Usage
207
+
208
+ HEADLINES contains some transcription errors. For working with historical texts, these are more a feature than a bug, as most historical texts are transcribed and also contain various OCR errors. Training a model on transcribed texts likely makes it more robust to transcription errors at inference time. However, researchers requiring completely clean texts should seek another corpus.
209
+
210
+ HEADLINES contains historical language that reflects the semantics and cultural biases of many thousands of local newspaper editors. This is a distinguishing feature of HEADLINES that is core to many potential applications. We do not attempt to filter texts with antiquated terms or that may be considered offensive, as this would invalidate the use of the dataset for studying semantic change and historical contexts. At the same time, this makes HEADLINES less suited for tasks that require texts that fully conform to current cultural standards or semantic norms. For these reasons, we recommend against the use of HEADLINES for training generative models. Rather, with nearly 400M positive semantic similarity pairs spanning much of the 20th century, it can plausibly play an important role in facilitating the application of large language models to historical texts.
211
+
212
+ # Acknowledgements
213
+
214
+ Funding was provided by the Harvard Data Science Initiative, Harvard Catalyst, and Microsoft Azure compute credits. We thank Luca D'Amico-Wong for excellent research assistance.
215
+
216
+ # References
217
+
218
+ [1] AMBA HOMBAIAH, S., CHEN, T., ZHANG, M., BENDERSKY, M., AND NAJORK, M. Dynamic language models for continuously evolving content. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (2021), pp. 2514-2524.
219
+ [2] BERT, S. stackexchange, 2021.
220
+ [3] CHEN, X., FANG, H., LIN, T.-Y., VEDANTAM, R., GUPTA, S., DOLLÁR, P., AND ZITNICK, C. L. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325 (2015).
221
+ [4] EHRMANN, M., HAMDI, A., PONTES, E. L., ROMANELLO, M., AND DOUCET, A. Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021).
222
+ [5] ETHAYARAJH, K. How contextual are contextualized word representations? comparing the geometry of bert, elmo, and gpt-2 embeddings. arXiv preprint arXiv:1909.00512 (2019).
223
+ [6] FADER, A., ZETTLEMOYER, L., AND ETZIONI, O. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining (2014), pp. 1156-1165.
224
+ [7] FALKNER, S., KLEIN, A., AND HUTTER, F. BOHB: Robust and efficient hyperparameter optimization at scale. In Proceedings of the 35th International Conference on Machine Learning (10-15 Jul 2018), J. Dy and A. Krause, Eds., vol. 80 of Proceedings of Machine Learning Research, PMLR, pp. 1437-1446.
225
+ [8] GUARNERI, J. Newsprint Metropolis. University of Chicago Press, 2017.
226
+ [9] HADSELL, R., CHOPRA, S., AND LECUN, Y. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06) (2006), vol. 2, IEEE, pp. 1735-1742.
227
+ [10] HE, K., GKIOXARI, G., DOLLÁR, P., AND GIRSHICK, R. Mask r-cnn. Proceedings of the IEEE international conference on computer vision (2017), 2961-2969.
228
+ [11] HENDERSON, M., BUDZIANOWSKI, P., CASANUEVA, I., COOPE, S., GERZ, D., KUMAR, G., MRKSIC, N., SPITHOURAKIS, G., SU, P.-H., VULIC, I., ET AL. A repository of conversational datasets. In Proceedings of the First Workshop on NLP for Conversational AI (2019), pp. 1-10.
229
+ [12] IYER, S., DANDEKAR, N., AND CSERNAI, K. First quora dataset release: Question pairs, 2017.
230
+ [13] JOHNSON, J., DOUZE, M., AND JÉGOU, H. Billion-scale similarity search with gpus. IEEE Transactions on Big Data 7, 3 (2019), 535-547.
231
+ [14] LIU, Y., OTT, M., GOYAL, N., DU, J., JOSHI, M., CHEN, D., LEVY, O., LEWIS, M., ZETTLEMOYER, L., AND STOYANOV, V. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019).
232
+ [15] LOUREIRO, D., BARBIERI, F., NEVES, L., ANKE, L. E., AND CAMACHO-COLLADOS, J. Timelms: Diachronic language models from twitter. arXiv preprint arXiv:2202.03829 (2022).
233
+ [16] MANJAVACAS, E., AND FONTEYN, L. Macberth: Development and evaluation of a historically pre-trained language model for english (1450-1950). In Proceedings of the Workshop on Natural Language Processing for Digital Humanities (2021), pp. 23-36.
234
+ [17] MANJAVACAS, E., AND FONTEYN, L. Adapting vs. pre-training language models for historical languages. Journal of Data Mining & Digital Humanities, Digital humanities in languages (2022).
235
+ [18] MUENNIGHOFF, N., TAZI, N., MAGNE, L., AND REIMERS, N. Mteb: Massive text embedding benchmark. arXiv preprint arXiv:2210.07316 (2022).
236
+
237
+ [19] OCKERBLOOM, J. M. Everybody's library questions: Newspaper copyrights, notices, and renewals, 2019.
238
+ [20] RASTAS, I., RYAN, Y. C., TIIHONEN, I., QARAEI, M., REPO, L., BABBAR, R., MÄKELÄ, E., TOLONEN, M., AND GINTER, F. Explainable publication year prediction of eighteenth century texts with the bert model. In Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change (2022), pp. 68-77.
239
+ [21] REIMERS, N., AND GUREVYCH, I. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084 (2019).
240
+ [22] ROSENBERG, A., AND HIRSCHBERG, J. V-measure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL) (2007), pp. 410-420.
241
+ [23] SILCOCK, E., D'AMICO-WONG, L., YANG, J., AND DELL, M. Noise-robust de-duplication at scale. International Conference on Learning Representations (2023).
242
+ [24] SONG, K., TAN, X., QIN, T., LU, J., AND LIU, T.-Y. Mpnet: Masked and permuted pretraining for language understanding. Advances in Neural Information Processing Systems 33 (2020), 16857-16867.
243
+ [25] WANG, T., AND ISOLA, P. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. International Conference on Machine Learning 119 (2020), 9929-9939.
244
+ [26] YOUNG, P., LAI, A., HODOSH, M., AND HOCKENMAIER, J. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics 2 (2014), 67-78.
245
+
246
+ # Appendices
247
+
248
+ # A-1 Methods to Associate Articles
249
+
250
+ Figure 6 in the main text illustrates the full article association procedure.
251
+
252
+ First, we use a rule-based algorithm to associate article bounding boxes that are under the same headline, as these are part of the same article with extremely high probability. Algorithm 1 gives pseudocode for this method. We set the parameters as $P_{S} = 100$ , $P_{T} = 20$ , $P_{B} = 50$ .
253
+
254
+ For training data, where we want article pairs that are not only part of the same article but also appear in the given order, we further narrow down the pairs. Specifically, we use only those pairs which are horizontally next to each other and which have no other bounding boxes below them, as for these pairs we can guarantee that the two bounding boxes follow directly after one another (whereas for other article bounding boxes that share a headline, there may be a third bounding box in between). Algorithm 2 shows pseudocode for this procedure, in which we used $P_{C} = 5$; it is further illustrated in panel (a) of Figure 6 in the main text.
255
+
256
+ For hard negatives, we used article boxes under the same headline in reverse reading order (right to left). For standard negatives, we took pairs of articles on the same page, where B was above and to the left of A, as articles do not read from right to left. One twelfth of our training data were positive pairs, another twelfth were hard negative pairs and the remainder were standard negative pairs. This outperformed a more balanced training sample.
257
+
258
+ We use this dataset to fine-tune a cross-encoder using a RoBERTa base model [14]. We used a Bayesian search algorithm [7] to find optimal hyperparameters on one tenth of our training data (limited compute prevented us from running this search with the full dataset), which led to a learning rate of 1.7e-5, with a batch size of 64 and $29.57\%$ warm up. We trained for 26 epochs with an AdamW optimizer, optimizing a binary cross-entropy loss.
259
+
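+ Condensing the setup above into code, a hypothetical PyTorch training loop could look as follows; `train_loader` is an assumed DataLoader yielding (tokenised pair, 0/1 label) batches, and the single-logit head with BCE loss is one plausible realisation of the description above.
+
+ ```python
+ import torch
+ from torch.optim import AdamW
+ from transformers import (AutoModelForSequenceClassification,
+                           get_linear_schedule_with_warmup)
+
+ model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=1)
+ optimizer = AdamW(model.parameters(), lr=1.7e-5)
+ total_steps = 26 * len(train_loader)
+ scheduler = get_linear_schedule_with_warmup(
+     optimizer, num_warmup_steps=int(0.2957 * total_steps),
+     num_training_steps=total_steps)
+ loss_fn = torch.nn.BCEWithLogitsLoss()
+
+ for epoch in range(26):
+     for inputs, labels in train_loader:
+         logits = model(**inputs).logits.squeeze(-1)  # one logit per pair
+         loss = loss_fn(logits, labels.float())
+         loss.backward()
+         optimizer.step()
+         scheduler.step()
+         optimizer.zero_grad()
+ ```
+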
260
+ We evaluate these methods on a hand-labeled dataset of 214 scans, randomly selected from 1968 and 1955. These scans were labeled by a highly-trained undergraduate research assistant. Summary statistics of this dataset are given in Table A-1 and evaluation results are given in the main text.
261
+
262
+ <table><tr><td>Scan count</td><td>Article bounding boxes</td><td>Headline bounding boxes</td><td>Article-article associations</td></tr><tr><td>214</td><td>3,803</td><td>2,805</td><td>1,851</td></tr></table>
263
+
264
+ Table A-1: Descriptive statistics of the hand-labeled article association evaluation data.
265
+
266
+ # A-2 Methods to Detect Reproduced Content
267
+
268
+ To detect reproduced content, we use the contrastively trained bi-encoder model developed by [23], which is trained to learn similar representations for reproduced articles and dissimilar representations for non-reproduced articles. This model is based on an S-BERT MPNET model [21, 24] and is fine-tuned on a hand-labelled dataset of articles from the same underlying wire source, using S-BERT's online contrastive loss [9] implementation, with a 0.2 margin and cosine similarity as the distance metric. The learning rate is 2e-5 with $100\%$ warm up and a batch size of 32. It uses an AdamW optimizer, and the model is trained for 16 epochs. This bi-encoder is trained and evaluated on a hand-labeled dataset, which is detailed in Table A-2. The results of this evaluation are given in the main text.
269
+
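+ A sketch of this fine-tuning setup, using the sentence-transformers training API, is given below; `pairs` is a hypothetical list of (article_a, article_b, label) tuples with label 1 for articles from the same wire source, and the base checkpoint name is an assumption.
+
+ ```python
+ from torch.utils.data import DataLoader
+ from sentence_transformers import InputExample, SentenceTransformer, losses
+
+ model = SentenceTransformer("all-mpnet-base-v2")  # assumed MPNet checkpoint
+ examples = [InputExample(texts=[a, b], label=y) for a, b, y in pairs]
+ loader = DataLoader(examples, shuffle=True, batch_size=32)
+ loss = losses.OnlineContrastiveLoss(
+     model, distance_metric=losses.SiameseDistanceMetric.COSINE_DISTANCE,
+     margin=0.2)
+ model.fit(train_objectives=[(loader, loss)], epochs=16,
+           warmup_steps=16 * len(loader),  # 100% warm up
+           optimizer_params={"lr": 2e-5})
+ ```
+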
270
+ To create clusters from the bi-encoder embeddings, we use highly scalable single-linkage clustering, with a cosine similarity threshold of 0.94. We build a graph using articles as nodes, and add edges if the cosine similarity is above this threshold. As edge weights we use the negative exponential of the difference in dates (in days) between the two articles. We then apply Leiden community detection to the graph to control false positive edges that can otherwise merge disparate groups of articles.
271
+
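+ The following is a simplified sketch of the graph construction and Leiden step, assuming the python-igraph and leidenalg packages; `edges` is a hypothetical list of (i, j, days_apart) tuples for article pairs whose cosine similarity already exceeds 0.94, and `num_articles` is the node count.
+
+ ```python
+ import math
+ import igraph as ig
+ import leidenalg
+
+ graph = ig.Graph(n=num_articles)
+ graph.add_edges([(i, j) for i, j, days_apart in edges])
+ # Negative exponential of the date difference (in days) as the edge weight.
+ graph.es["weight"] = [math.exp(-days_apart) for i, j, days_apart in edges]
+ partition = leidenalg.find_partition(
+     graph, leidenalg.ModularityVertexPartition, weights="weight")
+ clusters = list(partition)  # each element is a list of article node ids
+ ```
+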
272
+ We further remove clusters that have over 50 articles and contain articles with greater than five different dates. We also remove clusters that contain over 50 articles, when the number of articles is more than double the number of unique newspapers from which these articles are sourced. This
273
+
274
+ Algorithm 1 Rule-based association of article bounding boxes
+ INPUT: $b_{1},\ldots ,b_{n}\in B$: set of bounding boxes that appear on the same scan, with their coordinates, denoted $left(b_{i})$, $right(b_{i})$, $top(b_{i})$, $bottom(b_{i})$, and type (headline, article, byline, etc.), denoted $type(b_{i})$
+ PARAMETERS:
+ $W$: width of scan
+ $H$: height of scan
+ $P_S$: fraction of width, for creating side margin
+ $P_T$: fraction of height, for creating top margin
+ $P_B$: fraction of height, for creating bottom margin
+ OUTPUT: ArticleArticlePairs $= \{(b_i,b_j)\in B\times B \mid b_i,b_j$ predicted to be part of the same full article and $type(b_{i}) = type(b_{j}) =$ article$\}$
+ 1: Initialise: $M_S = W / P_S$ (side margin), $M_B = H / P_B$ (bottom margin), $M_T = H / P_T$ (top margin), MatchedHeadlines $= \{\}$, HeadlineArticlePairs $= \{\}$, ArticleArticlePairs $= \{\}$
+ 2: for all $b_{0}$ in $B$ where $type(b_0)$ is article do
+ 3: Create $B_0\subset B$ by removing: all bounding boxes of type byline; $b_{0}$ itself; all bounding boxes that do not share at least $M_S$ of the horizontal axis with $b_0$; all bounding boxes whose bottom is more than $M_B$ below the top of $b_{0}$; and all bounding boxes whose bottom is more than $M_T$ above the top of $b_{0}$
+ 4: if $B_0$ is not empty then
+ 5: Let $b_{1}$ be the element of $B_{0}$ that has the lowest bottom coordinate
+ 6: if $type(b_{1})$ is headline then
+ 7: MatchedHeadlines $=$ MatchedHeadlines $\cup \{b_1\}$
+ 8: HeadlineArticlePairs $=$ HeadlineArticlePairs $\cup \{(b_0,b_1)\}$
+ 9: end if
+ 10: end if
+ 11: end for
+ 12: for all $b_{h}$ in MatchedHeadlines do
+ 13: Let $H_{1}\subset$ HeadlineArticlePairs be all pairs that contain that headline, $b_{h}$
+ 14: if $H_{1}$ has at least two elements then
+ 15: Let $A$ be all the bounding boxes of type article from the pairs in $H_{1}$
+ 16: Let $C$ be all combinations of 2 elements of $A$
+ 17: ArticleArticlePairs $=$ ArticleArticlePairs $\cup C$
+ 18: end if
+ 19: end for
278
+
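+ For reference, a compact Python rendering of Algorithm 1 is sketched below. It assumes image coordinates in which y grows downward and represents each box as a dict with left/right/top/bottom/type keys; these conventions are our assumptions for illustration, not part of the original pseudocode.
+
+ ```python
+ from itertools import combinations
+
+ def horizontal_overlap(a, b):
+     return min(a["right"], b["right"]) - max(a["left"], b["left"])
+
+ def associate_articles(boxes, m_s, m_t, m_b):
+     """Rule-based association of article boxes that share a headline."""
+     headline_groups = {}  # headline id -> article boxes grouped under it
+     for b0 in (b for b in boxes if b["type"] == "article"):
+         candidates = [
+             b for b in boxes
+             if b is not b0 and b["type"] != "byline"
+             and horizontal_overlap(b, b0) >= m_s   # shares horizontal span
+             and b["bottom"] <= b0["top"] + m_b     # not too far below b0's top
+             and b["bottom"] >= b0["top"] - m_t     # not too far above b0's top
+         ]
+         if candidates:
+             # Lowest bottom edge on the page (largest value, y growing down).
+             b1 = max(candidates, key=lambda b: b["bottom"])
+             if b1["type"] == "headline":
+                 headline_groups.setdefault(id(b1), []).append(b0)
+     # Articles grouped under the same headline are then paired off.
+     return [pair for group in headline_groups.values() if len(group) >= 2
+             for pair in combinations(group, 2)]
+ ```
+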
279
+ Algorithm 2 Selection of ordered article pairs
280
+ INPUT: ArticleArticlePairs, $B$, from Algorithm 1.
281
+ PARAMETERS:
282
+ $P_C$ : fraction of column width, for creating margin
283
+ OUTPUT: OrderedPairs $\subset$ ArticleArticlePairs
284
+ 1: Initialise: OrderedPairs = {}
285
+ 2: for $p$ in ArticleArticlePairs do
286
+ 3: Let $p_l$ be the element of $p$ with the furthest left coordinate
287
+ 4: Let $p_r$ be the other element
288
+ 5: if $left(p_r)$ is not further to the right of $right(p_l)$ than $width(p_l) / P_C$ then
289
+ 6: if there are no other bounding boxes below $p_l$ then
290
+ 7: OrderedPairs = OrderedPairs $\cup$ {p}
291
+ 8: end if
292
+ 9: end if
293
+ 10: end for
294
+
295
+ removes clusters of content that are correctly clustered in the sense of being based on the same underlying source, but are not useful for the HEADLINES dataset. For example, an advertisement
296
+
297
+ <table><tr><td></td><td>Positive Pairs</td><td>Negative Pairs</td><td>Reproduced Articles</td><td>Singleton Articles</td><td>Total Articles</td></tr><tr><td colspan="6">Training Data</td></tr><tr><td>Training</td><td>36,291</td><td>37,637</td><td>891</td><td>-</td><td>7,728</td></tr><tr><td>Validation</td><td>3,042</td><td>3,246</td><td>20</td><td>-</td><td>283</td></tr><tr><td colspan="6">Full Day Evaluation</td></tr><tr><td>Validation</td><td>28,547</td><td>12,409,031</td><td>447</td><td>2,162</td><td>4,988</td></tr><tr><td>Test</td><td>54,996</td><td>100,914,159</td><td>1,236</td><td>8,046</td><td>14,211</td></tr><tr><td>Full Dataset</td><td>122,876</td><td>113,364,073</td><td>2,594</td><td>10,208</td><td>27,210</td></tr></table>
298
+
299
+ Table A-2: Summary statistics of training and evaluation data for detecting duplicate content.
300
+
301
+ (misclassified as an article due to an article-like appearance) might be repeated by the same newspaper on multiple different dates and would be removed by these rules; similarly, weather forecasts can be very near duplicates across space and time, forming large clusters.
302
+
303
+ # A-3 A Summary of Copyright Law for Works Published in the United States
304
+
305
+ <table><tr><td>Date of Publication</td><td>Conditions</td><td>Copyright Term</td></tr><tr><td colspan="3">Public Domain</td></tr><tr><td>Anytime</td><td>Works prepared by an officer/employee of the U.S. Government as part of their official duties</td><td>None</td></tr><tr><td>Before 1928</td><td>None</td><td>None. Copyright expired.</td></tr><tr><td>1928 through 1977</td><td>Published without a copyright notice</td><td>None. Failure to comply with required formalities</td></tr><tr><td>1978 to 1 March 1989</td><td>Published without notice and without subsequent registration within 5 years</td><td>None. Failure to comply with required formalities</td></tr><tr><td>1928 through 1963</td><td>Published with notice but copyright was not renewed</td><td>None. Copyright expired</td></tr><tr><td colspan="3">Copyrighted</td></tr><tr><td>1978 to 1 March 1989</td><td>Published without notice, but with subsequent registration within 5 years</td><td>70 (95) years after the death of author (corporate author)</td></tr><tr><td>1928 through 1963</td><td>Published with notice and the copyright was renewed</td><td>95 years after publication</td></tr><tr><td>1964 through 1977</td><td>Published with notice</td><td>95 years after publication</td></tr><tr><td>1978 to 1 March 1989</td><td>Created after 1977 and published with notice</td><td>70 (95) years after the death of author (corporate author) or 120 years after creation, if earlier</td></tr><tr><td>1978 to 1 March 1989</td><td>Created before 1978 and first published with notice in the specified period</td><td>The greater of the term specified in the previous entry or 31 December 2047</td></tr><tr><td>From 1 March 1989 through 2002</td><td>Created after 1977</td><td>70 (95) years after the death of author (corporate author) or 120 years after creation, if earlier</td></tr><tr><td>From 1 March 1989 through 2002</td><td>Created before 1978 and first published in this period</td><td>The greater of the term specified in the previous entry or 31 December 2047</td></tr><tr><td>After 2002</td><td>None</td><td>70 (95) years after the death of author (corporate author) or 120 years after creation, if earlier</td></tr></table>
306
+
307
+ Table A-3: This table summarizes U.S. copyright law, based on a similar table produced by the Cornell libraries. For concision, we focus on works initially published in the United States. A variety of other cases are also covered at https://guides.library.cornell.edu/copyright.
amassivescalesemanticsimilaritydatasetofhistoricalenglish/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:053a8d1c97cddc1c23a982bddf0fbad5d5b95bd71868c95b84e5d291191b99be
3
+ size 1070871
amassivescalesemanticsimilaritydatasetofhistoricalenglish/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b42e4fba46a13b20811d377b2cb3a66abe104e582bf917aa8b78daa2f7e5f1e6
3
+ size 375507
ameasuretheoreticaxiomatisationofcausality/7f33c9b5-94b0-4aba-b49e-f07f33b2a4ac_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3c8cf78139c198532e9fba04daeb601c2054e64b8de12b0e90f9719fe03411bc
3
+ size 228400
ameasuretheoreticaxiomatisationofcausality/7f33c9b5-94b0-4aba-b49e-f07f33b2a4ac_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b8d4d5573110765176b15c4148c2f8306020afab95fffa22e28154f4951a3788
3
+ size 272704
ameasuretheoreticaxiomatisationofcausality/7f33c9b5-94b0-4aba-b49e-f07f33b2a4ac_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d416e46e7b450509d7dc28eeb07f9399d1a848ba22fe9245b5bd5a1e92a8c27d
3
+ size 556310
ameasuretheoreticaxiomatisationofcausality/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ameasuretheoreticaxiomatisationofcausality/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cff4f5b6f2520e14c2dd172e0c53469aa082a13095cbfa8c893cd4ea664b9b48
3
+ size 808392
ameasuretheoreticaxiomatisationofcausality/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4b5ed3909bb06b87bd927b4f9af5322d63d6f4cc0779485001d2cdd1670d7ae1
3
+ size 2143949
ametadatadrivenapproachtounderstandgraphneuralnetworks/aec2d363-b0ba-4098-9d3d-7d04bf3f82e5_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0f7203153454597ed685be77f6339d51b290c750ed3c48c3913972ad182d57cd
3
+ size 139603
ametadatadrivenapproachtounderstandgraphneuralnetworks/aec2d363-b0ba-4098-9d3d-7d04bf3f82e5_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:293861a6a9b4f9b2f37ec967eeae4d073ca9929141ca81648d4d24ad2aa29acf
3
+ size 165959
ametadatadrivenapproachtounderstandgraphneuralnetworks/aec2d363-b0ba-4098-9d3d-7d04bf3f82e5_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4d9597010462aba0f30beb3ffc5ccf7df224a9ba41c25cf8b26f087d7768357f
3
+ size 414717
ametadatadrivenapproachtounderstandgraphneuralnetworks/full.md ADDED
@@ -0,0 +1,663 @@
1
+ # A Metadata-Driven Approach to Understand Graph Neural Networks
2
+
3
+ Ting Wei Li
4
+
5
+ University of Michigan tingwl@umich.edu
6
+
7
+ Qiaozhu Mei
8
+
9
+ University of Michigan qmei@umich.edu
10
+
11
+ Jiaqi Ma
12
+
13
+ University of Illinois Urbana-Champaign jiaqima@illinois.edu
14
+
15
+ # Abstract
16
+
17
+ Graph Neural Networks (GNNs) have achieved remarkable success in various applications, but their performance can be sensitive to specific data properties of the graph datasets they operate on. Current literature on understanding the limitations of GNNs has primarily employed a model-driven approach that leverages heuristics and domain knowledge from network science or graph theory to model the GNN behaviors, which is time-consuming and highly subjective. In this work, we propose a metadata-driven approach to analyze the sensitivity of GNNs to graph data properties, motivated by the increasing availability of graph learning benchmarks. We perform a multivariate sparse regression analysis on the metadata derived from benchmarking GNN performance across diverse datasets, yielding a set of salient data properties. To validate the effectiveness of our data-driven approach, we focus on one identified data property, the degree distribution, and investigate how this property influences GNN performance through theoretical analysis and controlled experiments. Our theoretical findings reveal that datasets with a more balanced degree distribution exhibit better linear separability of node representations, thus leading to better GNN performance. We also conduct controlled experiments using synthetic datasets with varying degree distributions, and the results align well with our theoretical findings. Collectively, both the theoretical analysis and controlled experiments verify that the proposed metadata-driven approach is effective in identifying critical data properties for GNNs.
18
+
19
+ # 1 Introduction
20
+
21
+ Graph Neural Networks (GNNs), as a broad family of graph machine learning models, have gained increasing research interests in recent years. However, unlike the ResNet model [14] in computer vision or the Transformer model [36] in natural language processing, there has not been a dominant GNN architecture that is universally effective across a wide range of graph machine learning tasks. This may be attributed to the inherently diverse nature of graph-structured data, which results in the GNN performance being highly sensitive to specific properties of the graph datasets. Consequently, GNNs that demonstrate high performance on certain benchmark datasets often underperform on others with distinct properties. For example, early GNNs have been shown to exhibit degraded performance when applied to non-homophilous graph datasets, where nodes from different classes are highly interconnected and mixed [45, 46, 32, 11, 9].
22
+
23
+ However, it is non-trivial to identify and understand critical graph data properties that are highly influential on GNN performance. Current literature primarily employs what we term as a model-driven approach, which attempts to model GNN performance using specific heuristics or domain knowledge derived from network science or graph theory [41, 45]. Although this approach can offer an in-depth understanding of GNN performance, it can also be time-consuming, subjective, and may not fully capture the entire spectrum of relevant data properties.
24
+
25
+ To address these limitations and complement the model-driven approach, we propose a metadata-driven approach to identify critical data properties affecting GNN performance. With the increasing availability of diverse benchmark datasets for graph machine learning [16, 27], we hypothesize that critical graph data properties can be inferred from the benchmarking performance of GNNs on these datasets, which can be viewed as the metadata of the datasets. More concretely, we carry out a multivariate sparse regression analysis on the metadata obtained from large-scale benchmark experiments [27] involving multiple GNN models and a variety of graph datasets. Through this regression analysis, we examine the correlation between GNN performance and the data properties of each dataset, thereby identifying a set of salient data properties that significantly influence GNN performance.
26
+
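+ As a rough illustration of this kind of analysis (not the exact specification used here), a multivariate sparse regression could be run with scikit-learn as follows; `X` (datasets x data properties) and `Y` (datasets x GNN models' performance) are hypothetical metadata matrices, and the regularization strength is an arbitrary placeholder.
+
+ ```python
+ import numpy as np
+ from sklearn.linear_model import MultiTaskLasso
+ from sklearn.preprocessing import StandardScaler
+
+ X_std = StandardScaler().fit_transform(X)       # put data properties on a common scale
+ reg = MultiTaskLasso(alpha=0.1).fit(X_std, Y)   # alpha is a hypothetical choice
+ # Properties with a nonzero coefficient for any GNN model are candidate
+ # "salient" data properties.
+ salient = np.flatnonzero(np.abs(reg.coef_).sum(axis=0))
+ ```
+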
27
+ To validate the effectiveness of the proposed metadata-driven approach, we further focus on a specific salient data property, degree distribution, identified from the regression analysis, and investigate the mechanism by which this data property affects GNN performance. In particular, our regression analysis reveals a decline in GNN performance as the degree distribution becomes more imbalanced. We delve deeper into this phenomenon through a theoretical analysis and a controlled experiment.
28
+
29
+ We initiate our investigation with a theoretical analysis of the GNN performance under the assumption that the graph data is generated by a Degree-Corrected Contextual Stochastic Block Model (DC-CSBM). Here, we define DC-CSBM by combining and generalizing the Contextual Stochastic Block Model [4] and the Degree-Corrected Stochastic Block Model [17]. Building upon the analysis by Baranwal et al. [3], we establish a novel theoretical result on how the degree distribution impacts the linear separability of the GNN representations and, subsequently, the GNN performance. Within the DC-CSBM context, our theory suggests that a more imbalanced degree distribution leads to fewer nodes being linearly separable in their GNN representations, thus negatively impacting GNN performance.
30
+
31
+ Complementing our theoretical analysis, we conduct a controlled experiment, evaluating GNN performance on synthetic graph datasets with varying degree distributions while holding other properties fixed. Remarkably, we observe a consistent decline in GNN performance as the Gini coefficient of the degree distribution, which reflects its imbalance, increases. This observation further corroborates the findings of our metadata-driven regression analysis.
32
+
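+ For concreteness, the Gini coefficient of a degree distribution (0 for perfectly balanced degrees, approaching 1 for highly imbalanced ones) can be computed as in the sketch below, using the standard formula over the sorted degree sequence.
+
+ ```python
+ import numpy as np
+
+ def degree_gini(degrees):
+     """Gini coefficient of a node degree sequence."""
+     d = np.sort(np.asarray(degrees, dtype=float))
+     n = d.size
+     ranks = np.arange(1, n + 1)
+     return (2 * np.sum(ranks * d) - (n + 1) * d.sum()) / (n * d.sum())
+ ```
+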
33
+ In summary, our contribution in this paper is two-fold. Firstly, we introduce a novel metadata-driven approach to identify critical graph data properties affecting GNN performance and demonstrate its effectiveness through a case study on a specific salient data property identified by our approach. Secondly, we develop an in-depth understanding of how the degree distribution of graph data influences GNN performance through both a novel theoretical analysis and a carefully controlled experiment, which is of interest to the graph machine learning community in its own right.
34
+
35
# 2 Related Work

# 2.1 Analysis on the Limitations of GNNs

There has been a wealth of existing literature investigating the limitations of GNNs. However, most of the previous works employ the model-driven approach. Below we summarize a few well-known limitations of GNNs while acknowledging that an exhaustive review of the literature is impractical. Among the limitations identified, GNNs have been shown to be sensitive to the extent of homophily in graph data, and applying GNNs to non-homophilous data often results in degraded performance [1, 9, 23, 46, 45]. In addition, over-smoothing, a phenomenon where GNNs lose their discriminative power with deeper layers [20, 34, 6], is a primary concern particularly for node-level prediction tasks where distinguishing the nodes within the graph is critical. Further, when applied to graph-level prediction tasks, GNNs are limited by their ability to represent and model specific functions or patterns on graph-structured data, an issue often referred to as the expressiveness problem of GNNs [41, 30, 25, 43]. Most of these limitations are understood through a model-driven approach, which offers in-depth insights but is time-consuming and highly subjective. In contrast, this paper presents a metadata-driven approach, leveraging metadata from benchmark datasets to efficiently screen through a vast array of data properties.

# 2.2 Data-Driven Analysis in Graph Machine Learning

With the increasing availability of graph learning benchmarks, there have been several recent studies that leverage diverse benchmarks for data-driven analysis. For example, Liu et al. [24] present a principled pipeline to taxonomize benchmark datasets: they apply a number of different perturbation methods to each dataset, obtain the sensitivity profile of the resulting GNN performance on the perturbed datasets, and perform hierarchical clustering on these sensitivity profiles to group statistically similar datasets. However, this study only aims to categorize datasets instead of identifying salient data properties that influence GNN performance. Ma et al. [27] establish the Graph Learning Indexer (GLI) library, which curates a large collection of graph learning benchmarks and GNN models, and conduct a large-scale benchmark study. We obtain our metadata from their benchmarks. Palowitch et al. [31] introduce the GraphWorld library, which can generate diverse synthetic graph datasets with various properties. These synthetic datasets can be used to test GNN models through controlled experiments. In this paper, we use this library to verify the effectiveness of the identified critical data properties.

# 2.3 Impact of Node Degrees on GNN Performance

There have also been a few studies investigating the impact of node degrees on GNNs. In particular, it has been observed that within a single graph dataset, there tends to be an accuracy discrepancy among nodes with varying degrees [35, 22, 44, 39]. Typically, GNN predictions on nodes with lower degrees tend to have lower accuracy. However, our finding that the Gini coefficient of the degree distribution is a strong indicator of GNN performance is novel. Furthermore, this indicator describes dataset-level characteristics, allowing comparison of GNN performance across different graph datasets. In addition, this paper presents a novel theoretical analysis that directly relates the degree distribution to the generalization performance of GNNs.

# 3 A Metadata-Driven Analysis on GNNs

# 3.1 Understanding GNNs with Metadata

Motivation. Real-world graph data are heterogeneous and incredibly diverse, contrasting with images or texts that often possess common structures or vocabularies. The inherent diversity of graph data makes it particularly challenging, if not infeasible, to have one model rule all tasks and datasets in the graph machine learning domain. Indeed, specific types of GNN models often only perform well on a selected set of graph learning datasets. For example, the expressive power of GNNs [41] is primarily relevant to graph-level prediction tasks rather than node-level tasks: higher-order GNNs with improved expressive power are predominantly evaluated on graph-level prediction tasks [30, 41]. As another example, several early GNNs such as Graph Convolutional Networks (GCN) [19] or Graph Attention Networks (GAT) [37] only work well when the graphs exhibit homophily [45]. Consequently, it becomes crucial to identify and understand the critical data properties that influence the performance of different GNNs, allowing for more effective model design and selection.

The increasing availability of graph learning benchmarks that offer a wide range of structural and feature variations [16, 27] presents a valuable opportunity: one can possibly infer critical data properties from the performance of GNNs on these datasets. To systematically identify these critical data properties, we propose to conduct a regression analysis on the metadata of the benchmarks.

Regression Analysis on Metadata. In the regression analysis, the performance metrics of various GNN models on each dataset serve as the dependent variables, while the data properties extracted from each dataset act as the independent variables. Formally, we denote the number of datasets as $n$, the number of GNN models as $q$, and the number of data properties as $p$. Define the response variables $\{\mathbf{y}_i\}_{i\in [q]}$ to be the performance of each GNN model across the datasets and the covariate variables $\{\mathbf{x}_j\}_{j\in [p]}$ to be the properties of each dataset. Note that $\mathbf{y}_i\in \mathbb{R}^n$, $\forall i\in [q]$ and $\mathbf{x}_j\in \mathbb{R}^n$, $\forall j\in [p]$. For ease of notation, we define $\mathbf{Y} = (\mathbf{y}_1,\dots,\mathbf{y}_q)\in \mathbb{R}^{n\times q}$ to be the response matrix of $n$ samples and $q$ variables, and $\mathbf{X} = (\mathbf{x}_1,\dots,\mathbf{x}_p)\in \mathbb{R}^{n\times p}$ to be the covariate matrix of $n$ samples and $p$ variables.

Given these data matrices, we establish the following multivariate linear model to analyze the relationship between the response matrix $\mathbf{Y}$ and the covariate matrix $\mathbf{X}$, which is characterized by the coefficient matrix $\mathbf{B}$.

Definition 3.1 (Multivariate Linear Model).

$$
\mathbf{Y} = \mathbf{X}\mathbf{B} + \mathbf{W}, \tag{1}
$$

where $\mathbf{B} \in \mathbb{R}^{p \times q}$ is the coefficient matrix and $\mathbf{W} = (\mathbf{w}_1, \dots, \mathbf{w}_q) \in \mathbb{R}^{n \times q}$ is the matrix of error terms.

Our goal is to find the most salient data properties that correlate with the performance of GNN models given a limited number of samples. To this end, we introduce two sparse regularizers for feature selection, which leads to the following Multivariate Sparse Group Lasso problem.

Definition 3.2 (Multivariate Sparse Group Lasso Problem).

$$
\underset{\mathbf{B}}{\operatorname{argmin}} \; \frac{1}{2n} \|\mathbf{Y} - \mathbf{X}\mathbf{B}\|_2^2 + \lambda_1 \|\mathbf{B}\|_1 + \lambda_g \|\mathbf{B}\|_{2,1}, \tag{2}
$$

where $\| \mathbf{B}\|_1 = \sum_{i = 1}^p\sum_{j = 1}^q |\mathbf{B}_{ij}|$ is the $L_{1}$ norm of $\mathbf{B}$, $\| \mathbf{B}\|_{2,1} = \sum_{i = 1}^{p}\sqrt{\sum_{j = 1}^{q}\mathbf{B}_{ij}^{2}}$ is the $L_{2,1}$ group norm of $\mathbf{B}$, and $\lambda_1,\lambda_g > 0$ are the corresponding penalty parameters.

In particular, the $L_{1}$ penalty encourages the coefficient matrix $\mathbf{B}$ to be sparse, selecting only salient data properties. The $L_{2,1}$ penalty further leverages the structure of the dependent variables and encourages each data property to affect the performance of only a small set of GNN models, thus differentiating the impacts on different GNNs.

To solve for the coefficient matrix $\mathbf{B}$ in Equation 2, we employ the R package MSGLasso [21], using the matrices $\mathbf{Y}$ and $\mathbf{X}$ as input. To ensure proper input for the MSGLasso solver [21], we preprocess the data by standardizing the columns of both $\mathbf{Y}$ and $\mathbf{X}$.

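Since MSGLasso is an external R package, a minimal self-contained alternative may help readers check the objective in Equation 2. The sketch below is a plain proximal-gradient (ISTA) solver in Python/NumPy provided for illustration; it is not the MSGLasso implementation, and the penalty weights and iteration count are illustrative placeholders.

```python
import numpy as np

def soft_threshold(Z, t):
    """Entrywise prox of t * L1-norm."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def group_soft_threshold(row, t):
    """Prox of t * L2-norm applied to one row (group) of B."""
    norm = np.linalg.norm(row)
    return np.zeros_like(row) if norm <= t else (1.0 - t / norm) * row

def multivariate_sparse_group_lasso(X, Y, lam1, lam_g, n_iter=2000):
    """ISTA for (1/2n)||Y - XB||^2 + lam1 * ||B||_1 + lam_g * ||B||_{2,1}."""
    n, p = X.shape
    B = np.zeros((p, Y.shape[1]))
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ B - Y) / n       # gradient of the squared-loss term
        Z = soft_threshold(B - step * grad, step * lam1)
        # The prox of the combined penalty composes: entrywise L1, then rowwise L2.
        B = np.vstack([group_soft_threshold(z, step * lam_g) for z in Z])
    return B
```

Here $\mathbf{X}$ would hold the standardized data properties and $\mathbf{Y}$ the standardized model performances; the rows of the returned $\mathbf{B}$ that survive both penalties correspond to the salient data properties.
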
# 3.2 Data Properties and Model Performance

Next, we introduce the metadata used for the regression analysis. We obtain both the benchmark datasets and the model performance using the Graph Learning Indexer (GLI) library [27].

Data Properties. We include the following benchmark datasets in our regression analysis: cora [42], citeseer [42], pubmed [42], texas [33], cornell [33], wisconsin [33], actor [33], squirrel [33], chameleon [33], arxiv-year [23], snap-patents [23], penn94 [23], pokec [23], genius [23], and twitch-gamers [23]. For each graph dataset, we calculate 15 data properties, which can be categorized into the following six groups:

- Basic: Edge Density, Average Degree, Degree Assortativity;
- Distance: Pseudo Diameter;
- Connectivity: Relative Size of Largest Connected Component (RSLCC);
- Clustering: Average Clustering Coefficient (ACC), Transitivity, Degeneracy;
- Degree Distribution: Gini Coefficient of Degree Distribution (Gini-Degree);
- Attribute: Edge Homogeneity, In-Feature Similarity, Out-Feature Similarity, Feature Angular SNR, Homophily Measure, Attribute Assortativity.

The formal definitions of these graph properties can be found in Appendix A.

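To make the property extraction concrete, the following hedged sketch computes the structural subset of these properties with NetworkX (the attribute-based properties additionally require node features and labels, and NetworkX's built-in definitions may differ slightly from those in Appendix A):

```python
import networkx as nx
import numpy as np

G = nx.karate_club_graph()              # small stand-in for a benchmark graph
n, m = G.number_of_nodes(), G.number_of_edges()
deg = np.sort([d for _, d in G.degree()])
cum = np.cumsum(deg)

properties = {
    "Edge Density": nx.density(G),      # 2m / (n(n-1)) for undirected graphs
    "Average Degree": 2 * m / n,
    "Degree Assortativity": nx.degree_assortativity_coefficient(G),
    "ACC": nx.average_clustering(G),
    "Transitivity": nx.transitivity(G),
    "Gini-Degree": (n + 1 - 2 * cum.sum() / cum[-1]) / n,
}
print(properties)
```
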
Model Performance. For the GNN models, we include GCN [19], GAT [37], GraphSAGE [13], MoNet [29], MixHop [1], and LINKX [23] in our regression analysis. We also include a non-graph model, the Multi-Layer Perceptron (MLP). The complete experimental setup for these models can be found in Appendix B.

Table 1: The estimated coefficient matrix $\mathbf{B}$ of the multivariate sparse regression analysis. Each entry indicates the strength (magnitude) and direction (+, -) of the relationship between a graph data property and the performance of a GNN model. The six most salient data properties are indicated in bold.

<table><tr><td>Graph Data Property</td><td>GCN</td><td>GAT</td><td>GraphSAGE</td><td>MoNet</td><td>MixHop</td><td>LINKX</td><td>MLP</td></tr><tr><td>Edge Density</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0.0253</td><td>0.0983</td></tr><tr><td>Average Degree</td><td>0.2071</td><td>0</td><td>0.1048</td><td>0.1081</td><td>0</td><td>0.3363</td><td>0</td></tr><tr><td>Pseudo Diameter</td><td>0</td><td>-0.349</td><td>-0.1531</td><td>0</td><td>-0.4894</td><td>-0.3943</td><td>-0.6119</td></tr><tr><td>Degree Assortativity</td><td>0</td><td>0</td><td>0</td><td>-0.0744</td><td>0</td><td>0</td><td>0</td></tr><tr><td>RSLCC</td><td>0.1019</td><td>0</td><td>0</td><td>0.0654</td><td>0</td><td>0.1309</td><td>0</td></tr><tr><td>ACC</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>-0.0502</td></tr><tr><td>Transitivity</td><td>0</td><td>-0.0518</td><td>0</td><td>-0.1372</td><td>0</td><td>0.2311</td><td>0</td></tr><tr><td>Degeneracy</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>-0.1657</td></tr><tr><td>Gini-Degree</td><td>-0.4403</td><td>-0.2961</td><td>-0.3267</td><td>-0.2944</td><td>-0.4205</td><td>-0.367</td><td>-0.1958</td></tr><tr><td>Edge Homogeneity</td><td>0.7094</td><td>0.4705</td><td>0.7361</td><td>0.8122</td><td>0.6407</td><td>0.2006</td><td>0.4776</td></tr><tr><td>In-Feature Similarity</td><td>0.3053</td><td>0.1081</td><td>0.1844</td><td>0.1003</td><td>0.4613</td><td>0.6396</td><td>0.2399</td></tr><tr><td>Out-Feature Similarity</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Feature Angular SNR</td><td>0.2522</td><td>0</td><td>0.2506</td><td>0</td><td>0.2381</td><td>0.3563</td><td>0.3731</td></tr><tr><td>Homophily Measure</td><td>0</td><td>0.4072</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Attribute Assortativity</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr></table>

# 3.3 Analysis Results

The estimated coefficient matrix $\mathbf{B}$ is presented in Table 1. As can be seen, the estimated coefficient matrix is fairly sparse, allowing us to identify salient data properties. Next, we discuss the six most salient data properties that correlate with some or all of the GNN models' performance. For the data properties that have an impact on all GNNs' performance, we call them Widely Influential Factors; for the data properties that have an impact on over one-half of the GNNs' performance, we call them Narrowly Influential Factors. The $(+, -)$ sign after the name of each factor indicates whether the data property has a positive or negative correlation with the GNN performance.

Widely Influential Factors. We discover that the Gini coefficient of the degree distribution (Gini-Degree), Edge Homogeneity, and In-Feature Similarity impact all GNNs' performance consistently.

- Gini-Degree $(-)$ measures how the graph's degree distribution deviates from the perfectly equal distribution, i.e., a regular graph. This is a crucial data property that dramatically influences GNNs' performance but remains under-explored in prior literature.
- Edge Homogeneity (+) is a salient indicator for all GNN models' performance. This phenomenon coincides with the fact that various GNNs assume a strong homophily condition [28] to obtain improvements on node classification tasks [13, 19, 37].
- In-Feature Similarity (+) calculates the average feature similarity within each class. Under the homophily assumption, GNNs work better when nodes with the same labels additionally have similar node features, which also aligns with existing findings in the literature [15].

Narrowly Influential Factors. We find that Average Degree, Pseudo Diameter, and Feature Angular SNR are salient factors for a subset of GNN models, although we do not yet have a good understanding of the mechanism by which these data properties impact model performance.

- Average Degree (+) is more significant for GCN, GraphSAGE, MoNet, and LINKX.
- Pseudo Diameter $(-)$ is more significant for GAT, GraphSAGE, MixHop, LINKX, and MLP.
- Feature Angular SNR (+) is more significant for GCN, GraphSAGE, MixHop, LINKX, and MLP.

We note that the regression analysis only indicates associative relationships between the data properties and the model performance. While our analysis has successfully identified well-known influential data properties, e.g., Edge Homogeneity, the mechanism through which most of the identified data properties impact GNN performance remains under-explored.

To further verify the effectiveness of the proposed metadata-driven approach in identifying critical data properties, we perform an in-depth analysis of Gini-Degree, one of the most widely influential factors. In Sections 4 and 5, we conduct a theoretical analysis and controlled experiments to understand how Gini-Degree influences GNNs' performance.

# 4 Theoretical Analysis on the Impact of Degree Distribution

In this section, we present a theoretical analysis of the influence of the degree distribution of graph data on the performance of GNNs. Specifically, our analysis investigates the linear separability of the node representations produced by applying graph convolution to the node features. In the case that the graph data comes from a Degree-Corrected Contextual Stochastic Block Model, we show that nodes from different classes are more likely to be separable when their degrees exceed a threshold. This separability result relates the degree distribution of the graph data to the GNN performance. Finally, we discuss the role of Gini-Degree in the GNN performance using implications of our theory.

# 4.1 Notations and Sketch of Analysis

The Graph Data. Let $\mathcal{G} = \{\mathcal{V},\mathcal{E}\}$ be an undirected graph, where $\mathcal{V}$ is the set of nodes and $\mathcal{E}$ is the set of edges. The information regarding the connections within the graph can also be summarized as an adjacency matrix $\mathbf{A}\in \{0,1\}^{|\mathcal{V}|\times |\mathcal{V}|}$, where $|\mathcal{V}|$ is the number of nodes in the graph $\mathcal{G}$. Each node $i\in \mathcal{V}$ possesses a $d$-dimensional feature vector $\mathbf{x}_i\in \mathbb{R}^d$. The features for all nodes in $\mathcal{G}$ can be stacked and represented as a feature matrix $\mathbf{X}\in \mathbb{R}^{|\mathcal{V}|\times d}$. In the context of node classification, each node $i$ is associated with a class label $y_{i}\in \mathcal{C}$, where $\mathcal{C}$ is the set of labels.

Graph Convolutional Network [19]. In our analysis, we consider a single-layer graph convolution, which can be defined as an operation on the adjacency matrix and feature matrix of a graph $\mathcal{G}$ that produces a new feature matrix $\tilde{\mathbf{X}}$. Formally, the output of a single-layer graph convolution operation can be represented as $\tilde{\mathbf{X}} = \mathbf{D}^{-1}\tilde{\mathbf{A}}\mathbf{X}$, where $\tilde{\mathbf{A}} = \mathbf{A} + \mathbf{I}$ is the augmented adjacency matrix with added self-loops, and $\mathbf{D}$ is the diagonal degree matrix with $\mathbf{D}_{ii} = \deg(i) = \sum_{j \in [n]} \tilde{\mathbf{A}}_{ij}$. Hence, for each node $i \in \mathcal{V}$, the new node representation $\tilde{\mathbf{x}}_i \in \mathbb{R}^d$ is given by the $i$th row of the output matrix $\tilde{\mathbf{X}}$.

Sketch of Our Analysis. Our analysis builds upon and generalizes the theoretical framework introduced by Baranwal et al. [3], who demonstrate that, in comparison to the raw node features, the graph convolution representations of nodes have better linear separability if the graph data comes from a Contextual Stochastic Block Model (CSBM) [4, 8]. However, in CSBM, the nodes within the same class all have similar degrees with high probability, which prevents us from drawing meaningful conclusions about the impact of the degree distribution.

To better understand the role of the degree distribution in the GNN performance, we develop a non-trivial generalization of the theory by Baranwal et al. [3]. Specifically, we first introduce a new graph data generation model, the Degree-Corrected Contextual Stochastic Block Model (DC-CSBM), which combines and generalizes the Degree-Corrected SBM (DC-SBM) [17] and the CSBM, and takes heterogeneity in node degrees into consideration. Under DC-CSBM, we find that node degrees play a crucial role in the statistical properties of the node representations: the node degrees have to exceed a certain threshold in order for the node representations to sufficiently leverage the neighborhood information and become reliably separable. Notably, the incorporation of node degree heterogeneity requires a non-trivial adaptation of the analysis by Baranwal et al. [3].

# 4.2 Degree-Corrected Contextual Stochastic Block Model (DC-CSBM)

In this section, we introduce the DC-CSBM that models the generation of graph data. Specifically, we assume the graph data is randomly sampled from a DC-CSBM with 2 classes.

DC-CSBM With 2 Classes. Let us define the class assignments $(\epsilon_{i})_{i\in [n]}$ as independent and identically distributed (i.i.d.) Bernoulli random variables coming from $\mathrm{Ber}(\frac{1}{2})$, where $n = |\mathcal{V}|$ is the number of nodes in the graph $\mathcal{G}$. These class assignments divide the $n$ nodes into 2 classes: $C_0 = \{i\in [n]:\epsilon_i = 0\}$ and $C_1 = \{i\in [n]:\epsilon_i = 1\}$. Assume that the intra-class edge probability is $p$ and the inter-class edge probability is $q$, and that no self-loops are allowed. For each node $i$, we additionally introduce a degree-correction parameter $\theta_{i}\in (0,n]$, which can be interpreted as the propensity of node $i$ to connect with others. Note that to keep the model identifiable and easier to analyze, we adopt a normalization rule enforcing the following constraint: $\sum_{i\in C_0}\theta_i = |C_0|$, $\sum_{i\in C_1}\theta_i = |C_1|$, and thus $\sum_{i\in \mathcal{V}}\theta_i = n$.

Assumptions on Adjacency Matrix and Feature Matrix. Conditioning on $(\epsilon_{i})_{i\in [n]}$, each entry of the adjacency matrix $\mathbf{A}$ is a Poisson random variable with $\mathbf{A}_{ij} \sim \mathrm{Poi}(\theta_i\theta_jp)$ if $i, j$ are in the same class and $\mathbf{A}_{ij} \sim \mathrm{Poi}(\theta_i\theta_jq)$ if $i, j$ are in different classes. On top of this, let $\mathbf{X} \in \mathbb{R}^{n\times d}$ be the feature matrix where each row $\mathbf{x}_i$ represents the node feature of node $i$. Assume each $\mathbf{x}_i$ is an independent $d$-dimensional Gaussian random vector with $\mathbf{x}_i \sim \mathcal{N}(\boldsymbol{\mu}, \frac{1}{d}\mathbf{I})$ if $i \in C_0$ and $\mathbf{x}_i \sim \mathcal{N}(\boldsymbol{\nu}, \frac{1}{d}\mathbf{I})$ if $i \in C_1$. We let $\boldsymbol{\mu}, \boldsymbol{\nu} \in \mathbb{R}^d$ be fixed $d$-dimensional vectors with $\| \boldsymbol{\mu} \|_2, \| \boldsymbol{\nu} \|_2 \leq 1$, which serve as the Gaussian means for the two classes.

Given a particular choice of $n, \boldsymbol{\mu}, \boldsymbol{\nu}, p, q$ and $\theta = (\theta_i)_{i \in [n]}$, we can define a class of random graphs generated by these parameters and sample a graph from such a DC-CSBM as $\mathcal{G} = (\mathbf{A}, \mathbf{X}) \sim \text{DC-CSBM}(n, \boldsymbol{\mu}, \boldsymbol{\nu}, p, q, \theta)$.

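For concreteness, below is a sketch of sampling from this model (our own illustrative implementation rather than code from the paper; the per-class normalization of $\theta$ is enforced inside the function):

```python
import numpy as np

def sample_dc_csbm(n, mu, nu, p, q, theta, seed=0):
    """Draw (A, X, labels) from DC-CSBM(n, mu, nu, p, q, theta); a minimal sketch."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    eps = rng.integers(0, 2, size=n)                 # class assignments ~ Ber(1/2)
    theta = np.asarray(theta, dtype=float).copy()
    for k in (0, 1):                                 # enforce sum_{i in C_k} theta_i = |C_k|
        mask = eps == k
        theta[mask] *= mask.sum() / theta[mask].sum()
    rate = np.where(eps[:, None] == eps[None, :], p, q) * np.outer(theta, theta)
    upper = rng.poisson(np.triu(rate, k=1))          # Poisson edge counts, no self-loops
    A = upper + upper.T
    means = np.where((eps == 0)[:, None], mu, nu)    # class-dependent Gaussian means
    X = means + rng.normal(scale=1 / np.sqrt(d), size=(n, d))
    return A, X, eps
```
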
# 4.3 Linear Separability After Graph Convolution

Linear Separability. Linear separability refers to the ability to linearly differentiate the nodes in the two classes based on their feature vectors. Formally, for any $\mathcal{V}_s\subseteq \mathcal{V}$, we say that $\{\tilde{\mathbf{x}}_i:i\in \mathcal{V}_s\}$ is linearly separable if there exists some unit vector $\mathbf{v}\in \mathbb{R}^d$ and a scalar $b$ such that $\mathbf{v}^{\top}\tilde{\mathbf{x}}_i + b < 0$, $\forall i\in C_0\cap \mathcal{V}_s$ and $\mathbf{v}^{\top}\tilde{\mathbf{x}}_i + b > 0$, $\forall i\in C_1\cap \mathcal{V}_s$. Note that linear separability is closely related to GNN performance: intuitively, more nodes being linearly separable leads to better GNN performance.

Degree-Thresholded Subgroups of $C_0$ and $C_1$. To better control the behavior of the graph convolution operation, we focus on particular subgroups of $C_0$ and $C_1$ whose member nodes have a degree-correction parameter greater than or equal to a pre-defined threshold $\alpha > 0$. Slightly abusing notation, we denote these subgroups as $C_0(\alpha)$ and $C_1(\alpha)$, which are formally defined below.

Definition 4.1 ($\alpha$-Subgroups). Given any $\alpha \in (0, n]$, define the $\alpha$-subgroups of $C_0$ and $C_1$ as follows:

$$
C_0(\alpha) = \{ j \in [n] : \theta_j \geq \alpha \text{ and } j \in C_0 \},
$$

$$
C_1(\alpha) = \{ j \in [n] : \theta_j \geq \alpha \text{ and } j \in C_1 \}.
$$

Let $\mathcal{V}_{\alpha} := C_0(\alpha) \cup C_1(\alpha)$. We are interested in analyzing the linear separability of the node representations after the graph convolution operation, namely $\{\tilde{\mathbf{x}}_i : i \in \mathcal{V}_{\alpha}\}$. Recall that for each node $i$, $\tilde{\mathbf{x}}_i = \frac{1}{\deg(i)} \sum_{j \in \mathcal{N}(i)} \mathbf{x}_j$, where $\mathcal{N}(i)$ is the set of neighbors of node $i$ (including $i$ itself due to the added self-loops).

Relationship Between $\alpha$ and Linear Separability. We first make the following assumptions about the DC-CSBM, closely following the assumptions made by Baranwal et al. [3].

Assumption 4.2 (Graph Size). Assume the relationship between the graph size $n$ and the feature dimension $d$ follows $\omega(d \log d) \leq n \leq O(\mathrm{poly}(d))$.

Assumption 4.3 (Edge Probabilities). Define $\Gamma(p, q) \coloneqq \frac{p - q}{p + q}$. Assume the edge probabilities $p, q$ satisfy $p, q = \omega(\log^2(n)/n)$ and $\Gamma(p, q) = \Omega(1)$.

Theorem 4.4 asserts that if the threshold $\alpha$ is not too small, then the set $\mathcal{V}_{\alpha} = C_0(\alpha) \cup C_1(\alpha)$ can be linearly separated with high probability. The proof of Theorem 4.4 can be found in Appendix C.

Theorem 4.4 (Linear Separability of $\alpha$-Subgroups). Suppose that Assumptions 4.2 and 4.3 hold. For any $(\mathbf{X},\mathbf{A})\sim \text{DC-CSBM}(n,\boldsymbol{\mu},\boldsymbol{\nu},p,q,\theta)$, if $\alpha = \omega\left(\max \left(\frac{1}{\log n},\frac{\log n}{dn(p + q)\|\boldsymbol{\mu} - \boldsymbol{\nu}\|_2^2}\right)\right)$, then

$$
\mathbb{P}\left( \left\{ \tilde{\mathbf{x}}_i : i \in \mathcal{V}_{\alpha} \right\} \text{ is linearly separable} \right) = 1 - o_d(1),
$$

where $o_d(1)$ is a quantity that converges to 0 as $d$ approaches infinity.

Note that Theorem 4.4 suggests that, when the heterogeneity of node degrees is taken into consideration, the nodes with degrees exceeding a threshold $\alpha$ are more likely to be linearly separable, and the requirement on the threshold $\alpha$ depends on the DC-CSBM parameters $n, p, q, \boldsymbol{\mu}, \boldsymbol{\nu}$.

Remark 4.5. If we let $p, q \in \Theta\left(\frac{\log^3 n}{n}\right)$ and $\| \boldsymbol{\mu} - \boldsymbol{\nu} \|_2$ be a fixed constant, then the requirement can be reduced to $\alpha \in \omega\left(\frac{1}{\log n}\right)$, which is rather mild. Given this particular setting and a reasonable selection of $p, q$, the regime of acceptable $\alpha$ is broad, which demonstrates the generality of Theorem 4.4.

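To spell out the reduction in Remark 4.5 (a short derivation with constants suppressed, added here for completeness): under $p, q \in \Theta(\log^3 n / n)$ and fixed $\|\boldsymbol{\mu} - \boldsymbol{\nu}\|_2$, the second argument of the max in Theorem 4.4 satisfies

$$
\frac{\log n}{d\,n(p+q)\,\|\boldsymbol{\mu}-\boldsymbol{\nu}\|_2^2} = \Theta\left(\frac{\log n}{d\log^3 n}\right) = \Theta\left(\frac{1}{d\log^2 n}\right) = o\left(\frac{1}{\log n}\right),
$$

so the first argument $\frac{1}{\log n}$ dominates and the condition becomes $\alpha = \omega\left(\frac{1}{\log n}\right)$.
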
# 4.4 Implications on Gini-Degree

Finally, we qualitatively discuss the relationship between Gini-Degree and GNNs' performance using the results from Theorem 4.4. For any $\alpha > 0$ that meets the criteria in the statement, we consider the following two factors.

1. Negative correlation between Gini-Degree and the size of $\mathcal{V}_{\alpha}$: If the number of nodes and edges is fixed, a higher Gini-Degree implies that the edges concentrate on a few high-degree nodes, and thus the majority of nodes have lower degrees. Clearly, if most of the nodes have lower degrees, then fewer nodes will have degrees exceeding a certain threshold proportional to $\alpha$ and be placed in $\mathcal{V}_{\alpha}$. Hence, a dataset with a higher (or lower) Gini-Degree will lead to a smaller (or larger) size of $\mathcal{V}_{\alpha}$ (see the numerical sketch after this list).
2. Positive correlation between the size of $\mathcal{V}_{\alpha}$ and model performance: Intuitively, the GNN performance tends to be better if more nodes can be linearly separated after graph convolution. Consequently, the GNN performance is positively related to the size of $\mathcal{V}_{\alpha}$ corresponding to the minimum possible $\alpha$.

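The following toy computation illustrates factor 1: two degree sequences with the same average degree but different Gini coefficients yield very different fractions of nodes above a fixed cutoff (the cutoff and the zipf exponent are arbitrary illustrative choices, not values prescribed by the theory):

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative sequence (0 for a regular graph)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

rng = np.random.default_rng(0)
n, mean_deg = 5000, 30

balanced = np.full(n, float(mean_deg))        # regular graph: Gini = 0
skewed = rng.zipf(2.5, size=n).astype(float)  # heavy-tailed degrees
skewed *= mean_deg / skewed.mean()            # rescale to the same average degree

for name, degs in [("balanced", balanced), ("skewed", skewed)]:
    frac = np.mean(degs >= mean_deg)          # proxy for the relative size of V_alpha
    print(f"{name}: Gini = {gini(degs):.2f}, fraction above cutoff = {frac:.2f}")
```
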
Combining the two factors above, our analysis suggests that Gini-Degree tends to have a negative correlation with GNNs' performance.

# 5 Controlled Experiment on Gini-Degree

To further verify whether there is a causal relationship between the degree distribution of graph data (in particular, as measured by Gini-Degree) and the GNN performance, we conduct a controlled experiment using synthetic graph datasets.

Experiment Setup. We first generate a series of synthetic graph datasets using the GraphWorld library [31]. To investigate the causal effect of Gini-Degree, we manipulate the data generation parameters to obtain datasets with varying Gini-Degree while holding other properties fixed. Specifically, we use the SBM generator in the GraphWorld library and set the number of nodes $n = 5000$, the average degree to 30, the number of clusters to 4, the cluster size slope to 0.5, the feature center distance to 0.5, the edge probability ratio to $p / q = 4.0$, the feature dimension to 16, and the feature cluster variance to 0.05. The parameters above are fixed throughout our experiments, and their complete definitions can be found in the Appendix. By manipulating the power law exponent parameter of the generator, we obtain five synthetic datasets with Gini-Degree equal to 0.906, 0.761, 0.526, 0.354, and 0.075, respectively.

We then train the same set of GNN models and the MLP model as in Table 1 on each dataset. We randomly split the nodes into training, validation, and test sets with a ratio of 3:1:1. We closely follow the hyperparameters and the training protocol in the GLI library [27], from which we obtained the metadata in Section 3. We run five independent trials with different random seeds.

Experiment Results. The experiment results are shown in Table 2. We observe an evident monotonically decreasing trend for the performance of the graph-based models (GCN, GAT, GraphSAGE, MoNet, MixHop, and LINKX) as Gini-Degree increases. However, there is no clear pattern for the non-graph model, MLP. This result suggests that these widely-used GNN models are indeed sensitive to Gini-Degree, which validates the result of our sparse regression analysis. Note that MLP does not take the graph structure into consideration, and hence the degree distribution has less influence on the performance of MLP. The result on MLP also indicates that our experiment is reasonably well-controlled.

Table 2: Controlled experiment results for varying Gini-Degree. Standard deviations are derived from 5 independent runs. The performance of all models except for MLP has an evident negative correlation with Gini-Degree.

<table><tr><td>Gini-Degree</td><td>GCN</td><td>GAT</td><td>GraphSAGE</td><td>MoNet</td><td>MixHop</td><td>LINKX</td><td>MLP</td></tr><tr><td>0.906</td><td>0.798±0.004</td><td>0.659±0.01</td><td>0.76±0.005</td><td>0.672±0.002</td><td>0.804±0.005</td><td>0.832±0.002</td><td>0.595±0.006</td></tr><tr><td>0.761</td><td>0.817±0.001</td><td>0.732±0.005</td><td>0.818±0.004</td><td>0.696±0.015</td><td>0.817±0.004</td><td>0.849±0.002</td><td>0.756±0.002</td></tr><tr><td>0.526</td><td>0.874±0.004</td><td>0.742±0.006</td><td>0.825±0.013</td><td>0.8±0.028</td><td>0.826±0.003</td><td>0.853±0.002</td><td>0.655±0.005</td></tr><tr><td>0.354</td><td>0.906±0.002</td><td>0.737±0.008</td><td>0.857±0.008</td><td>0.83±0.013</td><td>0.837±0.002</td><td>0.867±0.002</td><td>0.66±0.07</td></tr><tr><td>0.075</td><td>0.948±0.002</td><td>0.746±0.005</td><td>0.878±0.002</td><td>0.92±0.002</td><td>0.84±0.002</td><td>0.893±0.001</td><td>0.705±0.002</td></tr></table>

# 6 Conclusion

In this work, we propose a novel metadata-driven approach that can efficiently identify critical graph data properties influencing the performance of GNNs. This is a significant contribution given the diverse nature of graph-structured data and the sensitivity of GNN performance to these specific properties. We also verify the effectiveness of the proposed approach through an in-depth case study around one identified salient graph data property.

As a byproduct, this paper also highlights the considerable impact of the degree distribution, a salient data property identified through our metadata-driven regression analysis, on the GNN performance. We present a novel theoretical analysis and a carefully controlled experiment to demonstrate this impact.

# Acknowledgement

The authors would like to thank Pingbang Hu for the feedback on the draft.

# References

[1] Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, and Aram Galstyan. Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In International Conference on Machine Learning, pages 21–29. PMLR, 2019.
[2] Robert J Adler, Jonathan E Taylor, et al. Random Fields and Geometry, volume 80. Springer, 2007.
[3] Aseem Baranwal, Kimon Fountoulakis, and Aukosh Jagannath. Graph convolution for semi-supervised classification: Improved linear separability and out-of-distribution generalization. arXiv preprint arXiv:2102.06966, 2021.
[4] Norbert Binkiewicz, Joshua T Vogelstein, and Karl Rohe. Covariate-assisted spectral clustering. Biometrika, 104(2):361–377, 2017.
[5] Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? arXiv preprint arXiv:2105.14491, 2021.
[6] Chen Cai and Yusu Wang. A note on over-smoothing for graph neural networks. arXiv preprint arXiv:2006.13318, 2020.
[7] Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In International Conference on Machine Learning, pages 1725–1735. PMLR, 2020.
[8] Yash Deshpande, Subhabrata Sen, Andrea Montanari, and Elchanan Mossel. Contextual stochastic block models. Advances in Neural Information Processing Systems, 31, 2018.
[9] Yingtong Dou, Zhiwei Liu, Li Sun, Yutong Deng, Hao Peng, and Philip S Yu. Enhancing graph neural network-based fraud detectors against camouflaged fraudsters. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 315–324, 2020.
[10] Jian Du, Shanghang Zhang, Guanhang Wu, José MF Moura, and Soummya Kar. Topology adaptive graph convolutional networks. arXiv preprint arXiv:1710.10370, 2017.
[11] Alex Fout, Jonathon Byrd, Basir Shariat, and Asa Ben-Hur. Protein interface prediction using graph convolutional networks. Advances in Neural Information Processing Systems, 30, 2017.
[12] Johannes Gasteiger, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. arXiv preprint arXiv:1810.05997, 2018.
[13] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30, 2017.
[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[15] Yifan Hou, Jian Zhang, James Cheng, Kaili Ma, Richard TB Ma, Hongzhi Chen, and Ming-Chang Yang. Measuring and improving the use of graph information in graph neural networks. In International Conference on Learning Representations, 2019.
[16] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems, 33:22118–22133, 2020.
[17] Brian Karrer and Mark EJ Newman. Stochastic blockmodels and community structure in networks. Physical Review E, 83(1):016107, 2011.
[18] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[19] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
[20] Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
[21] Yanming Li, Bin Nan, and Ji Zhu. Multivariate sparse group lasso for the multivariate multiple linear regression with an arbitrary group structure. Biometrics, 71(2):354–363, 2015.
[22] Langzhang Liang, Zenglin Xu, Zixing Song, Irwin King, and Jieping Ye. Resnorm: Tackling long-tailed degree distribution issue in graph neural networks via normalization. arXiv preprint arXiv:2206.08181, 2022.
[23] Derek Lim, Felix Hohne, Xiuyu Li, Sijia Linda Huang, Vaishnavi Gupta, Omkar Bhalerao, and Ser Nam Lim. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. Advances in Neural Information Processing Systems, 34:20887–20902, 2021.
[24] Renming Liu, Semih Cantürk, Frederik Wenkel, Sarah McGuire, Xinyi Wang, Anna Little, Leslie O'Bray, Michael Perlmutter, Bastian Rieck, Matthew Hirn, et al. Taxonomy of benchmarks in graph representation learning. In Learning on Graphs Conference, pages 6–1. PMLR, 2022.
[25] Xiao Liu, Lijun Zhang, and Hui Guan. Uplifting message passing neural network with graph original information, 2023.
[26] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[27] Jiaqi Ma, Xingjian Zhang, Hezheng Fan, Jin Huang, Tianyue Li, Ting Wei Li, Yiwen Tu, Chenshu Zhu, and Qiaozhu Mei. Graph learning indexer: A contributor-friendly and metadata-rich platform for graph learning benchmarks. arXiv preprint arXiv:2212.04537, 2022.
[28] Miller McPherson, Lynn Smith-Lovin, and James M Cook. Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1):415–444, 2001.
[29] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodolà, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5115–5124, 2017.
[30] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4602–4609, 2019.
[31] John Palowitch, Anton Tsitsulin, Brandon Mayer, and Bryan Perozzi. Graphworld: Fake graphs bring real insights for gnns. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 3691–3701, 2022.
[32] Shashank Pandit, Duen Horng Chau, Samuel Wang, and Christos Faloutsos. Netprobe: a fast and scalable system for fraud detection in online auction networks. In Proceedings of the 16th International Conference on World Wide Web, pages 201–210, 2007.
[33] Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, and Bo Yang. Geom-gcn: Geometric graph convolutional networks. arXiv preprint arXiv:2002.05287, 2020.
[34] T Konstantin Rusch, Michael M Bronstein, and Siddhartha Mishra. A survey on oversmoothing in graph neural networks. arXiv preprint arXiv:2303.10993, 2023.
[35] Xianfeng Tang, Huaxiu Yao, Yiwei Sun, Yiqi Wang, Jiliang Tang, Charu Aggarwal, Prasenjit Mitra, and Suhang Wang. Investigating and mitigating degree-related biases in graph convolutional networks. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 1435–1444, 2020.
[36] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[37] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
[38] Roman Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science, volume 47. Cambridge University Press, 2018.
[39] Quanmin Wei, Jinyan Wang, Xingcheng Fu, Jun Hu, and Xianxian Li. Aic-gnn: Adversarial information completion for graph neural networks. Information Sciences, 2023.
[40] Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In International Conference on Machine Learning, pages 6861–6871. PMLR, 2019.
[41] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
[42] Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In International Conference on Machine Learning, pages 40–48. PMLR, 2016.
[43] Jiaxuan You, Jonathan M Gomes-Selman, Rex Ying, and Jure Leskovec. Identity-aware graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 10737–10745, 2021.
[44] Sukwon Yun, Kibum Kim, Kanghoon Yoon, and Chanyoung Park. Lte4g: Long-tail experts for graph neural networks. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pages 2434–2443, 2022.
[45] Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. Advances in Neural Information Processing Systems, 33:7793–7804, 2020.
[46] Jiong Zhu, Ryan A Rossi, Anup Rao, Tung Mai, Nedim Lipka, Nesreen K Ahmed, and Danai Koutra. Graph neural networks with heterophily. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11168–11176, 2021.

# A Definitions of Dataset Properties

We introduce the formal definitions of the dataset properties mentioned in Section 3.2. Following the definitions in Section 4.1, we further define $n = |\mathcal{V}|$ and $m = |\mathcal{E}|$ to denote the number of nodes and edges of the graph $\mathcal{G}$. Also, in the context of the node classification task, we define $\mathcal{Y} \in \mathbb{R}^n$ as the vector of node labels and $C$ as the number of classes.

# A.1 Basic

Edge Density: The edge density for an undirected graph is calculated as $\frac{2m}{n(n - 1)}$, while for a directed graph, it is computed as $\frac{m}{n(n - 1)}$.

Average Degree: The average degree for an undirected graph is defined as $\frac{2m}{n}$, while for a directed graph, it is defined as $\frac{m}{n}$.

Degree Assortativity: The degree assortativity is the Pearson correlation coefficient of the degrees over all pairs of connected nodes. It quantifies the tendency of nodes in a network to be connected to nodes with similar or dissimilar degrees and ranges between -1 and 1.

# A.2 Distance

Pseudo Diameter: The pseudo diameter is an approximation of the diameter of a graph and provides a lower bound on its exact value.

# A.3 Connectivity

Relative Size of Largest Connected Component (RSLCC): The relative size of the largest connected component is the ratio between the size of the largest connected component and $n$.

# A.4 Clustering

Average Clustering Coefficient (ACC): First define $T(u)$ as the number of triangles including node $u$. The local clustering coefficient for node $u$ is then calculated as $\frac{2}{\deg(u)(\deg(u) - 1)} T(u)$ for an undirected graph, where $\deg(u)$ is the degree of node $u$; and as $\frac{2}{\deg^{tot}(u)(\deg^{tot}(u) - 1) - 2\deg^{\leftrightarrow}(u)} T(u)$ for a directed graph, where $\deg^{tot}(u)$ is the sum of the in-degree and out-degree of node $u$ and $\deg^{\leftrightarrow}(u)$ is the reciprocal degree of $u$. The average clustering coefficient is then defined as the average local clustering coefficient over all the nodes in the graph.

Transitivity: The transitivity is defined as the fraction of all possible triangles present in the graph. Formally, it can be written as $3\,\frac{\#\text{triangles}}{\#\text{triads}}$, where a triad is a pair of two edges with a shared vertex.

Degeneracy: The degeneracy is the least integer $k$ such that every induced subgraph of the graph contains a vertex with degree at most $k$.

# A.5 Degree Distribution

Gini Coefficient of Degree Distribution (Gini-Degree): The Gini coefficient of the node degrees of the graph.

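A small helper that computes this quantity from a degree sequence, using a standard Lorenz-curve-based formula (a sketch for illustration):

```python
import numpy as np

def gini_degree(degrees):
    """Gini coefficient of a degree sequence: 0 for a regular graph, near 1 for extreme skew."""
    d = np.sort(np.asarray(degrees, dtype=float))
    n = d.size
    cum = np.cumsum(d)
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n
```
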
# A.6 Attribute

Edge Homogeneity [31]: The edge homogeneity is defined as the ratio of edges whose endpoints have the same node labels.

In-Feature Similarity [31]: First define the within-class angular feature similarity as $1 - \text{angular\_distance}(\mathbf{x}_i, \mathbf{x}_j)$ for an edge $(i, j)$ whose endpoints $i$ and $j$ have the same node label. The In-Feature Similarity is the average within-class angular feature similarity over all such edges in the graph.

Out-Feature Similarity [31]: First define the between-class angular feature similarity as $1 - \text{angular\_distance}(\mathbf{x}_i, \mathbf{x}_j)$ for an edge $(i, j)$ whose endpoints $i$ and $j$ have different node labels. The Out-Feature Similarity is the average between-class angular feature similarity over all such edges in the graph.

Feature Angular SNR [31]: The feature angular SNR is computed as the ratio between the In-Feature Similarity and the Out-Feature Similarity.

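A sketch of these three attribute properties in NumPy, assuming the common convention $\text{angular\_distance}(\mathbf{x}, \mathbf{y}) = \arccos(\text{cosine similarity}) / \pi$ (the helper names are our own):

```python
import numpy as np

def angular_similarity(x, y):
    """1 - angular_distance, with angular_distance = arccos(cosine sim.) / pi."""
    cos = np.clip(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)), -1.0, 1.0)
    return 1.0 - np.arccos(cos) / np.pi

def attribute_properties(edges, X, y):
    """Return In-Feature Similarity, Out-Feature Similarity, and Feature Angular SNR."""
    in_sims = [angular_similarity(X[i], X[j]) for i, j in edges if y[i] == y[j]]
    out_sims = [angular_similarity(X[i], X[j]) for i, j in edges if y[i] != y[j]]
    in_s, out_s = np.mean(in_sims), np.mean(out_sims)
    return in_s, out_s, in_s / out_s
```
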
Homophily Measure [23]: The homophily measure is defined as

$$
\hat{h} = \frac{1}{C - 1} \sum_{k = 1}^{C} \left[ h_k - \frac{|C_k|}{n} \right]_{+},
$$

where $[a]_{+} = \max(a, 0)$, $|C_k|$ is the total number of nodes with label $k$, and $h_k$ is the class-wise homophily metric defined below,

$$
h_k = \frac{\sum_{u : \mathcal{Y}_u = k} d_u^{(\mathcal{Y}_u)}}{\sum_{u : \mathcal{Y}_u = k} d_u},
$$

where $d_u$ is the number of neighbors of node $u$ and $d_u^{(\mathcal{Y}_u)}$ is the number of neighbors of node $u$ with the same node label.

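A direct NumPy translation of $\hat{h}$ (an illustrative sketch; it assumes a binary adjacency matrix, at least two classes, and no isolated nodes):

```python
import numpy as np

def homophily_measure(A, y):
    """Compute h_hat from an adjacency matrix A and integer node labels y."""
    y = np.asarray(y)
    classes = np.unique(y)
    n, C = len(y), len(classes)
    deg = A.sum(axis=1)                                       # d_u
    deg_same = (A * (y[:, None] == y[None, :])).sum(axis=1)   # d_u^{(Y_u)}
    h_hat = 0.0
    for k in classes:
        mask = y == k
        h_k = deg_same[mask].sum() / deg[mask].sum()          # class-wise homophily
        h_hat += max(h_k - mask.sum() / n, 0.0)               # [h_k - |C_k|/n]_+
    return h_hat / (C - 1)
```
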
Attribute Assortativity: The attribute assortativity is the Pearson correlation coefficient of the attributes (here, the node labels) over all pairs of connected nodes. It quantifies the tendency of nodes in a network to be connected to nodes with the same or different attributes and ranges between -1 and 1.

# B Experiment Setup for Obtaining Metadata

In this section, we describe in more detail the experimental setup used to obtain the GNN performance results in Section 3.2, mostly following Ma et al. [27]. For completeness, we list the model settings used by them in the following paragraphs.

GCN [19], GAT [37], GraphSAGE [13], MoNet [29], MLP, and MixHop [1] are set to have two layers with a hidden dimension equal to 8. For LINKX [23], $\mathrm{MLP}_{A}$ and $\mathrm{MLP}_{X}$ are set to be one-layer networks and $\mathrm{MLP}_{f}$ a two-layer network, following the setting in Lim et al. [23].

For the rest of the training settings, we adopt the same configuration for all experiments. Specifically, we set learning rate $= 0.01$, weight decay $= 0.001$, dropout rate $= 0.6$, max epoch $= 10000$, and batch size $= 256$. We use Adam [18] as the optimizer for all models except LINKX, for which AdamW [26] is used in order to comply with Lim et al. [23]. For datasets with binary labels (i.e., penn94, pokec, genius, and twitch-gamers), we choose the ROC AUC score as the evaluation metric; for the other datasets, we use test accuracy instead.

All remaining model settings are kept consistent with Ma et al. [27], namely:

- GAT: Number of heads in multi-head attention $= 8$. LeakyReLU angle of negative slope $= 0.2$. No residual is applied. The dropout rate on attention weights is the same as the overall dropout rate.
- GraphSAGE: Aggregator type is GCN. No norm is applied.
- MoNet: Number of kernels $= 3$. Dimension of pseudo-coordinate $= 2$. Aggregator type $=$ sum.
- MixHop: List of powers of adjacency matrix $= [1, 2, 3]$. No norm is applied. Layer dropout rate $= 0.9$.
- LINKX: No inner activation.

# C Proof of Theorem 4.4

Proof Sketch. To prove Theorem 4.4, we first show that the degree and the neighborhood distribution of each node concentrate with high probability. Then we claim that the node features after the convolution operation will be centered around specific mean values, depending on the node classes. Finally, we demonstrate that the nodes in different classes can be linearly separated by the hyperplane passing through the mid-point of the two mean values $\boldsymbol{\mu}, \boldsymbol{\nu}$ with high probability.

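As a toy numerical illustration of the final step, the sketch below draws points around the two post-convolution means derived in Lemma C.6 and checks the hyperplane through their mid-point; all constants are arbitrary illustrative choices rather than quantities from the theorem:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_per = 64, 500
mu, nu = np.zeros(d), np.ones(d) / np.sqrt(d)    # class means with norms <= 1
p, q = 0.8, 0.2                                  # illustrative intra/inter rates

m0 = (p * mu + q * nu) / (p + q)                 # post-convolution mean, class 0
m1 = (q * mu + p * nu) / (p + q)                 # post-convolution mean, class 1
X0 = m0 + rng.normal(scale=0.05, size=(n_per, d))
X1 = m1 + rng.normal(scale=0.05, size=(n_per, d))

v = (nu - mu) / np.linalg.norm(nu - mu)          # normal vector of the hyperplane
b = -v @ (m0 + m1) / 2                           # hyperplane through the mid-point
print("class 0 on the negative side:", np.mean(X0 @ v + b < 0))
print("class 1 on the positive side:", np.mean(X1 @ v + b > 0))
```
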
We prove the intermediate results in Lemma C.4 (degree and neighborhood distribution concentration inequalities) by utilizing Lemma C.5 (a Chernoff bound for Poisson random variables), and in Lemma C.6 (convoluted feature concentration) by making use of Lemma C.7 (Borell's inequality). Finally, given the requirement on $\alpha$ stated in Theorem 4.4, we argue that the convoluted node features in the two classes can be linearly separated with high probability.

Novelty of our Proof. The general structure of our proof follows that of Baranwal et al. [3]. However, our analysis requires a non-trivial adaptation of their proof. This is because we have a more general data model, DC-CSBM, of which the CSBM assumed by Baranwal et al. [3] is a restricted special case. In particular, we assume each edge is generated by a Poisson random variable following DC-SBM, instead of the Bernoulli random variable assumed by CSBM; we also incorporate the degree-correction parameter in our analysis to model node degree heterogeneity within communities, which gives us the flexibility to discuss linear separability for subgraphs with different levels of sparsity.

Before we state and prove Lemma C.4, let us first define the following events that we will work with.

Definition C.1 (Class Size Concentration). For any $\delta > 0$, define

$$
\mathbf{I}_1(\delta) = \left\{ \frac{n}{2}(1 - \delta) \leq |C_0|, |C_1| \leq \frac{n}{2}(1 + \delta) \right\}.
$$

Definition C.2 (Degree Concentration). For any $\delta' > 0$ and for each node $i \in [n]$, define

$$
\mathbf{I}_{2,i}(\delta') = \left\{ \frac{1}{2}(p + q)(1 - \delta')\theta_i \leq \frac{D_{ii}}{n} \leq \frac{1}{2}(p + q)(1 + \delta')\theta_i \right\}.
$$

Definition C.3 (Neighborhood Distribution Concentration). For any $\delta' > 0$ and for each node $i \in [n]$, define

$$
\begin{aligned}
\mathbf{I}_{3,i}(\delta') = &\left\{ \frac{(1 - \epsilon_i)p + \epsilon_i q}{p + q}(1 - \delta') \leq \frac{|C_0 \cap \mathcal{N}_i|}{D_{ii}} \leq \frac{(1 - \epsilon_i)p + \epsilon_i q}{p + q}(1 + \delta') \right\} \\
&\bigcap \left\{ \frac{(1 - \epsilon_i)q + \epsilon_i p}{p + q}(1 - \delta') \leq \frac{|C_1 \cap \mathcal{N}_i|}{D_{ii}} \leq \frac{(1 - \epsilon_i)q + \epsilon_i p}{p + q}(1 + \delta') \right\},
\end{aligned}
$$

where $\mathcal{N}_i$ denotes the set of nodes connected to node $i$.

Then in Lemma C.4, we argue that for nodes in the $\alpha$-subgroups of Definition 4.1, for an appropriately chosen $\alpha > 0$, the above events happen simultaneously with high probability.

Lemma C.4 (Concentration Inequalities). Given $\alpha \in (\frac{1}{\log n}, n]$, $C_0(\alpha), C_1(\alpha)$ defined by Definition 4.1, and $\mathcal{V}_{\alpha} = C_0(\alpha) \cup C_1(\alpha)$. Let $\delta = n^{-1/2 + \epsilon}$ and $\delta' = (\alpha \log n)^{-1/2 + \epsilon}$. Then for $\epsilon > 0$ small enough, we have that for any $c > 0$, there is some $C > 0$ such that

$$
\mathbb{P}\left( \mathbf{I}_1(\delta) \bigcap_{i \in \mathcal{V}_{\alpha}} \mathbf{I}_{2,i}(\delta') \bigcap_{i \in \mathcal{V}_{\alpha}} \mathbf{I}_{3,i}(\delta') \right) \geq 1 - \frac{C}{n^c}.
$$

Proof. Firstly, we consider the event $\mathbf{I}_1(\delta)$. Since $(\epsilon_i)_{i\in [n]}\sim \mathrm{Ber}(\frac{1}{2})$, by Hoeffding's inequality for independent Bernoulli random variables [38, Theorem 2.2.6], we have for any $\delta >0$ that

$$
\mathbb{P}\left( \left| \frac{1}{n}\sum_{i = 1}^{n}\epsilon_i - \frac{1}{2} \right| \geq \delta/2 \right) \leq 2\exp(-n\delta^2/2).
$$

Noticing that $\sum_{i=1}^{n} \epsilon_i = |C_1|$ and $|C_0| + |C_1| = n$, we can conclude that for any $\delta > 0$, the probability that the number of nodes in each class concentrates satisfies

$$
\mathbb{P}\left( \frac{|C_0|}{n}, \frac{|C_1|}{n} \in \left[ \frac{1}{2} - \delta, \frac{1}{2} + \delta \right] \right) \geq 1 - C\exp(-cn\delta^2),
$$

for some constants $C, c > 0$.

We now turn to the events $\{\mathbf{I}_{2,i}(\delta')\}_{i\in [n]}$. Notice that the node degrees are sums of independent Poisson random variables, and the sum of independent Poisson random variables is again a Poisson random variable. Hence, conditioning on $\theta = (\theta_{i})_{i\in [n]}$, for each node $i\in [n]$, we have

$$
D_{ii} \sim 1 + \operatorname{Poi}\left( \frac{n - 1}{2}(p + q)\theta_i \right),
$$

where $D_{ii}$ is the degree of node $i$, and

$$
\mathbb{E}[D_{ii}] = 1 + \frac{n - 1}{2}(p + q)\theta_i.
$$

To prove that $\mathbf{I}_{2,i}(\delta')$ occurs with high probability for each $i \in [n]$, we introduce the following result [38, Corollary 2.3.7], whose proof can be found in the referenced literature:

Lemma C.5 (Corollary 2.3.7 [38]). If $X \sim \operatorname{Poi}(\lambda)$, then for $t \in (0, \lambda]$, we have

$$
\mathbb{P}(|X - \lambda| \geq t) \leq 2\exp\left( -\frac{ct^2}{\lambda} \right).
$$

Here, we can let $t = \delta' \lambda$ where $\delta' \in (0,1]$ and get a tail bound as follows:

$$
\mathbb{P}\left( |D_{ii} - \mathbb{E}[D_{ii}]| \geq \delta' \mathbb{E}[D_{ii}] \right) \leq 2\exp\left( -\frac{c(\delta' \mathbb{E}[D_{ii}])^2}{\mathbb{E}[D_{ii}]} \right) = 2\exp\left( -c\, \mathbb{E}[D_{ii}]\, \delta'^2 \right).
$$

It follows that for each $i\in [n]$ and any $\delta'\in (0,1]$, we have

$$
\mathbb{P}\left( \frac{D_{ii}}{n} \in \left[ \frac{1}{2}(p + q)(1 - \delta')\theta_i, \frac{1}{2}(p + q)(1 + \delta')\theta_i \right]^{\mathrm{c}} \right) \leq C\exp\left( -cn(p + q)\theta_i\delta'^2 \right),
$$

for some $C, c > 0$.

We next consider the events $\{\mathbf{I}_{3,i}(\delta')\}_{i\in [n]}$. Observe that for each node $i$, we can decompose the node degree as $D_{ii} = D_{ii}^{\mathrm{intra}} + D_{ii}^{\mathrm{inter}}$, where

$$
D_{ii}^{\text{intra}} = \sum_{j \in \mathcal{N}(i)} \mathbb{1}\{\epsilon_j = \epsilon_i\},
$$

and

$$
D_{ii}^{\text{inter}} = \sum_{j \in \mathcal{N}(i)} \mathbb{1}\{\epsilon_j \neq \epsilon_i\}.
$$

Clearly, $D_{ii}^{\mathrm{intra}} = |C_{\epsilon_i}\cap \mathcal{N}_i|$ and $D_{ii}^{\mathrm{inter}} = |C_{1 - \epsilon_i}\cap \mathcal{N}_i|$ concentrate around $\frac{np\theta_i}{2}$ and $\frac{nq\theta_i}{2}$, respectively. Given the tail bound for $\{\mathbf{I}_{2,i}(\delta')\}_{i\in [n]}$, by a similar argument, we have for each $i\in [n]$ and any $\delta'\in (0,1]$,

$$
\mathbb{P}\left( \mathbf{I}_{3,i}(\delta') \right) \geq 1 - C\exp\left( -cn(p + q)\theta_i\delta'^2 \right),
$$

for some $C, c > 0$.

Define the union event $U(\delta, \delta') = \mathbf{I}_1(\delta) \bigcap_{i \in \mathcal{V}_{\alpha}} \mathbf{I}_{2,i}(\delta') \bigcap_{i \in \mathcal{V}_{\alpha}} \mathbf{I}_{3,i}(\delta')$. Recall that $\forall i \in \mathcal{V}_{\alpha}$, we have $\theta_i \geq \alpha$. We then choose $\delta = n^{-1/2 + \epsilon}$ and $\delta' = (\alpha \log n)^{-1/2 + \epsilon}$. Since $p, q = \omega\left(\frac{\log^2 n}{n}\right)$ by Assumption 4.3, a simple union bound yields that for $\epsilon > 0$ small enough and any $c > 0$ there is $C > 0$ such that

$$
\mathbb{P}\left( U\left( n^{-1/2 + \epsilon}, (\alpha \log n)^{-1/2 + \epsilon} \right) \right) \geq 1 - \frac{C}{n^c}. \tag{3}
$$

Finally, we establish the lower bound for $\alpha$ indicated in the statement, which is $\frac{1}{\log n}$. The reason why we need this lower bound is that if $\alpha$ is too small, then the subgroups $C_0(\alpha)$ and $C_1(\alpha)$ would be so sparse that their member nodes' degrees are too small to ensure the concentration inequalities.

+ By the definition of the event $U(\delta, \delta')$ and the union bound, we have
466
+
467
+ $$
468
+ \begin{array}{ll} \mathbb {P} \left(\mathbf {I} _ {1} (\delta) ^ {\mathrm {c}}\right) \leq C \exp (- c n \delta^ {2}) & \\ \quad \leq C \exp (- c n ^ {2 \epsilon}) & (\text{plug in } \delta = n ^ {- 1 / 2 + \epsilon}) \\ \quad \leq C / n ^ {c} & (\text{if we choose } \epsilon \geq \frac {\log \log n}{2 \log n} > 0), \end{array}
469
+ $$
470
+
471
+ and
472
+
473
+ $$
474
+ \begin{array}{ll} \mathbb {P} \left(\left(\bigcap_ {i \in \mathcal {V} _ {\alpha}} \mathbf {I} _ {2, i} (\delta^ {\prime}) \cap \bigcap_ {i \in \mathcal {V} _ {\alpha}} \mathbf {I} _ {3, i} (\delta^ {\prime})\right) ^ {\mathrm {c}}\right) \leq n \cdot C \exp \left(- c n (p + q) \alpha \delta^ {\prime 2}\right) & \\ \quad \leq n \cdot C \exp \left(- c \log^ {2} n \cdot \alpha \delta^ {\prime 2}\right) & (\text{by Assumption 4.3}) \\ \quad = C \exp (\log n - c \log^ {2} n \cdot \alpha (\alpha \log n) ^ {- 1 + 2 \epsilon}) & (\text{plug in } \delta^ {\prime} = (\alpha \log n) ^ {- 1 / 2 + \epsilon}) \\ \quad = C \exp (\log n - c \log n \cdot (\alpha \log n) ^ {2 \epsilon}) & \\ \quad = C \exp (\log n \cdot (1 - c \cdot (\alpha \log n) ^ {2 \epsilon})) & \\ \quad = C n ^ {1 - c \cdot (\alpha \log n) ^ {2 \epsilon}}. & \end{array}
475
+ $$
494
+
495
+ We want to ensure the last term stays in $O(1 / n^{\beta})$ for some $\beta > 0$ . Notice that if $\alpha < \log^{-1}n$ , then for small $c$ we cannot find a suitable $\epsilon > 0$ satisfying $1 - c \cdot (\alpha \log n)^{2\epsilon} < 0$ . Hence, a natural lower bound for $\alpha$ is $\frac{1}{\log n}$ , i.e., we require $\alpha > \frac{1}{\log n}$ .
496
+
497
+ Thus, combining Equation 3 with this fact, we complete the proof.
498
+
499
+ ![](images/b00d1c0f96f3e503e29d3791603f38dddd9d895df518651a7f95c79219f051f0.jpg)
500
+
501
+ Next, in Lemma C.6, we claim that given the adjacency matrix $\mathbf{A}$ , class memberships $(\epsilon_i)_{i\in [n]}$ , degree-corrected factors $(\theta_{i})_{i\in [n]}$ , and a pre-defined threshold $\alpha >0$ , with high probability the convoluted node features satisfy $\tilde{\mathbf{x}}_i\approx \frac{p\boldsymbol{\mu} + q\boldsymbol{\nu}}{p + q}$ for $i\in C_0(\alpha)$ and $\tilde{\mathbf{x}}_i\approx \frac{q\boldsymbol{\mu} + p\boldsymbol{\nu}}{p + q}$ for $i\in C_1(\alpha)$ .
502
+
503
+ Lemma C.6 (Convoluted Feature Concentration). Given $\alpha \in (\frac{1}{\log n}, n]$ , $C_0(\alpha), C_1(\alpha)$ defined by Definition 4.1, and $\mathcal{V}_{\alpha} = C_0(\alpha) \cup C_1(\alpha)$ . Conditionally on $\mathbf{A}$ , $(\epsilon_i)_{i \in [n]}$ and $(\theta_i)_{i \in [n]}$ , we have that for any $c > 0$ and some $C > 0$ , with probability at least $1 - \frac{C}{n^c}$ , for every node $i \in \mathcal{V}_{\alpha}$ and any unit vector $\mathbf{w}$ ,
504
+
505
+ $$
506
+ \left| \left\langle \tilde {\mathbf {x}} _ {i} - \frac {p \boldsymbol {\mu} + q \boldsymbol {\nu}}{p + q}, \mathbf {w} \right\rangle (1 + o (1)) \right| = O \left(\sqrt {\frac {\log n}{d n (p + q) \alpha}}\right) \quad \text {for } i \in C _ {0} (\alpha),
507
+ $$
508
+
509
+ $$
510
+ \left| \left\langle \tilde {\mathbf {x}} _ {i} - \frac {q \boldsymbol {\mu} + p \boldsymbol {\nu}}{p + q}, \mathbf {w} \right\rangle (1 + o (1)) \right| = O \left(\sqrt {\frac {\log n}{d n (p + q) \alpha}}\right) \quad \text {for } i \in C _ {1} (\alpha).
511
+ $$
512
+
513
+ Proof. Since $(\mathbf{X}, \mathbf{A})$ is sampled from DC-CSBM $(\pmb{\mu}, \pmb{\nu}, p, q, \theta)$ , when conditioning on $(\epsilon_{i})_{i \in [n]}$ , node $i$ 's feature satisfies $\mathbf{x}_{i} \sim \mathcal{N}(\pmb{m}_{i}, \frac{1}{d} I)$ , where $\pmb{m}_{i} = \pmb{\mu}$ if $i \in C_{0}$ and $\pmb{m}_{i} = \pmb{\nu}$ if $i \in C_{1}$ . We can also write
514
+
515
+ $$
516
+ \mathbf {x} _ {i} = (1 - \epsilon_ {i}) \boldsymbol {\mu} + \epsilon_ {i} \boldsymbol {\nu} + \frac {g _ {i}}{\sqrt {d}},
517
+ $$
518
+
519
+ where $g_{i}\sim \mathcal{N}(\mathbf{0},\mathbf{I})$ is a standard normal vector.
520
+
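+ For concreteness, the feature model just written down can be sampled with a few lines of numpy (a sketch; the values of $n$ , $d$ , $\boldsymbol{\mu}$ , and $\boldsymbol{\nu}$ below are arbitrary illustrations):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(1)
+
+ n, d = 200, 32                        # illustrative sizes
+ mu = np.full(d, 0.5)                  # class-0 mean (hypothetical)
+ nu = np.full(d, -0.5)                 # class-1 mean (hypothetical)
+ eps = rng.integers(0, 2, size=n)      # class memberships eps_i in {0, 1}
+
+ G = rng.standard_normal((n, d))       # rows g_i ~ N(0, I)
+ X = (1 - eps)[:, None] * mu + eps[:, None] * nu + G / np.sqrt(d)
+ # Each row satisfies x_i ~ N(m_i, I/d) with m_i = mu if eps_i = 0, else nu.
+ ```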
521
+ On the other hand, conditioning on the adjacency matrix $\mathbf{A}$ and class memberships $\epsilon = (\epsilon_{i})_{i\in [n]}$ , the mean of the convoluted feature of node $i$ can be written as
522
+
523
+ $$
524
+ m (i) = \mathbb {E} [ \tilde {\mathbf {x}} _ {i} | \mathbf {A}, \epsilon ] = \frac {1}{D _ {i i}} \sum_ {j \in [ n ]} \tilde {\mathbf {A}} _ {i j} \boldsymbol {m} _ {j},
525
+ $$
526
+
527
+ by the definition of the graph convolution operation $(\tilde{\mathbf{x}}_i = [\mathbf{D}^{-1}\tilde{\mathbf{A}}\mathbf{X}]_i)$ .
528
+
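+ A minimal numpy sketch of this convolution (ours; it assumes $\tilde{\mathbf{A}} = \mathbf{A} + \mathbf{I}$ , i.e., self-loops, a common convention, since the argument only uses that $D_{ii}$ is the $i$ -th row sum of $\tilde{\mathbf{A}}$ ):
+
+ ```python
+ import numpy as np
+
+ def graph_convolve(A: np.ndarray, X: np.ndarray) -> np.ndarray:
+     """Row-normalized graph convolution X_tilde = D^{-1} A_tilde X."""
+     A_tilde = A + np.eye(A.shape[0])   # add self-loops (assumption)
+     D = A_tilde.sum(axis=1)            # the degrees D_ii
+     return (A_tilde @ X) / D[:, None]
+
+ # The conditional mean m(i) is the same operation applied to the rows m_j:
+ # M = graph_convolve(A, M0) with M0[j] = mu if eps_j == 0, else nu.
+ ```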
529
+ Thus, for any unit vector $\mathbf{w}$ , we have
530
+
531
+ $$
532
+ \tilde {\mathbf {x}} _ {i} \cdot \mathbf {w} = \frac {1}{D _ {i i}} \sum_ {j \in [ n ]} \tilde {\mathbf {A}} _ {i j} \langle \mathbf {x} _ {j}, \mathbf {w} \rangle = \langle m (i), \mathbf {w} \rangle + \frac {1}{D _ {i i} \sqrt {d}} \sum_ {j \in [ n ]} \tilde {\mathbf {A}} _ {i j} \cdot \langle g _ {j}, \mathbf {w} \rangle . \tag {4}
533
+ $$
534
+
535
+ Let us define $F_{i} = \frac{1}{D_{ii}\sqrt{d}}\sum_{j\in [n]}\tilde{\mathbf{A}}_{ij}\cdot \langle g_j,\mathbf{w}\rangle$ and observe that $\langle g_j,\mathbf{w}\rangle$ is a standard Gaussian random variable for all $j\in [n]$ . Thus, we have that $F_{i}\sim \mathcal{N}(0,\frac{1}{dD_{ii}})$ , conditioning on the adjacency matrix $\mathbf{A}$ . Now we introduce Borell's inequality [2] to give a high-probability bound of $|F_{i}|$ for all $i\in \mathcal{V}_{\alpha}$ .
536
+
537
+ Lemma C.7 (Borell's Inequality, Theorem 2.1.1 in Adler et al. [2]). Let $F_{i}\sim \mathcal{N}(0,\sigma_{F_{i}}^{2})$ for each $i\in \mathcal{V}_{\alpha}$ . Then for any $K > 0$ , we have
538
+
539
+ $$
540
+ \mathbb {P} \big (\max _ {i \in \mathcal {V} _ {\alpha}} F _ {i} - \mathbb {E} [ \max _ {i \in \mathcal {V} _ {\alpha}} F _ {i} ] > K \big) \leq \exp (- \frac {K ^ {2}}{2 \max _ {i \in \mathcal {V} _ {\alpha}} \sigma_ {F _ {i}} ^ {2}}).
541
+ $$
542
+
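+ A quick Monte Carlo illustration (ours, with arbitrary parameters) of both Lemma C.7 and the $\mathbb{E}[\max_i F_i] \leq k' \sqrt{\log n} \max_i \sigma_{F_i}$ bound used below, run on i.i.d. Gaussians with a common variance (the lemma itself does not require independence):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(2)
+
+ N, sigma, trials = 1000, 0.3, 2000
+ maxima = sigma * rng.standard_normal((trials, N)).max(axis=1)
+
+ emp_mean = maxima.mean()
+ # Classical bound: E[max of N centered Gaussians] <= sigma * sqrt(2 log N).
+ print(emp_mean, sigma * np.sqrt(2 * np.log(N)))
+
+ K = 0.5
+ print(np.mean(maxima - emp_mean > K),        # empirical tail
+       np.exp(-K**2 / (2 * sigma**2)))        # Borell's inequality bound
+ ```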
543
+ We can further define the event $Q_{\alpha} = Q_{\alpha}(t) = \{\max_{i\in \mathcal{V}_{\alpha}}|F_i|\leq t\}$ . Observe that
544
+
545
+ $$
546
+ \begin{array}{l} \mathbb {P} \left(Q _ {\alpha} ^ {\mathrm {c}} | \mathbf {A}\right) = \mathbb {P} \left(\max _ {i \in \mathcal {V} _ {\alpha}} \left| F _ {i} \right| > t | \mathbf {A}\right) \\ \leq 2 \mathbb {P} (\max _ {i \in \mathcal {V} _ {\alpha}} F _ {i} > t | \mathbf {A}) \\ = 2 \mathbb {P} \left(\max _ {i \in \mathcal {V} _ {\alpha}} F _ {i} - \mathbb {E} \left[ \max _ {i \in \mathcal {V} _ {\alpha}} F _ {i} \right] > t - \mathbb {E} \left[ \max _ {i \in \mathcal {V} _ {\alpha}} F _ {i} \right] | \mathbf {A}\right). \\ \end{array}
547
+ $$
548
+
549
+ Let $U \coloneqq U(n^{-1/2 + \epsilon}, (\alpha \log n)^{-1/2 + \epsilon})$ be the union event defined as in Lemma C.4. Then, by Lemma C.7 and the definition of $\mathcal{V}_{\alpha}$ ,
550
+
551
+ $$
552
+ \begin{array}{l} \mathbb {P} \left(Q _ {\alpha} ^ {\mathbf {c}}\right) \leq \mathbb {P} \left(U \cap Q _ {\alpha} ^ {\mathbf {c}}\right) + \mathbb {P} \left(U ^ {\mathbf {c}}\right) \\ \leq 2 \exp (- c ^ {\prime} (t - \mathbb {E} [ \max _ {i \in \mathcal {V} _ {\alpha}} F _ {i} ]) ^ {2} \cdot d n (p + q) \alpha) + \frac {1}{n ^ {c}}, \\ \end{array}
553
+ $$
554
+
555
+ for any $c > 0$ and some $c' > 0$ .
556
+
557
+ By the definition of event $U(\delta, \delta')$ , we can derive the upper bound of $\max_{i \in \mathcal{V}_{\alpha}} \sigma_{F_i}$ as follows:
558
+
559
+ $$
560
+ \max _ {i \in \mathcal {V} _ {\alpha}} \sigma_ {F _ {i}} = \max _ {i \in \mathcal {V} _ {\alpha}} \sqrt {\frac {1}{d D _ {i i}}} \leq \max _ {i \in \mathcal {V} _ {\alpha}} \sqrt {\frac {1}{d \frac {n}{2} (p + q) (1 - \delta^ {\prime}) \theta_ {i}}} \leq \sqrt {\frac {2}{d n (p + q) (1 - \delta^ {\prime}) \alpha}}.
561
+ $$
562
+
563
+ Since $\delta^\prime$ is fixed, we have for some constants $k^{\prime},k > 0$ that
564
+
565
+ $$
566
+ \mathbb {E} \left[ \max _ {i \in \mathcal {V} _ {\alpha}} F _ {i} \right] \leq k ^ {\prime} \sqrt {\log n} \max _ {i \in \mathcal {V} _ {\alpha}} \sigma_ {F _ {i}} \leq k \sqrt {\frac {\log n}{d n (p + q) \alpha}}.
567
+ $$
568
+
569
+ By choosing $t = C' \sqrt{\frac{\log n}{dn(p + q)\alpha}}$ for some large constant $C' > k > 0$ , we can obtain
570
+
571
+ $$
572
+ t - \mathbb {E} [ \max _ {i \in \mathcal {V} _ {\alpha}} F _ {i} ] \geq C ^ {\prime} \sqrt {\frac {\log n}{d n (p + q) \alpha}} - k \sqrt {\frac {\log n}{d n (p + q) \alpha}} > 0.
573
+ $$
574
+
575
+ Thus, we have
576
+
577
+ $$
578
+ \begin{array}{l} \mathbb {P} (U \cap Q _ {\alpha}) \geq 1 - \mathbb {P} (U ^ {\mathbf {c}}) - \mathbb {P} (Q _ {\alpha} ^ {\mathbf {c}}) \\ \geq 1 - \frac {2}{n ^ {c}} - 2 \exp (- c ^ {\prime} (t - \mathbb {E} [ \max _ {i \in \mathcal {V} _ {\alpha}} F _ {i} ]) ^ {2} d n (p + q) \alpha) \\ \geq 1 - \frac {2}{n ^ {c}} - \frac {2}{n ^ {c ^ {\prime} (C ^ {\prime} - k) ^ {2}}}. \\ \end{array}
579
+ $$
580
+
581
+ Recall that when conditioning on the event $U$ , we have
582
+
583
+ $$
584
+ m (i) = \frac {p \boldsymbol {\mu} + q \boldsymbol {\nu}}{p + q} (1 + o (1)) \quad \text {for } i \in C _ {0} (\alpha), \tag {5}
585
+ $$
586
+
587
+ $$
588
+ m (i) = \frac {q \boldsymbol {\mu} + p \boldsymbol {\nu}}{p + q} (1 + o (1)) \quad \text {for } i \in C _ {1} (\alpha). \tag {6}
589
+ $$
590
+
591
+ Thus, on the event $U \cap Q_{\alpha}$ , we have for each node $i \in \mathcal{V}_{\alpha}$
592
+
593
+ $$
594
+ | \langle \tilde {\mathbf {x}} _ {i} - m (i), \mathbf {w} \rangle | = O \left(\sqrt {\frac {\log n}{d n (p + q) \alpha}}\right),
595
+ $$
596
+
597
+ which completes the proof.
598
+
599
+ ![](images/6d9224380331f82e0649c17670296e1e23652f58d9c587ecf20b7b2a837dc9dd.jpg)
600
+
601
+ Now we are ready to prove Theorem 4.4.
602
+
603
+ Proof. Recalling the definition of linear separability, we need to find some unit vector $\mathbf{v} \in \mathbb{R}^d$ and $b \in \mathbb{R}$ such that
604
+
605
+ $$
606
+ \langle \tilde {\mathbf {x}} _ {i}, \mathbf {v} \rangle + b < 0 \quad \text {for } i \in C _ {0} (\alpha),
607
+ $$
608
+
609
+ $$
610
+ \langle \tilde {\mathbf {x}} _ {i}, \mathbf {v} \rangle + b > 0 \quad \text {for } i \in C _ {1} (\alpha).
611
+ $$
612
+
613
+ We fix $\tilde{\mathbf{v}} = \frac{1}{2\gamma} (\pmb {\nu} - \pmb {\mu})$ and $\tilde{b} = -\frac{\langle\pmb{\mu} + \pmb{\nu},\tilde{\mathbf{v}}\rangle}{2}$ , where $\gamma = \frac{1}{2}\| \pmb {\mu} - \pmb {\nu}\| _2$ . By Assumption 4.3 and Lemmas C.4 and C.6, with probability at least $1 - O(n^{-c})$ for any $c > 0$ , for all $i\in C_0(\alpha)$ , we have
614
+
615
+ $$
616
+ \begin{array}{l} \langle \tilde {\mathbf {x}} _ {i}, \tilde {\mathbf {v}} \rangle + \tilde {b} \\ = \frac {\left\langle p \boldsymbol {\mu} + q \boldsymbol {\nu} , \tilde {\mathbf {v}} \right\rangle}{p + q} (1 + o (1)) + O \left(\sqrt {\frac {\log n}{d n (p + q) \alpha}}\right) + \tilde {b} \quad (\text{by Lemma C.6}) \\ = \left\langle \frac {(p - q) (\boldsymbol {\mu} - \boldsymbol {\nu})}{2 (p + q)}, \tilde {\mathbf {v}} \right\rangle (1 + o (1)) + O \left(\sqrt {\frac {\log n}{d n (p + q) \alpha}}\right) \\ = - \gamma \Gamma (p, q) (1 + o (1)) + O \left(\sqrt {\frac {\log n}{d n (p + q) \alpha}}\right) \\ = - \gamma \Gamma (p, q) (1 + o (1)) + o (\gamma) \quad (\alpha \in \omega \left(\frac {\log n}{d n (p + q) \gamma^ {2}}\right)) \\ < 0. \quad (\text{by Assumption 4.3}) \\ \end{array}
617
+ $$
618
+
619
+ Similarly, for all $i\in C_1(\alpha)$ , we have
620
+
621
+ $$
622
+ \begin{array}{l} \langle \tilde {\mathbf {x}} _ {i}, \tilde {\mathbf {v}} \rangle + \tilde {b} \\ = \frac {\left\langle q \boldsymbol {\mu} + p \boldsymbol {\nu} , \tilde {\mathbf {v}} \right\rangle}{p + q} (1 + o (1)) + O \left(\sqrt {\frac {\log n}{d n (p + q) \alpha}}\right) + \tilde {b} \quad (\text{by Lemma C.6}) \\ = \left\langle \frac {(p - q) (\boldsymbol {\nu} - \boldsymbol {\mu})}{2 (p + q)}, \tilde {\mathbf {v}} \right\rangle (1 + o (1)) + O \left(\sqrt {\frac {\log n}{d n (p + q) \alpha}}\right) \\ = \gamma \Gamma (p, q) (1 + o (1)) + O \left(\sqrt {\frac {\log n}{d n (p + q) \alpha}}\right) \\ = \gamma \Gamma (p, q) (1 + o (1)) + o (\gamma) \quad (\alpha \in \omega \left(\frac {\log n}{d n (p + q) \gamma^ {2}}\right)) \\ > 0. \quad (\text{by Assumption 4.3}) \\ \end{array}
623
+ $$
624
+
625
+ The above two inequalities imply the linear separability of $\{\tilde{\mathbf{x}}_i,i\in \mathcal{V}_\alpha \}$ , which completes the proof.
626
+
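+ To make the separating rule concrete, the following self-contained sketch (ours) samples a simplified instance (a vanilla SBM, i.e., $\theta_i \equiv 1$ , with arbitrary parameter values), applies the convolution, and checks the sign condition with the $\tilde{\mathbf{v}}$ and $\tilde{b}$ from the proof:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(3)
+
+ n, d, p, q = 400, 32, 0.2, 0.05
+ mu, nu = np.full(d, 0.5), np.full(d, -0.5)
+ eps = rng.integers(0, 2, size=n)
+
+ # Vanilla SBM adjacency (theta_i = 1 everywhere, purely for illustration).
+ probs = np.where(eps[:, None] == eps[None, :], p, q)
+ A = np.triu((rng.random((n, n)) < probs).astype(float), 1)
+ A = A + A.T
+
+ X = np.where(eps[:, None] == 0, mu, nu) + rng.standard_normal((n, d)) / np.sqrt(d)
+
+ A_tilde = A + np.eye(n)
+ X_conv = (A_tilde @ X) / A_tilde.sum(axis=1, keepdims=True)
+
+ gamma = 0.5 * np.linalg.norm(mu - nu)
+ v = (nu - mu) / (2 * gamma)      # the unit vector v_tilde
+ b = -np.dot(mu + nu, v) / 2      # the offset b_tilde
+ scores = X_conv @ v + b
+
+ # Class 0 should score negative, class 1 positive.
+ print("linearly separated:", np.all((scores > 0) == (eps == 1)))
+ ```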
627
+ Table 3: Description of parameters used in the controlled experiments.
628
+
629
+ <table><tr><td>Parameter Name</td><td>Description</td></tr><tr><td>n</td><td>Number of vertices in the graph.</td></tr><tr><td>cluster size slope</td><td>The slope of cluster sizes when ordered by size.</td></tr><tr><td>feature dimension</td><td>The number of dimensions of node features.</td></tr><tr><td>feature center distance</td><td>Distance between feature cluster centers.</td></tr><tr><td>p/q ratio</td><td>The ratio of intra-class edge probability to inter-class edge probability.</td></tr><tr><td>average degree</td><td>The average expected degree of nodes.</td></tr><tr><td>power exponent</td><td>The power-law exponent used to generate expected node degrees.</td></tr><tr><td>feature cluster variance</td><td>Variance of feature clusters around their centers.</td></tr></table>
630
+
631
+ Table 4: Remaining parameters used in the controlled experiments. In each experiment, the bracketed parameter is varied to generate synthetic datasets with the data property indicated in the first column.
632
+
633
+ <table><tr><td>Experiments</td><td>p/q ratio</td><td>Average Degree</td><td>Power Exponent</td><td>Feature Cluster Variance</td></tr><tr><td>Gini-Degree</td><td>3</td><td>20</td><td>[1.5, 2, 2.5, 3, 5]</td><td>0.25</td></tr><tr><td>Average Degree</td><td>3</td><td>[10, 20, 30, 40, 50]</td><td>2</td><td>0.25</td></tr><tr><td>Edge Homogeneity</td><td>[1,2,3,5,10]</td><td>20</td><td>2</td><td>0.1</td></tr><tr><td>In-Feature Similarity</td><td>2</td><td>20</td><td>2</td><td>[2, 1, 0.5, 0.2, 0.1]</td></tr><tr><td>Feature Angular SNR</td><td>2</td><td>20</td><td>2</td><td>[2, 1, 0.5, 0.2, 0.1]</td></tr></table>
634
+
635
+ # D Controlled Experiments of Identified Salient Factors
636
+
637
+ In Section 3.3, we identified six prominent dataset properties that correlate with some or all of the GNN models' performance. In Section 5, we presented controlled experiments for Gini-Degree to verify its relationship to GNNs' performance (Table 2). In this section, we further conduct controlled experiments for all the remaining identified salient factors, except for Pseudo Diameter, which is hard to control by manipulating the explicit parameters provided by GraphWorld.
638
+
639
+ Across all experiments, we fix the number of nodes to $n = 5000$ , the cluster size slope to 0.0, the number of clusters to 4, the feature dimension to 16, and the feature center distance to 0.05. In each experiment, we keep most of the remaining GraphWorld parameters fixed and vary only one of them: the $p / q$ ratio, average degree, power exponent, or feature cluster variance. We give a short description of all the parameters in Table 3. For completeness, we summarize the values of the remaining parameters used in all four experiments in Table 4.
640
+
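+ For reference, the experimental grid of Tables 3 and 4 can be written down as a plain configuration dictionary (a sketch with our own key names; actual GraphWorld API calls are deliberately omitted). The In-Feature Similarity and Feature Angular SNR experiments share one configuration:
+
+ ```python
+ # Fixed settings shared by all controlled experiments (Section D).
+ FIXED = {
+     "n": 5000,
+     "cluster_size_slope": 0.0,
+     "num_clusters": 4,
+     "feature_dimension": 16,
+     "feature_center_distance": 0.05,
+ }
+
+ # Per-experiment defaults plus the single swept parameter (Table 4).
+ EXPERIMENTS = {
+     "Gini-Degree": {"p_q_ratio": 3, "average_degree": 20,
+                     "feature_cluster_variance": 0.25,
+                     "sweep": ("power_exponent", [1.5, 2, 2.5, 3, 5])},
+     "Average Degree": {"p_q_ratio": 3, "power_exponent": 2,
+                        "feature_cluster_variance": 0.25,
+                        "sweep": ("average_degree", [10, 20, 30, 40, 50])},
+     "Edge Homogeneity": {"average_degree": 20, "power_exponent": 2,
+                          "feature_cluster_variance": 0.1,
+                          "sweep": ("p_q_ratio", [1, 2, 3, 5, 10])},
+     # Also drives Feature Angular SNR (see Table 7).
+     "In-Feature Similarity": {"p_q_ratio": 2, "average_degree": 20,
+                               "power_exponent": 2,
+                               "sweep": ("feature_cluster_variance",
+                                         [2, 1, 0.5, 0.2, 0.1])},
+ }
+
+ def configs(name):
+     """Yield one full parameter dict per swept value of the experiment."""
+     exp = EXPERIMENTS[name].copy()
+     key, values = exp.pop("sweep")
+     for v in values:
+         yield {**FIXED, **exp, key: v}
+ ```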
641
+ Tables 5, 6, and 7 show the results of the four controlled experiments. Note that varying feature cluster variance manipulates In-Feature Similarity and Feature Angular SNR simultaneously (Table 7). In general, all the results closely follow the regression results in Table 1 and the discussion in Section 3.3.
642
+
643
+ Table 5: Controlled experiment results for varying Average Degree. Standard deviations are derived from 5 independent runs. The performances of all models except for GAT, MixHop, and MLP have an evident positive correlation with Average Degree.
644
+
645
+ <table><tr><td>Average Degree</td><td>GCN</td><td>GAT</td><td>GraphSAGE</td><td>MoNet</td><td>MixHop</td><td>LINKX</td><td>MLP</td></tr><tr><td>10</td><td>0.71±0.018</td><td>0.67±0.009</td><td>0.725±0.002</td><td>0.556±0.024</td><td>0.696±0.001</td><td>0.693±0.006</td><td>0.632±0.004</td></tr><tr><td>20</td><td>0.823±0.001</td><td>0.734±0.013</td><td>0.797±0.006</td><td>0.593±0.012</td><td>0.806±0.003</td><td>0.825±0.001</td><td>0.54±0.024</td></tr><tr><td>30</td><td>0.839±0.005</td><td>0.722±0.017</td><td>0.801±0.002</td><td>0.761±0.005</td><td>0.756±0.002</td><td>0.852±0.003</td><td>0.653±0.004</td></tr><tr><td>40</td><td>0.876±0.003</td><td>0.742±0.006</td><td>0.825±0.001</td><td>0.795±0.002</td><td>0.794±0.003</td><td>0.876±0.002</td><td>0.648±0.003</td></tr><tr><td>50</td><td>0.9±0.004</td><td>0.734±0.019</td><td>0.86±0.002</td><td>0.814±0.003</td><td>0.788±0.011</td><td>0.89±0.005</td><td>0.651±0.002</td></tr></table>
646
+
647
+ Table 6: Controlled experiment results for varying Edge Homogeneity. Standard deviations are derived from 5 independent runs. The performances of all models except for MixHop and MLP have an evident positive correlation with Edge Homogeneity.
648
+
649
+ <table><tr><td>Edge Homogeneity</td><td>GCN</td><td>GAT</td><td>GraphSAGE</td><td>MoNet</td><td>MixHop</td><td>LINKX</td><td>MLP</td></tr><tr><td>0.249</td><td>0.737±0.004</td><td>0.565±0.009</td><td>0.732±0.005</td><td>0.515±0.004</td><td>0.836±0.002</td><td>0.823±0.005</td><td>0.744±0.033</td></tr><tr><td>0.375</td><td>0.873±0.002</td><td>0.825±0.011</td><td>0.847±0.003</td><td>0.57±0.009</td><td>0.945±0.002</td><td>0.93±0.003</td><td>0.93±0.001</td></tr><tr><td>0.452</td><td>0.917±0.002</td><td>0.887±0.004</td><td>0.896±0.007</td><td>0.598±0.005</td><td>0.947±0.001</td><td>0.949±0.002</td><td>0.784±0.09</td></tr><tr><td>0.559</td><td>0.925±0.002</td><td>0.89±0.004</td><td>0.925±0.004</td><td>0.678±0.003</td><td>0.913±0.005</td><td>0.943±0.005</td><td>0.9±0.004</td></tr><tr><td>0.702</td><td>0.946±0.004</td><td>0.933±0.004</td><td>0.953±0.001</td><td>0.802±0.003</td><td>0.942±0.001</td><td>0.959±0.001</td><td>0.865±0.004</td></tr></table>
650
+
651
+ Table 7: Controlled experiment results for varying In-Feature Similarity / Feature Angular SNR. Standard deviations are derived from 5 independent runs. The performances of all models except for MoNet have an evident positive correlation with In-Feature Similarity / Feature Angular SNR.
652
+
653
+ <table><tr><td>In-Feature Similarity</td><td>Feature Angular SNR</td><td>GCN</td><td>GAT</td><td>GraphSAGE</td><td>MoNet</td><td>MixHop</td><td>LINKX</td><td>MLP</td></tr><tr><td>0.506</td><td>1.009</td><td>0.478±0.016</td><td>0.412±0.016</td><td>0.446±0.005</td><td>0.562±0.021</td><td>0.433±0.001</td><td>0.598±0.002</td><td>0.402±0.002</td></tr><tr><td>0.516</td><td>1.022</td><td>0.563±0.004</td><td>0.47±0.006</td><td>0.517±0.008</td><td>0.615±0.002</td><td>0.531±0.003</td><td>0.661±0.004</td><td>0.47±0.001</td></tr><tr><td>0.527</td><td>1.039</td><td>0.717±0.008</td><td>0.507±0.006</td><td>0.6±0.006</td><td>0.555±0.021</td><td>0.621±0.007</td><td>0.737±0.001</td><td>0.486±0.003</td></tr><tr><td>0.582</td><td>1.101</td><td>0.784±0.011</td><td>0.599±0.014</td><td>0.74±0.01</td><td>0.533±0.01</td><td>0.848±0.001</td><td>0.854±0.001</td><td>0.611±0.003</td></tr><tr><td>0.602</td><td>1.154</td><td>0.887±0.006</td><td>0.791±0.004</td><td>0.825±0.006</td><td>0.627±0.004</td><td>0.924±0.004</td><td>0.913±0.006</td><td>0.915±0.002</td></tr></table>
654
+
655
+ # E Robustness of the Sparse Regression Analysis
656
+
657
+ To demonstrate the robustness of the identified salient factors (defined in Section 3.3), we expand our sparse regression analysis to include five additional models that are more recent and popular. The models are TAGCN [10], GATv2 [5], SGC [40], APPNP [12] and GCNII [7]. For the additional experiments, we adopt the same configuration as in Appendix B. The updated analysis result is presented in Table 8.
658
+
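+ The analysis itself can be sketched as follows (an illustration, not necessarily the exact estimator used in the paper): one $\ell_1$ -penalized regression per GNN model on standardized graph properties, assembled into the coefficient matrix $\mathbf{B}$ whose zero entries correspond to unselected properties, as in Table 8:
+
+ ```python
+ import numpy as np
+ from sklearn.linear_model import Lasso
+ from sklearn.preprocessing import StandardScaler
+
+ def fit_coefficient_matrix(P, Y, alpha=0.01):
+     """Sparse multivariate regression, one L1-penalized fit per model.
+
+     P : (num_datasets, num_properties) graph data properties
+     Y : (num_datasets, num_models) model performances
+     Returns B of shape (num_properties, num_models); a zero entry means
+     the property was not selected for that model.
+     """
+     P_std = StandardScaler().fit_transform(P)
+     return np.column_stack([
+         Lasso(alpha=alpha).fit(P_std, Y[:, j]).coef_
+         for j in range(Y.shape[1])
+     ])
+ ```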
659
+ We can observe that the widely influential factors and the narrowly influential factors all remain salient after incorporating the five additional models. The results show that our proposed multivariate regression analysis is robust to the choice of GNN models.
660
+
661
+ Table 8: The estimated coefficient matrix $\mathbf{B}$ of the multivariate sparse regression analysis with five additional GNN models. These five models are indicated in blue. Each entry indicates the strength (magnitude) and direction (+, -) of the relationship between a graph data property and the performance of a GNN model. The six most salient data properties are indicated in bold. Notice that these factors are the same as the ones we presented in Section 3.3.
662
+
663
+ <table><tr><td>Graph Data Property</td><td>GCN</td><td>GAT</td><td>GraphSAGE</td><td>MoNet</td><td>MixHop</td><td>LINKX</td><td>MLP</td><td>TAGCN</td><td>GATv2</td><td>SGC</td><td>APPNP</td><td>GCNII</td></tr><tr><td>Edge Density</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0.0279</td><td>0.0937</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Average Degree</td><td>0.2136</td><td>0</td><td>0.098</td><td>0.1047</td><td>0</td><td>0.3362</td><td>0</td><td>0.173</td><td>0</td><td>0.4588</td><td>0</td><td>0</td></tr><tr><td>Pseudo Diameter</td><td>0</td><td>-0.3824</td><td>-0.1608</td><td>-0.0173</td><td>-0.4915</td><td>-0.3937</td><td>-0.6191</td><td>-0.2514</td><td>-0.1428</td><td>-0.0816</td><td>-0.401</td><td>-0.2962</td></tr><tr><td>Degree Assortativity</td><td>0</td><td>0</td><td>0</td><td>-0.0587</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>RSLCC</td><td>0.1014</td><td>0</td><td>0</td><td>0.0673</td><td>0</td><td>0.1312</td><td>0</td><td>0.0333</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>ACC</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>-0.0523</td><td>-0.1139</td><td>0.0276</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Transitivity</td><td>0</td><td>-0.0458</td><td>0</td><td>-0.148</td><td>0</td><td>0.2168</td><td>0</td><td>0</td><td>0</td><td>-0.0795</td><td>0</td><td>-0.0315</td></tr><tr><td>Degeneracy</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>-0.1555</td><td>0</td><td>-0.0652</td><td>-0.3099</td><td>-0.0276</td><td>0</td></tr><tr><td>Gini-Degree</td><td>-0.4437</td><td>-0.2955</td><td>-0.3313</td><td>-0.292</td><td>-0.4269</td><td>-0.3681</td><td>-0.1993</td><td>-0.3838</td><td>-0.2043</td><td>-0.1907</td><td>-0.3021</td><td>-0.33</td></tr><tr><td>Edge Homogeneity</td><td>0.714</td><td>0.4197</td><td>0.7241</td><td>0.8108</td><td>0.6396</td><td>0.2017</td><td>0.4777</td><td>0.7147</td><td>0.7007</td><td>0.2817</td><td>0.7962</td><td>0.7184</td></tr><tr><td>In-Feature Similarity</td><td>0.3103</td><td>0.0926</td><td>0.1878</td><td>0.0989</td><td>0.4576</td><td>0.6406</td><td>0.2421</td><td>0.4394</td><td>0.0359</td><td>0</td><td>0</td><td>0.0255</td></tr><tr><td>Out-Feature Similarity</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Feature Angular SNR</td><td>0.2492</td><td>0.0393</td><td>0.2455</td><td>0</td><td>0.2355</td><td>0.3564</td><td>0.3682</td><td>0.1354</td><td>0.3308</td><td>-0.0997</td><td>0.2733</td><td>0.359</td></tr><tr><td>Homophily Hat</td><td>0</td><td>0.4569</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0.4795</td><td>0</td><td>0</td></tr><tr><td>Attribute Assortativity</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr></table>
ametadatadrivenapproachtounderstandgraphneuralnetworks/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:52fff0f73303c4024ed418ccd4fac0d328bfd0d567310f9dc28a7c3b58e7a4c4
3
+ size 833019
ametadatadrivenapproachtounderstandgraphneuralnetworks/layout.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fcadaf6578a82f96426e859813dc0eb523aaf3a505739d7cab4a0fb9469eef6d
3
+ size 823636
amultimodalglobalinstancetrackingbenchmarkmgitbetterlocatingtargetincomplexspatiotemporalandcausalrelationship/3923ddea-b059-47c8-8f3f-111e24127343_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:926d214b5786cb3a5161f7e2bdb9542871b3fc51d71a8874d2255c187aeef73c
3
+ size 146801
amultimodalglobalinstancetrackingbenchmarkmgitbetterlocatingtargetincomplexspatiotemporalandcausalrelationship/3923ddea-b059-47c8-8f3f-111e24127343_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:55aa3f8c912f06d953edaf7bdd746f6fe949a4d6e004209769ff5f39559789b7
3
+ size 176817
amultimodalglobalinstancetrackingbenchmarkmgitbetterlocatingtargetincomplexspatiotemporalandcausalrelationship/3923ddea-b059-47c8-8f3f-111e24127343_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4643d81879c29f6e53ff2adfefd4992be941e1695917ed90cc60592ba50e1e50
3
+ size 6483639
amultimodalglobalinstancetrackingbenchmarkmgitbetterlocatingtargetincomplexspatiotemporalandcausalrelationship/full.md ADDED
@@ -0,0 +1,639 @@
1
+ # A Multi-modal Global Instance Tracking Benchmark (MGIT): Better Locating Target in Complex Spatio-temporal and Causal Relationship
2
+
3
+ Shiyu Hu<sup>1,2</sup> Dailing Zhang<sup>1,2</sup> Meiqi Wu<sup>3</sup> Xiaokun Feng<sup>1,2</sup>
4
+
5
+ Xuchen Li $^{4}$ Xin Zhao $^{1,2}$ Kaiqi Huang $^{1,2,5}$
6
+
7
+ $^{1}$ School of Artificial Intelligence, University of Chinese Academy of Sciences
8
+
9
+ $^{2}$ Institute of Automation, Chinese Academy of Sciences
10
+
11
+ $^{3}$ School of Computer Science and Technology, University of Chinese Academy of Sciences
12
+
13
+ $^{4}$ School of Computer Science, Beijing University of Posts and Telecommunications
14
+
15
+ $^{5}$ Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences
16
+
17
+ {hushiyu2019, zhangdailing2023, fengxiaokun2022}@ia.ac.cn, wumeiqi18@mails.ucas.ac.cn,
18
+
19
+ xuchenli@bupt.edu.cn, {xzhao,kqhuang}@ia.ac.cn
20
+
21
+ # Abstract
22
+
23
+ Tracking an arbitrary moving target in a video sequence is the foundation for high-level tasks like video understanding. Although existing visual-based trackers have demonstrated good tracking capabilities in short video sequences, they always perform poorly in complex environments, as represented by the recently proposed global instance tracking task, which consists of longer videos with more complicated narrative content. Recently, several works have introduced natural language into object tracking, desiring to address the limitations of relying only on a single visual modality. However, these selected videos are still short sequences with uncomplicated spatio-temporal and causal relationships, and the provided semantic descriptions are too simple to characterize video content. To address these issues, we (1) first propose a new multi-modal global instance tracking benchmark named MGIT. It consists of 150 long video sequences with a total of 2.03 million frames, aiming to fully represent the complex spatio-temporal and causal relationships coupled in longer narrative content. (2) Each video sequence is annotated with three semantic grains (i.e., action, activity, and story) to model the progressive process of human cognition. We expect this multi-granular annotation strategy can provide a favorable environment for multi-modal object tracking research and long video understanding. (3) Besides, we execute comparative experiments on existing multi-modal object tracking benchmarks, which not only explore the impact of different annotation methods, but also validate that our annotation method is a feasible solution for coupling human understanding into semantic labels. (4) Additionally, we conduct detailed experimental analyses on MGIT, and hope the explored performance bottlenecks of existing algorithms can support further research in multi-modal object tracking. The proposed benchmark, experimental results, and toolkit will be released gradually on http://videocube.aitestunion.com/.
24
+
25
+ # 1 Introduction
26
+
27
+ Single object tracking (SOT) is an important computer vision task that aims to locate an arbitrary moving target in a video sequence, and can be regarded as the foundation for high-level tasks like video understanding. In the past decade, researchers have proposed numerous high-quality benchmarks [4, 5, 6, 7, 8, 9] for the visual-based SOT task, and a series of trackers [10, 11, 12, 13, 14,
28
+
29
+ ![](images/23e2d77337f6bbed671987d33d726072e5496668729812b4049e76fb584ed3fc.jpg)
30
+
31
+ ![](images/75a7c3f7f300fae28aa0a90efd6730c898c7c906ec8a7b14a1ea2a1a2849d140.jpg)
32
+
33
+ ![](images/b3c26e1379bf7a61c0f6a386f20d2967410af9aca19c529c33d8947eeb8c983d.jpg)
34
+ OTB-Lang Liquor sequence: brown liquor bottle
35
+
36
+ ![](images/58c20faf085a109365abc23aaa6253e01dc3e3249d7b44dbab06cd92738b8d59.jpg)
37
+
38
+ ![](images/ae05823f5a012ebf7c11001b73f456040fbdeed493dda699d3a46f6a987ca724.jpg)
39
+
40
+ ![](images/e44f547e66dab8e64b7e06116874d18d71dfc1c0b2c846dc429329aaf710eed2.jpg)
41
+
42
+ ![](images/c12f3bb226762a2e8ea60b9ea7f707a5af43ba7d7655fe25bb036af1ed5a279a.jpg)
43
+
44
+ ![](images/3aceab2cb902837817166356eb33007097cd4041b628366da804b3141f247bea.jpg)
45
+
46
+ ![](images/3f97a9296ca03c9fe3b7f6659e9df2b2b275bb7e7fa5c88c9469935d7e499c95.jpg)
47
+ LaSOT airplane-1 sequence: white airplane landing on ground
48
+
49
+ ![](images/1a3cdaca9287a163daefb551ca4e1cccfa0c1a4d1c86934e4e6c28df9d3fd5e4.jpg)
50
+
51
+ ![](images/875a774da0c48cb0d037d9621538a338c3b9d1608f5e686ea0c9db30352d8c9c.jpg)
52
+
53
+ ![](images/6f5e3ad20511e8e2f587795822569553b1bad93ebce958623abe6bb2d994b462.jpg)
54
+
55
+ ![](images/e104e351d4558ce4e912a09ec6f931fbeaed36bcec5cdf35051ffc7244eb5b9e.jpg)
56
+
57
+ ![](images/c8ca20544cd6bfc98e58d3eda976d20c9fed56d511098fb7474101320ff4fd9b.jpg)
58
+
59
+ ![](images/73723c4709998b1fb5f27586d2a1d2b48b204d46c3059ea45f86eef67c69b6c2.jpg)
60
+
61
+ ![](images/78f05928667e3b889e636647b0c7c6753e409867e98f730814f2ac4fcc0b91e1.jpg)
62
+ TNL2k Arrow_Video_ZZ04_done sequence: the second arrow from left to right
63
+
64
+ ![](images/388b20dd8511cf5da9e8d6879fedfa262ce069889080236324d41b626326aa20.jpg)
65
+
66
+ ![](images/ce4443bf9032e334a0ed114adcb9d7d3f220f90446b34be159f7217c37ab8e25.jpg)
67
+
68
+ ![](images/b8ef9bd466efbad88f82cf5a5797ee14133fcb9c847111ac4f0390409823361e.jpg)
69
+
70
+ ![](images/bed301ffdfb03c68c65fee3aa2d32c050b7ef3cfb00e12fb5bc2f5bf35044bcb.jpg)
71
+
72
+ ![](images/9dcb56eb3b231448a56bde5b98c471828ed2761369b1a2ab0eca83705c4684a3.jpg)
73
+
74
+ ![](images/f0c46b80a52dfc732d2b730ed33f520af7abc08d347e0b1a0ab45f25a361c404.jpg)
75
+ Action 1: A male secret agent wearing a black suit walks in the washroom.
76
+
77
+ ![](images/9e2ea8b3524c24fddab2affa047c2dcd0c2902f6dc83daa4e8192b5afdcf5dd2.jpg)
78
+
79
+ ![](images/5b9d26b79837577a1cdf4fa8ed0ae4f6b3888504becc2300291075b3467b8dc4.jpg)
80
+
81
+ ![](images/59f3d6457b82f4aef627f2f25e99bac19053f6131964869bdd7b25e25ee5f09e.jpg)
82
+
83
+ ![](images/2c8602778ba4057d6a5ec7c3a96a808dfba1fb9d900dd8d935072db25f974c34.jpg)
84
+
85
+ ![](images/8f7c31d298283807a0c01a003b71afbde765af36d94dbdf53087a74eecc13346.jpg)
86
+
87
+ ![](images/cea0be42ac8f96049d282acc8d6e056810da8850d4fe2bc4fd18eb71fef80530.jpg)
88
+
89
+ ![](images/5248e815e0d841f839a10ccadb598448f5c38f25ae99e2de3a7f65473b41d6c1.jpg)
90
+ Activity 1: A male secret agent wearing a black suit walks in the washroom, and stands near a man wearing a light grey suit. They fight, then the male secret agent wins, and lifts the insensible grey-suit man to the washroom cubicle.
91
+
92
+ ![](images/6011972f76e86671337474770b09ddbd51662d95b7588d623a63428b6d014918.jpg)
93
+
94
+ ![](images/ec6809e53b389e464fcc8dd7ca2cef6aa5d5074f563dcdba9b040bad5a749850.jpg)
95
+
96
+ ![](images/7126fe1e354de214aada3ed92c1f50890ab75af5832fee9157deee97e47f32f3.jpg)
97
+
98
+ ![](images/0a447941d7b841be272ad1754d7e86307a0e344ab25a4f24aa2f0977d691a600.jpg)
99
+
100
+ ![](images/7056e2ec1a5d5524b0c082640738a5ebbc21cde3392a674a815d863745a2bd3a.jpg)
101
+ Story: A male secret agent wearing a black suit walks in the washroom, and stands near a man wearing a light grey suit. They fight, and the male secret agent wins. He then lifts the insensible grey-suit man to the washroom cubicle. The male secret agent crouches in the washroom cubicle and checks the insensible grey-suit man. Suddenly, the grey-suit man wakes up, and they fight together again in the washroom. Eventually, the male secret agent wins the fight. After the male secret agent talks with a woman wearing a brown suit, he again lifts the insensible grey-suit man to the washroom cubicle. Finally, the male secret agent lefts the washroom after talking with the brown-suit woman.
102
+
103
+ ![](images/bc70b552300d106f4286a687524b94c24857be3d5f421460a32e73ee194e48dd.jpg)
104
+
105
+ ![](images/27abc903aecef535abe832827da06d28fbad4455343e033d34ff1d60049998ec.jpg)
106
+
107
+ ![](images/fed4b0da92ae47bfd8140415524b7c774678c0d6c3fc9c865eb29e388bf3596b.jpg)
108
+
109
+ ![](images/2799f5fd229ce865b0a6035324461afb1c427c22bfedb46527630a0553883ff6.jpg)
110
+
111
+ ![](images/31f863ba3758e352488948893c2a5cac76d442636eaced4b4def59a332a794f8.jpg)
112
+ Figure 1: Comparison of MGIT and other multi-modal object tracking benchmarks. (A-C) Examples of video content and semantic descriptions on OTB-Lang [1], LaSOT [2], and TNL2K [3]. The green bounding box (BBox) indicates ground truth, while the red dashed BBox indicates other objects that satisfy the semantic description. These benchmarks have short sequences with simple narrative content. Besides, their semantic labels mainly describe the first frame, which may misguide algorithms. (D1-D3) An example of the multi-granular annotation strategy used by MGIT. Compared to existing benchmarks, MGIT contains longer sequences with more complex narratives, and the multi-granular annotation provides more prosperous and flexible information to portray long videos.
113
+
114
+ 15, 16] have demonstrated good tracking capabilities in these environments, especially in short video sequences ranging from hundreds to thousands of frames. However, researchers noticed that most trackers always perform poorly in longer videos with more complicated narrative content. Besides, only relying on a single visual modality also limits the application scenarios. Thus, several works have begun to offer additional semantic annotations for SOT task.
115
+
116
+ As the first multi-modal SOT benchmark, OTB-Lang [1] provides a language description for the classic OTB [5] benchmark, hoping to provide a more natural human-machine interaction method. The long-term tracking benchmark LaSOT [2, 17] also supplies a semantic annotation for each sequence, desiring to utilize linguistic features to improve the tracking performance. TNL2k [3] aims to achieve more flexible and accurate tracking with more explicit information (e.g., location information) in the semantic description. Although these multi-modal benchmarks have introduced semantic information into visual object tracking, they still face the following problems. (1) Short sequences with uncomplicated spatio-temporal and causal relationships: Existing works mainly focus on videos with hundreds to thousands of frames (the average sequence lengths of OTB-Lang, LaSOT, and TNL2k are 590 frames, 2,502 frames, and 622 frames), while shorter video sequences are often insufficient to reflect complex narrative content. (2) Simple semantic descriptions: The quality of semantic information is critical to multi-modal trackers' performance, and incorrect or ambiguous semantic information may misguide algorithms into tracking interference [18]. However, the semantic labels in existing works mainly describe the state in the first frame, and lack a portrayal of the complete sequence. For example, the brown liquor bottle description of Figure 1 (A) cannot distinguish the object from the interference (another brown liquor bottle). In Figure 1 (B), white airplane landing on ground may also misdirect trackers to locate another airplane that is already parked on the ground to the right. In Figure 1 (C), the second arrow from left to right only represents the object state at the beginning of the sequence; as the object moves, the position constraint contained in the semantic information becomes misleading. Consequently, a better way to construct a multi-modal benchmark is not to provide a simple natural language description for short videos, but to design a scientific way to couple human understanding of long videos into semantic labels.
119
+
120
+ Therefore, we should first select suitable long videos with rich narrative relationships to compose a complex environment. VideoCube [19] is a high-quality benchmark recently released for the global instance tracking (GIT) task (i.e., searching for an arbitrary user-specified instance in a video without any assumptions about motion consistency), which can be regarded as expanding the definition of the traditional SOT task (i.e., tracking a target in a single camera and a single scene) to successfully model the human visual tracking ability in a complex environment. Thus, we selected 150 representative long video sequences from VideoCube to form a new multi-modal benchmark named MGIT. The proposed new benchmark is consistent with the distribution of the original VideoCube in all dimensions (e.g., length, scene categories, object classes, motion modes, spatio-temporal consistency, and difficulty). Besides, we carefully check the content of each sequence to ensure that the selected data contain as many different types of video narratives as possible. Figure 1 (D1-D3) illustrates an example in MGIT. Compared with other works, sequences in MGIT include more complex content (i.e., the spatio-temporal variation and causal relationships are more complicated).
121
+
122
+ Besides, we design a multi-granular annotation strategy to provide scientific natural language information. On the one hand, existing research has indicated that complex narrative content can be perceived as several components and their relations, which is consistent with cognitive intuition [20]. On the other hand, the process of human comprehension and cognitive development is progressive as well [21, 22]. Therefore, designing a hierarchical structure to represent the video content is a reasonable annotation method. As shown in Figure 1 (D1-D3), each sequence in MGIT is annotated with three semantic grains (i.e., action, activity, and story). We hope this method can provide a step-by-step "learning" environment for multi-modal trackers, in which they can first learn multi-modal information at a fine-grained level (action), then gradually develop to a more comprehensive level (activity), and finally understand the complex video narrative at a story level like humans.
123
+
124
+ Contributions. (1) We propose a new multi-modal benchmark named MGIT. It consists of 150 long videos with a total of 2.03 million frames, and the average length of a single sequence is $5 \sim 22$ times longer than existing multi-modal benchmarks. We hope this new benchmark fully represents the complex spatio-temporal and causal relationships coupled in longer narrative content (Section 3.1). (2) We design a multi-granular annotation strategy for providing scientific semantic information. Via this strategy, MGIT can provide a favorable environment for multi-modal object tracking research and long video understanding (Section 3.2). (3) We execute comparative experiments on other benchmarks. Experimental results explore the impact of different annotation methods, and validate that the proposed strategy is a feasible solution for coupling human understanding into semantic labels (Section 4.2). (4) We conduct detailed experimental analyses on MGIT. Results indicate that existing methods still have significant room for improvement in multi-modal tracking (Section 4.3). The proposed benchmark, experimental results, and toolkit will be released gradually on http://videocube.aitestunion.com/.
125
+
126
+ # 2 Related Work
127
+
128
+ Benchmarks with Visual Information. Standard SOT trackers are always initialized in the first frame by a target's bounding box (BBox), then continuously locate it in the video sequence. Since 2013, many benchmarks represented by OTB [4, 5] and VOT [6, 23] have been released, and these standardized datasets with scientific evaluation mechanisms promote SOT research. With the development of deep learning techniques, these short-term and small-scale benchmarks have struggled to support data-driven trackers. Thus, several researchers have started to design larger-scale datasets like GOT-10k [9] and TrackingNet [8], while others have tried to collect data with longer videos and proposed long-term tracking benchmarks like OxUvA [24] and VOT-LT [25, 26]. Recently, some researchers have noticed that short-term and long-term tracking tasks include a continuous motion assumption in their definitions, restricting the experimental environments to single-camera and single-scene settings. Therefore, they propose the global instance tracking task [19] with a new benchmark named VideoCube to track an arbitrary moving target in any type of video.
131
+
132
+ Benchmarks with Visual and Semantic Information. Unlike numerous visual benchmarks that have evolved over a decade, multi-modal benchmarks combining visual and semantic information have only received attention lately. OTB-Lang [1] is the first multi-modal SOT benchmark, which provides an additional natural language description for sequences in the OTB100 [5] benchmark. However, the limited dataset scale has prevented the multi-modal SOT task from receiving widespread attention. After that, a large-scale and long-term tracking benchmark, LaSOT [2, 17], was released with multi-modal annotations. In the same year, researchers proposed the TNL2k [3] benchmark to achieve more flexible and accurate object tracking with natural language. These two benchmarks have provided a wealth of data and have facilitated the development of various multi-modal trackers.
133
+
134
+ ![](images/c8055b5f49fff0d1d697a2b3c594b857db0ad3e2be85a7f90b4012b2323038f4.jpg)
135
+ Figure 2: Comparison between MGIT and other SOT benchmarks, including visual-based (e.g., OTB50 [4], OTB100 [5], GOT-10k [9], TrackingNet [8], OxUvA [24], and VideoCube [19]) and multi-modal SOT benchmarks (e.g., OTB-Lang [1], LaSOT [2], and TNL2k [3]). The bubble diameter is in proportion to the total frames of a benchmark, and the vertical coordinate represents the average sequence length of each benchmark. Obviously, the proposed MGIT includes longer videos with multi-modal information.
136
+
137
+ As shown in Figure 2, existing works either focus on the visual modality alone, or concentrate on multi-modality but lack longer videos with complex content. Besides, Figure 1 indicates that a more scientific annotation strategy is also needed to provide high-quality semantic information. These limitations prompt us to propose MGIT, hoping to construct a more complex and flexible environment for research.
138
+
139
+ Algorithms with Bounding Box. Visual-based trackers always utilize the target's appearance and motion information to accomplish the tracking process, including the correlation filter (CF) based trackers [27, 28], Siamese neural network (SNN) based trackers [29, 30, 31, 32, 33, 34, 11, 35, 36, 10], the combination of CF and SNN [37, 38, 12, 13], and the transformer-based trackers [39, 14, 15, 16]. Before 2021, SNN-based trackers were the prevalent methods. Recently, transformer-based trackers have demonstrated exemplary performance and gradually become the dominant architecture.
140
+
141
+ Algorithms with Bounding Box and Natural Language. Tracking a moving target with visual and semantic information is a new task for SOT research; thus, representative works have mainly been released in the last two years. AdaSwitcher [3] is released with the TNL2k benchmark, which proposes a switcher that utilizes natural language to alternate the search mechanism (i.e., switch between the global-search visual grounding module and the local visual tracking module). GTI [40] decomposes the visual language tracking task into three sub-tasks: tracking, grounding, and integration, and verifies the performance of each sub-module. SINT [41] proposes a semantic information fusion module that can be utilized across various SNN-based trackers. VLT [42] introduces a modality mixer named ModaMixer with asymmetric ConvNet search, which aims to demonstrate that pure ConvNet models can achieve comparable results to state-of-the-art (SOTA) transformer-based algorithms. Besides, the proposed ModaMixer can further improve performance when directly applied to transformer-based trackers. JointNLT [18] unifies visual grounding and tracking as a coherent task (i.e., locating referred objects based on visual-language references). It employs a transformer-based architecture to model the relation between natural language and visual information.
144
+
145
+ # 3 Construction of MGIT
146
+
147
+ We propose a new multi-modal benchmark named MGIT and design a multi-granular annotation strategy for generating scientific semantic information. On the one hand, we have carefully selected 150 longer video sequences to form MGIT (please refer to Section A.2 in the Appendix for more details), hoping this complex environment can promote visual tracking and video understanding research. On the other hand, we hope this multi-granular annotation strategy can provide a step-by-step "learning" environment for multi-modal trackers. Just as humans increase their comprehension by gradually raising the learning difficulty, trackers can first learn at a fine-grained level (action), then at a more comprehensive level (activity), and finally accomplish a story-level understanding of long video sequences. A well-trained elite annotation team is selected to execute this task instead of crowdsourcing, and the annotation quality is ensured through various efforts. The detailed workflow is outlined in Section A.3.2 of the Appendix.
150
+
151
+ # 3.1 Data Collection
152
+
153
+ ![](images/1e47be31adbf336a6363a9deda9759cfd70c19f641c6c53e16540200029c7906.jpg)
154
+
155
+ Story: A pink cartoon pig wearing red clothes talks to her family members on the grassland. Today, the red-clothes pig and her family aim to visit a castle. They go to the castle in a red car, and the red-clothes pig sits in the back. They stop the vehicle nearby the foothills and walk to the castle. At the entrance of the castle, they meet a white cartoon pig wearing gray armor. The red-clothes pig first talks with the gray-armor pig, then they are invited to visit the castle. The red-clothes pig walks with her family into the castle and sits beside a blue-clothes pig on the chair. After that, they have a meal in the castle's living room, and the red-clothes pink pig gets a gift from a yellow-clothes pig after the meal. Finally, the red-clothes pig walks with her family members on the stairway, and then stands at the top of the tower.
156
+
157
+ ![](images/09b7f0509bbdb846f16252e5d8617768140149e5f7023cc1f07f3e53422b6605.jpg)
158
+
159
+ Story: A black gorilla holding a lady in white crouches on a gray building, and some airplanes attack them. He then walks and climbs to the top of the grey building. After that, he stands atop the grey building, hits an airplane, fights with a gray soldier in the other airplane, and finally crouches on the gray building.
160
+
161
+ ![](images/6d8cce933286cc6a0060f082b0b88b8b2f4870f98449b51edb1c1183147da933.jpg)
162
+
163
+ Story: A black motorcycle is checked by a man with orange and white clothes in the yard; then, the man rides this black motorcycle in the yard. As an obstacle race, the black motorcycle first bounces across obstacles in the playground, then bounces across obstacles in the street. After that, it bounces across obstacles near the pool and across obstacles in the stream. After a brief break, the black motorcycle bounces across obstacles in the playground, then across obstacles near the pool, and finally across obstacles in the stream.
164
+
165
+ Story: A small basketball is played by a boy with a grey t-shirt and black shorts, and then inflated by a man with a red t-shirt and black pants in the skatepark. After that, the basketball is played by the boy, and then played by the man. After they practice, the basketball is held by the boy from the skatepark to outdoors; then it is played by the boy outdoors. Finally, the basketball is carried away by the boy.
166
+
167
+ ![](images/6c40b81feef4ed6e2e907fad215782185552c59555345ec73a4857406db57bc8.jpg)
168
+
169
+ Story: A brown cello is played by a man with a white shirt and black pants in the room. Story: A red cap is worn by a man with a gray t-shirt on the soccer court.
170
+
171
+ ![](images/ced678072b56de931bde67125d67f0d514313afc5c2d7d41c8bc6562c93e788d.jpg)
172
+ Figure 3: The representative data of MGIT. Here we illustrate six sequences with different aspects (e.g., narrativity, topics, virtuality, object classes, spatio-temporal continuity, and total frames).
173
+
174
+ MGIT follows the recently proposed large-scale benchmark VideoCube [19] to conduct the data collection. VideoCube refers to the film narrative (i.e., a chain of causally related events occurring in space and time) and proposes the 6D principle for benchmark construction. In this work, we divide the 6D principle into two parts. Four dimensions (i.e., object class, spatial continuity, temporal continuity, and total frames), together with narrativity and topic, form the new sequence-level selection criterion. The other two dimensions (i.e., motion mode and scene category) will be refined during fine-grained semantic annotation. Therefore, we first regard the original VideoCube as the candidate pool, then add the additional examination of narrativity and topic, and finally select 150 video sequences to form MGIT. The proportions of the train/val/test subsets are the same as in the original VideoCube; thus, MGIT contains $105 / 15 / 30$ sequences in the respective subsets. Taking Figure 3 as an example, here we present several dimensions considered in the data collection process:
175
+
176
+ Topic and Narrativity. We have divided the main video topics into six categories, which are cartoons, movies & TV shows, outdoor sports, regular sports, performances, and documentaries. Among them, cartoons and movies & TV shows usually have a high narrativity (i.e., the video content contains a solid causal relationship, as shown in Figure 3 A and B). Outdoor sports and regular sports contain rich patterns of motion, and these motions can be linked chronologically into a story, but the narrativity is usually simpler than in cartoons and movies (Figure 3 C and D). Compared to other topics, performances and documentaries usually record one action with low narrativity (Figure 3 E and F). However, these categorizations hold in most cases; it is worth noting that some performances (e.g., sketches with explicit narrative content on stage) and some documentaries (e.g., documentaries with causal teaching steps) can also have high narrativity.
177
+
178
+ Spatial Continuity and Temporal Continuity. Temporal continuity means the video content is developed according to the normal time flow (i.e., without fast-forwarding, fast-receding, or interpolation). Spatial continuity means the video content takes place in a fixed space.
179
+
180
+ **Virtuality.** Virtuality means that a video is computer-generated, like cartoons or games. The same content can look very different in virtual videos than in videos sampled from the real world; thus, virtual videos present a new challenge for object tracking and long video understanding.
181
+
182
+ # 3.2 Natural Language Annotation
183
+
184
+ ![](images/d6c0f2f744932bb4e5873d9c5960590ca6d802dd5ac418bb604853a7da7a13a1.jpg)
185
+ Figure 4: An example of action annotation. We label the target, motion pattern, third-party object, and scene for each action. The target to be tracked is determined in the first frame and does not change during the entire video sequence. A change in any of the other three elements will end the current action and proceed into the following action.
186
+
187
+ In this work, we design a multi-granular annotation strategy to provide scientific natural language information. Video content is annotated at three grains (i.e., action, activity, and story, as shown in Figure 1 D1-D3). This hierarchical structure for representing video content is motivated by existing works in computer vision [20, 43] and human cognition [21, 22]; for example, a recent method [43] decouples the video content into multiple granularities for the visual question-answering task [44], aiming to help algorithms better understand video information like humans.
188
+
189
+ Action. As shown in Figure 4, we use the following critical narrative elements to portray an action: tracking target (who), motion (what) and third-party object (if present), location (where), and time interval (when). On the one hand, the above elements are necessary to portray narrative content. On the other hand, these elements are also essential grammatical components to form complete sentences. In particular, we use Stanford CoreNLP [45], a widely used natural language processing toolkit, to check the semantic annotations of other multi-modal datasets. We find that more than half of these semantic descriptions are only annotated at the phrase level, lacking the necessary grammatical structure (the detailed statistic result has been shown in Section A.3.1 of the Appendix). Thus, compared with existing works, MGIT can describe more detailed narrative content.
190
+
191
+ Activity. An action describes what happens in a short period, while an activity can be seen as a collection of actions with clear causal relationships. A new activity is usually accompanied by a scene switch or an explicit change of the third-party object. Compared with the preceding action, if an action is better interpreted as the beginning (i.e., cause) of a new event rather than the ending (i.e., result) of an old one, it is regarded as the starting point of a new activity. As shown in Figure 4 and Figure 1 (D1-D3), the first four actions describe a complete causality (the target approaches the third party, they fight, and the third party is left insensible), while the 5th action starts a new event (examining the unconscious third party and conducting a second fight when he wakes up). Therefore, the 4th and 5th actions can be divided into two different activities.
192
+
193
+ ![](images/248a02edd31766aca3969b5ad44bc2c697ea51fd99113b64b0e5076c76d040e7.jpg)
194
+
195
+ ![](images/50a9bb09876b72fc96d3827f907b23234830881e89ca2470f6794707d3001c67.jpg)
196
+
197
+ ![](images/3a9f99359645d55608872321a95c4129ee8a1dc7128ed4167db1918aff926dd3.jpg)
198
+ (a) Topics
199
+ (d) Temporal Continuity (e) Spatial Continuity
200
+ Figure 5: Statistical analysis of key aspects in MGIT. (a-b) Distribution of topics and length of sequences. (c) Distribution of activities and actions. The bubble diameter is in proportion to the length of a sequence, the vertical coordinate and the horizontal coordinate represent the total activities and actions of this sequence. (d-f) Distribution of temporal continuity, spatial continuity, and narrativity. (g) The word cloud of semantic descriptions.
201
+
202
+ ![](images/977d829660d54c9f891bbb342837033ebc5df59ec3e894e0e075a08efa53b746.jpg)
203
+ (b) #Frames
204
+
205
+ ![](images/739144ad645bfb29a4936aa4c8f61b4dbaef71af51c2775202e79818084f7859.jpg)
206
+ (f) Narrativity
207
+
208
+ ![](images/8cc299232e1d4c95708824b5151687d7a2a3529c78cf301bd1529706b97cc59f.jpg)
209
+ (c) Activities, Actions and Frames
210
+
211
+ ![](images/961935b76e7bfe6e824459f55172d9db606068e27c95007df2f670b9646df9a4.jpg)
212
+ (g) Word Cloud
213
+
214
+ Story. Story is a high-level description. To avoid monotonous narration, we do not simply stack the existing actions and activities, but use connective words (e.g., first, then, after that, finally) to guide the content, making the temporal and causal relationships more precise.
215
+
216
+ Based on the data collection process and the multi-granular annotation strategy, we construct MGIT with 2.03 million frames, and provide detailed annotations with 150 stories, 621 activities, and 982 actions. The semantic descriptions contain 77,652 words, of which 921 are unique; more detailed analyses are illustrated in Figure 5.
217
+
218
+ # 4 Experimental Results
219
+
220
+ # 4.1 Datasets and Evaluation Methods
221
+
222
+ Datasets. We select OTB-Lang [1], TNL2k [3], LaSOT [2], and MGIT as experimental environments. Several variants of LaSOT are also considered: (1) $\mathrm{LaSOT_{Ext}}$ [17] complements LaSOT [2] with 150 newly added video sequences. (2) Figure 1 indicates that several semantic descriptions in LaSOT are ambiguous. Thus, 22 ambiguous and 20 unambiguous sequences are selected to form $\mathrm{LaSOT_{sub}}$, aiming to better analyze tracking performance with different kinds of natural language information. (3) $\mathrm{LaSOT_{NLC}}$ is a subset of $\mathrm{LaSOT_{sub}}$ formed by the 20 unambiguous sequences; we have carefully checked all the semantic and visual information in this subset.
+
+ ![](images/66463c2b64188a3de2c44e42f0ce01539a26c9f613ce0783a1236844fbc58f27.jpg)
+ Figure 6: Evaluation mechanisms of visual-based and multi-modal based trackers. (A) Traditional multi-modal tracking mechanism (i.e., only initialize a tracker with BBox and simple semantic information in the first frame). (B-D) Tracking with semantic information updates (i.e., initialize a tracker with BBox and semantic information in the first frame, then update the semantic information in each new interval). (E) Traditional one-pass evaluation (OPE) mechanism (i.e., only initialize a tracker with BBox in the first frame).
228
+
229
+ Evaluation Methods. As shown in Figure 6, various mechanisms are designed to evaluate tracking precision (PRE) and success rate (SR). We use $F_{t}$ to represent the $t$-th frame. (1) Precision is calculated based on the center distance between the predicted BBox $p_{t}$ and the ground truth BBox $g_{t}$ (i.e., $d_{t} = \| c_{p} - c_{g}\|_{2}$, where $c_{p}$ and $c_{g}$ represent center points). By calculating the proportion of frames where $d_{t} \leq \theta_{d}$ and plotting curves at different thresholds, we can generate a precision plot. It is common to use $\theta_{d} = 20$ as the criterion to rank trackers by PRE. (2) Furthermore, researchers [19] provide the normalized precision (N-PRE) to eliminate the effect of target size. When a tracker's predicted center lies outside the ground-truth, an additional penalty term, represented by $d_{t}^{p}$, is included to account for the shortest distance between the center point $c_{p}$ and the edge of the ground-truth. The final result is then normalized to a range of 0 to 1 (i.e., $N(d_{t}) = \frac{d_{t}^{\prime}}{\max(\{d_{i}^{\prime}|i\in F_{t}\})}$, where $d_{t}^{\prime} = d_{t} + d_{t}^{p}$). Similarly, the normalized precision plot is generated by plotting statistical outcomes derived from various $\theta_{d}^{\prime}$ values. (3) Besides, frames with the intersection over union (IoU) $\Omega(p_{t},g_{t}) = \frac{p_{t} \bigcap g_{t}}{p_{t} \bigcup g_{t}} \geq \theta_{s}$ can be regarded as successfully tracked, and SR measures the percentage of successfully tracked frames. Plotting the results over various $\theta_{s}$ values yields the success plot. For more details on the evaluation metrics, please refer to Section B.1 in the Appendix.
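+
+ The following sketch implements these three metrics with NumPy, assuming predicted and ground-truth boxes are given as (x, y, w, h) arrays of shape (N, 4); frames without a target (see Section B.1) are assumed to be masked out beforehand. It is a simplified reading of the definitions above, not the official toolkit.
+
+ ```python
+ import numpy as np
+
+ def center_distance(p, g):                      # d_t in the text
+     cp, cg = p[:, :2] + p[:, 2:] / 2, g[:, :2] + g[:, 2:] / 2
+     return np.linalg.norm(cp - cg, axis=1)
+
+ def iou(p, g):                                  # s_t = Omega(p_t, g_t)
+     x1 = np.maximum(p[:, 0], g[:, 0]); y1 = np.maximum(p[:, 1], g[:, 1])
+     x2 = np.minimum(p[:, 0] + p[:, 2], g[:, 0] + g[:, 2])
+     y2 = np.minimum(p[:, 1] + p[:, 3], g[:, 1] + g[:, 3])
+     inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
+     return inter / (p[:, 2] * p[:, 3] + g[:, 2] * g[:, 3] - inter)
+
+ def pre(p, g, theta_d=20.0):                    # PRE at the usual threshold
+     return float(np.mean(center_distance(p, g) <= theta_d))
+
+ def sr(p, g, theta_s=0.5):                      # SR at one IoU threshold;
+     return float(np.mean(iou(p, g) >= theta_s)) # sweep theta_s for the plot
+ ```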
230
+
231
+ Table 1: Results on different multi-modal benchmarks (based on mechanism A in Figure 6).
232
+
233
+ <table><tr><td rowspan="2">Tracker</td><td colspan="2">OTB-Lang [1]</td><td colspan="2">TNL2k [3]</td><td colspan="2">LaSOT [2]</td><td colspan="2">LaSOTExt [17]</td><td colspan="2">LaSOTSub</td><td colspan="2">LaSOTNLC</td><td colspan="2">MGIT</td></tr><tr><td>PRE</td><td>SR</td><td>PRE</td><td>SR</td><td>PRE</td><td>SR</td><td>PRE</td><td>SR</td><td>PRE</td><td>SR</td><td>PRE</td><td>SR</td><td>PRE</td><td>SR</td></tr><tr><td>SNLT [46]</td><td>0.848</td><td>0.666</td><td>0.081</td><td>0.100</td><td>0.475</td><td>0.459</td><td>0.306</td><td>0.262</td><td>0.527</td><td>0.495</td><td>0.513</td><td>0.483</td><td>0.004</td><td>0.036</td></tr><tr><td>VLT_SCAR [42]</td><td>0.898</td><td>0.739</td><td>0.556</td><td>0.497</td><td>0.677</td><td>0.630</td><td>0.503</td><td>0.428</td><td>0.670</td><td>0.633</td><td>0.659</td><td>0.633</td><td>0.124</td><td>0.177</td></tr><tr><td>VLT_TT [42]</td><td>0.931</td><td>0.764</td><td>0.583</td><td>0.539</td><td>0.714</td><td>0.670</td><td>0.549</td><td>0.465</td><td>0.707</td><td>0.660</td><td>0.721</td><td>0.662</td><td>0.324</td><td>0.474</td></tr><tr><td>JointNLT [18]</td><td>0.856</td><td>0.653</td><td>0.598</td><td>0.552</td><td>0.640</td><td>0.607</td><td>0.457</td><td>0.398</td><td>0.624</td><td>0.583</td><td>0.707</td><td>0.651</td><td>0.433</td><td>0.603</td></tr></table>
234
+
235
+ # 4.2 Comparison with Other Multi-modal Benchmarks (Mechanism A)
236
+
237
+ We select several SOTA multi-modal trackers as baseline models and evaluate them on various benchmarks (as shown in Table 1). To fairly compare tracking performance on MGIT and the other datasets, we only allow trackers to use the semantic information of the first action in this experiment. Results show that: (1) most trackers perform worst on MGIT, which means it is a more complex environment with more challenges. (2) By comparing the tracking results on $\mathrm{LaSOT}_{\mathrm{sub}}$ and $\mathrm{LaSOT}_{\mathrm{NLC}}$, we find that most trackers perform worse on $\mathrm{LaSOT}_{\mathrm{sub}}$, showing that ambiguous semantic information may introduce extra interference. We avoid this problem via the scientific annotation and checking process used for MGIT construction.
240
+
241
+ # 4.3 Experimental Results on MGIT
242
+
243
+ Tracking by NL&BBox (Mechanism B-D). As shown in Figure 6 (B-D), both visual information (the BBox of the first frame) and semantic information (natural language descriptions) can be used by multi-modal trackers. Specifically, different granularities have various lengths, while most trackers have a maximum limit on the input semantic information. JointNLT [18] sets 50 as a maximum limit and truncates the excess information; this truncation occurs for both activity (C) and story (D). Similarly, the VLT [42] series limits the semantic length but can avoid truncation by adjusting the parameters. Thus, we set the semantic length to 80 for the activity and 200 for the story, with zero padding as necessary. From Table 2, we can draw the following conclusions: (1) SNLT, VLT_SCAR, and VLT_TT perform well when using longer semantic information like activity and story. This indicates that the semantic information processing modules (BERT [47]) used in these trackers can effectively handle long text. Besides, their good performance on activity indicates that, as an intermediate granularity, activity strikes a balance between the amount of information and the number of semantic description updates. (2) On the contrary, JointNLT performs well on action rather than on levels with longer descriptions, suggesting that truncated semantic information leads to poorer performance. Therefore, to obtain better multi-modal information processing capabilities, algorithms should first ensure that long texts can be processed rather than truncated directly.
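+
+ For reference, the truncation and zero-padding described above can be reproduced with a standard BERT tokenizer. This is a sketch using the HuggingFace transformers API, not the trackers' own preprocessing code; the length limits (50 for JointNLT, 80 and 200 for the VLT activity and story settings) follow the setup in this section.
+
+ ```python
+ from transformers import BertTokenizer
+
+ tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
+ description = "A male secret agent wearing a black suit walks in the washroom."
+
+ def encode(text: str, max_len: int):
+     # Truncate anything past max_len (JointNLT-style) and zero-pad
+     # shorter inputs up to max_len (as done for the VLT variants).
+     return tokenizer(text, max_length=max_len, truncation=True,
+                      padding="max_length", return_tensors="pt")
+
+ action_ids   = encode(description, max_len=50)   # JointNLT limit
+ activity_ids = encode(description, max_len=80)   # VLT activity setting
+ story_ids    = encode(description, max_len=200)  # VLT story setting
+ ```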
244
+
245
+ Table 2: Results of different trackers on MGIT.
246
+
247
+ <table><tr><td>Tracker</td><td>Architecture</td><td>Initialize</td><td>Mechanism</td><td>PRE</td><td>N-PRE</td><td>SR</td></tr><tr><td>SiamCAR [11]</td><td>SNN</td><td>BBox</td><td></td><td>0.116</td><td>0.378</td><td>0.183</td></tr><tr><td>SiamRCNN [10]</td><td>SNN</td><td>BBox</td><td></td><td>0.512</td><td>0.707</td><td>0.591</td></tr><tr><td>PrDiMP [12]</td><td>SNN+CF</td><td>BBox</td><td></td><td>0.296</td><td>0.602</td><td>0.453</td></tr><tr><td>KeepTrack [13]</td><td>SNN+CF</td><td>BBox</td><td>E</td><td>0.373</td><td>0.695</td><td>0.519</td></tr><tr><td>TransT [39]</td><td>Transformer</td><td>BBox</td><td></td><td>0.447</td><td>0.670</td><td>0.539</td></tr><tr><td>MixFormer [14]</td><td>Transformer</td><td>BBox</td><td></td><td>0.526</td><td>0.775</td><td>0.629</td></tr><tr><td>OSTrack [15]</td><td>Transformer</td><td>BBox</td><td></td><td>0.476</td><td>0.706</td><td>0.583</td></tr><tr><td>GRM [16]</td><td>Transformer</td><td>BBox</td><td></td><td>0.500</td><td>0.718</td><td>0.597</td></tr><tr><td rowspan="3">SNLT [46]</td><td rowspan="3">SNN</td><td rowspan="3">NL&amp;BBox</td><td>Action (B)</td><td>0.004</td><td>0.226</td><td>0.036</td></tr><tr><td>Activity (C)</td><td>0.004</td><td>0.234</td><td>0.038</td></tr><tr><td>Story (D)</td><td>0.005</td><td>0.230</td><td>0.040</td></tr><tr><td rowspan="3">VLT_SCAR [42]</td><td rowspan="3">SNN</td><td rowspan="3">NL&amp;BBox</td><td>Action (B)</td><td>0.116</td><td>0.354</td><td>0.167</td></tr><tr><td>Activity (C)</td><td>0.124</td><td>0.382</td><td>0.180</td></tr><tr><td>Story (D)</td><td>0.127</td><td>0.403</td><td>0.184</td></tr><tr><td rowspan="3">VLT_TT [42]</td><td rowspan="3">Transformer</td><td rowspan="3">NL&amp;BBox</td><td>Action (B)</td><td>0.318</td><td>0.602</td><td>0.468</td></tr><tr><td>Activity (C)</td><td>0.325</td><td>0.627</td><td>0.485</td></tr><tr><td>Story (D)</td><td>0.322</td><td>0.616</td><td>0.480</td></tr><tr><td rowspan="3">JointNLT [18]</td><td rowspan="3">Transformer</td><td rowspan="3">NL&amp;BBox</td><td>Action (B)</td><td>0.445</td><td>0.786</td><td>0.610</td></tr><tr><td>Activity (C)</td><td>0.441</td><td>0.780</td><td>0.605</td></tr><tr><td>Story (D)</td><td>0.433</td><td>0.773</td><td>0.600</td></tr></table>
248
+
249
+ By comparing results under mechanisms A and D, we find that in this complex environment, well-designed trackers (i.e., trackers with suitable long-input processing ability) perform better with longer descriptions than by relying on a short description (SNLT: $0.036 \rightarrow 0.040$, VLT_SCAR: $0.177 \rightarrow 0.184$, VLT_TT: $0.474 \rightarrow 0.480$ in SR). The above experiments indicate two key points: (1) Richer semantic information (mechanism D, based on story) improves tracking performance more than a simple sentence (mechanism A, based on the first action's information), which also verifies the accuracy and necessity of the proposed multi-granularity semantic annotation strategy. (2) Providing only a simple description to multi-modal trackers is unreasonable for MGIT. Thus, initializing the tracking process with longer and more specific sentences, or updating the semantic information periodically throughout the sequence, is more effective for accurately locating targets within complex scenes.
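+
+ Mechanisms B-D can be driven directly from the annotation JSON described in Appendix A.3.3: at each frame, the tracker receives the description of the unit (action, activity, or story) whose interval covers that frame, and re-encodes it whenever it changes. The sketch below assumes the key names of Listing 1 and a hypothetical file path.
+
+ ```python
+ import json
+
+ def active_description(ann: dict, granularity: str, t: int):
+     # granularity is "action", "activity", or "story"; every unit stores
+     # start_frame / end_frame / description fields (see Listing 1).
+     for unit in ann[granularity].values():
+         if unit["start_frame"] <= t <= unit["end_frame"]:
+             return unit["description"]
+     return None
+
+ ann = json.load(open("attribute/description/001.json"))  # hypothetical path
+ num_frames = ann["story"]["story_1"]["end_frame"] + 1
+ for t in range(num_frames):
+     text = active_description(ann, "activity", t)   # mechanism C
+     # feed the frame plus `text` (re-encoded when it changes) to the tracker
+ ```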
252
+
253
+ Tracking by BBox Only (Mechanism E). We mainly evaluate SOTA visual-based trackers under mechanism E. As shown in Table 2, transformer-based trackers have emerged as the predominant approach and achieve SOTA performance. Besides, it is worth noting that visual-based trackers usually outperform multi-modal trackers. Although we hope that additional modalities can improve tracking performance, current multi-modal approaches cannot align the different modalities well, so the multi-modal information is not fully utilized. In contrast, visual-based methods have been developed over decades and make better use of visual features to achieve good tracking performance. This result (i.e., current multi-modal trackers are worse than visual-based trackers) can also be found in other works, highlighting the significant room for improvement in multi-modal tracking. More detailed experimental results and analyses can be found in Section B.3 of the Appendix.
254
+
255
+ # 4.4 Visualization and Bad Case Analysis
256
+
257
+ ![](images/bd19b2768ac17560d84eb2c7e475780d7b6b8da28d09c3908dbb4085f72fd2f9.jpg)
258
+ LaSOT bird-2 sequence: white bird walking on the among other birds
259
+
260
+ ![](images/e8b36a1ed85f065b8eca9df00762cbbf7854f5e24ae579691ec08c9d42b67d92.jpg)
261
+
262
+ ![](images/dd5857cd9bb035acf3bd92227e6ac992b36a24c56a9eab41c8816d2f30f40c7e.jpg)
263
+
+
266
+ ![](images/5eea7a328e9e4a296b548152eb6588119787c404d0e67d32f23718fb5b52c9cd.jpg)
267
+ LaSOT zebra-16 sequence: zebra running on dry grass with other zebras
268
+
269
+ ![](images/67eba7642fceff19fe6c25dd0dd9fb2122980e6a207171d9ee6a271a2d9fe179.jpg)
270
+
271
+ ![](images/cbb5b3271bb61afbf4c187e998b3c2b4e146befc5c3b438b261e85b4f0970a2b.jpg)
272
+
273
+ ![](images/a03fe0a6aeac4744591c12776d3f0bf18147d9e044ea726bb70c1ed94e572d7c.jpg)
274
+ MGIT 012 sequence (mechanism A): A pink cartoon pig wearing red clothes talks to her family members on the grassland.
275
+ GroundTruth
276
+ JointNLT (CVPR23)
277
+ Figure 7: Bad cases of representative multi-modal trackers on LaSOT [2] and MGIT. (A-B) Ambiguous semantic annotations on LaSOT lead trackers to locate at similar objects. (C-D) The mechanism A used in existing multi-modal SOT benchmarks is unable to adapt to complex scenes like MGIT.
278
+
279
+ ![](images/2d1036ea0e6da2ca5139d0e1a8f07f5a9460ffacdd8fa8456737d29e031ef73c.jpg)
280
+ VLT_TT (NeurIPS22)
281
+ MGIT 006 sequence (mechanism A): A skateboard is slid by a man in black on the playground.
282
+
283
+ ![](images/c2b96644cb092dcf2b5aad307885f07c6921673696e6786ef9a92d13b0d9d4d6.jpg)
284
+ SNLT (CVPR21)
285
+
286
+ ![](images/6320b11adcd5e872a4fb9e2b71aff36f1277daa3babca9d97299fd6eac8b69a3.jpg)
287
+
288
+ We further analyze the bottlenecks of multi-modal algorithms through the bad cases shown in Figure 7. The first two examples are selected from LaSOT [2], demonstrating that ambiguous semantic information may introduce noise and lead algorithms to wrongly focus on similar objects; this emphasizes the importance of accurate semantic annotations. The latter two examples are chosen from MGIT, demonstrating that the experimental environment constructed by MGIT presents complex spatio-temporal and causal relationships, posing challenges to multi-modal algorithms. Specifically, the complexity of MGIT results in significant differences in target appearance and background environment between the initial frame and subsequent states. Besides, MGIT is selected from the recently released VideoCube [19] benchmark, which has a higher image resolution, posing challenges for trackers to relocate the target after failure. Additionally, all other multi-modal SOT benchmarks use only the first action's information (mechanism A), which is not applicable to visual object tracking in complex scenes like MGIT (Figure 7 (C-D)). Therefore, the proposed multi-granularity annotation strategy offers a more reasonable solution. Multi-modal trackers that aim to perform better on MGIT need a well-designed semantic information processing module to accurately extract the useful information carried by semantic labels. Nevertheless, existing trackers have not made specialized designs for this aspect, which can be further improved.
289
+
290
+ # 5 Conclusions
291
+
292
+ Summary. Accurate target tracking is the foundation for accomplishing high-level tasks like long video understanding, and introducing natural language into visual object tracking is a possible way to increase tracking ability. Different from existing multi-modal benchmarks, which mainly consist of short sequences with simple or even ambiguous descriptions, we (1) propose a new multi-modal benchmark named MGIT with 150 long video sequences, and (2) design a multi-granular annotation strategy for generating scientific semantic information. On the one hand, MGIT is a challenging and complex environment for visual tracking and video understanding research (i.e., trackers should process the spatio-temporal and causal relationships coupled in longer narrative content to achieve better performance). On the other hand, the multi-granular annotation strategy models the human cognitive enhancement process, which may provide a step-by-step "learning" environment for generating human-like trackers. The experimental results demonstrate that MGIT is a more complex environment, and our proposed strategy is a feasible solution for coupling human understanding into semantic labels. Besides, existing trackers still have large room for development, such as improving the capability to process long text and to align multi-modal information. In conclusion, we hope this work can help researchers conduct further research in object tracking and video understanding.
293
+
294
+ Limitations. Some limitations can be addressed in future work. First, we can expand MGIT with more types of videos to provide a more complicated environment for data-driven algorithms. Besides, we can design a more comprehensive evaluation system to measure visual tracking and video understanding ability. Finally, we can add more types of tasks based on the benchmark, and try to test algorithms on tasks like video captioning and action recognition.
295
+
296
+ # Acknowledgments and Disclosure of Funding
297
+
298
+ This work was supported in part by the National Key R&D Program of China (No.2022ZD0116403); the National Natural Science Foundation of China (No.61721004); the Strategic Priority Research Program of Chinese Academy of Sciences (No.XDA27000000).
299
+
300
+ # References
301
+
302
+ [1] Zhenyang Li, Ran Tao, Efstratios Gavves, Cees GM Snoek, and Arnold WM Smeulders. Tracking by natural language specification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6495-6503, 2017.
303
+ [2] Heng Fan, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Hexin Bai, Yong Xu, Chunyuan Liao, and Haibin Ling. Lasot: A high-quality benchmark for large-scale single object tracking. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5369-5378, 2019.
304
+ [3] Xiao Wang, Xiujun Shu, Zhipeng Zhang, Bo Jiang, Yaowei Wang, Yonghong Tian, and Feng Wu. Towards more flexible and accurate object tracking with natural language: Algorithms and benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13763-13773, 2021.
305
+ [4] Yi Wu, Jongwoo Lim, and Ming-Hsuan Yang. Online object tracking: A benchmark. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2411-2418, 2013.
306
+ [5] Yi Wu, Jongwoo Lim, and Ming-Hsuan Yang. Object tracking benchmark. IEEE Transactions on Pattern Analysis & Machine Intelligence, 37(09):1834-1848, 2015.
307
+ [6] Matej Kristan, Roman Pflugfelder, Aleš Leonardis, Jiri Matas, Fatih Porikli, Luka Čehovin, Georg Nebehay, Gustavo Fernandez, Tomáš Vojíř, Adam Gatt, Ahmad Khajenezhad, Ahmed Salahledin, Ali Soltani-Farani, Ali Zarezade, Alfredo Petrosino, Anthony Milton, Behzad Bozorgtabar, Bo Li, Chee Seng Chan, Cherkeng Heng, Dale Ward, David Kearney, Dorothy Monekosso, Hakki Can Karaimer, Hamid R. Rabiee, Jianke Zhu, Jin Gao, Jingjing Xiao, Junge Zhang, Junliang Xing, Kaiqi Huang, Karel Lebeda, Lijun Cao, Mario Edoardo Maresca, Mei Kuan Lim, Mohamed El Helw, Michael Felsberg, Paolo Remagnino, Richard Bowden, Roland Goecke, Rustam Stolkin, Samantha Yueying Lim, Sara Maher, Sebastien Poullot, Sebastien Wong, Shin'ichi Satoh, Weihua Chen, Weiming Hu, Xiaoqin Zhang, Yang Li, and Zhiheng Niu. The visual object tracking vot2013 challenge results. In 2013 IEEE International Conference on Computer Vision Workshops, pages 98-111, 2013.
308
+ [7] Matthias Mueller, Neil Smith, and Bernard Ghanem. A benchmark and simulator for uav tracking. In European conference on computer vision, pages 445-461. Springer, 2016.
309
+ [8] Matthias Muller, Adel Bibi, Silvio Giancola, Salman Alsubaihi, and Bernard Ghanem. Trackingnet: A large-scale dataset and benchmark for object tracking in the wild. In Proceedings of the European conference on computer vision (ECCV), pages 300-317, 2018.
310
+ [9] Lianghua Huang, Xin Zhao, and Kaiqi Huang. Got-10k: A large high-diversity benchmark for generic object tracking in the wild. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(5):1562-1577, 2021.
311
+ [10] Paul Voigtlaender, Jonathon Luiten, Philip HS Torr, and Bastian Leibe. Siam r-cnn: Visual tracking by re-detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6578-6588, 2020.
312
+ [11] Dongyan Guo, Jun Wang, Ying Cui, Zhenhua Wang, and Shengyong Chen. Siamcar: Siamese fully convolutional classification and regression for visual tracking. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6269-6277, 2020.
313
+ [12] Martin Danelljan, Luc Van Gool, and Radu Timofte. Probabilistic regression for visual tracking. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7183-7192, 2020.
314
+
315
+ [13] Christoph Mayer, Martin Danelljan, Danda Pani Paudel, and Luc Van Gool. Learning target candidate association to keep track of what not to track. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13444-13454, 2021.
316
+ [14] Yutao Cui, Cheng Jiang, Limin Wang, and Gangshan Wu. Mixformer: End-to-end tracking with iterative mixed attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13608-13618, 2022.
317
+ [15] Botao Ye, Hong Chang, Bingpeng Ma, Shiguang Shan, and Xilin Chen. Joint feature learning and relation modeling for tracking: A one-stream framework. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXII, pages 341-357. Springer, 2022.
318
+ [16] Shenyuan Gao, Chunluan Zhou, and Jun Zhang. Generalized relation modeling for transformer tracking. arXiv preprint arXiv:2303.16580, 2023.
319
+ [17] Heng Fan, Hexin Bai, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Mingzhen Huang, Juehuan Liu, Yong Xu, et al. Lasot: A high-quality large-scale single object tracking benchmark. International Journal of Computer Vision, 129(2):439-461, 2021.
320
+ [18] Li Zhou, Zikun Zhou, Kaige Mao, and Zhenyu He. Joint visual grounding and tracking with natural language specification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 23151-23160, June 2023.
321
+ [19] Shiyu Hu, Xin Zhao, Lianghua Huang, and Kaiqi Huang. Global Instance Tracking: Locating Target More Like Humans. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):576-592, January 2023.
322
+ [20] Zhang Zhang, Tieniu Tan, and Kaiqi Huang. An extended grammar system for learning and recognizing complex visual events. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(2):240-255, 2011.
323
+ [21] Magali Bovet. Piaget's Theory of Cognitive Development and Individual Differences, pages 269-279. Springer Berlin Heidelberg, Berlin, Heidelberg, 1976.
324
+ [22] Richard C Atkinson and Richard M Shiffrin. Human memory: A proposed system and its control processes. In Psychology of learning and motivation, volume 2, pages 89-195. Elsevier, 1968.
325
+ [23] Matej Kristan, Jiri Matas, Aleš Leonardis, Tomáš Vojíř, Roman Pflugfelder, Gustavo Fernández, Georg Nebehay, Fatih Porikli, and Luka Čehovin. A novel performance evaluation methodology for single-target trackers. IEEE transactions on pattern analysis and machine intelligence, 38(11):2137-2155, 2016.
326
+ [24] Jack Valmadre, Luca Bertinetto, Joao F Henriques, Ran Tao, Andrea Vedaldi, Arnold WM Smeulders, Philip HS Torr, and Efstratios Gavves. Long-term tracking in the wild: A benchmark. In Proceedings of the European conference on computer vision (ECCV), pages 670-685, 2018.
327
+ [25] Matej Kristan, Ales Leonardis, Jiri Matas, Michael Felsberg, Roman Pflugfelder, Luka Čehovin Zajc, Tomas Vojir, Goutam Bhat, Alan Lukezic, Abdelrahman Eldesokey, et al. The sixth visual object tracking vot2018 challenge results. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, pages 0-0, 2018.
328
+ [26] Matej Kristan, Jiri Matas, Ales Leonardis, Michael Felsberg, Roman Pflugfelder, Joni-Kristian Kamarainen, Luka Cehovin Zajc, Ondrej Drbohlav, Alan Lukezic, Amanda Berg, et al. The seventh visual object tracking vot2019 challenge results. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, pages 0-0, 2019.
329
+ [27] João F Henriques, Rui Caseiro, Pedro Martins, and Jorge Batista. High-speed tracking with kernelized correlation filters. IEEE transactions on pattern analysis and machine intelligence, 37(3):583-596, 2014.
330
+
331
+ [28] Martin Danelljan, Goutam Bhat, Fahad Shahbaz Khan, and Michael Felsberg. Eco: Efficient convolution operators for tracking. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6638-6646, 2017.
332
+ [29] Luca Bertinetto, Jack Valmadre, Joao F Henriques, Andrea Vedaldi, and Philip HS Torr. Fully convolutional siamese networks for object tracking. In European conference on computer vision, pages 850-865. Springer, 2016.
333
+ [30] Bo Li, Junjie Yan, Wei Wu, Zheng Zhu, and Xiaolin Hu. High performance visual tracking with siamese region proposal network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8971-8980, 2018.
334
+ [31] Zheng Zhu, Qiang Wang, Bo Li, Wei Wu, Junjie Yan, and Weiming Hu. Distractor-aware siamese networks for visual object tracking. In Proceedings of the European conference on computer vision (ECCV), pages 101-117, 2018.
335
+ [32] Bo Li, Wei Wu, Qiang Wang, Fangyi Zhang, Junliang Xing, and Junjie Yan. Siamrpn++: Evolution of siamese visual tracking with very deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4282-4291, 2019.
336
+ [33] Bin Yan, Haojie Zhao, Dong Wang, Huchuan Lu, and Xiaoyun Yang. 'skimming-perusal' tracking: A framework for real-time and robust long-term tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2385-2393, 2019.
337
+ [34] Zhipeng Zhang and Houwen Peng. Deeper and wider siamese networks for real-time visual tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4591-4600, 2019.
338
+ [35] Yinda Xu, Zeyu Wang, Zuoxin Li, Ye Yuan, and Gang Yu. Siamfc++: Towards robust and accurate visual tracking with target estimation guidelines. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 12549-12556, 2020.
339
+ [36] Zhipeng Zhang, Houwen Peng, Jianlong Fu, Bing Li, and Weiming Hu. Ocean: Object-aware anchor-free tracking. In European Conference on Computer Vision, pages 771-787. Springer, 2020.
340
+ [37] Martin Danelljan, Goutam Bhat, Fahad Shahbaz Khan, and Michael Felsberg. Atom: Accurate tracking by overlap maximization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4660-4669, 2019.
341
+ [38] Goutam Bhat, Martin Danelljan, Luc Van Gool, and Radu Timofte. Learning discriminative model prediction for tracking. In Proceedings of the IEEE/CVF international conference on computer vision, pages 6182-6191, 2019.
342
+ [39] Xin Chen, Bin Yan, Jiawen Zhu, Dong Wang, Xiaoyun Yang, and Huchuan Lu. Transformer tracking. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8122-8131, 2021.
343
+ [40] Zhengyuan Yang, Tushar Kumar, Tianlang Chen, Jinsong Su, and Jiebo Luo. Grounding-tracking-integration. IEEE Transactions on Circuits and Systems for Video Technology, 31(9):3433-3443, 2021.
344
+ [41] Ran Tao, Efstratios Gavves, and Arnold WM Smeulders. Siamese instance search for tracking. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1420-1429, 2016.
345
+ [42] Mingzhe Guo, Zhipeng Zhang, Heng Fan, and Liping Jing. Divert more attention to vision-language tracking. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 4446-4460. Curran Associates, Inc., 2022.
346
+ [43] Junbin Xiao, Angela Yao, Zhiyuan Liu, Yicong Li, Wei Ji, and Tat-Seng Chua. Video as conditional graph hierarchy for multi-granular question answering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 2804-2812, 2022.
347
+
348
+ [44] Junbin Xiao, Xindi Shang, Angela Yao, and Tat-Seng Chua. Next-qa: Next phase of question-answering to explaining temporal actions. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9777-9786, 2021.
349
+ [45] Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60, 2014.
350
+ [46] Qi Feng, Vitaly Ablavsky, Qinxun Bai, and Stan Sclaroff. Siamese natural language tracker: Tracking by natural language descriptions with siamese trackers. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5847-5856, 2021.
351
+ [47] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
352
+ [48] Yu Huang, Chenzhuang Du, Zihui Xue, Xuanyao Chen, Hang Zhao, and Longbo Huang. What makes multi-modal learning better than single (provably). Advances in Neural Information Processing Systems, 34:10944-10956, 2021.
353
+
354
+ # Appendix
355
+
356
+ # A Dataset Information
357
+
358
+ # A.1 Basic Information
359
+
360
+ In this work, we propose a new multi-modal global instance tracking benchmark named MGIT. It consists of 150 long video sequences with a total of 2.03 million frames, aiming to fully represent the complex spatio-temporal and causal relationships coupled in longer narrative content.
361
+
362
+ This work further expands our former work, which was accepted by IEEE TPAMI in 2022. We proposed the global instance tracking (GIT) task in the previous work and released an online evaluation platform (URL: http://videocube.aitestunion.com). We hope the online platform can help researchers better use our proposed resources and conduct fairer comparisons via our real-time evaluation server.
363
+
364
+ In the past year, our platform has evaluated 380 algorithms and received more than $287\mathrm{k}$ IP visits from 130 countries (statistics as of Jan 04, 2024). However, none of the submitted algorithms show significant performance improvement on the GIT task, and their tracking performance degrades significantly on the GIT task compared to other representative single object tracking benchmarks (i.e., short-term and long-term tracking). This phenomenon shows the limitations of a single visual modality for long video understanding of complex narrative relationships. Thus, we conduct this work to introduce semantic information, which aims to help algorithms better cope with the challenges posed by complex narrative relationships for target tracking and long video understanding.
365
+
366
+ Since the motivation for this work is closely inherited from our former work, and the existing online platform has received considerable attention worldwide, we choose to release the MGIT benchmark via the same online platform. The proposed benchmark, experimental results, and toolkit will be released gradually on http://videocube.aitestunion.com/ (Figure A1).
367
+
368
+ Our dataset has been uploaded to OneDrive and Baidu disk, and the online evaluation platform is maintained by dedicated staff, which ensures the stability of the dataset.
369
+
370
+ We declare that we bear all responsibility in case of violation of rights, etc., and confirm the data license. Our work is licensed under CC BY-NC-SA 4.0. Users are free to use the dataset for research purposes.
371
+
372
+ ![](images/2fbb31e2f3b5906db76224b7377aa37c1eaf496ed9eaba1f5e7978125171840a.jpg)
373
+ Figure A1: Our online platform and currently updated download links with related instructions.
374
+
375
+ # A.2 Data Selection
376
+
377
+ The 150 sequences in MGIT are carefully selected from the original VideoCube [19] benchmark. In the video selection stage, we thoroughly consider the consistency between the new dataset (MGIT) and the original dataset (VideoCube) in various dimensions (the 6D principle), while also taking the consistency of difficulty into account.
380
+
381
+ The specific process is as follows. (1) We assess the similarity in the distribution of the selected MGIT dataset to the original VideoCube dataset across various dimensions, including object class, scene category, motion mode, and more, in accordance with the 6D principle. (2) Besides, it is essential for the selected dataset to maintain a similar level of difficulty as the original VideoCube dataset. Regarding the difficulty level, we select three state-of-the-art trackers (MixFormer [14], KeepTrack [13], and SiamRCNN [10]) with different architectures as the basis for our selection criteria. The success scores (based on IoU) of these algorithms on the original 500 sequences are calculated and ranked to measure sequence difficulty. (3) By considering both the distribution across the 6D principle and the difficulty level, we carefully choose 150 representative sequences to construct the MGIT dataset, aiming to maintain consistency with the distribution of the original VideoCube dataset.
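+
+ The difficulty-matching step can be sketched as follows, assuming a (500, 3) array of per-sequence success scores for the three reference trackers; sampling the difficulty ranking at equal intervals keeps the 150-sequence subset's difficulty distribution close to that of the original 500 sequences. The 6D-principle distribution matching runs alongside this and is not shown.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ success = rng.random((500, 3))   # placeholder for real per-tracker success scores
+
+ order = np.argsort(success.mean(axis=1))                     # easiest to hardest
+ picks = np.linspace(0, len(order) - 1, 150).round().astype(int)
+ subset = order[picks]            # 150 sequence indices, difficulty-stratified
+ ```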
382
+
383
+ # A.3 Semantic Annotation
384
+
385
+ # A.3.1 Semantic Annotation Deficiencies of Existing Multi-modal SOT Benchmarks
386
+
387
+ To better show the semantic annotation deficiencies of existing multi-modal SOT benchmarks, we conduct statistical analyses on OTB-Lang [1], LaSOT [2], LaSOT $_{\text{Ext}}$ [17], and TNL2k [3] from two aspects:
388
+
389
+ 1. Ambiguity of semantic labeling: A random sampling inspection is conducted to assess the ambiguity of semantic annotation. Specifically, 10 sequences are randomly selected from each dataset for inspection. To ensure random and fair selection, all sequences in each dataset are alphabetically sorted first, and samples are taken at equal intervals.
390
+ 2. Completeness of grammatical structures: Semantic descriptions of high quality typically necessitate complete grammatical structures. Hence, Stanford CoreNLP [45] is utilized to analyze the semantic labels in all four datasets and to count those that adhere to the criteria of complete sentences (a parsing sketch follows this list).
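+
+ The paper runs this check with Stanford CoreNLP [45]; the sketch below performs the equivalent test with Stanford's stanza package, whose constituency parser uses the same label set as Figure A2. Treat it as a stand-in, not the exact pipeline used for Table A3.
+
+ ```python
+ import stanza  # run stanza.download("en") once before first use
+
+ nlp = stanza.Pipeline("en", processors="tokenize,pos,constituency")
+
+ def is_complete_sentence(text: str) -> bool:
+     # Complete iff the top-level constituent is S or SQ (cf. Figure A2);
+     # NP- or FRAG-rooted parses indicate phrase-level annotations.
+     tree = nlp(text).sentences[0].constituency   # root label is ROOT
+     return tree.children[0].label in {"S", "SQ"}
+
+ print(is_complete_sentence("white bird among other birds"))          # False (NP)
+ print(is_complete_sentence("A white bird walks among other birds"))  # True (S)
+ ```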
391
+
392
+ Table A3: The statistics of semantic annotation quality in four representative datasets.
393
+
394
+ <table><tr><td>Benchmark</td><td>OTB-Lang [1]</td><td>LaSOT [2]</td><td>LaSOTExt [17]</td><td>TNL2k [3]</td></tr><tr><td>Statistical Analysis 1: Inspection Pass Rate (Non-ambiguous Semantic Descriptions)</td><td>30%</td><td>70%</td><td>60%</td><td>60%</td></tr><tr><td>Statistical Analysis 2: Complete Sentences Rate (Checked by Stanford CoreNLP, Including Complete Grammatical Structures)</td><td>9%</td><td>63%</td><td>36%</td><td>20%</td></tr></table>
395
+
396
+ ![](images/af2018e7933ae3badccb2df44dc9b8711dcc36e51eff72042e033f9f71e9e92f.jpg)
397
+ Figure A2: Statistical analysis about the completeness of grammatical structures, based on Stanford CoreNLP [45] toolkit. (NP: noun phrase; FRAG: fragment; S: simple declarative clause; SQ: inverted yes/no question, or main clause of a wh-question. Only S and SQ satisfy the completeness of the grammatical structure.)
398
+
399
+ ![](images/4cf13cd81f3fceac7323b1ecec9b623c3d44cf4b63101bd778262e5489b86ecc.jpg)
400
+
401
+ ![](images/893ce6e48ab864a78ac55f4635fd25607d67a6b7e679274a9fe05d5886204e0e.jpg)
402
+
403
+ As shown in Table A3 and Figure A2, the current datasets exhibit deficiencies regarding ambiguity and completeness. However, our proposed MGIT benchmark considers these factors during construction and avoids the previously mentioned issues, thus possessing higher-quality semantic annotations.
404
+
405
+ # A.3.2 Annotation Process
406
+
407
+ We chose an elite annotation team instead of crowdsourcing to carry out this work and ensured quality through multiple efforts.
408
+
409
+ 1. Task Decomposition. We first decompose the task to ensure a standardized workflow for execution. For instance, we begin by annotating the finest granularity (action), and subsequently continue with the annotations of activity and story. This approach ensures accuracy and consistency in the fundamental content throughout various levels of granularity. Given that action is the finest granularity and its annotation quality may affect activity and story, we refer to film narrative literature and English grammar materials to further decompose the description of an action. This decomposition involves identifying the tracking target (who), the motion (what), the presence of a third-party object (if applicable), the location (where), and the time interval (when). By obtaining these specific details, annotators can attain a standardized and comprehensive description of the action.
410
+ 2. Annotator Selection. Considering the difficulty of controlling annotation quality in crowdsourcing, we chose graduate students with strong cognitive abilities and experience in dataset annotation to form the annotation team. Team members not only have experience in annotating vision-based datasets represented by VideoCube, but also in annotating image datasets in visual psychology. They have a solid foundation in dataset construction in fields such as computer vision and cognitive psychology. All team members undergo standardized training before formal annotation to ensure their understanding of task characteristics and annotation rules. Additionally, the training session includes 10 video examples of different types, requiring annotators to comprehend the annotation process and details.
411
+ 3. Annotation Workflow. (1) The formal annotation process begins, wherein annotation personnel are grouped by video type, including cartoons, movies, TV shows, sports, performances, and documentaries. Any issues requiring discussion are documented, followed by a comprehensive discussion among all personnel, before the annotation of that particular sequence commences. (2) For instance, in the case of a sniper rifle as the target, which term should be employed: "gun" or "sniper rifle"? Through consulting relevant materials and conducting discussions, we have concluded that the fundamental principle of annotation is to incorporate human comprehension into semantic labels. As there is no second firearm in the sequence and the term "gun" encompasses "sniper rifle" semantically, the usage of "gun" aligns with commonplace terminology. Nevertheless, if a second gun appears in the sequence, "sniper rifle" may be employed to underscore the target's distinctiveness. (3) Furthermore, to enhance the standardization of annotations, we refer to WordNet to construct verb and noun lists (see the WordNet sketch after this list). Initially, annotators choose candidate terms from the current vocabulary lists to depict the essential elements of the scene, aiming to keep the portrayal of actions consistent across different sequences to the greatest extent possible. If no appropriate term is found in the candidate list, annotators employ new vocabulary to depict the elements and subsequently incorporate it into the candidate list, supplemented with relevant examples for future annotation reference.
412
+ 4. Quality Review. After completing the annotation for all sequences, we will review the content to ensure its quality. Additionally, we utilize the Stanford CoreNLP [45], a natural language processing toolkit, to examine the semantic annotations and ensure the grammatical structure's completeness.
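+
+ The "gun" vs. "sniper rifle" decision in the workflow above can be supported with WordNet's hypernym hierarchy. Below is a minimal sketch using NLTK; WordNet itself is the resource our annotators consulted, while the helper function is purely illustrative.
+
+ ```python
+ from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")
+
+ def is_broader_term(broad: str, narrow: str) -> bool:
+     # True if some sense of `broad` appears among the hypernyms of
+     # some sense of `narrow`, i.e. `broad` semantically encompasses it.
+     hypernyms = set()
+     for s in wn.synsets(narrow):
+         hypernyms |= set(s.closure(lambda x: x.hypernyms()))
+     return bool(set(wn.synsets(broad)) & hypernyms)
+
+ print(is_broader_term("gun", "rifle"))   # True: rifle -> firearm -> gun
+ ```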
413
+
414
+ # A.3.3 Annotation File
415
+
416
+ We propose a multi-granular annotation strategy to generate the semantic description, and use JSON format to save the natural language annotation for each video sequence. Here we illustrate an example to show the JSON file structure for video sequence 001 in the MGIT benchmark, as shown in Listing 1. Due to the limited space, we only illustrate some representative information, while the remaining information with similar structure is indicated by ellipses. Please download and check the dataset for more detailed annotation about each video sequence.
417
+
418
+ ```json
+ {
+   "action": {
+     "action_1": {
+       "start_frame": 0,
+       "end_frame": 1246,
+       "length": 1247,
+       "object_class": "male secret agent",
+       "appearance": "black suit",
+       "action_1": "walk",
+       "prep_1": NaN,
+       "3rd_object_1": NaN,
+       "action_2": NaN,
+       "prep_2": NaN,
+       "3rd_object_2": NaN,
+       "scene": "washroom",
+       "description": "A male secret agent wearing a black suit walks in the washroom"
+     },
+     "action_2": {...},
+     "action_3": {...},
+     "action_4": {...},
+     "action_5": {...},
+     "action_6": {...},
+     "action_7": {...},
+     "action_8": {...},
+     "action_9": {...}
+   },
+   "activity": {
+     "activity_1": {
+       "start_frame": 0,
+       "end_frame": 2400,
+       "length": 2401,
+       "description": "A male secret agent wearing a black suit walks in the washroom, and stands near a man wearing a light grey suit. They fight, then the male secret agent wins, and lifts the insensible grey-suit man to the washroom cubicle."
+     },
+     "activity_2": {...},
+     "activity_3": {...}
+   },
+   "story": {
+     "story_1": {
+       "start_frame": 0,
+       "end_frame": 9032,
+       "length": 9033,
+       "description": "A male secret agent wearing a black suit walks in the washroom, and stands near a man wearing a light grey suit. They fight, and the male secret agent wins. He then lifts the insensible grey-suit man to the washroom cubicle. The male secret agent crouches in the washroom cubicle and checks the insensible grey-suit man. Suddenly, the grey-suit man wakes up, and they fight together again in the washroom. Eventually, the male secret agent wins the fight. After the male secret agent talks with a woman wearing a brown suit, he again lifts the insensible grey-suit man to the washroom cubicle. Finally, the male secret agent lefts the washroom after talking with the brown-suit woman."
+     }
+   }
+ }
+ ```
423
+
424
+ Listing 1: The JSON file about the semantic information of video sequence 001.
425
+
426
+ ![](images/bac35ecc2999bb1e4d2169d073bf4541d47e229fadceb1bee5f3354f40729b67.jpg)
427
+ (a) The word cloud of action verbs
428
+
429
+ ![](images/030ad5ebfa9cd25a8fda68c1ab35e26aa7293155ad5bef96a5ebfe8c63ec8f6d.jpg)
430
+ (b) The word cloud of scene categories
431
+ Figure A3: The word cloud of action verbs and scene categories on MGIT.
432
+
433
+ 1. Action: For each action, we save the following information in the JSON file:
434
+
435
+ (a) start_frame: The starting frame of the action. Note that the original VideoCube is labeled with the first frame starting from 0. Therefore, in the JSON file, we use 0 to represent the first frame (Note that in the figures of the main paper, we use the same format as the other datasets to show the starting point as 1 for ease of understanding).
436
+ (b) end_frame: The ending frame of the action.
437
+ (c) length: Length of the time interval.
438
+ (d) object_class: The object class of the tracking target.
439
+ (e) appearance: The appearance of the tracking target. We ensure that the description of the target's appearance is unique in the entire sequence.
440
+ (f) action_1: The first action of the target. The word cloud of action verbs is illustrated in Figure A3 (a).
441
+ (g) prep_1: The preposition of action (if present).
442
+ (h) 3rd_object_1: The interaction object of the first action (if present).
443
+ (i) action_2: The second action of the target (if present).
444
+ (j) prep_2: The preposition of action (if present).
445
+ (k) 3rd_object_2: The interaction object of the second action (if present).
446
+ (l) scene: The scene category. The word cloud of scene categories is illustrated in Figure A3 (b).
447
+ (m) description: The natural language description.
448
+
449
+ 2. Activity: For each activity, we save the following information in the JSON file:
450
+
451
+ (a) start_frame: The starting frame of the activity.
452
+ (b) end_frame: The ending frame of the activity.
453
+ (c) length: Length of the time interval.
454
+ (d) description: The natural language description.
455
+
456
+ 3. Story: For each story, we save the following information in the JSON file:
457
+
458
+ (a) start_frame: The starting frame of the story.
459
+ (b) end_frame: The ending frame of the story.
460
+ (c) length: Length of the time interval.
461
+ (d) description: The natural language description.
462
+
463
+ # A.4 Dataset Structure
464
+
465
+ The MGIT benchmark includes 150 long video sequences (344G) with detailed annotations. We add semantic information based on a multi-granularity annotation strategy while retaining the detailed annotation information of the original VideoCube dataset, aiming to help multi-modal methods better understand the narrative content of long videos.
466
+
467
+ The dataset download and file organization process is as follows:
468
+
469
+ 1. Download three subsets (train/val/test) and the info data. Please click on the hyperlink to visit our dataset (choose the link that works best for you).
470
+
471
+ (a) Train Data (229G, 105 Sequences):
472
+
473
+ Listing 2: The dataset structure of the val subset.
+ ```txt
+ -- val/
+ | -- 005/
+ | | -- frame_005/
+ | | | -- 000000.jpg
+ | | | ...
+ | | | -- 016891.jpg
+ | -- 029/
+ | ...
+ | -- 362/
+ ```
485
+
486
+ Listing 3: The dataset structure of the MGIT benchmark.
+ ```txt
+ -- MGIT/
+ | -- data/
+ | | -- train/
+ | | | -- 002/
+ | | | ...
+ | | | -- 480/
+ | | -- val/
+ | | | -- 005/
+ | | | ...
+ | | | -- 362/
+ | | -- test/
+ | | | -- 001/
+ | | | ...
+ | | | -- 498/
+ | | -- train_list.txt
+ | | -- val_list.txt
+ | | -- test_list.txt
+ | -- attribute/
+ | | -- absent/
+ | | -- color_constancy/
+ | | ...
+ | | -- description/
+ | | ...
+ | | -- shortcut/
+ ```
513
+
514
+ i. OneDrive
515
+ ii. Baidu Disk (The extraction code is cube.)
516
+
517
+ (b) Validation Data (37G, 15 Sequences):
518
+
519
+ i. OneDrive
520
+ ii. Baidu Disk (The extraction code is cube.)
521
+
522
+ (c) Test Data (78G, 30 Sequences):
523
+
524
+ i. OneDrive
525
+ ii. Baidu Disk (The extraction code is cube.)
526
+
527
+ (d) Info Data (89.16M, 15 Attributes):
528
+
529
+ i. OneDrive
530
+ ii. Baidu Disk (The extraction code is cube.)
531
+
532
+ 2. Check the number of files in each subset and run the unzipping script. To facilitate transmission and downloading, the very long video sequences in the dataset are divided into smaller segments during the packaging process, and each segment is compressed and kept under 4GB. For instance, in the train set, sequence 013 is divided into three compressed files: frame_013_split.z01, frame_013_split.z02, and frame_013_split.zip. Before unzipping, the expected file counts are as follows (a verification sketch is given after this list):
535
+
536
+ (a) The train subset should include 129 files (128 data files and an unzip_train bash).
537
+ (b) The val subset should include 22 files (21 data files and an unzip_val bash).
538
+ (c) The test subset should include 41 files (40 data files and an unzip_test bash).
539
+
540
+ 3. Run the unzipping script in each subset folder, and delete the script after decompression.
+ 4. Taking the val subset of the full version as an example, the folder structure is listed in Listing 2.
+ 5. Unzip attribute.zip in the info data. Note that we only provide attribute files for the train and val subsets. For ground-truth files in the test subset, we only offer a small number of annotations for restart frames to support the essential function of the R-OPE mechanism (for detailed information about the R-OPE mechanism, please refer to the TPAMI paper [19] about the GIT task and the VideoCube benchmark; note that we only use the OPE mechanism for the MGIT evaluation process, while the R-OPE mechanism is supported for visual-based trackers). The annotations of other frames in the test subset have been set to zero. Please upload the final results to the server (http://videocube.aitestunion.com/) for evaluation.
+ 6. Rename and organize folders as in Listing 3. Note that the semantic information (saved as JSON files) for the MGIT benchmark is stored in the description folder.
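+
+ A small sanity check of the downloaded archives against the file counts listed above (129 train / 22 val / 41 test before unzipping); the local directory layout is an assumption:
+
+ ```python
+ from pathlib import Path
+
+ expected = {"train": 129, "val": 22, "test": 41}   # counts before unzipping
+ root = Path("MGIT_downloads")                      # hypothetical download dir
+
+ for subset, n in expected.items():
+     found = len(list((root / subset).iterdir()))
+     status = "ok" if found == n else f"expected {n}"
+     print(f"{subset}: {found} files ({status})")
+ ```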
544
+
545
+ # B Experiment Information
546
+
547
+ # B.1 Evaluation Metrics
548
+
549
+ Assume an experiment dataset $E$ (e.g., MGIT) comprises $|E|$ sequences, with $|\cdot|$ representing cardinality. In a sequence $L$, we use $F_t$ to represent the $t$-th frame. We assume that $p_t$ denotes the position predicted by the tracker $T$, and $g_t$ refers to the ground-truth position. Notably, if a frame does not contain the target (i.e., full occlusion or out-of-view), its ground-truth is considered an empty set (i.e., $g_t = \emptyset$) and is thereby excluded from the evaluation process. The precision score and success score of frame $F_t$ are calculated through the following formulas:
550
+
551
+ $$
+ d_t = \| c_p - c_g \|_2, \qquad s_t = \Omega(p_t, g_t) = \frac{p_t \cap g_t}{p_t \cup g_t}, \tag{1}
+ $$
558
+
559
+ where $d_{t}$ represents the distance between the center points $c_{p}$ and $c_{g}$ , while $\Omega(\cdot)$ denotes the intersection over union.
560
+
561
+ Recently, the normalized precision score [19] was proposed to eliminate the impact of target size and frame resolution. When a tracker's predicted center lies outside the ground-truth, an additional penalty term $d_t^p$ is included, representing the shortest distance between the center point $c_p$ and the edge of the ground-truth. If the center point falls within the ground-truth, then $d_t^p = 0$ and the distance $d_t'$ equals the original $d_t$:
562
+
563
+ $$
+ N(d_t) = \frac{d_t'}{\max\left(\{ d_i' \mid i \in F_t \}\right)}, \qquad d_t' = d_t + d_t^p. \tag{2}
+ $$
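+
+ The penalty term $d_t^p$ is simply the distance from the predicted center to the ground-truth rectangle taken as a region, so it vanishes when the center falls inside the box. Below is a NumPy sketch of this ingredient of Eq. (2), assuming boxes in (x, y, w, h) format:
+
+ ```python
+ import numpy as np
+
+ def center_penalty(cp, g):
+     # Shortest distance from center cp = (cx, cy) to the ground-truth
+     # box g = (x, y, w, h); zero whenever cp lies inside the box.
+     x, y, w, h = g
+     dx = max(x - cp[0], 0.0, cp[0] - (x + w))
+     dy = max(y - cp[1], 0.0, cp[1] - (y + h))
+     return float(np.hypot(dx, dy))
+
+ # d_prime = d_t + center_penalty(c_p, g_t), then normalize as in Eq. (2)
+ print(center_penalty((0.0, 0.0), (10.0, 10.0, 5.0, 5.0)))  # ~14.14
+ ```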
570
+
571
+ The precision $\mathrm{P}(E)$ , normalized precision $\mathrm{N}(E)$ , and success $\mathrm{S}(E)$ of environment $E$ are defined as follows:
572
+
573
+ $$
+ \mathrm{P}(E) = \frac{1}{|E|} \sum_{l=1}^{|E|} \frac{1}{|L|} \left|\{ t : d_t \leq \theta_d \}\right|, \quad
+ \mathrm{N}(E) = \frac{1}{|E|} \sum_{l=1}^{|E|} \frac{1}{|L|} \left|\{ t : N(d_t) \leq \theta_d' \}\right|, \quad
+ \mathrm{S}(E) = \frac{1}{|E|} \sum_{l=1}^{|E|} \frac{1}{|L|} \left|\{ t : s_t \geq \theta_s \}\right|. \tag{3}
+ $$
584
+
585
+ The precision plot is generated by calculating the proportion of frames with a distance $d_t$ less than or equal to $\theta_d$ and plotting the statistical results across different $\theta_d$ values. In most cases, existing benchmarks use $\theta_d = 20$ as a standard threshold to rank trackers.
586
+
587
+ The normalized precision plot is generated similarly by plotting statistical results obtained from varying $\theta_{d}^{\prime}$ values within the range of [0,1]. However, directly assigning a specific $\theta_{d}^{\prime}$ value to rank trackers can introduce subjective biases. Therefore, the ranking of trackers is based on the proportion of frames in which the predicted center $c_{p}$ successfully falls within the ground-truth rectangle $g_{t}$ .
588
+
589
+ The success plot is generated by plotting the results obtained from different overlap thresholds $\theta_{s}$ as a curve. In this plot, the mAO (mean average overlap) is commonly utilized to rank trackers.
590
+
591
+ # B.2 Baseline Information
592
+
593
+ All experiments are performed on a server with 4 NVIDIA TITAN RTX GPUs and a 64-core Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz. Detailed information about the baselines is listed in Table A4; we use the parameters provided by the original authors.
594
+
595
+ Table A4: The model architectures and URLs of the open-sourced algorithms used in this work.
596
+
597
+ <table><tr><td>Tracker</td><td>Architecture</td><td>Initialize</td><td>URL</td></tr><tr><td>SiamCAR [11]</td><td>SNN</td><td>BBox</td><td>https://github.com/ohhhyeahhh/SiamCAR</td></tr><tr><td>SiamRCNN [10]</td><td>SNN</td><td>BBox</td><td>https://github.com/VisualComputingInstitute/SiamR-CNN</td></tr><tr><td>PrDiMP [12]</td><td>SNN</td><td>BBox</td><td>https://github.com/visionml/pytracking</td></tr><tr><td>KeepTrack [13]</td><td>SNN</td><td>BBox</td><td>https://github.com/visionml/pytracking</td></tr><tr><td>TransT [39]</td><td>Transformer</td><td>BBox</td><td>https://github.com/chenxin-dlut/TransT</td></tr><tr><td>MixFormer [14]</td><td>Transformer</td><td>BBox</td><td>https://github.com/MCG-NJU/MixFormer</td></tr><tr><td>OSTrack [15]</td><td>Transformer</td><td>BBox</td><td>https://github.com/botaoye/OSTrack</td></tr><tr><td>GRM [16]</td><td>Transformer</td><td>BBox</td><td>https://github.com/Little-Podi/GRM</td></tr><tr><td>SNLT [46]</td><td>SNN</td><td>NL&amp;BBox</td><td>https://github.com/fredfung007/snlt</td></tr><tr><td>VLT_SCAR [42]</td><td>SNN</td><td>NL&amp;BBox</td><td>https://github.com/JudasDie/SOTS</td></tr><tr><td>VLT_TT [42]</td><td>Transformer</td><td>NL&amp;BBox</td><td>https://github.com/JudasDie/SOTS</td></tr><tr><td>JointNLT [18]</td><td>Transformer</td><td>NL&amp;BBox</td><td>https://github.com/lizhou-cs/JointNLT</td></tr></table>
598
+
599
+ Note: SNN-Siamese Neural Network. NL-Natural Language. BBox-Bounding Box.
600
+
601
+ # B.3 More Detailed Experimental Results on MGIT
602
+
603
+ ![](images/b71472fa694b5a1102527ad29ed658c337f0c7e8dee51699bd320f6ef969bde5.jpg)
604
+ (a) Precision plots on MGIT
605
+
606
+ ![](images/8c66b436e2bc9b100b1ecab0705114ced36161a2d869600483f29830abb53e03.jpg)
607
+ (b) Normalized precision plots on MGIT
608
+
609
+ ![](images/a97e249457c37a638b89d71fb1f1d8c8fbfdf6f96913f17b10a5e0d16c69cc80.jpg)
610
+ (c) Success plots on MGIT (based on IoU)
611
+ Figure A4: The precision plot (a), normalized precision plot (b), and the success plot (c) on MGIT.
612
+
613
+ Figure A4 illustrates the precision plot (a), the normalized precision plot (b), and the success plot (c). The performance of the trackers indicates that multi-modal trackers still exhibit a certain gap compared to visual-based trackers.
614
+
615
+ Specifically, the multi-modal tracker SNLT [46] performs poorly on MGIT. The possible reasons are as follows: (1) SNLT is based on local search, which exhibits a performance gap compared to global search trackers. Experiments reveal that local search trackers may encounter a more severe tracking drift problem on the MGIT task (this method tracks by cutting the search area out of the original image, and the high image resolution in MGIT challenges it). Besides, SNLT's weaker tracking ability makes it more prone to failure, and the resulting errors further misguide the tracker, creating a negative feedback loop. For example, when it loses the target or drifts towards a similar object, it will persist in tracking failure until the target reemerges within its search area with more recognizable visual information. (2) In addition, there are some issues within SNLT's open-source code; although we fixed them during our reproduction and evaluation, SNLT has some limitations in project completeness compared to other, better-maintained open-source multi-modal trackers, which may also contribute to its poor performance.
+
+ Besides, other challenges may also influence tracking performance. Here, we discuss the challenges trackers face from two perspectives: by comparing the different multi-modal information provisioning mechanisms, and by contrasting single-modal with multi-modal approaches.
+
+ 1. Comparison of different multi-modal evaluation mechanisms (a schematic sketch of these mechanisms follows this list): (1) First, compared to mechanism A, which only provides semantic information for the first action, multi-modal methods show no significant performance decrease under mechanisms B, C, and D; except for JointNLT [18], the success rate scores of the multi-modal trackers under mechanism D are superior to those under mechanism A. (2) Furthermore, most algorithms perform worst under mechanism A, because they only receive the semantic description of the first action, and this description is never updated afterwards, so it may introduce noise as the sequence progresses. (3) Mechanisms B and C both regularly update the semantic information during tracking; however, C updates at a moderate frequency and each update provides text of moderate length, which may better match the capabilities of current trackers. (4) Mechanism D, like mechanism A, only provides semantic information in the initial frame, but it offers story-level information that covers the entire video. However, current multi-modal trackers lack well-designed semantic understanding modules for handling long texts, making it difficult to align such semantic information with the visual information.
+
+ 2. Comparison between single-modal and multi-modal approaches: compared to purely visual trackers, the semantic modality can provide information beyond superficial cues such as appearance and location. However, achieving effective correlation and fusion between modalities remains an open problem. Prior research on other multi-modal tasks has demonstrated, both experimentally and theoretically, that introducing additional modalities can yield stronger algorithms [48]. In SOT research, by contrast, the performance of multi-modal algorithms still lags behind that of single-modal algorithms. A main reason may be the lack of high-quality benchmarks: existing multi-modal benchmarks have significant limitations in the completeness of their semantic information and the complexity of their videos, so they can hardly provide a favorable experimental environment for multi-modal trackers. Moreover, these benchmarks do not adopt a multi-granular annotation strategy, so their evaluation systems only cover mechanism A; as a result, they cannot thoroughly expose the bottlenecks of current methods as our work does.
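+
+ To make the four mechanisms concrete, the sketch below models each mechanism as a per-frame text schedule. It is purely illustrative: the `Segment` structure, field names, and boundary conventions are our assumptions, not the official MGIT toolkit API.
+
+ ```python
+ from typing import List, Optional, TypedDict
+
+ class Segment(TypedDict):
+     start: int   # first frame covered by this description
+     end: int     # last frame covered by this description
+     text: str    # natural-language description of the segment
+
+ def prompt_for_frame(frame_idx: int, mechanism: str,
+                      actions: List[Segment],
+                      activities: List[Segment],
+                      story: str) -> Optional[str]:
+     """Text visible to the tracker at frame_idx under mechanisms A-D."""
+     if mechanism == "A":   # first action's text only, never refreshed
+         return actions[0]["text"]
+     if mechanism == "D":   # one story given at frame 0, covering the video
+         return story
+     if mechanism not in ("B", "C"):
+         raise ValueError(f"unknown mechanism: {mechanism!r}")
+     # B refreshes at every fine-grained action boundary; C refreshes at the
+     # coarser activity boundaries (moderate frequency, moderate text length).
+     segments = actions if mechanism == "B" else activities
+     return next((s["text"] for s in segments
+                  if s["start"] <= frame_idx <= s["end"]), None)
+ ```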
+
+ Therefore, the emergence of MGIT can provide a high-quality experimental environment for research and help algorithms quickly identify their bottleneck issues under various evaluation mechanisms, thereby accelerating the development of efficient multi-modal trackers.
+
+ Furthermore, the multi-modal trackers demonstrate superior performance on the normalized precision metric (Figure A4 (b)). We attribute this to the integration of the semantic modality, which enables multi-modal trackers to recognize the target's position effectively. This observation aligns with intuition, but there is still room for improvement in localization accuracy.
+
+ # B.4 A Comparison Experiment about Using Various Granularities
+
+ ![](images/7a2c9053d00b28c10b55c396f749e5c47257bdc5dcf18728925c1a023ea8e4d7.jpg)
+ Figure A5: An illustration of mechanism B, mechanism D, and their combination.
+
+ Table A5: Results of mechanism B, mechanism D, and their combination.
+
+ <table><tr><td>Tracker</td><td>Architecture</td><td>Initialization</td><td>Mechanism</td><td>PRE</td><td>N-PRE</td><td>SR</td></tr><tr><td rowspan="3">SNLT [46]</td><td rowspan="3">SNN</td><td rowspan="3">NL&amp;BBox</td><td>Action (B)</td><td>0.004</td><td>0.226</td><td>0.036</td></tr><tr><td>Story (D)</td><td>0.005</td><td>0.230</td><td>0.040</td></tr><tr><td>Combination (B+D)</td><td>0.004</td><td>0.229</td><td>0.037</td></tr><tr><td rowspan="3">VLT_SCAR [42]</td><td rowspan="3">SNN</td><td rowspan="3">NL&amp;BBox</td><td>Action (B)</td><td>0.116</td><td>0.354</td><td>0.167</td></tr><tr><td>Story (D)</td><td>0.127</td><td>0.403</td><td>0.184</td></tr><tr><td>Combination (B+D)</td><td>0.107</td><td>0.367</td><td>0.165</td></tr><tr><td rowspan="3">VLT_TT [42]</td><td rowspan="3">Transformer</td><td rowspan="3">NL&amp;BBox</td><td>Action (B)</td><td>0.318</td><td>0.602</td><td>0.468</td></tr><tr><td>Story (D)</td><td>0.322</td><td>0.616</td><td>0.480</td></tr><tr><td>Combination (B+D)</td><td>0.327</td><td>0.612</td><td>0.477</td></tr><tr><td rowspan="3">JointNLT [18]</td><td rowspan="3">Transformer</td><td rowspan="3">NL&amp;BBox</td><td>Action (B)</td><td>0.445</td><td>0.786</td><td>0.610</td></tr><tr><td>Story (D)</td><td>0.433</td><td>0.773</td><td>0.600</td></tr><tr><td>Combination (B+D)</td><td>0.443</td><td>0.783</td><td>0.607</td></tr></table>
+
+ Considering that integrating information of different granularities may further benefit the algorithms, we take mechanisms B and D as an example to explore whether multi-granularity information is more effective for the algorithms. As shown in Figure A5, we combine mechanisms B and D with the following strategy: taking mechanism B as the main component, we replace the semantic information of its first frame with the story information from mechanism D. The experimental results are shown in Table A5.
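+
+ Reusing the hypothetical `Segment` schedule from the earlier sketch, this combination strategy can be expressed in a few lines: mechanism B's action-level schedule, with the text of the first action swapped for the story. This is only an illustration of the strategy described above, not the benchmark's implementation.
+
+ ```python
+ from typing import List, Optional
+
+ def combined_prompt(frame_idx: int, actions: List[dict],
+                     story: str) -> Optional[str]:
+     """Mechanism B+D: follow mechanism B's action-level schedule, but
+     replace the first action's text with the story information."""
+     for i, action in enumerate(actions):
+         if action["start"] <= frame_idx <= action["end"]:
+             return story if i == 0 else action["text"]
+     return None  # frame falls between annotated actions
+ ```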
+
+ Since most current multi-modal trackers match target features frame by frame, the story information received in the first frame is replaced by new action information once the next action is reached. Therefore, the effective range of the story information only covers the initial action. As a result, for most algorithms, the score difference between the combination (B+D) and the original mechanism B is insignificant.
+
+ It is worth noting that this experiment only provides a simple and direct approach to evaluating tracking performance with combined multi-granularity information. The direct reason for the insignificant improvement lies in the limitations of existing trackers: they lack a well-designed semantic processing module and have poor multi-modal alignment capability. If future multi-modal tracking algorithms design a stronger semantic information processing module that comprehensively represents the video content (e.g., by hierarchically constructing the video content based on a graph [43]), they may obtain more powerful tracking capability than by using a single granularity of information.
amultimodalglobalinstancetrackingbenchmarkmgitbetterlocatingtargetincomplexspatiotemporalandcausalrelationship/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:39dee4fdbef40e81b486d44c57e5c647c95091058bf4c3a17eb4a1d7b392adb2
+ size 1486621
amultimodalglobalinstancetrackingbenchmarkmgitbetterlocatingtargetincomplexspatiotemporalandcausalrelationship/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c639ff6ef3f24f4637036106c733c2eaeae64ceb08874be2f2a4e963612a1d30
+ size 697271
aneuralcollapseperspectiveonfeatureevolutioningraphneuralnetworks/854e4251-0979-4ea7-b7a7-1c013dffffcd_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ece2cea3ac00ee56ec050840dd133ea96d686ef947326a45ca046919ad416736
+ size 440719
aneuralcollapseperspectiveonfeatureevolutioningraphneuralnetworks/854e4251-0979-4ea7-b7a7-1c013dffffcd_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d29b672d81838890235e1baab5b46ade431cbaae142a3633b8c631d552b58be1
+ size 486041