Eric03 committed on
Commit
41655c5
·
verified ·
1 Parent(s): fb7897e

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. 2003.02249/main_diagram/main_diagram.drawio +1 -0
  2. 2003.02249/main_diagram/main_diagram.pdf +0 -0
  3. 2003.02249/paper_text/intro_method.md +160 -0
  4. 2005.02066/main_diagram/main_diagram.drawio +0 -0
  5. 2005.02066/paper_text/intro_method.md +33 -0
  6. 2010.13993/main_diagram/main_diagram.drawio +1 -0
  7. 2010.13993/main_diagram/main_diagram.pdf +0 -0
  8. 2010.13993/paper_text/intro_method.md +16 -0
  9. 2106.03904/main_diagram/main_diagram.drawio +1 -0
  10. 2106.03904/main_diagram/main_diagram.pdf +0 -0
  11. 2106.03904/paper_text/intro_method.md +142 -0
  12. 2110.01428/main_diagram/main_diagram.drawio +1 -0
  13. 2110.01428/main_diagram/main_diagram.pdf +0 -0
  14. 2110.01428/paper_text/intro_method.md +84 -0
  15. 2111.05323/main_diagram/main_diagram.drawio +1 -0
  16. 2111.05323/main_diagram/main_diagram.pdf +0 -0
  17. 2111.05323/paper_text/intro_method.md +104 -0
  18. 2111.15340/main_diagram/main_diagram.drawio +0 -0
  19. 2111.15340/paper_text/intro_method.md +75 -0
  20. 2112.08025/main_diagram/main_diagram.drawio +0 -0
  21. 2112.08025/paper_text/intro_method.md +142 -0
  22. 2201.08214/main_diagram/main_diagram.drawio +1 -0
  23. 2201.08214/main_diagram/main_diagram.pdf +0 -0
  24. 2201.08214/paper_text/intro_method.md +132 -0
  25. 2202.07028/main_diagram/main_diagram.drawio +1 -0
  26. 2202.07028/main_diagram/main_diagram.pdf +0 -0
  27. 2202.07028/paper_text/intro_method.md +96 -0
  28. 2205.12247/main_diagram/main_diagram.drawio +0 -0
  29. 2205.12247/paper_text/intro_method.md +76 -0
  30. 2205.14345/main_diagram/main_diagram.drawio +1 -0
  31. 2205.14345/paper_text/intro_method.md +184 -0
  32. 2206.12680/main_diagram/main_diagram.drawio +1 -0
  33. 2206.12680/paper_text/intro_method.md +163 -0
  34. 2210.09404/main_diagram/main_diagram.drawio +0 -0
  35. 2210.09404/paper_text/intro_method.md +203 -0
  36. 2210.13014/main_diagram/main_diagram.drawio +0 -0
  37. 2210.13014/paper_text/intro_method.md +160 -0
  38. 2303.14368/main_diagram/main_diagram.drawio +0 -0
  39. 2303.14368/paper_text/intro_method.md +174 -0
  40. 2304.06244/main_diagram/main_diagram.drawio +118 -0
  41. 2304.06244/main_diagram/main_diagram.pdf +0 -0
  42. 2304.06244/paper_text/intro_method.md +109 -0
  43. 2305.15074/main_diagram/main_diagram.drawio +0 -0
  44. 2305.15074/paper_text/intro_method.md +31 -0
  45. 2305.16988/main_diagram/main_diagram.drawio +0 -0
  46. 2305.16988/main_diagram/main_diagram.pdf +0 -0
  47. 2305.16988/paper_text/intro_method.md +176 -0
  48. 2307.14367/main_diagram/main_diagram.drawio +0 -0
  49. 2307.14367/paper_text/intro_method.md +42 -0
  50. 2310.05057/main_diagram/main_diagram.drawio +0 -0
2003.02249/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="www.draw.io" modified="2020-01-30T23:11:58.701Z" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36" etag="lylYqutsHfOBzt5p6ZJ5" version="12.5.8" type="device"><diagram id="1HZlRQZvQ5HgiqLno1Uz" name="Page-1">7Vzdk5o6FP9rnGkfdCABPx531fa2s/243Z25vU+dCFG5C8RCrGv/+ptAEEiiuCrourUPJYfkQH7n5OR8hG3BYfD0PkKL+SfiYr8FDPepBUctAEzDgOw/TlmnlK7ZSwmzyHNFp5xw7/3G2UhBXXoujksdKSE+9RZlokPCEDu0RENRRFblblPil5+6QDOsEO4d5KvUfzyXzlNq3zZy+l/Ym82zJ7MZp3cClHUWhHiOXLIqkOC4BYcRITS9Cp6G2OfgZbik495tubt5sQiHdJ8BK9MZAbxaj/0fd5PR7+jdHVm1uymXX8hfigmLl6XrDIGILEMXcyZGC96u5h7F9wvk8LsrJnNGm9PAZy2TXaovJd7zF44ofiqQxEu+xyTANFqzLuKuJfASCgMzQFc5/GZP0OZF6LOOSIh8tmGdo8IuBDDPAAkoIH0jt+NvD2gHVmY1VlPP94fEJ1EyFk5t/o/RYxqRR1y4001+fAQJaYGe/k6E+qDT75eBh10FeB3u0KoJ9p4KO0Pl2+g41E+AlQklFQWWglRXgxSoS0H7ClKfPt99uDycuupSbhQn01KAgh3W/hBSHAXY9RDFrPkQIS/0whm7/DpHMVaAZPOnZbTKazYkIZYWuCAh35uFrOlg/khG4Gh6bLu5ETcCz3X5Y7TiKZtibg3uxUtpdoNniwv2StICfY3h7TcpLks1vBYX1wOKZpi+XkEBaV1pJWU3KinVAtlcUmNOQNQj4SsTkQX2EBFoUkT2Hq4eDt0b7jPnGBfdlbLzYSQ/RmcgRevvAsek8S9vdICdtUdPxbujddZ68uh3wZxfp8Ns0coH8UY2ZqtoYrKMHFztxNHUdlR6HdgtxQWqoAuC1C21jBZhn+n/r3I0oROueMJX4oW0sIUCU/HLZC7p1MXAYgAg87K6Ci9T4pXCo/BKNG4z+SOUUPXp6lXC3stUwv5lKSFUldA6WAll37k2DQyjD18C7+Pnx3X7gS568fLjF9qGigKafKP6GhEHxzG7GpJw6s1ezjal7EkazdzuoZtlaWw89uI2BY2OCVTlgifYqbQiUp30Fuj6HP0Fu5jRZAWmBI4H6zpFYtV1fy55OuV2yLTRY2AD4zNe5WR5eLxAYUZzhNyNhJHR6m16TSJ5HJtWcWiZXUFrsuc6qUbc8KU4m7zhgTUDj6Fj8GWUXXbfJk+/5RNi82pPUeD563RY2oVxRMEi6QJZ2J28dEx8JgH5xvYhvjfBUeaSBSQkzxgb4NDf0Z9zixP1zeaRaHQ7pSXzWET5zQlyHmeJfrclgIBtb5hK12910jS4SoWLJf0REHfp41yKm84RYdOmqO1z89J2mCfqFjkN/wj7JQmbcaY89vtBUfwY68SNHRIxCQ+D0Pf+CPrFCjr1BraKeUKI//OPlF+6lDudzoECE7TeqKI3I6e+gkIu+BTHFD4yL8/H05zZUb6Z5CibmuQpGNRVB9E6ZmqSB6DUecZ8Wi2emUvW6avwnaGUhdviOzfoOA+qg+sDinnHgWTZlVq8UdhmtDgLeS4JJjmjqy166iL8GmFS84Wbqqdsqt84ZMFnc/t2B5JnqIgeJxMrj3szsdhQEUtdJVG9UNT0xakqfcctc7aHZ0XQHcW+umoSeqzU7UrB6bBc48EpQymfV5lD3Lkwi0nEHcpyrqQhNPnyMTY/s6wdPUns++YPLcNWNE3mVXMK0VT32UrLeHPtltFUV3ujlhGou/otCw7/vkjTmJ2BO5dp3NQzr800DvY0jYUDeJdnGkHvdKZR5lWzacz06JJ8a8vYw7fWHUOqz7cGqhv3unxry1CPG57btwZq3ekifGtrIOlvg6fo9EDZ17l7ZKuyevewzrl7WD2+dk7uWNs9ezejureOA/INV+VV62zi2b1q9bzOZXjVilHUuNTNGsUrzTaAfbMNqapcqFE81KVWjGLT/rSaagATXnK5I8hl/93zukiYFO3GoUNcfuRFUrrrLL5YtlRXABqn3tSZSliTAbDUBIRqANiKzFAgEZ2TGQmRP86pEmp5nztCFkJ8/2FK1+JDObSkRGdbT7L04ZYg+egjrVuORlesKWY60brQbcE7xNufY0tn27Kdc+vJRPm9yv3ZRfoGpz2OeIHFKKtbBuICanZwR8B8PQ6gtNmYgzNHxFCNiE/3Bd5xSdWuBFVPhapR9w+qMfFFJA9koIAm+dUsUGqYdxkRhYIUOHNEAZ8XUTg+imPPKeNySPCw7VOKqi8pWqfzPITdqYw5BnqJNpTFl7/CMyVN2DfIgP0t+21Tn0TscW7oShVt33rRNle4oeDWko34oGMfpmuyb6dhVbO2ZR8OXbe27fpjG9UHN86qbUCul9ngMF0Dpr2bUd2aBk6gabVrTXViDZx1l+vL2nDoLjcYdAZSddo6nfFhzfxP6aTd8z9IBMf/Aw==</diagram></mxfile>
2003.02249/main_diagram/main_diagram.pdf ADDED
Binary file (37.8 kB).
 
2003.02249/paper_text/intro_method.md ADDED
@@ -0,0 +1,160 @@
1
+ # Introduction
2
+
3
+ Transfer learning is an area of research in which knowledge from pretrained models is transferred to new tasks. In recent years, Transformer-based models like BERT [@devlin-etal-2019-bert] and T5 [@2019t5] have yielded state-of-the-art results on the lion's share of benchmark tasks for language understanding through pretraining and transfer, often paired with some form of multitask learning.
4
+
5
+ `jiant` enables a variety of complex training pipelines through simple configuration changes, including multi-task training [@Caruana1993MultitaskLA; @liu-etal-2019-multi] and pretraining, as well as the sequential fine-tuning approach from STILTs [@Phang2018SentenceEO]. In STILTs, intermediate task training takes a pretrained model like ELMo or BERT, and applies supplementary training on a set of intermediate tasks, before finally performing single-task training on additional downstream tasks.
6
+
7
+ `jiant` can be cloned and installed from GitHub: <https://github.com/nyu-mll/jiant>. `jiant` v1.3.0 requires Python 3.5 or later, and `jiant`'s core dependencies are PyTorch [@NEURIPS2019_9015], AllenNLP [@Gardner2017AllenNLP], and HuggingFace's Transformers [@Wolf2019HuggingFacesTS]. `jiant` is released under the MIT License [@osi2020]. `jiant` runs on consumer-grade hardware or in cluster environments with or without CUDA GPUs. The `jiant` repository also contains documentation and configuration files demonstrating how to deploy `jiant` in Kubernetes clusters on Google Kubernetes Engine.
8
+
9
+ - Tasks: Tasks have references to task data, methods for processing data, references to classifier heads, and methods for calculating performance metrics and making predictions.
10
+
11
+ - Sentence Encoder: Sentence encoders map from the indexed examples to a sentence-level representation. Sentence encoders can include an input module (e.g., Transformer models, ELMo, or word embeddings), followed by an optional second layer of encoding (usually a BiLSTM). Examples of possible sentence encoder configurations include BERT, ELMo followed by a BiLSTM, BERT with a variety of pooling and aggregation methods, or a bag of words model.
12
+
13
+ - Task-Specific Output Heads: Task-specific output modules map representations from sentence encoders to outputs specific to a task, e.g. entailment/neutral/contradiction for NLI tasks, or tags for part-of-speech tagging. They also include logic for computing the corresponding loss for training (e.g. cross-entropy).
14
+
15
+ - Trainer: Trainers manage the control flow for the training and validation loop for experiments. They sample batches from one or more tasks, perform forward and backward passes, calculate training metrics, evaluate on a validation set, and save checkpoints. Users can specify experiment-specific parameters such as learning rate, batch size, and more.
16
+
17
+ - Config: Config files or flags are defined in HOCON[^3] format. Configs specify parameters for `jiant` experiments including choices of tasks, sentence encoder, and training routine.[^4]
18
+
19
+ Configs are `jiant`'s primary user interface. Tasks and modeling components are designed to be modular, while `jiant`'s pipeline is a monolithic, configuration-driven design intended to facilitate a number of common workflows outlined in [3.3](#pipeline){reference-type="ref" reference="pipeline"}.
20
+
21
+ ![`jiant` pipeline stages using RoBERTa as the sentence encoder, ReCoRD and MNLI tasks as intermediate tasks, and MNLI and BoolQ as tasks for target training and evaluation. The diagram highlights that during target training and evaluation phases, copies are made of the sentence encoder model, and fine tuning and evaluation for each task are conducted on separate copies.](jiant_flow_v3.png){#fig:flow width="\\textwidth"}
22
+
23
+ `jiant`'s core pipeline consists of the five stages described below and illustrated in Figure [2](#fig:flow){reference-type="ref" reference="fig:flow"}:
24
+
25
+ 1. A config or multiple configs defining an experiment are interpreted. Users can choose and configure models, tasks, and stages of training and evaluation.
26
+
27
+ 2. The tasks and sentence encoder are prepared:
28
+
29
+ 1. The task data is loaded, tokenized, and indexed, and the preprocessed task objects are serialized and cached. In this process, AllenNLP is used to create the vocabulary and index the tokenized data.
30
+
31
+ 2. The sentence encoder is constructed and (optionally) pretrained weights are loaded.[^5]
32
+
33
+ 3. The task-specific output heads are created for each task, and task heads are attached to a common sentence encoder. Optionally, different tasks can share the same output head, as in @liu-etal-2019-multi.
34
+
35
+ 3. Optionally, in the intermediate phase the trainer samples batches randomly from one or more tasks,[^6] and trains the shared model.
36
+
37
+ 4. Optionally, in the target training phase, a copy of the model is configured and trained or fine-tuned for each target task separately.
38
+
39
+ 5. Optionally, the model is evaluated on the validation and/or test sets of the target tasks.
40
+
41
+ `jiant` supports over 50 tasks. Task types include classification, regression, sequence generation, tagging, masked language modeling, and span prediction. `jiant` focuses on NLU tasks like MNLI [@N18-1101], CommonsenseQA [@talmor2018commonsenseqa], the Winograd Schema Challenge [@wsc], and SQuAD [@squad]. A full inventory of tasks and task variants is available in the [`jiant/tasks`](https://github.com/nyu-mll/jiant/tree/master/jiant/tasks) module.
42
+
43
+ `jiant` provides support for cutting-edge sentence encoder models, including support for Huggingface's Transformers. Supported models include: ELMo [@peters-etal-2018-deep], GPT [@radford2018improving], BERT [@devlin-etal-2019-bert], XLM [@NIPS20198928], GPT-2 [@radford2019language], XLNet [@yang2019xlnet], RoBERTa [@liu2019roberta], and ALBERT [@lan2019albert]. `jiant` also supports the from-scratch training of (bidirectional) LSTMs [@hochreiter1997long] and deep bag of words models [@iyyer-etal-2015-deep], as well as syntax-aware models such as PRPN [@DBLP:conf/iclr/ShenLHC18] and ON-LSTM [@shen2018ordered]. `jiant` also supports word embeddings such as GloVe [@pennington-etal-2014-GloVe].
44
+
45
+ <figure id="tab:config" data-latex-placement="t">
+ 
+ ```
+ // Config for BERT experiments.
+ 
+ // Get default configs from a file:
+ include "defaults.conf"
+ exp_name = "bert-large-cased"
+ 
+ // Data and preprocessing settings
+ max_seq_len = 256
+ 
+ // Model settings
+ input_module = "bert-large-cased"
+ transformers_output_mode = "top"
+ s2s = {
+     attention = none
+ }
+ sent_enc = "none"
+ sep_embs_for_skip = 1
+ classifier = log_reg
+ // fine-tune entire BERT model
+ transfer_paradigm = finetune
+ 
+ // Training settings
+ dropout = 0.1
+ optimizer = bert_adam
+ batch_size = 4
+ max_epochs = 10
+ lr = .00001
+ min_lr = .0000001
+ lr_patience = 4
+ patience = 20
+ max_vals = 10000
+ 
+ // Phase configuration
+ do_pretrain = 1
+ do_target_task_training = 1
+ do_full_eval = 1
+ write_preds = "val,test"
+ write_strict_glue_format = 1
+ 
+ // Task specific configuration
+ commitbank = {
+     val_interval = 60
+     max_epochs = 40
+ }
+ ```
+ 
+ <figcaption>Example <code>jiant</code> experiment config file.</figcaption>
+ </figure>
92
+
93
+ `jiant` experiments can be run with a simple CLI:
94
+
95
+ ```
96
+ python -m jiant \
97
+ --config_file roberta_with_mnli.conf \
98
+ --overrides "target_tasks = swag, \
99
+ run_name = swag_01"
100
+ ```
101
+
102
+ `jiant` provides default config files that allow running many experiments without modifying source code.
103
+
104
+ `jiant` also provides baseline config files that can serve as a starting point for model development and evaluation against GLUE [@wang2018glue] and SuperGLUE [@wang2019superglue] benchmarks.
105
+
106
+ More advanced configurations can be developed by composing multiple configurations files and overrides. Figure [3](#tab:config){reference-type="ref" reference="tab:config"} shows a config file that overrides a default config, defining an experiment that uses BERT as the sentence encoder. This config includes an example of a task-specific configuration, which can be overridden in another config file or via a command line override.
107
+
108
+ Because `jiant` implements the option to provide command line overrides with a flag, it is easy to write scripts that launch `jiant` experiments over a range of parameters, for example while performing grid search across hyperparameters. `jiant` users have successfully run large-scale experiments launching hundreds of runs on both Kubernetes and Slurm.
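+
+ For illustration, a minimal launcher sketch along these lines (the hyperparameter grid and run names are hypothetical; only the `--config_file` and `--overrides` flags come from the CLI shown above):
+
+ ```python
+ import itertools
+ import subprocess
+
+ # Hypothetical grid over two training hyperparameters from the config above.
+ grid = {"lr": [1e-5, 3e-5], "batch_size": [4, 8]}
+
+ for lr, bs in itertools.product(grid["lr"], grid["batch_size"]):
+     overrides = f"lr = {lr}, batch_size = {bs}, run_name = grid_lr{lr}_bs{bs}"
+     # Each run reuses the same base config and only overrides a few values.
+     subprocess.run(
+         ["python", "-m", "jiant",
+          "--config_file", "roberta_with_mnli.conf",
+          "--overrides", overrides],
+         check=True,
+     )
+ ```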
109
+
110
+ Here we highlight some example use cases and key corresponding `jiant` config options required in these experiments:
111
+
112
+ - Fine-tune BERT on SWAG [@zellers-etal-2018-swag] and SQUAD [@squad], then fine-tune on HellaSwag [@zellers-etal-2019-hellaswag]:
113
+
114
+ input_module = bert-base-cased
115
+ pretrain_tasks = "swag,squad"
116
+ target_tasks = hellaswag
117
+
118
+ - Train a probing classifier over a frozen BERT model, as in @tenney2019bert:
119
+
120
+ input_module = bert-base-cased
121
+ target_tasks = edges-dpr
122
+ transfer_paradigm = frozen
123
+
124
+ - Compare performance of GloVe [@pennington-etal-2014-GloVe] embeddings using a BiLSTM:
125
+
126
+ input_module = glove
127
+ sent_enc = rnn
128
+
129
+ - Evaluate ALBERT [@lan2019albert] on the MNLI [@N18-1101] task:
130
+
131
+ input_module = albert-large-v2
132
+ target_task = mnli
133
+
134
+ `jiant` implements features that improve run stability and efficiency:
135
+
136
+ - `jiant` implements checkpointing options designed to offer efficient early stopping and to show consistent behavior when restarting after an interruption.
137
+
138
+ - `jiant` caches preprocessed task data to speed up reuse across experiments which share common data resources and artifacts.
139
+
140
+ - `jiant` implements gradient accumulation and multi-GPU training, which enable training with larger batches than can fit in memory on a single GPU.
141
+
142
+ - `jiant` supports outputting predictions in a format ready for GLUE and SuperGLUE benchmark submission.
143
+
144
+ - `jiant` generates custom log files that capture experimental configurations, training and evaluation metrics, and relevant run-time information.
145
+
146
+ - `jiant` generates TensorBoard event files [@tensorflow2015-whitepaper] for training and evaluation metric tracking. TensorBoard event files can be visualized using the TensorBoard Scalars Dashboard.
147
+
148
+ `jiant`'s design offers conveniences that reduce the need to modify code when making changes:
149
+
150
+ - `jiant`'s task registry makes it easy to define a new version of an existing task using different data. Once the new task is defined in the task registry, the task is available as an option in `jiant`'s config.
151
+
152
+ - `jiant`'s sentence encoder and task output head abstractions allow for easy support of new sentence encoders.
153
+
154
+ In use cases requiring the introduction of a new task, users can use class inheritance to build on a number of available parent task types including classification, tagging, span prediction, span classification, sequence generation, regression, ranking, and multiple choice task classes. For these task types, corresponding task-specific output heads are already implemented.
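+
+ As a schematic illustration only (the class and method names below are hypothetical and do not reflect `jiant`'s actual API), defining a new classification-style task by inheritance might look like:
+
+ ```python
+ # Hypothetical sketch: names are illustrative, not jiant's real classes.
+ class ClassificationTask:
+     """Stand-in for a generic parent classification task type."""
+     def __init__(self, name, n_classes):
+         self.name, self.n_classes = name, n_classes
+
+ class MyEntailmentTask(ClassificationTask):
+     """A new 3-way entailment task built by inheriting the parent type."""
+     def __init__(self, data_path):
+         super().__init__(name="my_entailment", n_classes=3)
+         self.data_path = data_path
+
+     def load_data(self):
+         # Read premise/hypothesis pairs and integer labels from task data.
+         ...
+ ```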
155
+
156
+ More than 30 researchers and developers from more than 5 institutions have contributed code to the `jiant` project.[^7] `jiant`'s maintainers welcome pull requests that introduce new tasks or sentence encoder components, and pull requests are actively reviewed. The `jiant` repository's continuous integration system requires that all pull requests pass unit and integration tests and meet Black[^8] code formatting requirements.
157
+
158
+ While `jiant` is quite flexible in the pipelines that can be specified through configs, and some components are highly modular (e.g., tasks, sentence encoders, and output heads), modification of the pipeline code can be difficult. For example, training in more than two phases would require modifying the trainer code.[^9] Making multi-stage training configurations more flexible is on `jiant`'s development roadmap.
159
+
160
+ `jiant`'s development roadmap prioritizes adding support for new Transformer models and adding tasks that are commonly used for pretraining and evaluation in NLU. Additionally, there are plans to make `jiant`'s training phase configuration options more flexible to allow training in more than two phases, and to continue refactoring `jiant`'s code to keep it flexible enough to track developments in NLU research.
2005.02066/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2005.02066/paper_text/intro_method.md ADDED
@@ -0,0 +1,33 @@
1
+ # Introduction
2
+
3
+ In our proposed CyC-PDAM, the CycleGAN follows the same implementation as in its original work [@zhu2017unpaired]. When training the CycleGAN, the initial learning rate was set to $0.0001$ for the first half of the total training iterations and linearly decayed to $0$ over the second half.
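+
+ A minimal sketch of this learning-rate schedule (function and variable names are illustrative):
+
+ ```python
+ def cyclegan_lr(step, total_iters, base_lr=1e-4):
+     # Constant at 0.0001 for the first half of training,
+     # then linearly decayed to 0 over the second half.
+     half = total_iters // 2
+     if step < half:
+         return base_lr
+     return base_lr * max(0.0, 1.0 - (step - half) / (total_iters - half))
+ ```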
4
+
5
+ The PDAM is trained with a batch size of $1$ and each batch contains $2$ images, one from the source and the other from the target domain. Due to the small batch size, we replace traditional batch normalization layers with group normalization [@wu2018group] layers, with the default group number $32$ as in [@wu2018group].
6
+
7
+ The overall loss function of PDAM is defined as:
8
+
9
+ $$\begin{equation}
10
+ \begin{aligned}
11
+ L_{pdam} & = \alpha_{img} L_{rpn} + \alpha_{ins} L_{det} + \alpha_{sem} L_{(sem-seg)} \\
12
+ & + \alpha_{da}(L_{(img-da)} + L_{(sem-da)} + L_{(ins-da)})
13
+ \end{aligned}
14
+ \end{equation}$$ where $L_{rpn}$ is the loss function for the RPN, $L_{det}$ is the loss for class, bounding box, and instance mask prediction in Mask R-CNN, $L_{(sem-seg)}$ is the cross-entropy loss for semantic segmentation, and $L_{(img-da)}$, $L_{(sem-da)}$, and $L_{(ins-da)}$ are cross-entropy losses for domain classification at the image, semantic, and instance levels. $\alpha_{img}$, $\alpha_{ins}$, and $\alpha_{sem}$ are calculated according to Eq. [\[taskrw\]](#taskrw){reference-type="ref" reference="taskrw"} for task re-weighting. In our experiments, we set $\beta$ to $2$. $\alpha_{da}$ is updated as:
15
+
16
+ $$\begin{equation}
17
+ \begin{aligned}
18
+ \alpha_{da} = \frac{2}{1 + \exp(-10t)} - 1
19
+ \end{aligned}
20
+ \end{equation}$$ where $t$ is the training progress and $t \in [0, 1]$. Thus $\alpha_{da}$ is gradually changed from $0$ to $1$, to avoid the noise from the unstable domain discriminators in the early training stage.
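+
+ As a quick numerical check of this schedule (a minimal sketch; the function and argument names are illustrative):
+
+ ```python
+ import math
+
+ def alpha_da(step, total_iters):
+     # Training progress t in [0, 1]; the domain-adaptation weight
+     # ramps smoothly from 0 (start of training) towards 1 (end).
+     t = step / total_iters
+     return 2.0 / (1.0 + math.exp(-10.0 * t)) - 1.0
+
+ # alpha_da(0, 1000) == 0.0, alpha_da(1000, 1000) is roughly 0.9999
+ ```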
21
+
22
+ During training, the PDAM is optimized by SGD, with a weight decay of $0.001$ and a momentum of $0.9$. The initial learning rate is $0.002$, with linear warm-up in the first $500$ iterations. The learning rate is then decreased to $0.0002$ after $3/4$ of the total training iterations. During inference, only the original Mask R-CNN architecture is used with the adapted weights, and all of the hyperparameters for testing are fine-tuned on the validation set. All of our experiments were implemented with PyTorch [@paszke2017automatic], on two NVIDIA GeForce 1080Ti GPUs.
23
+
24
+ # Method
25
+
26
+ In this section, our proposed CyC-PDAM is compared with several state-of-the-art UDA methods, including CyCADA [@hoffman2017cycada], Chen et al. [@chen2018domain], SIFA [@chen2019synergistic], and DDMRL [@kim2019diversify]. As the original CyCADA focuses on classification and semantic segmentation, we extend it with Mask R-CNN for UDA instance segmentation, as described in Sec. [\[baseline-sec\]](#baseline-sec){reference-type="ref" reference="baseline-sec"}. The method of Chen et al. [@chen2018domain] was originally designed for UDA object detection based on Faster R-CNN, adapting features at the image and instance levels. For UDA instance segmentation, we replace the original VGG16-based Faster R-CNN with the same Mask R-CNN used in our architecture, and the original image- and instance-level adaptation in [@chen2018domain] with ours in Sec. [\[baseline-sec\]](#baseline-sec){reference-type="ref" reference="baseline-sec"}. SIFA [@chen2019synergistic] is a UDA semantic segmentation architecture for CT and MR images, with pixel- and feature-level adaptation. In our experiment, we add the watershed algorithm to separate touching objects in the semantic segmentation prediction of SIFA, for a fair comparison. DDMRL [@kim2019diversify] learns multi-domain-invariant features from various generated domains for UDA object detection, and it is extended to instance segmentation in a similar way to CyCADA [@hoffman2017cycada] and Chen et al. [@chen2018domain]. In addition, we also compare with Hou et al. [@hou2019robust], which is specifically designed for unsupervised nuclei segmentation in histopathology images. They trained a multi-task (segmentation, detection, and refinement) CNN architecture with histopathology images synthesized from randomly generated binary nuclei masks.
27
+
28
+ <figure id="cmp-vis" data-latex-placement="t">
29
+ <img src="vis-cmp5.png" style="width:49.0%" />
30
+ <figcaption>Visualization results for the comparison experiments. The first <span class="math inline">3</span> rows are from the Kumar dataset, and the last <span class="math inline">3</span> rows are from TNBC. </figcaption>
31
+ </figure>
32
+
33
+ Table [\[cmp-exp\]](#cmp-exp){reference-type="ref" reference="cmp-exp"} shows that our proposed method outperforms all the comparison methods by a large margin on different histopathology datasets. In addition, a one-tailed paired t-test is employed to show that all of our improvements are statistically significant, with all p-values under $0.05$. Chen et al. [@chen2018domain] learn domain-invariant features at the image and instance levels. However, due to the large differences between fluorescence microscopy and real histopathology images, feature-level adaptation alone is not enough to reduce the domain gap. With pixel-level adaptation of appearance, all the other methods achieve better performance. Compared with the baseline method CyCADA [@hoffman2017cycada], our CyC-PDAM yields a large improvement of $6 - 12 \%$, due to the effectiveness of our proposed nuclei inpainting mechanism, panoptic-level adaptation, and task re-weighting mechanism. SIFA [@chen2019synergistic] focuses on domain-invariant features at the image and semantic levels, with a UDA semantic segmentation structure. As there is a large number of nuclei in histopathology images, the effectiveness of SIFA is still limited without any instance-level learning or adaptation. Although DDMRL [@kim2019diversify] only adapts features at the image level, its performance is still at the same level as CyCADA, by adapting knowledge across various domains. Among all the comparison methods, Hou et al. [@hou2019robust] achieves the second-best performance. Due to the effectiveness of the panoptic-level feature adaptation and task re-weighting mechanism, our method still outperforms it under all three metrics, in both experiments. Fig. [5](#cmp-vis){reference-type="ref" reference="cmp-vis"} shows visualization examples of all the comparison methods.
2010.13993/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2020-09-30T05:44:19.422Z" agent="5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.102 Safari/537.36" version="13.7.6" etag="1q435NDXNkUBrD9_xpdI" type="google"><diagram id="iQIMGvlgcFMxar9eVH3b">7VpNk5s4EP01PsYFEgh8nHGy2UuqpsqHTY4akG3VAnKEHNv761eAxIcMGYUojjM1vphuSajp92h1d7GA6/z8kePD/hNLSbYAXnpewPcLAHw/WMm/SnNpNCgEjWLHaaomdYoN/Y8opae0R5qScjBRMJYJehgqE1YUJBEDHeacnYbTtiwb7nrAO3Kl2CQ4u9b+Q1Oxb7Rx6HX6vwnd7fXOvqdGcqwnK0W5xyk79VTwwwKuOWOiucrPa5JVztN+adb9NTHaGsZJIWwWKL9/w9lRPZuyS1z0w8oF0q9SeDztqSCbA06qkZOEVur2Is+k5MvLLc2yNcsYr9fBFJN4m0h9iss9SdUkzgQWlBVSfCdtg4+l4Oxf0luHkpg8b+WIMo1wQc6Tj+e3TpNsIywngl/kFLUABsrPimj+SsmnDjagYdv3IdNKrKiya+/deVNeKIeOOxeOOBdlonIVk/b3vYy+HpkeeFfWhH+QE0BwOHeDFVUb53e6NctpIgc2uCjl36dNfzraVf+e3lXa22zc6J0hvd1uQZKMYZmiZxQiN1iClYFldI1lMAJl4ADJYBLJ8oCLaSS3OKfZpcFSDuH8UPsCwkD+Jwq6soEuL69mLOo3pI5Q36XFNc6NWX8mzvEQZ4huh3P4hvPNcIaRgbN3O5zRG863w9mI223ucwOcI4v0pkgfqpRQSgUryNBh5EzF5971F3ntqev3575w0UIhTfzcF77o9ZXQLaolvaqxiqRG2lmyI1fnvT6CBOY7IgbRygKKnqvHsh2t4ySTydm3oRFj/lc7PDFa5zEaaThEOvAMCJvnUav6Wal5I/MIMLOxxgtXN6rp0D62FUNiRwzxliiI7FkihSfCqTSUcHsO3CHgS88DPgyRDOfQ94cv+ipcxmEUowBGCKB4FcxjQ+DJ+yCv/Rm7oF/GjZUrbljzwoID6B45AKzRsQ4BRiwBZixxB7PuKvwYzoOqtgd6pX/CQr7XRa0BHmxPWN0sANNQ3w2urVs0AHAmkiCah6T0N770ph2qCeV3DDZqff0AHTGaO86mif9GkxGve8ZR7YombXngmiZGYNHlpTOagBGa2NQWL5UDRu9oTifIn+wEOS40UspJotp8BJdipPaIEzJeezzHYRB2h+FP1R7IYKd/w16CP93+u6Mi03+92IPxF/0m2P8RDcPXi31o5ALwhj0Hf6yJ+NNlwzK0azu0nQZHbQcdxfp9B83uu0lCkJE7BHOTkMiMGbFdEjInTxjrQf4QTdpOk6y/wj7s3hJF4QvQ15LZguiaGeHCYcmqnTrgELgzDoVoCL08L+ZxyCTjVeHkkEOz+puvvSRBBpK+WUlYR4PQQPIXVa6RmaVGjkuSWU3OV08Ts8lsxnprmqCJZMM1TczK1TVNnPU73R4fd5duBKsXALdljpmmAsuexwxwtYlvFclvqkjARFZwg4pEW/u7P0QSHNOaaUTYfpIkHS6GgA5RUkGoD6lS4YzuKtATCVCV4D5W8NEEZw9qIKdpmk0xiLNjkdann2cehQ7IEBgHfvvJS48MPhhhgxloLNggxe5zwiZwdB9lwg//Aw==</diagram></mxfile>
2010.13993/main_diagram/main_diagram.pdf ADDED
Binary file (25.8 kB).
 
2010.13993/paper_text/intro_method.md ADDED
@@ -0,0 +1,16 @@
1
+ # Introduction
2
+
3
+ Following the success of neural networks in computer vision and natural language processing, there are now a wide range of *graph neural networks* (GNNs) for making predictions involving relational data [@Battaglia2018RelationalIB; @Wu2020ACS]. These models have had much success and sit atop leaderboards such as the Open Graph Benchmark [@Hu2020OpenGB]. Often, the methodological developments for GNNs revolve around creating strictly more expressive architectures than basic variants such as the Graph Convolutional Network (GCN) [@kipf2017semi] or GraphSAGE [@hamilton2017inductive]; examples include Graph Attention Networks [@velickovic2018graph], Graph Isomorphism Networks [@xu2018powerful], and various deep models [@li2019deepgcns; @rong2019dropedge; @chen2020simple]. Many ideas for new GNN architectures are adapted from new architectures in models for language (e.g., attention) or vision (e.g., deep CNNs) with the hopes that success will translate to graphs. However, as these models become more complex, understanding their performance gains is a major challenge, and scaling them to large datasets is difficult.
4
+
5
+ Here, we see how far we can get by combining much simpler models, with an emphasis on understanding where there are easy opportunities for performance improvements in graph learning, particularly transductive node classification. We propose a simple pipeline with three main parts ([1](#fig:overview){reference-type="ref+label" reference="fig:overview"}): (i) a base prediction made with node features that ignores the graph structure (e.g., an MLP or linear model); (ii) a correction step, which propagates uncertainties from the training data across the graph to correct the base prediction; and (iii) a smoothing of the predictions over the graph. Steps (ii) and (iii) are just post-processing and use classical methods for graph-based semi-supervised learning, namely, label propagation [@zhu2005semi]. [^3] With a few modifications and new deployment of these classic ideas, we achieve state-of-the-art performance on several node classification tasks, outperforming big GNN models. In our framework, the graph structure is not used to learn parameters but instead as a post-processing mechanism. This simplicity leads to models with orders of magnitude fewer parameters that take orders of magnitude less time to train and can easily scale to large graphs. We can also combine our ideas with state-of-the-art GNNs and see modest performance gains.
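+
+ A minimal sketch of these two post-processing steps (assuming a symmetrically normalized adjacency matrix `A_hat`, base predictions `Z` from the feature-only model, one-hot labels `Y`, and a boolean mask of training nodes; the hyperparameter names and values are illustrative):
+
+ ```python
+ import numpy as np
+
+ def propagate(A_hat, signal, alpha, iters=50):
+     # Iteratively diffuse a signal over the graph (label-propagation style).
+     out = signal.copy()
+     for _ in range(iters):
+         out = alpha * A_hat @ out + (1.0 - alpha) * signal
+     return out
+
+ def correct_and_smooth(A_hat, Z, Y, train_mask, alpha1=0.9, alpha2=0.8, scale=1.0):
+     # (ii) Correct: propagate the residual errors observed on training nodes.
+     E = np.zeros_like(Z)
+     E[train_mask] = Y[train_mask] - Z[train_mask]
+     Z_corrected = Z + scale * propagate(A_hat, E, alpha1)
+     # (iii) Smooth: clamp training nodes to their labels, then propagate.
+     G = Z_corrected.copy()
+     G[train_mask] = Y[train_mask]
+     return propagate(A_hat, G, alpha2)
+ ```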
6
+
7
+ <figure id="fig:overview" data-latex-placement="t">
8
+ <embed src="figures/toy_blue.pdf" style="width:80.0%" />
9
+ <figcaption>Overview of our GNN-free model, Correct and Smooth, with a toy example. The left cluster belongs to orange and the right cluster belongs to blue. We use MLPs for base predictions, ignoring the graph structure, which we assume gives the same prediction on all nodes in this example. After, base predictions are corrected by propagating errors from the training data. Finally, corrected predictions are smoothed with label propagation.</figcaption>
10
+ </figure>
11
+
12
+ A major source of our performance improvements is directly using labels for predictions. This idea is not new --- early diffusion-based semi-supervised learning algorithms on graphs such as the spectral graph transducer [@joachims2003transductive], Gaussian random field models [@Zhu2003SemiSupervisedLU], and label spreading [@zhou2004learning] all use this idea. However, the motivation for these methods was semi-supervised learning on point cloud data, so the features were used to construct the graph. Since then, these techniques have been used for learning on relational data from just the labels (i.e., no features) [@koutra2011unifying; @gleich2015using; @peel2017graph; @chin2019decoupled] but have largely been ignored in GNNs. That being said, we find that even simple label propagation (which ignores features) does surprisingly well on a number of benchmarks. This provides motivation for combining two orthogonal sources of prediction power --- one coming from the node features (ignoring graph structure) and one coming from using the known labels directly in predictions.
13
+
14
+ Recent research connects GNNs to label propagation [@wang2020unifying; @Jia-2020-GNNR] as well as Markov Random fields [@qu2019gmnn; @gao2019conditional], and some techniques use ad hoc incorporation of label information in the features [@shi2020masked]. However, these approaches are still expensive to train, while we use label propagation in two understandable and low-cost ways. We start with a cheap "base prediction" from a model that ignores graph structure (apart from perhaps a cheap pre-processing feature augmentation step like a spectral embedding). After, we use label propagation for error correction and then to smooth final predictions. These post-processing steps are based on the fact that errors and labels on connected nodes are positively correlated. Assuming similarity between connected nodes is at the center of much network analysis and corresponds to homophily or assortative mixing [@mcpherson2001birds; @newman2003mixing; @easley2010networks]. In the semi-supervised learning literature, the analog is the smoothness or cluster assumption [@chapelle2003cluster; @zhu2005semi]. The good performance of label propagation that we see across a wide variety of datasets suggests that these correlations hold on common benchmarks.
15
+
16
+ Overall, our methodology demonstrates that combining several simple ideas yields excellent performance in transductive node classification at a fraction of the cost, in terms of both model size (i.e., number of parameters) and training time. For example, on the OGB-Products benchmark, we out-perform the current best-known GNN with more than two orders of magnitude fewer parameters and more than two orders of magnitude less training time. However, our goal is *not* to say that current graph learning methods are poor or inappropriate. Instead, we aim to highlight easier ways in which to improve prediction performance in graph learning and to better understand the source of performance gains. Our main finding is that more direct incorporation of labels into the learning algorithms is key. And by combining our ideas with existing GNNs, we also see improvements, although they are minor. We hope that our approach spurs new ideas that can help in other graph learning tasks, such as inductive node classification, link prediction, and graph prediction.
2106.03904/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="Electron" modified="2021-05-25T17:27:33.285Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.6.13 Chrome/89.0.4389.128 Electron/12.0.7 Safari/537.36" version="14.6.13" etag="tJL9QA0ZVLAExd8z5q9g" type="device"><diagram id="dtCKXqa36pQEBdLNjtHz">7VxNd6M2FP01Pu0sMgchwHiZOGm6aOb0NIvOrOYwIBtajFwsf/XXVxiJTzEjA8I0YWX0JIR4l/t0eRKeweXm9Bw7W/8Feyic6Zp3msHHma4DoOn0J7GcU4ttgtSwjgOPNcoNr8G/iBk1Zt0HHtqVGhKMQxJsy0YXRxFyScnmxDE+lputcFi+6tZZsytqueHVdUJUa/Zn4BGfj8408opfUbD2+aVtaKU1GydrnRp2vuPhY+Fi8GkGlzHGJD3anJYoTLzHHZOO6JeG2mxkMYqIzAkMiYMT7tnNsXGRM79b5NGbZ0UcEx+vceSET7n1Icb7yENJjxot5W1+w3hLjYAa/0KEnBmSzp5gavLJJmS1OxLjvzNfGtSSjiK5dNldeB+7zAQZ8k68Ruxe5/XbB5lT6eOI8AaR+EybxCh0SHAo9+6w52Kdtcs9Rw+Y88SOhAJHWiEdxMMK07EUPWr9s8e84m538ck9bQC17SmvpEfr5PePT59+Bh94X3QUaXdpZQ2rMhJHPyDodetcPHakXCx7fRWE4RKHOL6cC1e2i1w3Q6NQ8802DVPLrndAMUGnxgeuwePsBN1inGLcN21WPhaYBJnNL5KIs78LSkZnlHRbhNIs6dk4fZ3NH+jtL8ls/jgzn2iJ/JQcXmplMaS+JSJ6cEAiHKEKeszkhME6okWXYoGo/SFBKqBx655VbALPuzBW9GSUn50eoAZGBepFHWpDgLTeA9CmUqB9EdB3YIKaQ22Zw0FtSUxhkXefzPuJw0JntwvcstfpXcbnz8wdl8KXpPDR5MXHU7Hy8cxK0vNWcZKC0s4tOM8UOI/bpOcydoXfcXChAMduUcEOmuUu0kmXnVXUEpWOTLPcEVxU0E29UOvoAnB221KYz5XS+zDF8SK5jQquFo+uA5DbliD3SPXpok59IM/93gXqojNlmgWq/oYFqmENKVB5HyoVqj5FtgbZYsyHky0AKEXaFyE9SdTC2wgcEGuZNMuYNOpC2ru30agm6Emj6nNlGhV0Twn9UKROobxJpJracCIViNJKY1CpbWNGQ4QoqltOwJK8lQ8bvctb0D3j06xv4RvWt1AfVN+KsjU961s4BcUGzQOH1DxqczS+COlJ3wqyb0NgLZOmGZO+5bPFaAUunPckcLNQrkDgdk8pTQK3tcCF1nACl/cxOoHbQPWiUL3C0x05bFQiMNArHO6Penr35FGz2n15w2pXkOBTJ3Y5S1SK3ZcpQDYIIMHGEmXxUW2qxxcBPWndQqQ1BsRaJtkzJq17hXtvo3Vr82RbrXsHqgD3OOOq3VE0id3vil0ABhS73VNUYm31vMTRQYm0ijGhhMQJTncLra61PAfZK6HWslwbfVv1AxrkG4f42opdB80CIrFVXYRpBZso33RtXD4F5HPh+EvhOA/JSeHqiFzKnY9tec3UKosnZgUQ6YhcTS7b6l6B+sg5qcN7NNgataVT42Pb+ba6d7veVY/4irJLE74CfPUSKBbsgO8Pu+oPX/7WPeF7Hb5A7w1fQVc94itKUVXxHTpnCOUmaMiGWnqNsqQfjP4/LuohkWQ2v3w8fyXv9D0jk6M8PShYDLe+w+1OoIqSRrcmiGROARrSnu4Y/jJErvw6oU3EUvt1mPCjoff7ij+3K8AO+FUBFCVz/i/UUzAPyVFPdtNlG+qp3RYk3Aw9Ue8WeyWhTJpmrNSzb0U92e0gbagnyqMo3qf1jqlX3ecz4C4OKJNRGRH1hqdZ65WoKqwKt34YoryJ4rXm98tXu5IZV7gQRYv5/76kD0b+9znw6T8=</diagram></mxfile>
2106.03904/main_diagram/main_diagram.pdf ADDED
Binary file (15.8 kB).
 
2106.03904/paper_text/intro_method.md ADDED
@@ -0,0 +1,142 @@
1
+ # Introduction
2
+
3
+ Infectious diseases like seasonal influenza and COVID-19 are major global health issues, affecting millions of people [\[14,](#page-10-0) [34\]](#page-11-0). Forecasting disease time-series (such as infected cases) at various temporal and spatial resolutions is a non-trivial and important task [\[34\]](#page-11-0). Estimating various indicators e.g. future incidence, peak time/intensity and onset, gives policy makers valuable lead time to plan interventions and optimize supply chain decisions, as evidenced by various Centers for Disease Control (CDC) prediction initiatives for diseases like dengue, influenza and COVID-19 [\[33,](#page-10-1) [16,](#page-10-2) [30\]](#page-10-3).
4
+
5
+ Statistical approaches [\[5\]](#page-9-0) for the forecasting problem are fairly new compared to more traditional mechanistic approaches [\[13,](#page-10-4) [38\]](#page-11-1). While valuable for 'what-if' scenario generation, mechanistic models have several issues in real-time forecasting. For example, they cannot easily leverage data from multiple indicators or predict composite signals. In contrast, deep learning approaches in this context are a novel direction and have become increasingly promising, as they can ingest numerous data signals without laborious feature engineering [\[37,](#page-11-2) [33,](#page-10-1) [1,](#page-9-1) [8\]](#page-9-2).
6
+
7
+ However, there are several challenges in designing such methods, primarily the need to handle uncertainty to give more reliable forecasts [\[14\]](#page-10-0). Decision makers need to understand the inherent uncertainty in the forecasts so that they can make robust decisions [\[32\]](#page-10-5). Providing probabilistic forecasts and interpreting which signals make the model uncertain also helps to better communicate the situation to the public. Due to the inherent complexity of the prediction problem, just like weather forecasting, so-called 'point' forecasts without uncertainty are increasingly seen as not very useful for planning such high-stake decisions [14, 33].
10
+
11
+ Uncertainty quantification in purely statistical epidemic forecasting models is a little-explored area. Most traditional methods optimize for accuracy of 'point estimates' only. Some approaches that model the underlying generative distribution of the data naturally provide a probability distribution over the outputs [4, 5, 44, 32], but they do not focus on producing *calibrated* distributions [12, 22] as well. Another line of research addresses this problem with simple methods such as an ensemble of models to build a sample of forecasts/uncertainty bounds [34, 6]. Recent attempts for deep learning forecasting models use ad-hoc methods such as bootstrap sampling [37], while others disregard this aspect [42, 36]. As a result, these can produce wildly wrong predictions (especially in novel/atypical scenarios) and can even be confident in their mistakes. In time-series analysis, while a large number of deep learning models [1] have been proposed, little work has been done to quantify uncertainty in their predictions. Bayesian deep learning [28, 3, 27] (and approximation methods [10, 25, 43]) and deep ensembling [24] are two directions that may mitigate this issue, but their applicability and effectiveness are still largely limited by factors such as intractable exact model inference [3, 27], difficulty of specifying proper parameter priors [26], and uncertainty underestimation [21, 19]. Neural Process (NP) [11] and Functional Neural Process (FNP) [26] are recent frameworks developed to incorporate stochastic processes with DNNs, but only for static data.
12
+
13
+ Our work aims to close these crucial gaps from both viewpoints. We propose a non-parametric model for epi-forecasting by 'marrying' deep sequential models with recent developments in neural stochastic processes. Our model, called EPIFNP, leverages the expressive power of deep sequential models while quantifying uncertainty for epidemic forecasting directly in the functional space. We extend the idea of learning dependencies between data points [26] to sequential data, and introduce additional latent representations for both local and global views of input sequences to improve model calibration. We also find that the dependencies learned by EPIFNP enable reliable interpretation of the model's forecasts.
14
+
15
+ Figure 1 shows an example of a well-calibrated forecast from EPIFNP in flu forecasting. CDC is interested in forecasting weighted Influenza-like Illness (wILI) counts, where ILI is defined as "fever and a cough and/or a sore throat without a known cause other than flu." Figure 1 (a) shows the historical ILI data with abnormal seasons highlighted; Figure 1 (b) shows how our method EPIFNP, in contrast to others, is able to react well to a particularly novel event (in this case, the introduction of the symptomatically similar COVID-19 disease), giving both accurate and well-calibrated forecasts.
+ 
+ <span id="page-1-0"></span>Figure 1: (a) Historical wILI season sequences, 2003-20. (b) Probabilistic predictions of all methods. EPIFNP (red) is the only model reacting reliably to the atypical 3rd peak of the 2019/20 season and whose 95% confidence bounds completely enclose the ground truth.
28
+
29
+ - Probabilistic Deep Generative Model: We design a neural Gaussian process model for epidemic forecasting, which automatically learns stochastic correlations between query sequences and historical data sequences for non-parametric uncertainty quantification.
30
+ - Calibration and Explainability: EPIFNP models the output forecast distribution based on similarity between the current season and the historical seasons in a latent space. We introduce additional latent variables to capture global information of historical seasons and local views of sequences, and show that this leads to better-calibrated forecasts. Further, the relations learned between the current season and similar patterns from previous seasons enable explaining the predictions of EPIFNP.
31
+ - Empirical analysis of accurate and well-calibrated forecasting: We perform rigorous benchmarking on flu forecasting and show that EPIFNP significantly outperforms strong baselines, providing up to 2.5x more accurate and 2.4x better calibrated forecasts. We also use outlier seasons to show that the uncertainty quantification in EPIFNP makes it adapt well to unseen patterns compared with baselines.
32
+
33
+ We focus on epidemic disease forecasting in this paper. Our goal is to predict the disease incidence a few weeks into the future, given a disease surveillance dataset containing incidence from past seasons as well as from the past weeks of the current season. This is formulated as a supervised time-series forecasting problem as follows.
34
+
35
+ **Epidemic Forecasting task:** Let the incidence for season i at week t be $x_i^{(t)}$ . During the current season N+1 and current week t, we first have the snippet of time-series values up to week t, denoted by $\mathbf{x}_{N+1}^{(1...t)} = \{x_{N+1}^{(1)}, \dots, x_{N+1}^{(t)}\}$ . We are also provided with data from past historical seasons 1 to N, denoted by $H = \{\mathbf{x}_i^{(1...T)}\}_{i=1}^N$ where $T$ is the number of weeks per season. In real-time forecasting, intuitively, our goal is to use all the currently available data and predict the next few future values (usually up to 4 weeks into the future). That is, to predict the value $y_{N+1}^{(t)} = x_{N+1}^{(t+k)}$ , $k$ weeks in the future, where $k \in \{1, 2, 3, 4\}$ , given $\mathbf{x}_{N+1}^{(1...t)}$ and H. Formally, our task is: given (a) the dataset of historical incidence sequences H and (b) the snippet of incidence for the current season N+1 up to week t, $\mathbf{x}_{N+1}^{(1...t)}$ , estimate an accurate prediction for $y_{N+1}^{(t)}$ and an accurate probability distribution $\hat{p}(y_{N+1}^{(t)}|\mathbf{x}_{N+1}^{(1...t)}, H)$ . There are several ways to evaluate such forecasts [40], which we elaborate on later in our experiments.
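+
+ As a minimal sketch of how training pairs can be assembled under this formulation (array shapes and names are illustrative):
+
+ ```python
+ import numpy as np
+
+ def build_training_pairs(H, k):
+     """H: array of shape (N, T), weekly incidence for N past seasons.
+     Returns (inputs, targets): each input is a prefix x_i^(1..t) and the
+     target is the incidence k weeks ahead, y_i^(t) = x_i^(t+k)."""
+     N, T = H.shape
+     inputs, targets = [], []
+     for i in range(N):
+         for t in range(1, T - k + 1):        # ensure t + k <= T
+             inputs.append(H[i, :t])          # snippet up to week t
+             targets.append(H[i, t + k - 1])  # value at week t + k
+     return inputs, np.array(targets)
+ ```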
36
+
37
+ # Method
38
+
39
+ Overview: EPIFNP aims to produce calibrated probabilistic forecast distributions. One popular choice is to use BNNs [3, 9], which impose probability distributions on weight parameters. However, as Deep Sequential Models (DSMs) have an enormous number of uninterpretable parameters, it is impractical to specify proper prior distributions in the parameter space. Existing works usually adopt simple distributions [3, 35], e.g., independent Gaussian distributions, which could severely under-estimate the true uncertainty [21]. To solve this issue, we propose EPIFNP, which combines (1) the power of DSMs in representation learning and capturing temporal correlations; and (2) the power of Gaussian processes (GPs) in non-parametric uncertainty estimation directly in the functional space, similar to [26], instead of learning probability distributions for model parameters.
40
+
41
+ During the training phase of our supervised learning task, EPIFNP is trained to predict $x_i^{(t+k)}$ given $x_i^{(1...t)}$ as input for $i \leq N$ . Therefore, we define the training set M as the set of partial sequences and their forecast ground truths from historical data H, i.e., $M = \{(\mathbf{x}_i^{(1...t)}, y_i^{(t)}) : i \leq N, t+k \leq T, y_i^{(t)} = x_i^{(t+k)}\}$ . For simplicity, let $\mathbf{X}_M$ be the set of partial sequences in M and $\mathbf{y}_M$ the set of ground truth labels. Following GPs for non-parametric uncertainty quantification, EPIFNP constructs the forecasting distribution on the historical sequences. Since the number of possible sequences that can be extracted from H is prohibitively large, we narrow down the set of candidates to a set of sequences that comprehensively represents H, called the reference set R. We choose the set of full sequences of T incidence values for each season as the reference set, i.e., $R = \{\mathbf{x}_i^{(1...T)}\}_{i=1}^{N_R}$ . We refer to elements of M as $\{\mathbf{x}_i^M, y_i^M\}_{i=1}^{N_M}$ and of R as $\{\mathbf{x}_i^R\}_{i=1}^{N_R}$ when we don't need to specify the week and season. Also let $\mathbf{X}_{\mathcal{D}} = \{\mathbf{x}_i^M\}_{i=1}^{N_M} \cup \{\mathbf{x}_i^R\}_{i=1}^{N_R}$ , the union of reference and training sequences.
42
+
43
+ The generative process of EPIFNP includes three key steps (also see Figure 2 and Eq. 1):
44
+
45
+ - (a) **Probabilistic neural sequence encoding** (Section 4.2). The first step of the generative process is to use a DSM to encode the sequence $\mathbf{x}_i \in \mathbf{X}_D$ into a *variational* latent embedding $\mathbf{u}_i \in \mathbf{U}_D$ . The representation power of DSM helps us to model complex temporal patterns within sequences, while the probabilistic encoding framework enables us to capture the uncertainty in sequence embedding.
46
+ - (b) Stochastic correlation graph construction (Section 4.3). The second step is to capture the correlations between reference ( $\mathbf{U}_R$ ) and training ( $\mathbf{U}_M$ ) data points in the *latent embedding space* (i.e. seasonal similarity between epidemic curves). We use a stochastic data correlation graph $\mathbf{G}$ , which plays a similar role to the covariance matrix in classic GPs. It encodes the dependencies between reference and training sequences, enabling non-parametric uncertainty estimation.
47
+ - (c) **Final predictive distribution parameterization** (Section 4.4). Finally, we parameterize the predictive distribution with three stochastic latent variables: (1) The global stochastic latent variable **v**, which is shared by all the sequences. This variable captures the overall information of the underlying function based on all the reference points. (2) The local stochastic latent variables $\mathbf{Z}_M = \{\mathbf{z}_i^M\}_{i=1}^{N_M}$. This term captures the data correlation uncertainty based on the stochastic data correlation graph $\mathbf{G}$. (3) The stochastic sequence embeddings $\mathbf{U}_M = \{\mathbf{u}_i^M\}_{i=1}^{N_M}$. This term captures the embedding uncertainty and provides additional information beyond the reference set.
+ 
+ <span id="page-3-0"></span>Figure 2: Pipeline of the proposed EPIFNP model. (i) The three main components (a), (b), and (c) correspond to the terms in Equation 1. (ii) Variables highlighted in red correspond to steps specific to inference of sequence $\mathbf{x}_3^M$.
54
+
55
+ Hence, putting it all together from the generative process, we factorize the predictive distribution of the training sequences into three corresponding parts ( $\theta$ is the union of the parameters in EPIFNP):
56
+
57
+ <span id="page-3-1"></span>
+ $$p(\mathbf{y}_{M}|\mathbf{X}_{M},R) = \sum_{\mathbf{G}} \int \underbrace{p_{\theta}(\mathbf{U}_{\mathcal{D}}|\mathbf{X}_{\mathcal{D}})}_{(a)} \underbrace{p(\mathbf{G}|\mathbf{U}_{\mathcal{D}})}_{(b)} \underbrace{p_{\theta}(\mathbf{Z}_{M}|\mathbf{G},\mathbf{U}_{R})\,p_{\theta}(\mathbf{v}|\mathbf{U}_{R})\,p_{\theta}(\mathbf{y}_{M}|\mathbf{U}_{M},\mathbf{Z}_{M},\mathbf{v})}_{(c)} \, d\mathbf{U}_{\mathcal{D}}\,d\mathbf{Z}_{M}\,d\mathbf{v}. \tag{1}$$
62
+
63
+ Compared to existing recurrent neural process (RNP) [31] for sequential data (and its related predecessors [11, 17]), our EPIFNP process has stronger representation power and is more interpretable. Specifically, RNP uses a single global stochastic latent variable to capture the functional uncertainty, which is not flexible enough to represent a complicated underlying distribution. In contrast, EPIFNP constructs three stochastic latent variables to capture the uncertainty from different perspectives and can interpret the prediction based on the correlated reference sequences.
64
+
65
+ The probabilistic neural sequence encoder $p_{\theta}(\mathbf{U}_D|\mathbf{X}_D)$ aims to model the complex temporal correlations of the sequence for accurate predictions of y, while capturing the uncertainty in the sequence embedding process. To this end, we design the *sequence encoder* as an RNN and stack a *self-attention layer* on top to capture long-term correlations. Moreover, following the variational auto-encoder (VAE) [18], we model the latent embedding $\mathbf{u}_i$ as a Gaussian random variable to capture embedding uncertainty.
66
+
67
+ We encode all the sequences, including reference sequences and training sequences, independently. Taking one sequence $\mathbf{x}_i$ as an example, we first feed $\mathbf{x}_i$ into a Gated Recurrent Unit (GRU) [7]:
68
+
69
+ $$\{\mathbf{h}_{i}^{(1)}\dots,\mathbf{h}_{i}^{(t)}\} = \text{GRU}(\{x_{i}^{(1)}\dots,x_{i}^{(t)}\}).$$
70
+ (2)
71
+
72
+ where $\mathbf{h}_i^{(t)}$ denotes the hidden state at time step t. To obtain the embedding of $\mathbf{x}_i$, the simplest way is to directly use the last-step hidden state, $\mathbf{h}_i^{(t)}$. However, using the last-step embedding is inadequate for epidemic forecasting as the estimates for ILI surveillance data are often delayed and revised multiple times before they stabilize [1]. Over-reliance on the last-step hidden state would harm the predictive ability of the model. Therefore, we choose to use a self-attention layer [41] to aggregate the information of the hidden states across all the time steps:
73
+
74
+ $$\{\alpha_i^{(1)}\dots,\alpha_i^{(t)}\} = \text{Self-Atten}(\{\mathbf{h}_i^{(1)}\dots,\mathbf{h}_i^{(t)}\}), \qquad \bar{\mathbf{h}}_i = \sum_{t'=1}^t \alpha_i^{(t')} \mathbf{h}_i^{(t')},$$
75
+ (3)
76
+
77
+ where $\bar{\mathbf{h}}_i$ is the summarized hidden state vector. Compared with the vanilla attention mechanism [2], self-attention is better at capturing long-term temporal correlations [41]. Though $\bar{\mathbf{h}}_i$ has encoded the temporal correlations, it is deterministic and cannot represent embedding uncertainty. Inspired by VAE, we parameterize each dimension of the latent embedding $\mathbf{u}_i$ as a Gaussian random variable:
78
+
79
+ $$p_{\theta}([\mathbf{u}_i]_k|\mathbf{x}_i) = \mathcal{N}([g_1(\bar{\mathbf{h}}_i)]_k, \exp([g_2(\bar{\mathbf{h}}_i)]_k)), \tag{4}$$
80
+
81
+ where $g_1$ and $g_2$ are two multi-layer perceptrons (MLPs), and $[\cdot]_k$ denotes the k-th dimension of the variable.
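+
+ As an illustration, a minimal PyTorch sketch of such a probabilistic sequence encoder is given below. It is not the exact implementation: the module names, hidden sizes, and the use of a single-head `nn.MultiheadAttention` followed by mean-pooling as the self-attention aggregation are assumptions made for brevity.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class ProbSeqEncoder(nn.Module):
+     """Sketch of Eqs. (2)-(4): GRU -> self-attention -> Gaussian embedding u_i."""
+     def __init__(self, in_dim=1, hid_dim=64, u_dim=32):
+         super().__init__()
+         self.gru = nn.GRU(in_dim, hid_dim, batch_first=True)
+         self.attn = nn.MultiheadAttention(hid_dim, num_heads=1, batch_first=True)
+         mlp = lambda: nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(),
+                                     nn.Linear(hid_dim, u_dim))
+         self.g1, self.g2 = mlp(), mlp()   # mean and log-variance heads
+
+     def forward(self, x):                 # x: (batch, T, in_dim)
+         h, _ = self.gru(x)                # hidden states h^(1), ..., h^(T)
+         a, _ = self.attn(h, h, h)         # self-attention across time steps
+         h_bar = a.mean(dim=1)             # summarised hidden state \bar{h}_i
+         mean, logvar = self.g1(h_bar), self.g2(h_bar)
+         return torch.distributions.Normal(mean, torch.exp(0.5 * logvar))  # p(u_i | x_i)
+ ```
+
+ Sampling from the returned distribution (e.g. with `rsample()`) then yields a stochastic embedding $\mathbf{u}_i$ for the components described next.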
82
+
83
+ The stochastic graph G is used to model the correlations among sequences, which is central to the non-parametric uncertainty estimation ability of EPIFNP. It is realized by constructing a bipartite graph from the reference set R to the training set M based on the similarity between their sequence embeddings. With this graph, we aim to model the dynamic similarity among epidemic curves as in [1] but in a stochastic manner, which allows us to further quantify the uncertainty coming from our latent representations of the sequences. Note that the similarity with reference sequence embeddings dynamically changes across the current season since different periods of the season may be similar to different sets of reference sequences (as we illustrate in Section 4.4).
84
+
85
+ We first construct a complete weighted bipartite graph $G_c$ from R to M, where the nodes are the sequences. The weight of each edge is calculated as similarity between two sequences in the embedding space using the radial basis function kernel $\kappa(\mathbf{u}_i^R, \mathbf{u}_j^M) = \exp(-\gamma ||\mathbf{u}_i^R - \mathbf{u}_j^M||^2)$ . Modeling such a similarity in the embedding space is more accurate than in the input space by leveraging the representation power of the neural sequence encoder.
86
+
87
+ Though we can directly use $G_c$ to encode the data correlations, such a dense complete graph requires heavy computations and does not scale to a large dataset. Therefore, we choose to further sample from this complete graph to obtain a stochastic binary bipartite graph ${\bf G}$ as shown in Figure 3. This graph can be represented as a random binary adjacency matrix, where ${\bf G}_{i,j}=1$ means the reference sequence ${\bf x}_i^R$ is a parent of the training sequence ${\bf x}_j^M$. We then parameterize this binary adjacency matrix using Bernoulli distributions:
88
+
89
+
90
+
91
+ <span id="page-4-0"></span>![](_page_4_Figure_8.jpeg)
92
+
93
+ <span id="page-4-1"></span>Figure 3: We sample the (sparse) binary graph G from the complete weighted (dense) graph $G_c$ .
94
+
95
+ $$p(\mathbf{G}|\mathbf{U}_{\mathcal{D}}) = \prod_{i \in R} \prod_{j \in M} \text{Bernoulli}(\mathbf{G}_{i,j}|\kappa(\mathbf{u}_i^R, \mathbf{u}_j^M)).$$
96
+ (5)
97
+
98
+ Intuitively, the edges in $G_c$ with higher weights are more likely to be kept after sampling. This sampling process leads to sparse correlations for each sampled graph, which speeds up training.
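+
+ The kernel and sampling steps can be sketched in a few lines of PyTorch; the function name, the value of $\gamma$ and the temperature of the relaxed sample below are illustrative assumptions rather than the settings used in the paper.
+
+ ```python
+ import torch
+
+ def sample_correlation_graph(u_ref, u_train, gamma=1.0, relaxed=False):
+     """Sketch of Eq. (5): RBF-kernel edge probabilities, then Bernoulli sampling.
+
+     u_ref:   (N_R, d) reference-sequence embeddings
+     u_train: (N_M, d) training-sequence embeddings
+     Returns the sampled adjacency G and the edge probabilities (weights of G_c).
+     """
+     dist2 = torch.cdist(u_ref, u_train).pow(2)      # squared distances, (N_R, N_M)
+     probs = torch.exp(-gamma * dist2)                # RBF kernel = weights of G_c
+     if relaxed:
+         # differentiable surrogate, in the spirit of the Gumbel-softmax trick used for training
+         g = torch.distributions.RelaxedBernoulli(temperature=0.3, probs=probs).rsample()
+     else:
+         g = torch.bernoulli(probs)                   # hard binary adjacency matrix
+     return g, probs
+ ```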
99
+
100
+ Here we introduce how to parameterize the final prediction based on the three latent variables mentioned in Section 4.1, which *capture the functional uncertainty from different perspectives*.
101
+
102
+ **Local latent variable $\mathbf{z}_i^M$:** It summarizes the information of the correlated reference points for each training point and captures the *uncertainty of data correlations*. We generate $\mathbf{z}_i^M$ based on the structure of the data correlation graph, and each dimension k follows a Gaussian distribution:
103
+
104
+ $$\mathbf{z}_{i,k}^{M} \sim \mathcal{N}(C_i \sum_{j: \mathbf{G}_{j,i}=1} h_1(\mathbf{u}_j^R)_k, \exp(C_i \sum_{j: \mathbf{G}_{j,i}=1} h_2(\mathbf{u}_j^R)_k)), \tag{6}$$
105
+
106
+ where $h_1$ and $h_2$ are two MLPs and $C_i = 1/\sum_j \mathbf{G}_{j,i}$ is a normalization constant. As we can see from Equation 6, if a training sequence has a low probability of being connected to the reference sequences, $\mathbf{z}_i^M$ reduces to a standard Gaussian distribution, which is an uninformative prior. This property imposes an inductive bias similar to that of GPs with an RBF kernel.
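+
+ A small sketch of this aggregation is shown below; the function signature and the exact form of the normalization constant are assumptions made for illustration.
+
+ ```python
+ import torch
+
+ def local_latent(g, u_ref, h1, h2, eps=1e-8):
+     """Sketch of Eq. (6): parameters of z_i^M aggregated over the parents in G.
+
+     g: (N_R, N_M) binary adjacency; u_ref: (N_R, d) reference embeddings;
+     h1, h2: MLPs mapping d -> z_dim (mean and log-variance summaries).
+     """
+     c = 1.0 / (g.sum(dim=0, keepdim=True).t() + eps)   # (N_M, 1): one over #parents
+     mean = c * (g.t() @ h1(u_ref))                      # (N_M, z_dim)
+     logvar = c * (g.t() @ h2(u_ref))
+     # a sequence with no parents gets mean 0 and log-variance 0, i.e. N(0, 1)
+     return torch.distributions.Normal(mean, torch.exp(0.5 * logvar))
+ ```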
107
+
108
+ **Global latent variable $\mathbf{v}$:** It encodes the information in all the reference points, computed as:
109
+
110
+ $$\beta_1, \dots, \beta_{N_R} = \text{Self-Atten}(\mathbf{u}_1^R, \dots, \mathbf{u}_{N_R}^R), \qquad \mathbf{v} = \sum_{i=1}^{N_R} \beta_i \mathbf{u}_i^R.$$
111
+ (7)
112
+
113
+ In contrast with the local variable $\mathbf{z}_i^M$, the global latent variable $\mathbf{v}$ summarizes the overall information of the underlying function. It is shared by all the training sequences, which allows us to capture the functional uncertainty at a global level.
114
+
115
+ **Sequence embedding $\mathbf{u}_i^M$:** The above two latent variables are both constructed from the embeddings of the reference sequences, which may lose *novel information present in the training sequences*. Therefore, we add a direct path from the latent embedding $\mathbf{u}_i^M$ of the training sequence to the final prediction to enable the neural network to extrapolate beyond the distribution of the reference sequences. This is useful for novel or unprecedented patterns, where the prediction cannot rely solely on reference sequences from historical data.
116
+
117
+ We concatenate the three variables together into a single vector $\mathbf{e}_i$ and obtain the final predictive distribution (where $d_1$ and $d_2$ are MLPs):
118
+
119
+ $$\mathbf{e}_i = \operatorname{concat}(\mathbf{z}_i^M, \mathbf{v}, \mathbf{u}_i^M), \qquad p(y_i | \mathbf{z}_i^M, \mathbf{v}, \mathbf{u}_i^M) = \mathcal{N}(d_1(\mathbf{e}_i), \exp(d_2(\mathbf{e}_i))).$$
120
+ (8)
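+
+ A sketch of this predictive head is given below; the hidden sizes, the scalar output and the broadcasting of the shared $\mathbf{v}$ are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class PredictiveHead(nn.Module):
+     """Sketch of Eq. (8): concatenate (z_i^M, v, u_i^M) and emit a Gaussian over y_i."""
+     def __init__(self, z_dim, v_dim, u_dim, hid=64):
+         super().__init__()
+         e_dim = z_dim + v_dim + u_dim
+         self.d1 = nn.Sequential(nn.Linear(e_dim, hid), nn.ReLU(), nn.Linear(hid, 1))
+         self.d2 = nn.Sequential(nn.Linear(e_dim, hid), nn.ReLU(), nn.Linear(hid, 1))
+
+     def forward(self, z, v, u):           # z, u: (N_M, ...); v: (v_dim,) shared vector
+         e = torch.cat([z, v.expand(z.size(0), -1), u], dim=-1)
+         return torch.distributions.Normal(self.d1(e), torch.exp(0.5 * self.d2(e)))
+ ```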
121
+
122
+ We now introduce how to learn the model parameters efficiently during training and forecast for a new unseen sequence at test time. Directly maximizing the data likelihood is intractable due to the summation and integral in Equation 1. Therefore, we choose to use the *amortized variational inference* and approximate the true posterior $p(\mathbf{U}_{\mathcal{D}}, \mathbf{G}, \mathbf{Z}_M, \mathbf{v}|R, M)$ with $q_{\phi}(\mathbf{U}_{\mathcal{D}}, \mathbf{G}, \mathbf{Z}_M, \mathbf{v}|R, M)$ , similar to [26], as
123
+
124
+ $$q_{\phi}(\mathbf{U}_{\mathcal{D}}, \mathbf{G}, \mathbf{Z}_{M}, \mathbf{v}|R, M) = p_{\theta}(\mathbf{U}_{\mathcal{D}}|\mathbf{X}_{\mathcal{D}})p(\mathbf{G}|\mathbf{U}_{\mathcal{D}})p(\mathbf{v}|\mathbf{U}_{R})q_{\phi}(\mathbf{Z}_{M}|M). \tag{9}$$
125
+
126
+ We design $q_{\phi}$ as a single layer of neural network parameterized by $\phi$ , which outputs mean and variance of the Gaussian distribution $q_{\phi}(\mathbf{Z}_M|\mathbf{X}_M)$ .
127
+
128
+ We then use a gradient-based method, such as Adam [18], to maximize the evidence lower bound (ELBO) of the log likelihood. After canceling redundant terms, the ELBO can be written as:
129
+
130
+ $$\mathcal{L} = -\mathbf{E}_{\mathbf{Z}_{M},\mathbf{G},\mathbf{U}_{\mathcal{D}},\mathbf{v} \sim q_{\phi}(\mathbf{Z}_{M}|\mathbf{X}_{M})p_{\theta}(\mathbf{G},\mathbf{U}_{\mathcal{D}},\mathbf{v}|\mathcal{D})}[\log p(\mathbf{y}_{M}|\mathbf{Z}_{M},\mathbf{U}_{M},\mathbf{v}) + \log p(\mathbf{Z}_{M}|\mathbf{G},\mathbf{U}_{R}) - \log q_{\phi}(\mathbf{Z}_{M}|\mathbf{X}_{M})].$$
131
+ (10)
132
+
133
+ We use the reparameterization trick to make the sampling procedure from the Gaussian distribution differentiable. Moreover, as sampling from the Bernoulli distribution in Equation 5 leads to discrete correlated data points, we make use of the Gumbel-softmax trick [15] to keep the model differentiable.
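+
+ The resulting training step can be written compactly; the sketch below is a single-sample ELBO estimate, assuming the graph, the global variable and the embeddings have already been sampled inside `pred_fn`, and that `q_z` comes from the single-layer inference network described above.
+
+ ```python
+ def elbo_single_sample(q_z, prior_z, pred_fn, y_train):
+     """One Monte-Carlo estimate of the ELBO in Eq. (10).
+
+     q_z     : q_phi(Z_M | X_M), variational posterior (a torch Normal)
+     prior_z : p(Z_M | G, U_R), graph-conditioned prior (a torch Normal)
+     pred_fn : callable mapping a sampled Z_M to p(y_M | U_M, Z_M, v)
+     """
+     z = q_z.rsample()                               # reparameterisation trick
+     log_lik = pred_fn(z).log_prob(y_train).sum()    # log p(y_M | U_M, Z_M, v)
+     log_p_z = prior_z.log_prob(z).sum()             # log p(Z_M | G, U_R)
+     log_q_z = q_z.log_prob(z).sum()                 # log q_phi(Z_M | X_M)
+     return log_lik + log_p_z - log_q_z              # maximise; its negative is the loss
+ ```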
134
+
135
+ At test time, with the optimal parameter $\theta_{opt}$ , we base the predictive distribution of a new unseen partial sequence $\mathbf{x}^*$ on the reference set as:
136
+
137
+ $$p(y^*|R, \mathbf{x}^*) = \sum_{\mathbf{a}^*} \int p_{\theta_{\text{opt}}}(\mathbf{U}_R, \mathbf{u}^*|\mathbf{X}_R, \mathbf{x}^*)\, p(\mathbf{a}^*|\mathbf{U}_R, \mathbf{u}^*)$$
138
+
139
+ $$p_{\theta_{\text{opt}}}(\mathbf{z}^*|\mathbf{a}^*, \mathbf{U}_R, \mathbf{u}^*)\, p_{\theta_{\text{opt}}}(\mathbf{v}|\mathbf{U}_R)\, p_{\theta_{\text{opt}}}(y^*|\mathbf{u}^*, \mathbf{z}^*, \mathbf{v})\, d\mathbf{U}_R\, d\mathbf{z}^*\, d\mathbf{v},$$
140
+ (11)
141
+
142
+ where $\mathbf{a}^*$ is the binary vector that denotes which reference sequences are the parents of the new sequence. $\mathbf{u}^*$ and $\mathbf{z}^*$ are latent embedding and local latent variable for the new sequence, respectively.
2110.01428/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-03-17T16:19:30.651Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36" etag="asMgrHAVEQa81eXho_a3" version="14.4.8" type="google"><diagram id="xS85tW2BKUjXKwkugabW" name="Page-2">7VZNj9owEP01kdpDpMROIBw3YUupWqlaWm13byY2iVUnTh0DgV9fGztfgNSlUruXFRGy34zn2TN+IzswKZqFQFX+hWPCHODhxoFzBwA/AMDRn4cPFpmFU4NkgmKDeT2wokdiHVt0SzGpR46ScyZpZUHfgCkvS5LKEYaE4Pux24YzPAIqlJFRdA2sUsTIhdsjxTI3aBQOvD8SmuUts+9ZS4FaZwvUOcJ8P4DgvQMTwbk0o6JJCNPZa/Py9Qkvm+cHtwIPVRXw42o5Y64J9uGWJd0RBCnlX4f+zj49F2tURQKI5vj0uPj2Y+NGJvQOsa3NlwMmTJHEG664wCTT4xarK1TqRMiDze7k11afPk4548KBd8oosvU7tRH1Jeq/H4EwfK8nOpqng7v16a7oNdCvmj5Yy7oi6sRIEtzSq7OZHYx3tRYv3ucZL5hd412WtURlSlxGdic13MxvcvcC/vAa/1ybPvO6HvBeq8YFDEacQPBtiYmuvmeprTr11Y03lLHElE05w02UkjTVuZOC/yQDyzoKg7CLYMPreS4Lpoa+SXhKy0zHVrMdEZIqBd4xmpUKKyjGelWMLJCqe0xU+BijOj/t0O+YW5EG3ZH+eO+tPjQtaQaqtzpYEF4QKQ7KxVrh1PYw29Rc0La5fd8iVPMzWD5qD9C2JtuWsi54Lz01sOq7QYmzNyVeKrFQ3fk/yHDyJsPXkKEP4LkMg9eWYftsGenwrKLqIVDpYSr07YDxPqeSrFTuNbhX76hxVc5KPJ3r37USd5Z/1veCs4T7kXeZ8OllvkF0c7rVtH8anWyDFya8/w0=</diagram></mxfile>
2110.01428/main_diagram/main_diagram.pdf ADDED
Binary file (13.2 kB). View file
 
2110.01428/paper_text/intro_method.md ADDED
@@ -0,0 +1,84 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Object detectors should be adaptable to "domain shift" that can occur due to many factors including changes in weather or camera, compared to the training data. Domain shifts can cause a significant drop in object detector performance [\[5,](#page-8-0) [17\]](#page-8-1). Domain adaptation methods [\[10,](#page-8-2) [9,](#page-8-3) [38,](#page-9-0) [26,](#page-8-4) [27,](#page-8-5) [29\]](#page-9-1) study this problem, casting it as a task of learning models from a source domain and
4
+
5
+ ![](_page_0_Figure_7.jpeg)
6
+
7
+ <span id="page-0-0"></span>Figure 1. Depiction of visual similarity based grouping proposed in our ViSGA method. Instance proposals from the detector are aggregated based on visual similarity to create an adaptive number of class-agnostic groups then they are aligned across the domains.
8
+
9
+ adapting to a target domain. In object detection, where collecting bounding box annotations is expensive, it becomes critical that domain adaptation can be performed without the need to annotate every new domain. This motivates the challenging setting of unsupervised domain adaptation (UDA) [\[42,](#page-9-2) [39,](#page-9-3) [28,](#page-8-6) [2\]](#page-8-7), where one has access to labeled source data and only unlabeled target data. Moreover, training data itself could be gathered under different conditions, a scenario typically referred to as a multi-source domain adaptation [\[31,](#page-9-4) [45,](#page-9-5) [46,](#page-9-6) [47\]](#page-9-7).
10
+
11
+ A dominant line in UDA works is to learn invariant representations via aligning source and target domains, with various proposed alignment strategies. Specifically in object detection, the questions of what features to align and how to induce the alignment have been the subject of recent research. Early works [\[5,](#page-8-0) [20\]](#page-8-8) propose aligning both image-level features from the backbone network and all instance-level features extracted from object proposals using adversarial training [\[13\]](#page-8-9). A recent state-of-the-art approach [\[44\]](#page-9-8) argues that it is beneficial to aggregate object proposals before alignment and suggests condensing all proposals into a single category prototype vector before inducing alignment using a contrastive loss. This raises questions on what is the right aggregation-level at which to do feature alignment and what is the right mechanism to induce this alignment.
12
+
13
+ <span id="page-1-2"></span>In this work, we propose a novel UDA method for object detection, called visually similar group alignment (ViSGA). Our method harnesses the power of adversarial training, while leveraging the visual similarity of the different proposals as a basis for aggregating them. By relying on visual similarity, we aggregate proposals from potentially different spatial locations (Figure [1\)](#page-0-0), increasing the effectiveness of adversarial training. Doing so, we drive a more powerful discriminator and hence better aligned features. To enhance the flexibility of proposal aggregation and to avoid introducing unwanted noise in the alignment process as a result of a preset fixed number of groups, we opt for dynamic clustering based on the distance at which proposals are aggregated. This improves the adaptability of our method to a variable number of objects present in the input.
14
+
15
+
16
+
17
+ Our method design choices are based on an in-depth analysis of common components of UDA methods for detection. In particular, we study what is the right aggregation level at which to perform instance-level alignment, ranging from considering all instances [\[5\]](#page-8-0), through multiple groups based on clustering, to single prototypes [\[44\]](#page-9-8). When aggregating object proposals, we analyze whether including the predicted class label is beneficial and which distance metric performs better, including spatial overlap and visual similarity. We further compare the effectiveness of using contrastive losses versus adversarial training as the alignment mechanism.
18
+
19
+ In summary, our key contributions are as follows: 1) We propose a novel, simple yet effective, UDA method for object detection via adversarial training and dynamic visual similarity-based grouping of proposals from the source and target domains. 2) We perform an in-depth analysis answering questions on what is the right level of alignment and how to induce alignment. 3) We evaluate our proposed approach on three different domain shift scenarios including: Adverse weather, Synthetic to Real data, and Cross camera and show state-of-the-art results. 4) We are the first to consider the important setting of multi-source domain adaptation for object detection where annotated data are gathered from different sources. We show that our method continues to improve in this highly relevant scenario, another evidence for the effectiveness of our approach.
20
+
21
+ # Method
22
+
23
+ Our generalized UDA framework comprises three main components. The first is a standard object detection network, Faster R-CNN, which takes an input image and produces bounding boxes and labels for all object instances present in the image. The second component is an image-level domain adaptation loss which encourages alignment of the global image representation in the backbone network. The third component is an instance-level domain adaptation loss which induces alignment of representations of each object instance. This is illustrated in Figure 2. Thus, the overall training objective of the method can be written as:
24
+
25
+ $$\mathcal{L} = \mathcal{L}_{det} + \lambda_1 \mathcal{L}_{ima} + \lambda_2 \mathcal{L}_{inst}, \tag{1}$$
26
+
27
+ where $\mathcal{L}_{det}$ is the supervised training loss for the detector, $\mathcal{L}_{img}$ and $\mathcal{L}_{inst}$ are the image-level and instance-level domain adaptation (DA) losses respectively, and $\lambda_1$ and $\lambda_2$ are trade-off parameters. For methods that do not apply instance-level alignment, $\lambda_2$ is set to zero. Note that $\mathcal{L}_{det}$ is only applicable in the source domain where ground-truth bounding box annotations are available.
28
+
29
+ **Detection network.** Following the convention set by early work on cross-domain object detection, we deploy Faster R-CNN [5] as the object detection network in both our method and the analysis. It consists of a Region Proposal Network (RPN) and a detection head. Both networks are trained with two loss terms each: a regression loss for bounding box estimation and a classification loss for label prediction. Thus the detection loss $\mathcal{L}_{det}$ for Faster R-CNN is composed of $\mathcal{L}_{RPN}$ and $\mathcal{L}_{head}$.
30
+
31
+
32
+
33
+ The role of the domain adaptation losses ( $\mathcal{L}_{img}$ , $\mathcal{L}_{inst}$ ) is to induce alignment between the model's representation of source and target domain inputs. Downstream blocks that use such invariant representation (here for example RPN and the detection heads), would be domain-agnostic and perform equally well in both domains. While adversarial training has been the dominant paradigm for reducing the discrepancy between feature distributions [5, 35, 49], recently contrastive losses have been proposed to match source and target features [44, 22]. We present these approaches in this subsection and compare them in our experimental analysis (Section 4.2).
34
+
35
+ **Adversarial training.** The key idea in Adversarial Training (AT) based UDA methods is to learn domain invariant representations by fooling a discriminator which is trained to predict the input data domain based on the detector features. This approach is usually class-agnostic, ignoring the features' class information and focusing on domain-level alignment. Specifically, the features $F_d$ of domain d (d = 0 for source and d = 1 for target) are fed to the discriminator $\mathcal{D}$ which predicts the domain of the extracted features. The discriminator is trained by minimizing the cross-entropy loss below.
36
+
37
+ <span id="page-2-2"></span>
38
+ $$\mathcal{L}_{disc} = -d\log(\mathcal{D}(F_d)) - (1 - d)\log(1 - \mathcal{D}(F_d)). \quad (2)$$
39
+
40
+ Since we want to adapt the features of the two domains to be indistinguishable by the discriminator, we have to maximize the loss in Equation (2) w.r.t. the features $F_d$. This is achieved by incorporating a gradient reverse layer (GRL) [13] before the features are input to the discriminator.
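+
+ For illustration, a minimal PyTorch sketch of the GRL and the discriminator loss of Equation (2) is shown below; the discriminator architecture and the GRL weight are placeholders.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ class GradReverse(torch.autograd.Function):
+     """Identity in the forward pass, sign-flipped (scaled) gradient in the backward pass."""
+     @staticmethod
+     def forward(ctx, x, lamb):
+         ctx.lamb = lamb
+         return x.view_as(x)
+
+     @staticmethod
+     def backward(ctx, grad_out):
+         return -ctx.lamb * grad_out, None
+
+ def adversarial_loss(discriminator, feats, domain, grl_weight=1.0):
+     """Binary cross-entropy of Eq. (2) on gradient-reversed features.
+
+     discriminator: any network mapping (N, d) features to (N, 1) logits;
+     domain: 0 for source, 1 for target.
+     """
+     logits = discriminator(GradReverse.apply(feats, grl_weight))
+     target = torch.full_like(logits, float(domain))
+     return F.binary_cross_entropy_with_logits(logits, target)
+ ```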
41
+
42
+ <span id="page-3-5"></span>**Contrastive learning.** As an alternative to AT, one can apply max-margin contrastive losses to align source and target features by leveraging the class information. The main idea here is to push features from the same class closer and push apart features belonging to different classes across domains. When matching a single feature vector $F_d^i$ per class $i$ in each domain $d \in \{0,1\}$ , the max-margin contrastive loss takes the form:
43
+
44
+
45
+
46
+ <span id="page-3-2"></span>
47
+ $$\mathcal{L}_{CL} = \sum_{i}^{C} \left[ ||F_0^i - F_1^i||_2^2 + \sum_{j,j \neq i}^{C} \max\{0, m - ||F_0^i - F_1^j||_2^2\} \right]$$
48
+ (3)
49
+
50
+ where C is the number of classes and m is the margin. Since target data is unlabeled, the class prediction by the detector is used as a pseudo-label in [44] to apply Equation (3). In our analysis, we also study the effect of ignoring this class information. This can be achieved by considering only two sets of vectors of cardinality $N_0$ and $N_1$ , possibly unequal number $(N_0 \neq N_1)$ , from source and target domains to align. To apply contrastive losses here, we make a simple modification. Instead of matching class-specific features across domains, we match the proposals from one domain to the closest features (nn) of the other domain (4) and minimize the distance between their representations (5), as shown below.
51
+
52
+ $$nn(i) = \operatorname{argmin}_{j < N_1} ||F_0^i - F_1^j|| \tag{4}$$
53
+
54
+ $$\mathcal{L}_{CL} = \sum_{i}^{N_0} \left[ ||F_0^i - F_1^{\text{nn}(i)}||_2^2 + \sum_{j \neq \text{nn}(i)}^{N_1} \max\{0, m - ||F_0^i - F_1^j||_2^2\} \right].$$
55
+ (5)
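+
+ A sketch of this class-agnostic nearest-neighbour variant is shown below; the function name and the default margin are illustrative.
+
+ ```python
+ import torch
+
+ def nn_contrastive_loss(f_src, f_tgt, margin=1.0):
+     """Sketch of Eqs. (4)-(5): pull each source feature towards its nearest target
+     feature and push it away (up to a margin) from all other target features.
+
+     f_src: (N0, d) source instance features; f_tgt: (N1, d) target instance features.
+     """
+     d2 = torch.cdist(f_src, f_tgt).pow(2)               # (N0, N1) squared L2 distances
+     nn_idx = d2.argmin(dim=1)                            # Eq. (4): nearest target per source
+     pull = d2.gather(1, nn_idx[:, None]).sum()           # matched pairs are pulled together
+     mask = torch.ones_like(d2, dtype=torch.bool)
+     mask.scatter_(1, nn_idx[:, None], False)              # exclude the matched pair
+     push = torch.clamp(margin - d2[mask], min=0).sum()    # all other pairs are pushed apart
+     return pull + push
+ ```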
56
+
57
+ In our method, we utilize AT, avoiding potential noise as a result of the reliance on unstable pseudo-labels during the alignment process. Our aggregation strategy can leverage proposal similarities and possibly embedded class information, as we explain in the sequel.
58
+
59
+ In detection, two main levels of feature alignment can be considered: 1) image-level features output by the backbone network and 2) instance-level or object-level features obtained after pooling each region-of-interest proposed by the RPN network. The predominant approach aims for complete alignment at instance-level, i.e. the representation of every proposed object, in source or target domain, should be domain agnostic. This might be difficult to achieve, especially when complete alignment is challenging for the model, and when the source or target data during alignment contains some domain-specific outliers, e.g. specific backgrounds only found in a simulation domain. To address this, recent works aggregate the proposals on each of the source and target before applying feature alignment [44, 48, 49]. Both [44] and [48] take it to the other extreme, by collapsing the instances into a single prototype per category.
60
+
61
+ While [44] merges prototypes based on spatial overlap using intersection-over-union (IoU) and class labels, [48] only uses class labels to mean pool proposals into prototypes. In contrast [49] treads a middle ground by merging proposals into many discriminative regions, but still only using spatial overlap as the merging criteria.
62
+
63
+ In our analysis in Section 4.2, we compare the effectiveness of different components of this aggregation including 1) spatial grouping vs similarity based grouping (discussed in Section 3.4) 2) using class information vs class agnostic and 3) single prototypes vs multiple groups.
64
+
65
+ <span id="page-3-4"></span><span id="page-3-3"></span>In this section, we propose a novel similarity-based grouping to aggregate object proposals before performing alignment. We first aggregate proposals based on visual similarity into varying number of feature groups. AT is then applied to align the mean embeddings of the groups extracted from the source and target domains. This simple yet effective change brings three key benefits. First, adversarial training at group level enables our model to coarsely align the main feature clusters, instead of attempting complete alignment of all instances which might be infeasible. Second, in contrast to the spatial overlap used in [44, 49], visual similarity-based clustering allows our model to group objects which are located far away in the image, but look similar. Note that this still groups heavily overlapping proposals, since they tend to also be visually similar. Hence, it avoids producing duplicate visually similar groups. By using visual similarity, we do not depend on the pseudo-labels different from previous approaches [44, 48]. The pseudolabels tend to be noisy, thus avoiding such dependency can be beneficial especially in early training. Moreover, when similar proposals are aggregated, we can implicitly leverage class information since the aggregated proposals are likely to be of the same class. Finally, by adaptively varying the number of groups, instead of using single prototypes, our model retains sufficient capacity to represent intra-domain variance.
66
+
67
+ Similarity-based clustering. To perform similarity-based clustering, we take as input the N proposals generated by RPN and their fixed feature vectors denoted by $f \in \mathbb{R}^{N \times m}$ . In order to discover the main feature groups, we cluster these features using hierarchical agglomerative clustering. Starting bottom-up, each proposal is considered as an individual cluster. Then, at each step, the two closest clusters according to a distance metric are merged together. We utilize cosine distance as our merging metric:
68
+
69
+ $$distance(z_i, z_j) = 1 - \frac{z_i \cdot z_j}{||z_i|| ||z_j||},$$
70
+ (6)
71
+
72
+ <span id="page-4-1"></span>where $z_i$ and $z_j$ denote the *i*-th and *j*-th proposals' feature embeddings. In contrast to recent work [44], which uses spatial overlap (measured by IoU) to group together instances, using cosine similarity enables us to pair instances which are located far from each other but are visually similar. Merging is stopped when the dissimilarity within a cluster, as defined by a *linkage function*, exceeds the cluster radius parameter τ. We apply the complete-linkage heuristic [\[8\]](#page-8-22), which ensures that the farthest distance between two members is smaller than τ.
73
+
74
+
75
+
76
+ $$\mathrm{MaxLink}(A, B) = \max\{\mathrm{dist}(a, b) : a \in A, b \in B\}, \tag{7}$$
77
+
78
+ where $A$, $B$ are two sets of proposal features in two clusters and $\mathrm{dist}$ is the cosine distance. This hierarchical clustering approach allows our model to adaptively change the number of feature groups during training, instead of having a fixed number of clusters as in k-means. Once the clustering has converged, instances assigned to each cluster are pooled to construct a representative embedding $Z_{c_i}$:
79
+
80
+ $$Z_{c_i} = \frac{\sum_{j=1}^{N_{c_i}} z_j}{N_{c_i}},\tag{8}$$
81
+
82
+ where $N_{c_i}$ is the number of instances assigned to the cluster $c_i$. The group representative $Z_{c_i}$ is fed to a group-level discriminator, and adversarial training is applied to align groups from the two domains using Equation [\(2\)](#page-2-2).
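+
+ For illustration, the grouping and pooling steps can be sketched with scikit-learn's agglomerative clustering; the threshold value is a placeholder, and the `metric` argument is named `affinity` in scikit-learn versions before 1.2.
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import AgglomerativeClustering
+
+ def group_proposals(feats, tau=0.3):
+     """Sketch of the similarity-based grouping: complete-linkage agglomerative
+     clustering under cosine distance, cut at radius tau, then mean-pooling (Eq. 8).
+
+     feats: (N, m) array of RoI-pooled proposal features.
+     """
+     clus = AgglomerativeClustering(n_clusters=None, distance_threshold=tau,
+                                    metric="cosine", linkage="complete")
+     labels = clus.fit_predict(feats)
+     groups = np.stack([feats[labels == c].mean(axis=0) for c in np.unique(labels)])
+     return groups, labels
+ ```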
83
+
84
+ Finally, our method (ViSGA) combines image and instance-level alignment of aggregated proposals via adversarial training as illustrated in Fig[.1.](#page-0-0)
2111.05323/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2021-05-24T15:56:31.048Z" agent="5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.6.13 Chrome/89.0.4389.128 Electron/12.0.7 Safari/537.36" etag="TZ6WM6NCvkkoodhpWbw-" version="14.6.13" type="device"><diagram name="Page-1" id="n5Zq0WL92huyyz-BuJLN">3Zptc5s4EIB/DR/jAYF4+RiTXHvXS5OZpJfk040Msq0WIyLk2s6vrwTiHcfvru8mmQlaSQvsPqvVimimP1t+YiiZ3tEQRxrQw6Vm3mgAeABo8lcPV7kAuFYumDAS5iK9EjySd5wLjUI6JyFOlSwXcUojTpKmMKBxjAPekCHG6KI5bEyjsCFI0AQ3HkMKHgMU4c6wZxLyaS51gVPJP2MymRZ3Nmwv7xmh4MeE0Xms7hfTGOc9M1SoUcrTKQrpovYU5q1m+oxSnl/Nlj6OpFWbFvtjTW/5yAzHfJsJn1+Y+eBjXb+/e8XPePV8/4VcKb/9RNFcmUI9LF8VtsneDkslhmYOF1PC8WOCAtm7EDAI2ZTPItUdoRGOhqVRfBpRlqkxr7MfMSTljP4ozSwnjUkU1UaOUeiFsBxZ60HYMkxhuCFigYLIkE0qnodwyZ6pi2bXMspYPzHjeFkTKUt9wnSGOVuJIaoXOO4AwHyWYtoGiulFRQjUnYEaNa3xYQKFB1JcTsobVA4SF8pHO/jL7PGXHXFpRCpetu44+21Oi46rNLPWtRgArGRZdYqrifqbaSGVQLytJTVq0Cdx/ldzhsJsft4IQsrTvPWU9d3kcwpV4v1IW72Q5c9ZiFusCe/wJlAoIpNYXAfCl1hgMJQ+JCJyr1XHjIShnD5kWLwkGmWqJAUJJTHP7A+HGryRuuacpgqbY2FiNxkx3Q4jANpdQoo4OwSQW2AhHjHX8r+No+8R+HqP364MeIERHULshlZfRLtgZNp2I4Qt6bwJQyERfqkN9ZxQd5wj+c0zGn5zvDKOa56zXND1nGGZh7tuxOM/v+Jvvn83X7K30fCvL/+kV0aP544T21lYQl+mJBE5IopvRLT+m0Wzef2Uh7PMZGmE0qnMvcWNR6xUIsPbRjNJRzxKk0yv3on4D6Nb2EAkdLyZOZQmeZYfk6XktJMh3AAHgXp/lQiA3cfXyIUWPFJOsFx30EoJNuhA4+pdZgrZ0ZHpS9+7IWPaG5EZjQUpC0kMP5e/j+At23XavuoJ8bN6y1nrrTRB8TG9tez3Vn6bs0VnKtf0ePJEE7kzO3V4Aq/pcBcav9vh7tkc/n5Wh3c2ZVPKyLt4clRMWbtLy7YTD2Ibxgnt3df93RpQzuzQdJQl/eKY8c7GzOoiFontN/gHoHPKdaedaC6AoeJY5aMiAMfhtTw5kYYU276UBE0HNjf1ssr+Pp8larNlF02lLzv2MId4SfiLmi+vX2UpNoCqdbNUlVnWWBWNWLzxS71RmyWb1bSsVcxb67mUzlmAN6dhjtgE882rNw6Ls6PdagrbcjoUwB4K4AcUqJs9yFK2tmqBFnGeMTDsppbcCmpi/VSoraudNb2S3kJVbqiOqozM0gwHwNpX9xwM675Y7YPw/ii6W6LoXSqKsL34HYCibW/WdWoWtzgP3YPFszJVnJ9vYgpcKlPt5c32wN5MQX2zrlMzZZ12fauWtNfGivbx+lbx91oj83exeLGpFrZYdBynnR73zbSOe/7lbYvD4f8riuC/nmod02ryA8G+KELgDhwDOsAVSqDV1Os65gB0Os9F6Pojk+Mfgovit+ej1Y4fqJq1pSqE6sWuEhV1boTH/KMqt6+urj7Y6NraLzLqPiFKp+W3nbXhsn15a+peEw/d7gBueF3Ard2LW9GsPofnRFX/bWDe/gI=</diagram></mxfile>
2111.05323/main_diagram/main_diagram.pdf ADDED
Binary file (15.3 kB). View file
 
2111.05323/paper_text/intro_method.md ADDED
@@ -0,0 +1,104 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Multi-task learning [@caruana1997multitask] exploits knowledge shared among related tasks to improve their overall performance. This is especially significant when only limited data is available for each task. The central goal in multi-task learning is to explore task relatedness to improve each individual task [@argyriou2007multi; @zhang2012convex], which is non-trivial since the underlying relationships among tasks can be complicated and highly nonlinear [@zhang2021survey]. Early works deal with this by learning shared features, designing regularizers imposed on parameters [@pong2010trace; @kang2011learning; @jawanpuria2015efficient; @jalali2010dirty] or exploring priors over parameters [@heskes2000empirical; @bakker2003task; @xue2007multi; @pmlr-v9-zhang10c; @zhang2012convex]. Recently, multi-task deep neural networks have been developed, learning shared representations in the feature layers while keeping classifier layers independent [@long2017learning; @qian2020multi; @zhang2020deep; @liu2021towards]. Some methods [@liu2019end; @kendall2018multi; @chen2018gradnorm; @gao2020mtl] learn a small number of related tasks based on one shared input. However, lots of training data is usually available. Few-shot learning [@finn2017model] addresses multi-task learning with limited data, but usually relies on a large number of related tasks. Further, in general, most existing works explore either representations or classifiers for shared knowledge, leaving exploring them jointly an open problem.
4
+
5
+ In this work, we tackle the following challenging multi-task learning setting [@long2017learning]: each task contains very limited training data and only a handful of related tasks are available to gain shared knowledge from. In addition, instead of sharing the same input, each of the multiple tasks has its own input space from a different domain and these tasks are related by sharing the same target space, e.g., the same class labels in classification tasks. In this setting, it is difficult to learn a proper model for each task independently without overfitting [@long2017learning; @zhang2021survey]. It is therefore crucial to leverage the inductive bias [@baxter2000model] provided by related tasks learned simultaneously.
6
+
7
+ To address this, we propose variational multi-task learning (VMTL), a novel variational Bayesian inference approach that can explore task relatedness in a unified way. Specifically, as shown in Fig. [\[fig1\]](#fig1){reference-type="ref" reference="fig1"}, we develop a probabilistic latent variable model by treating both the feature representation and the classifier as latent variables. To explore task relatedness, we place conditional priors over the latent variables, which are dependent on data from other related tasks. By doing so, multi-task learning is cast as the joint variational inference of feature representations and classifiers for all tasks in a single unified framework. The probabilistic framework enables the model to better capture the uncertainty caused by limited data in each task [@finn2018probabilistic].
8
+
9
+ ::: wrapfigure
10
+ r0.495 ![image](vmtl.png){width="100%"}
11
+ :::
12
+
13
+ To leverage the knowledge provided by related tasks, we propose to specify the priors of each task as a mixture of the variational posteriors of other tasks. In particular, the mixing weights are constructed with the Gumbel-Softmax technique [@jang2016categorical] and jointly learned with the probabilistic modeling parameters by back-propagation. The obtained Gumbel-Softmax priors enable the model to effectively explore different correlation patterns among tasks and, more importantly, provide a unified way to explore shared knowledge jointly for representations and classifiers. We validate the effectiveness of the proposed VMTL by extensive evaluation on five multi-task learning benchmarks with limited data for both classification and regression. The results demonstrate the benefit of variational Bayesian inference for modeling multi-task learning. VMTL consistently achieves state-of-the-art performance and surpasses previous methods on all tasks.
14
+
15
+ # Method
16
+
17
+ Consider a set of $T$ related tasks, each of which is a classification or regression problem. Tasks in this paper share the same label space for classification or the same target space for regression but have different distributions. Each task $t$ has its own training data $\mathcal{D}_t = \{ \mathbf{x}_{t,n}, \mathbf{y}_{t,n}\}_{n=1}^{N_t}$, where ${N_t}$ is the number of training samples. We consider the challenging setting where each task has limited labeled data, which makes it difficult to learn a proper model for each task independently [@long2017learning; @zhang2021survey]. In contrast to most previous works, we will explore the knowledge shared among tasks for learning both the classifiers/regressors and representations of each task. Note that in this section we derive our methodology mainly using terminologies related to classification tasks, but it is also applicable to regression tasks.
18
+
19
+ To better capture uncertainty caused by limited data, we explore multi-task learning in a probabilistic framework. We define the conditional predictive distribution: $p(\mathcal{Y}_t|\mathcal{X}_t, \mathcal{D}_{1:T \backslash t})$ with respect to the current task $\mathcal{D}_t$: ${\mathcal{Y}_t} = \{\mathbf{y}_{t,n}\}^{N_t}_{n=1}$ and ${\mathcal{X}_t} = \{\mathbf{x}_{t,n}\}^{N_t}_{n=1}$, where we use $\mathcal{D}_{1:T\backslash t}$ to denote the data from all tasks excluding $\mathcal{D}_t$ of task $t$. From a probabilistic perspective, jointly learning multiple tasks amounts to maximizing the conditional predictive log-likelihood as follows: $$\begin{equation}
20
+ \frac{1}{T} \sum_{t=1}^{T} \log p(\mathcal{Y}_t|\mathcal{X}_t, \mathcal{D}_{1:T \backslash t})
21
+ = \frac{1}{T} \sum_{t=1}^{T} \sum_{n=1}^{N_t} \log p(\mathbf{y}_{t, n}|\mathbf{x}_{t, n}, \mathcal{D}_{1:T \backslash t}).
22
+ \label{cll}
23
+ \end{equation}$$ We condition the prediction for samples in each task on the data of other tasks, $\mathcal{D}_{1:T \backslash t}$, to leverage the shared knowledge from related tasks.
24
+
25
+ The probabilistic multi-task learning formalism in ([\[cll\]](#cll){reference-type="ref" reference="cll"}) provides a general framework to explore task relatedness by conditioning on other tasks. More importantly, it enables the model to incorporate shared knowledge from related tasks into the learning of both the classifier and the feature representation in a unified way by specifying their priors.
26
+
27
+ We introduce the latent variables $\mathbf{w}_t$ and $\mathbf{z}_{t, n}$ to represent the classifier and the latent representation of the sample $\mathbf{x}_{t, n}$, respectively. The joint distribution for the task $t$ can be factorized as $$\begin{equation}
28
+ \begin{aligned}
29
+ p(\mathcal{Y}_t, \mathcal{Z}_t, \mathbf{w}_t|\mathcal{X}_t, \mathcal{D}_{1:T \backslash t})
30
+ &= \prod_{n=1}^{N_t} p(\mathbf{y}_{t, n}|\mathbf{z}_{t, n}, \mathbf{w}_t) p(\mathbf{z}_{t, n}|\mathbf{x}_{t, n}, \mathcal{D}_{1:T \backslash t} ) p(\mathbf{w}_{t}|\mathcal{D}_{1:T \backslash t}),
31
+ \end{aligned}
32
+ \end{equation}$$ where we use $\mathcal{Z}_t = \{ \mathbf{z}_{t,n}\}_{n=1}^{N_t}$ to collectively represent the set of latent representations of samples in task $t$. Here, we specify conditional priors dependent on other tasks to explore task relatedness. The probabilistic latent variables capture uncertainties at different levels: $\mathbf{z}_{t, n}$ captures the uncertainty of each sample, while $\mathbf{w}_t$ works at the categorical level. A graphical illustration of the probabilistic latent variable model is shown in Fig. [\[fig1\]](#fig1){reference-type="ref" reference="fig1"}.
33
+
34
+ Solving the model for multi-task learning involves inferring the true joint posterior $p (\mathcal{Z}_t, \mathbf{w}_t |\mathcal{D}_{1:T} )$ over all latent variables $\mathcal{Z}_t$ and $\mathbf{w}_t$, which is generally intractable. We introduce a variational joint distribution $q(\mathcal{Z}_t, \mathbf{w}_t| \mathcal{D}_t)$ for the current task to approximate the true posterior. Also under the conditional independence assumption, the joint variational posterior distribution can be factorized with respect to classifiers and latent representations as follows: $$\begin{equation}
35
+ \begin{aligned}
36
+ q(\mathcal{Z}_t, \mathbf{w}_t|\mathcal{D}_t)
37
+ = q_{\theta}(\mathbf{w}_{t}|\mathcal{D}_t) \prod_{n=1}^{N_t} q_{\phi}(\mathbf{z}_{t, n}|\mathbf{x}_{t, n}),
38
+ \end{aligned}
39
+ \end{equation}$$ where $q_{\theta}(\mathbf{w}_{t}|\mathcal{D}_t)$ and $q_{\phi}(\mathbf{z}_{t, n}|\mathbf{x}_{t, n})$ are variational posterior distributions for classifiers and latent representations respectively, and $\theta$ and $\phi$ are the associated parameters. For computational feasibility, they are defined as fully factorized Gaussians, as is common practice [@kingma2013auto; @blundell2015weight].
40
+
41
+ To obtain a good approximation, the variational posterior should be close to the true posterior. This is usually achieved by minimizing the Kullback-Leibler (KL) divergence between them: $$\begin{equation}
42
+ \mathbb{D}_{\rm{KL}} \big[ q(\mathcal{Z}_t, \mathbf{w}_t|\mathcal{D}_t) || p(\mathcal{Z}_t, \mathbf{w}_t |\mathcal{D}_{1:T})\big].
43
+ \label{kl}
44
+ \end{equation}$$
45
+
46
+ By applying Bayes' rule to the true posterior, we derive the evidence lower-bound (ELBO) with the probabilistic classifier and representation for the conditional predictive log-likelihood in ([\[cll\]](#cll){reference-type="ref" reference="cll"}) : $$\begin{equation}
47
+ \begin{aligned}
48
+ \frac{1}{T} \sum_{t=1}^{T} \log p( \mathcal{Y}_t |\mathcal{X}_t, \mathcal{D}_{1:T \backslash t} ) \geq \frac{1}{T}\sum^T_{t=1}\bigg\{ \sum^{N_t}_{n=1} \Big\{ \mathbb{E}_{\mathbf{w}_t\sim q_{\theta}} \mathbb{E}_{{\mathbf{z}_{t,n}}\sim q_{\phi}} [ \log p(\mathbf{y}_{t, n}|\mathbf{z}_{t, n}, \mathbf{w}_t) ] & \\
49
+ - \mathbb{D}_{\rm{KL}} [q_{\phi}(\mathbf{z}_{t, n}|\mathbf{x}_{t, n}) || p(\mathbf{z}_{t, n}|\mathbf{x}_{t, n}, \mathcal{D}_{1:T \backslash t})] \Big\} - \mathbb{D}_{\rm{KL}} \big[ q_{\theta}(\mathbf{w}_t|\mathcal{D}_{t}) || p(\mathbf{w}_t|\mathcal{D}_{1:T \backslash t}) \big]\bigg\}.
50
+ \label{vmtl_elbo}
51
+ \end{aligned}
52
+ \end{equation}$$ The objective function provides a general probabilistic inference framework for multi-task learning. As priors naturally serve as regularizations in Bayesian inference, they offer a unified way of sharing information across multiple tasks for improving both classifiers and representations. The detailed derivation is given in the supplementary materials.
53
+
54
+ In what follows, we will describe the specification of the prior distributions over the classifier and the representation, as well as their variational posterior distributions.
55
+
56
+ The proposed variational multi-task inference framework enables related tasks to provide supportive information for the current task through the conditional priors of the latent variables. It offers a unified way to incorporate shared knowledge from related tasks into the inference of representations and classifiers for individual tasks.
57
+
58
+ To leverage the shared knowledge, we propose to specify the prior of the classifier for the current task using the variational posteriors over classifiers of other tasks: $$\begin{equation}
59
+ \begin{aligned}
60
+ p(\mathbf{w}^{(\eta)}_t|\mathcal{D}_{1:T \backslash t}) := \sum_{i \neq t} \mathcal{\alpha}_{ti} q_{\theta}(\mathbf{w}^{(\eta-1)}_i|\mathcal{D}_i),
61
+ \end{aligned}
62
+ \label{pw_gumbel}
63
+ \end{equation}$$ where $\eta$ denotes the $\eta$-th iteration. In practice, to avoid entanglement among tasks, we define the prior of task $t$ in the current iteration to be a combination of the variational posteriors [@shi2019variational] of the remaining tasks from the last iteration. Particularly, our designed prior resembles the empirical Bayesian prior [@heskes2000empirical; @bakker2003task; @tomczak2018vae; @takahashi2019variational] in that the prior of each task is specified by the observed data of other tasks, which provides a principled way to explore the shared knowledge among multiple tasks.
64
+
65
+ During training, we aim to have each task learn more shared knowledge from its most relevant tasks, while maximally reducing the interference from irrelevant tasks. To this end, we adopt the Gumbel-Softmax technique to learn the mixing weights for all related tasks and define the mixing weights as follows: $$\begin{equation}
66
+ \mathcal{\alpha}_{ti} = \frac{ \exp ( (\log \pi_{ti} + g_{ti} ) / \tau) }{ \sum_{i \neq t} \exp( (\log \pi_{ti} + g_{ti} ) / \tau))}.
67
+ \label{gumbel}
68
+ \end{equation}$$ Here $\alpha_{ti}$ is the mixing weight that indicates the relatedness between tasks $t$ and $i$. $\pi_{ti}$ is the learnable parameter in the Gumbel-Softmax formulation, which denotes the probability of two tasks transferring positive knowledge. $g_{ti}$ is sampled from a Gumbel distribution, using inverse transform sampling by drawing $u \sim \text{Uniform}(0, 1)$ and computing $g = - \log( - \log(u))$. $\tau$ is the softmax temperature. By using the Gumbel-Softmax technique, our model effectively handles possible negative transfer between tasks. The Gumbel-Softmax technique encourages the model to reduce the interference from the less relevant tasks by minimizing the corresponding mixing weights. The more negative the effects of interference between pairwise tasks are, the smaller the mixing weight is likely to be.
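+
+ A small sketch of how such mixing weights can be drawn in practice (the tensor shape and the epsilon used for numerical stability are illustrative assumptions):
+
+ ```python
+ import torch
+
+ def gumbel_mixing_weights(log_pi, tau=1.0, eps=1e-20):
+     """Gumbel-Softmax sample of the mixing weights alpha_{ti} over the other tasks.
+
+     log_pi: (T-1,) learnable log-probabilities of task t w.r.t. every other task;
+     tau: softmax temperature.
+     """
+     u = torch.rand_like(log_pi)
+     g = -torch.log(-torch.log(u + eps) + eps)          # Gumbel(0, 1) noise
+     return torch.softmax((log_pi + g) / tau, dim=-1)   # weights sum to 1 over the other tasks
+ ```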
69
+
70
+ With respect to the inference of the variational posteriors, we define them as fully factorized Gaussians for each class independently: $$\begin{equation}
71
+ \begin{aligned}
72
+ q_{\theta}(\mathbf{w}_t| \mathcal{D}_{t}) = \prod_{c=1}^{C} q_{\theta}(\mathbf{w}_{t,c}| \mathcal{D}_{t,c}) = \prod_{c=1}^{C}\mathcal{N}(\boldsymbol{\mu}_{t,c},\text{diag}(\boldsymbol{\sigma}^2_{t,c})).
73
+ \end{aligned}
74
+ \label{qw}
75
+ \end{equation}$$ where $\boldsymbol{\mu}_{t,c}$ and $\boldsymbol{\sigma}^2_{t,c}$ can be directly learned with back-propagation [@blundell2015weight].
76
+
77
+ As an alternative to direct inference, classifiers can also be inferred by the amortization technique [@gordon2018meta], which will further reduce the computational cost. In particular, different classes share the inference network to generate the parameters of the specific classifier. In practice, the inference network takes the mean of the feature representations in each class as input and returns the parameters $\boldsymbol{\mu}_{t,c}$ and $\boldsymbol{\sigma}_{t,c}$ for $\mathbf{w}_{t,c}$. The amortized classifier inference enables the cost to be shared across classes, which reduces the overall cost. Therefore, it offers an effective way to handle scenarios with a large number of object classes and can still produce competitive performance, as shown in our experiments, even in the presence of the amortization gap [@cremer2018inference].
78
+
79
+ In a similar way to ([\[pw_gumbel\]](#pw_gumbel){reference-type="ref" reference="pw_gumbel"}), we specify the prior over the latent representation $\mathbf{z}_{t, n}$ of a sample $\mathbf{x}_{t, n}$ as a mixture of distributions conditioned on the data of every other task, as follows: $$\begin{equation}
80
+ p(\mathbf{z}^{(\eta)}_{t, n}|\mathbf{x}_{t, n}, \mathcal{D}_{1:T \backslash t}) := \sum_{i \neq t} {\beta}_{ti} q_{\phi}(\mathbf{z}^{(\eta-1)}_{t, n}|\mathbf{x}_{t,n}, \mathcal{D}_i).
81
+ \label{pz_gumbel}
82
+ \end{equation}$$ Here, $\beta$ is the mixing weight which is defined in a similar way as in ([\[gumbel\]](#gumbel){reference-type="ref" reference="gumbel"}). The conditional distribution $q_{\phi}(\mathbf{z}_{t, n}| \mathbf{x}_{t,n}, \mathcal{D}_i)$ on the right hand side of ([\[pz_gumbel\]](#pz_gumbel){reference-type="ref" reference="pz_gumbel"}) indicates that we leverage the data $\mathcal{D}_i$ from every other task $i$ to help the model infer the latent representation of $\mathbf{x}_{t,n}$. The contribution of each other task is determined by the learnable mixing weight. In practice, the distribution $q_{\phi}(\mathbf{z}_{t, n}| \mathbf{x}_{t,n}, \mathcal{D}_i)$ is inferred by an amortized network [@gershman2014amortized; @kingma2013auto]. To be more specific, the inference network takes an aggregated representation of $\mathbf{x}_{t,n}$ and $\mathcal{D}_i$ as input and returns the parameters of distribution $q_\phi$. The aggregated representation is formulated as ([\[agg\]](#agg){reference-type="ref" reference="agg"}) which is established by the cross attention mechanism [@kim2019attentive]. The sample $\mathbf{x}_{t,n}$ from the current task acts as a query, and $D_{i,c}$ plays the roles of the key and the value. $D_{i,c}$ includes samples from the $i$-th task, which have the same label as $\mathbf{x}_{t, n}$. This is formulated as: $$\begin{equation}
83
+ \begin{aligned}
84
+ f(\mathbf{x}_{t, n}, \mathcal{D}_i)
85
+ = \mathrm{DotProduct}(\mathbf{x}_{t, n},D_{i,c},D_{i,c}) :=\mathrm{softmax} (\frac{\mathbf{x}_{t,n}{D^\top_{i,c}}}{\sqrt{d}})D_{i,c},
86
+ \label{agg}
87
+ \end{aligned}
88
+ \end{equation}$$ where $f$ is the aggregation function, and $D_{i,c}$ is a matrix, with each row containing a sample from class $c$ of the $i$-th related task. $d$ is the dimension of the input feature. Since we are dealing with supervised learning in this work, class labels of training samples are always available at training time. The intuition here is to find similar samples to help build the representation of the current sample.
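+
+ A minimal sketch of this aggregation step, for a single (unbatched) query and with hypothetical argument names:
+
+ ```python
+ import torch
+
+ def aggregate(x, d_ic):
+     """Scaled dot-product cross-attention used for the aggregated representation.
+
+     x:    (d,) feature of sample x_{t,n} from the current task (the query)
+     d_ic: (K, d) same-class samples from related task i (keys and values)
+     """
+     scores = (x @ d_ic.T) / d_ic.shape[-1] ** 0.5   # scaled dot-product scores, (K,)
+     attn = torch.softmax(scores, dim=-1)            # attention over the K related samples
+     return attn @ d_ic                              # aggregated representation, (d,)
+ ```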
89
+
90
+ The inference of the variational posterior $q_{\phi}(\mathbf{z}_{t, n}|\mathbf{x}_{t, n})$ over latent representations is also achieved using the amortization technique [@gershman2014amortized; @kingma2013auto]. The amortized inference network takes $\mathbf{x}_{t,n}$ as input and returns the statistics of its probabilistic latent representation.
91
+
92
+ By integrating ([\[pw_gumbel\]](#pw_gumbel){reference-type="ref" reference="pw_gumbel"}) and ([\[pz_gumbel\]](#pz_gumbel){reference-type="ref" reference="pz_gumbel"}) into ([\[vmtl_elbo\]](#vmtl_elbo){reference-type="ref" reference="vmtl_elbo"}), we obtain the following empirical objective for variational multi-task learning with Gumbel-Softmax priors: $$\begin{equation}
93
+ \begin{aligned}
94
+ & \hat{\mathcal{L}}_{\rm{VMTL}}( \theta, \phi, \alpha, \beta) = \frac{1}{T}\sum^T_{t=1} \bigg\{\sum^{N_t}_{n=1}\Big\{ \frac{1}{ML}\sum_{\ell=1}^{L}\sum_{m=1}^{M} \big[ -\log p(\mathbf{y}_t|\mathbf{z}_{t, n}^{(\ell)}, \mathbf{w}_{t}^{(m)}) \big]\\
95
+ & + \mathbb{D}_{\rm{KL}} \big[ q_{\phi}(\mathbf{z}_{t, n}|\mathbf{x}_{t, n}) || \sum_{i \neq t} {\beta}_{ti} q_{\phi}(\mathbf{z}_{t, n}| \mathbf{x}_{t, n}, \mathcal{D}_i) \big] \Big\} + \mathbb{D}_{\rm{KL}} \big[ q_{\theta}(\mathbf{w}_t|\mathcal{D}_t) || \sum_{i \neq t} \alpha_{ti} q_{\theta}(\mathbf{w}_i|\mathcal{D}_{i}) \big]\bigg\},
96
+ \end{aligned}
97
+ \label{empirical loss}
98
+ \end{equation}$$ where $\mathbf{z}_{t, n}^{(\ell)} \sim q_{\phi}(\mathbf{z}_{t, n}|\mathbf{x}_{t, n})$ and $\mathbf{w}_{t}^{(m)} \sim q_{\theta}(\mathbf{w}_t|\mathcal{D}_t)$. To sample from the variational posteriors, we adopt the reparameterization trick [@kingma2013auto]. $L$ and $M$ are the number of Monte Carlo samples for the variational posteriors of latent representations and classifiers, respectively. In practice, $L$ and $M$ are set to 10, which yields good performance while being computationally efficient. We investigate the sensitivity of $L$ and $M$ in the supplementary materials. ${\theta}$ represents the statistical parameters associated with the classifiers or the inference parameters for the amortized classifier; ${\phi}$ denotes parameters of the shared inference network for the latent representation.
99
+
100
+ We minimize this empirical objective function to optimize the model parameters jointly. The log-likelihood term is implemented as the cross-entropy loss. In the practical implementation of the KL terms, we adopt the closed-form solution based on its upper bound as done in [@nalisnick2018learning], e.g., $\mathbb{D}_{\rm{KL}} \big[ q(\mathbf{w}_t) || \sum_{i \neq t} \alpha_{ti} q(\mathbf{w}_i) \big] \leq \sum_{i \neq t} \alpha_{ti} \mathbb{D}_{\rm{KL}} \big[ q(\mathbf{w}_t) || q(\mathbf{w}_i) \big]$. Minimizing the KL terms encourages the model to leverage shared knowledge provided from related tasks at the instance level for representations and at the categorical level for classifiers.
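+
+ As an illustration of the practical KL computation, the sketch below evaluates the closed-form Gaussian KL and the weighted upper bound used above; parameterizing each distribution by a `(mean, log-variance)` pair is an assumption for illustration.
+
+ ```python
+ import torch
+
+ def kl_diag_gauss(mu_q, logvar_q, mu_p, logvar_p):
+     """Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
+     var_q, var_p = logvar_q.exp(), logvar_p.exp()
+     return 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1).sum()
+
+ def kl_mixture_upper_bound(q, others, alphas):
+     """Upper bound KL(q_t || sum_i alpha_ti q_i) <= sum_i alpha_ti KL(q_t || q_i).
+
+     q and each element of others are (mean, log-variance) tuples;
+     alphas: iterable of mixing weights for the other tasks.
+     """
+     return sum(a * kl_diag_gauss(*q, *o) for a, o in zip(alphas, others))
+ ```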
101
+
102
+ At test time, we obtain the prediction for a test sample $\mathbf{x}_t$ by calculating the following probability using Monte Carlo sampling: $$\begin{equation}
103
+ p(\mathbf{y}_t|\mathbf{x_t}) \approx \frac{1}{ML} \sum_{l=1}^{L}\sum_{m=1}^{M} p(\mathbf{y}| \mathbf{z}_t^{(l)}, \mathbf{w}_t^{(m)}),
104
+ \end{equation}$$ where we draw samples from posteriors: $\mathbf{z}_t^{(l)} \sim q_{\phi}(\mathbf{z}|\mathbf{x})$ and ${\mathbf{w}_t}^{(m)} \sim q_{\theta}(\mathbf{w}|\mathcal{D}_t)$.
2111.15340/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2111.15340/paper_text/intro_method.md ADDED
@@ -0,0 +1,75 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Recent advances in self-supervised learning [\[2,](#page-8-0)[3,](#page-8-1)[7,](#page-8-2)[8,](#page-8-3)[10,](#page-8-4) [25\]](#page-9-0) have shown great promise for downstream applications,
4
+
5
+ <span id="page-0-1"></span>![](_page_0_Figure_18.jpeg)
6
+
7
+ Figure 1. MC-SSL0.0 has basic knowledge of objects as shown by the self-learnt grouping of data-tokens (data tokens on the same object have similar representations) without any labels. Notice how the concepts are refined when asking for more concepts to be discovered. For example, asking for 2 concepts gives forest floor and vegetation, 3 concepts gives forest floor, forest, and mushroom, and 4 concepts gives additional moss on tree. There is still a long way to go for MC-SSL, as shown by the spread-out representation of the third concept. This calls for training on bigger datasets and the design of more advanced MC-SSL methods. MC-SSL0.0 is pretrained on only 10% of ImageNet.
8
+
9
+ <span id="page-0-0"></span><sup>1</sup>Under Review .....
10
+
11
+ <span id="page-1-0"></span>particularly for image classification datasets with labels for one dominant concept per image (also known as multi-class datasets, *e.g.* ImageNet [\[37\]](#page-9-1)). These state-of-the-art approaches train the system to extract and associate image features with a single dominant concept, but ignore the intrinsic multi-label nature of natural images that depict more than one object/concept. A study by Stock and Cisse [\[51\]](#page-9-2) provided the empirical evidence, proving that the remaining error in the ImageNet dataset is predominantly due to the single-label annotation. Indeed every pixel in an image is part of some semantic concept with no such thing as background. However, collecting an exhaustive list of labels for every single object/concept is not scalable and requires a significant human effort, making large scale supervised training infeasible for multi-label datasets. In view of this, it is pertinent to ask what the best strategy is to move forward in the presence of these deficiencies. We believe that multi-concept self-supervised learning (i.e. ability to represent each concept/object in an image without using labels) is the principled way forward.
12
+
13
+ The main aim of this study is to make a step towards building a self-supervised framework capable of learning representations for all the objects in an image. Once the multiple-concepts are extracted without supervision, the expensive multi-label annotated datasets will be needed only for evaluation and testing. Accordingly, we introduce MC-SSL0.0, a smart framework for self-supervised learning designed to model the meaningful, important, semantic information present in every patch in the image.
14
+
15
+ MC-SSL0.0 is built on Group Masked Model Learning (GMML) introduced in [\[2\]](#page-8-0). In MC-SSL0.0, the network is trained with two objectives: i) to reconstruct the raw pixel values of the GMML-based manipulated data-tokens, and, crucially, ii) learning patch level concepts/classes for individual data-tokens. MC-SSL0.0 requires the network to learn the knowledge about an object/concept (that is properties such as colour, texture and structure, as well as context) in order to reconstruct, as well as to recover the distorted data-tokens by using available information in unmasked data tokens on the object and its surroundings. This encourages all the data-tokens on an object to have similar representation to each other and incorporate local context in transformers (c.f Figure [1\)](#page-0-1). The ultimate role of the auxiliary but complementary task of learning a patch classifier is to assign a pseudo-semantic label to a group of context aware data tokens covering an object. Our conjecture is that learning pseudo labels for patches encourages data tokens on similar objects within an image and between images to belong to the same pseudo class promoting intra and inter image concept consistency (see Appendix [5\)](#page-8-5). An object consists of group of related data token sharing a common structure captured by a limited set of token level pseudo labels. This contextual discovery/learning of objects across a collection of images can conform to human semantic labels, e.g., by weak supervision corresponding to cluster of common objects.
16
+
17
+ The main contribution of the work is introducing a novel SSL framework which is not compromised by the unrealistic premise that an image contains just a single object, and even if there are other, non dominant objects, they do not impact on the extracted representation. The proposed framework makes a better use of the information conveyed by all the objects/concepts present in the image. MC-SSL0.0 is a step towards getting representations for each of the object/concept in an image, without the need for any labels. Besides, MC-SSL0.0 enables the training of data hungry transformers from scratch with only a few thousand images. The possibility of training from scratch on limited data with high accuracy will have a significant impact on small AI research groups, companies and application domains which are mainly relying on pretrained models. Additionally, we show that, although MC-SSL0.0 is unaware of semantic concepts present in an image, as evident from Figure [1,](#page-0-1) the self-learnt grouping of data-tokens corresponds to semantic concepts without using any labels. The impact of the proposed innovation is that MC-SSL0.0 outperforms state-of-the-art SSL methods by a large margin in multi-label classification tasks, and achieves competitive results in multi-class tasks. Lastly, MC-SSL0.0 based selfsupervised pretraining outperforms supervised pretraining for downstream tasks.
18
+
19
+ # Method
20
+
21
+ In this section, we introduce the self-supervised image transformer which is a step towards multi-concept self-supervised learning (MC-SSL). Our proposed framework is based on the GMML and self-learning of data token conceptualisation with the incorporation of knowledge distillation [29]. In Knowledge distillation a student network $\mathbf{s}^{\theta}(.)$ is trained to match the output of a given teacher network $\mathbf{t}^{\phi}(.)$ , where $\theta$ and $\phi$ are the parameters of the student and the teacher networks, respectively. In this work, we employ the same network architecture for both the student and the teacher (i.e. the teacher is a momentum encoder [27] of the student), where the parameters of the teacher network are updated from the past iterations of the student network using exponential moving average of the student weights with the following update rule: $\phi = \lambda \phi + (1 - \lambda)\theta$ .
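+
+ The momentum (EMA) update of the teacher is one line per parameter; a minimal sketch (the default momentum value is illustrative):
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def update_teacher(teacher, student, lam=0.996):
+     """Exponential moving average update of the teacher: phi = lam * phi + (1 - lam) * theta."""
+     for p_t, p_s in zip(teacher.parameters(), student.parameters()):
+         p_t.mul_(lam).add_(p_s, alpha=1.0 - lam)
+ ```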
22
+
23
+ The network architecture of the student network comprises three components: a vision transformer backbone $s_b(.)$ followed by two projection heads attached to the output of the transformer. One projection head is for patch reconstruction, $s_{pr}(.)$, and the second is for patch classification, $s_{pc}(.)$. The overall architecture of our proposed approach is shown in Figure 2.
24
+
25
+ Similar to [20], we use a vision transformer which receives as input a feature map from the output of a convolutional block/layer. The convolutional block takes an input image $\mathbf{x} \in \mathbb{R}^{H \times W \times C}$ and converts it to feature maps of size $\mathbf{x}^{\mathbf{f}} \in \mathbb{R}^{\sqrt{n} \times \sqrt{n} \times D}$, where $H$, $W$, and $C$ are the height, width and channels of the input image, and $n$ is the total number
26
+
27
+ <span id="page-3-3"></span>of spatial locations in the feature maps and $D$ is the number of feature map channels. Each spatial location in the input feature maps is considered as an input data-token to the transformer, yielding a total of $n$ tokens. Both the input and the output of the transformer have the same dimensions, $\mathbf{x}^{\mathbf{f}}, \mathbf{y} \in \mathbb{R}^{n \times D}$.
28
+
29
+ For SSL, we adopted GMML as the key component of MC-SSL0.0. In NLP, a single data-token can represent a semantic concept; hence, an effective pretext task is to randomly mask the data-tokens and learn to recover them. However, this naive masking might not be particularly effective in CV, as a single data-token may not represent a semantic concept. The idea of GMML is to transform a group of connected patches representing a significant, semantically meaningful part of an object and to recover them by learning a model. To consolidate the semantically related data-tokens forming an object, GMML applies several transformations to locally connected data-tokens/patches of the image, including random drop (i.e. replacing random connected patches with noise), random replace (i.e. replacing random connected patches with patches from another image), colour distortions, recolouring, etc. Note that, unlike NLP, GMML in computer vision (CV) has an added feature in that partial transformation of data-tokens is possible. The transformer is then trained to recover the values of the corrupted/missing data-tokens in order to learn better semantic representations of the input images.
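+ A rough sketch of such group masking is given below; the patch size, group sizes, masking ratio and the 50/50 split between noise and replacement are illustrative assumptions rather than the paper's exact recipe:
+
+ ```python
+ import numpy as np
+
+ def gmml_corrupt(img, other, patch=16, mask_ratio=0.3, seed=0):
+     """Corrupt groups of connected patches: some groups are replaced with noise
+     ('random drop') and others with patches from another image ('random replace')."""
+     rng = np.random.default_rng(seed)
+     out = img.copy()
+     h_blocks, w_blocks = img.shape[0] // patch, img.shape[1] // patch
+     to_mask = int(mask_ratio * h_blocks * w_blocks)
+     while to_mask > 0:
+         g = int(rng.integers(1, 4))                 # group side length in blocks
+         i = int(rng.integers(0, h_blocks - g + 1))  # top-left block of the group
+         j = int(rng.integers(0, w_blocks - g + 1))
+         ys, xs = i * patch, j * patch
+         ye, xe = (i + g) * patch, (j + g) * patch
+         if rng.random() < 0.5:
+             out[ys:ye, xs:xe] = rng.normal(size=out[ys:ye, xs:xe].shape)  # random drop
+         else:
+             out[ys:ye, xs:xe] = other[ys:ye, xs:xe]                       # random replace
+         to_mask -= g * g
+     return out
+
+ corrupted = gmml_corrupt(np.random.rand(224, 224, 3), np.random.rand(224, 224, 3))
+ ```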
30
+
31
+ The main strength of MC-SSL0.0 is that the group of missing/corrupted tokens can only be recovered if the transformer has learnt the context of the object from the visible patches. Another strength of MC-SSL0.0 is its desirable learning behaviour. Unlike other SSL losses, the MC-SSL0.0 objective saturates more slowly, leading to continuous improvement in performance. This could be because a large portion of the image is masked, providing heavy augmentation of the training data. Nevertheless, the performance gain becomes marginal after a few hundred epochs. It should be noted that the transformed tokens can lie either on the dominant (so-called foreground) object or on the other (so-called background) objects, and recovering these tokens is equally valid in both scenarios. Our thesis is that there is no such thing as background in natural images. Each patch/pixel in the image represents some concept with visual semantic meaning. The intuition is that by modelling all semantic concepts, the network will generalise better to unseen tasks, whether they are related to an object, a distributed object, a dense prediction task like detection and segmentation, or to the whole visual signal.
32
+
33
+ We leverage the strength of transformers and combine it with GMML to train MC-SSL0.0 with two different objectives: patch reconstruction (Section [3.2.1\)](#page-3-0), where the network is trained to reconstruct the image corrupted by the GMML-based manipulation, and patch concept classification (Section [3.2.2\)](#page-3-1), where the network is trained to learn data-token-level concepts/classes for individual data-tokens.
34
+
35
+ For image reconstruction, we propose to use the transformer as a group masked autoencoder, *i.e.*, a vision transformer autoencoder with GMML. By analogy to auto-encoders, our network is trained to reconstruct the input image through the output tokens of the transformer. The GMML-based manipulated images $\bar{\mathbf{x}} \in \mathbb{R}^{H \times W \times C}$ are fed to the student backbone, and the output tokens of the student transformer are fed to the patch reconstruction projection head $s_{pr}(.)$ to obtain $\mathbf{x}_r := s_{pr}(s_b(\bar{\mathbf{x}}))$, i.e. the reconstructed image. The $s_{pr}(.)$ projection head consists of 3 fully connected layers; the first two with 2048 neurons and GeLU [\[28\]](#page-9-15) non-linearity each, and the last bottleneck layer with 256 neurons, followed by a transposed convolution to return back to the image space.
36
+
37
+ The objective of the image reconstruction is to restore the original image from the corrupted image. For this task, we use the $\ell_1$-loss between the original and the reconstructed image, as shown in Equation [1.](#page-3-2) Although the $\ell_2$-loss generally converges faster than the $\ell_1$-loss, the $\ell_2$-loss is prone to oversmoothing the edges of the restored image [\[57\]](#page-10-3). Therefore, the $\ell_1$-loss is commonly used for image-to-image processing rather than the $\ell_2$-loss.
38
+
39
+ <span id="page-3-2"></span>
40
+ $$\mathcal{L}_{\text{recons}}(\mathbf{W}) = ||\mathbf{x} - \mathbf{x_r}|| \tag{1}$$
41
+
42
+ where $\|\cdot\|$ is the $\ell_1$ norm, and $\mathbf{W}$ denotes the parameters of the transformer to be learned during training.
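+ A minimal sketch of this reconstruction pathway and the $\ell_1$ objective of Equation [1](#page-3-2) is given below; the token grid, patch size and layer sizes are illustrative assumptions:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class ReconstructionHead(nn.Module):
+     """Sketch of s_pr(.): 3 fully connected layers (2048, 2048 with GELU, then 256)
+     followed by a transposed convolution back to image space.
+     Shapes are illustrative: n = 14*14 tokens of dimension D = 768, 16x16 patches."""
+     def __init__(self, dim=768, grid=14, patch=16):
+         super().__init__()
+         self.grid = grid
+         self.mlp = nn.Sequential(
+             nn.Linear(dim, 2048), nn.GELU(),
+             nn.Linear(2048, 2048), nn.GELU(),
+             nn.Linear(2048, 256),
+         )
+         self.to_pixels = nn.ConvTranspose2d(256, 3, kernel_size=patch, stride=patch)
+
+     def forward(self, tokens):                      # tokens: (B, n, D)
+         z = self.mlp(tokens)                        # (B, n, 256)
+         z = z.transpose(1, 2).reshape(z.size(0), 256, self.grid, self.grid)
+         return self.to_pixels(z)                    # (B, 3, 224, 224)
+
+ x_r = ReconstructionHead()(torch.rand(2, 196, 768))
+ loss = torch.nn.functional.l1_loss(x_r, torch.rand(2, 3, 224, 224))  # Eq. (1)
+ ```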
43
+
44
+ The idea of concept learning using SSL was first introduced in DINO [\[8\]](#page-8-3). DINO provides a pseudo label for the student network by setting a low temperature in the softmax activation of the teacher network. This low temperature sharpens the probability vector, leading to one class receiving a significantly higher probability than the others. Hence, DINO focuses on learning one dominant concept in the image. Following our hypothesis that modelling only the dominant class in an image can lead to sub-optimal representation learning, we investigate the role of learning the appropriate concept for each of the data-tokens. This is a step towards extracting a representation corresponding to each concept in an image. In a recent work, Beit [\[3\]](#page-8-1) assigned fixed classes to each of the patches using a pre-trained image tokenizer [\[49\]](#page-9-13) and then used a BERT/SiT-like framework to predict the classes corresponding to masked data-tokens. Unlike Beit, we employ a momentum encoder (teacher) <span id="page-4-1"></span>to generate pseudo labels for the visual tokens, and force the student model to make predictions consistent with the teacher model. Patch concept learning gives the flexibility to adapt to the visual concepts present in images, rather than using a fixed pretrained tokenizer. This is an important ingredient in learning the semantic representation of each object, which will be described at the end of the section.
45
+
46
+ In order to generate pseudo labels for the visual tokens, the training images are fed to the backbone of the teacher network, and the outputs of the data tokens are then fed to a patch classification projection head to obtain $\mathbf{z}_t := t_{pc}(t_b(\mathbf{x})) \in \mathbb{R}^{n \times K}$, where $K$ represents the number of classes of visual tokens. Similar to DINO, the patch classification projection head consists of 3 fully connected layers; the first two with 2048 neurons and GeLU non-linearity each, and the last bottleneck layer with 256 neurons. The output of the bottleneck layer is $\ell_2$-normalised and directly connected to a weight-normalised fully connected classification layer with $K$ neurons. For each training sample, the GMML-based manipulated images are passed to the student network to obtain $\mathbf{z}_s := s_{pc}(s_b(\bar{\mathbf{x}})) \in \mathbb{R}^{n \times K}$. The task is to match the student predictions to those of the teacher, employing the Kullback-Leibler (KL) divergence between the outputs of the teacher and student networks.
47
+
48
+ Training the student to match the teacher output can easily lead to trivial constant (i.e. collapsed) embeddings. To avoid model collapse, we adopted the centring and sharpening of the momentum teacher outputs introduced in [\[8\]](#page-8-3). Centring encourages the output to follow a uniform distribution, while sharpening has the opposite effect. Applying both operations balances their effects, which is sufficient to avoid collapse in the presence of a momentum teacher. The centre $c$ is updated using an exponential moving average over the teacher output. Sharpening is obtained by using a low value for the temperature $\tau_t$ in the teacher softmax normalisation. The output probability distributions $p_t$ and $p_s$ of the teacher and the student networks over $n$ patches and $K$ dimensions are obtained as follows:
49
+
50
+ $$p_s^{(i,j)} = \frac{\exp(z_s^{(i,j)}/\tau_s)}{\sum_{k=1}^K \exp(z_s^{(i,k)}/\tau_s)}$$
51
+ (2)
52
+
53
+ $$p_t^{(i,j)} = \frac{\exp(z_t^{(i,j)}/\tau_t)}{\sum_{k=1}^K \exp(z_t^{(i,k)}/\tau_t)}$$
54
+ (3)
55
+
56
+ where $z_s$ and $z_t$ are the class logits of the student and the teacher, $p_s^{(i,\cdot)}$ and $p_t^{(i,\cdot)}$ are the output probabilities of the student and the teacher corresponding to the $i$-th token, and $\tau_t$ and $\tau_s$ are the temperature parameters for the teacher and the student, respectively.
57
+
58
+ $$\mathcal{L}_{\text{classify}}(\mathbf{W}) = -\frac{1}{n \times K} \sum_{i=1}^{n} \sum_{j=1}^{K} p_t^{(i,j)} \log p_s^{(i,j)} \quad (4)$$
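+ A compact sketch of this patch-classification objective follows; the number of pseudo-classes $K$, the temperatures and the centring momentum are illustrative, and centring is applied to the teacher logits as described above:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def patch_classification_loss(z_s, z_t, center, tau_s=0.1, tau_t=0.04):
+     """z_s, z_t: (B, n, K) patch logits of the student and the teacher.
+     The teacher distribution is centred and sharpened (low tau_t); the student
+     is trained to match it, cf. Eqs. (2)-(4)."""
+     p_t = F.softmax((z_t - center) / tau_t, dim=-1).detach()
+     log_p_s = F.log_softmax(z_s / tau_s, dim=-1)
+     return -(p_t * log_p_s).mean()          # batch mean of Eq. (4)
+
+ @torch.no_grad()
+ def update_center(center, z_t, momentum=0.9):
+     """Exponential moving average of the teacher logits, used to avoid collapse."""
+     return momentum * center + (1 - momentum) * z_t.mean(dim=(0, 1))
+
+ K = 1024
+ center = torch.zeros(K)
+ z_s, z_t = torch.randn(2, 196, K), torch.randn(2, 196, K)
+ loss = patch_classification_loss(z_s, z_t, center)
+ center = update_center(center, z_t)
+ ```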
59
+
60
+ The reconstruction of the group-masked tokens and the learning of patch-level concepts with MC-SSL0.0 give us a mechanism to adaptively learn multiple concepts in each image. The hypothesis of MC-SSL0.0 is that a network will be able to recover semantically transformed data-tokens from the remaining semantic clues about the object and context only if it learns the "semantics" of the objects in the image. Once the transformer autoencoder is aware of a semantic concept, the role of the auxiliary patch concept learning task is to encourage a shared/common label for all semantically related data-tokens. This consolidated information from all the data-tokens about an object (distributed concept) in an image can be assigned a name which humans use in their daily life. As a data-token represents a small portion of the image/object and is likely to have less overall variability, we do not need to learn a large number of classes/concepts for patches, i.e., $K$ can be small. In contrast, a complex object needs to consolidate information from the multiple data-tokens which constitute the object. Hence, even with a small/limited number of local concepts for each data-token, the possible representation space for objects is huge. For example, if an object consists of only four data-tokens and the patch concept space has only 1024 classes, the number of possible configurations of local concepts is $1024^4$, i.e., more than a trillion. However, due to the local stationarity of natural visual signals, the number of combinations that actually occur will be far smaller.
61
+
62
+ The transformer autoencoder and the auxiliary task of patch concept learning[2](#page-4-0) in MC-SSL0.0 provide a superior way to use the information present in an image. More importantly, they give us a mechanism to model all the concepts/objects present in an image. Some self-learnt concepts in individual images are shown in Figure [1.](#page-0-1) Specifically, we obtain $\mathbf{y}_t := t_b(\mathbf{x}) \in \mathbb{R}^{n \times D}$, the output features corresponding to the data-tokens of the input image. The output features are then clustered, employing simple k-means [\[1\]](#page-8-17), into 2, 3, and 4 clusters, where each colour represents a different cluster. Note that the model is trained without any sort of supervision, yet MC-SSL0.0 demonstrates the ability to differentiate between concepts. As shown in Section [4,](#page-5-0) this superior way of utilising the information enables us to obtain remarkable results with extremely limited resources, using only small models and 10% of ImageNet.
63
+
64
+ Some of the desirable properties of MC-SSL0.0 include:
65
+
66
+ 1. Training Transformers on tiny datasets: Transformers trained by supervised losses can attend to all the
67
+
68
+ <span id="page-4-0"></span><sup>2</sup>We note that, in parallel with us, iBot [\[58\]](#page-10-4) used the idea of patch concept learning. However, iBot focuses on learning a dominant concept via the classification token with the DINO loss and hence models the dominant class, which we believe is a limitation of existing SSL methods.
69
+
70
+ <span id="page-5-2"></span>data-tokens coming from an image, empowering them to model global information. However, the side effect is that they need much more data to model local context (the stationarity of the visual signal in a local neighbourhood). MC-SSL0.0 overcomes this limitation by masking groups of semantically related data-tokens. This masking encourages the network to learn from the semantic information present in the locality of the group of masked tokens in order to recover the missing information, hence modelling local contextual information. Therefore, using MC-SSL0.0-based pretraining, it is possible to train transformers on small datasets with high accuracy. In fact, we have validated this by training a vision transformer from scratch to high accuracy using MC-SSL0.0 as the pretext task.
71
+
72
+ - 2. The MC-SSL0.0 framework is aware of the notion of concepts in the sense that the heavily masked information can be reconstructed, with respect to shape and texture, from the information available in the visible data-tokens on the same concept/object and on surrounding concepts. This is also evident from the ability of MC-SSL0.0 to self-cluster semantically related data-tokens from an object in the image without using any labels for training, as demonstrated in Figure [1.](#page-0-1)
73
+ - 3. MC-SSL0.0-based pretraining outperforms supervised pretraining on multiple downstream multi-label and multi-class datasets, given the same amount of pretraining data. Moreover, MC-SSL0.0 pretraining also outperforms state-of-the-art SSL pretraining. This is due to the fact that MC-SSL0.0 makes better use of the rich semantic information present in all the data-tokens relating to each concept/class/object in an image.
74
+ - 4. A big batch size is a standard requirement for many contrastive-learning-based SSL methods, making them unusable with modest GPU resources. MC-SSL0.0 does not suffer from this problem and outperforms the state of the art for small batch sizes as well. This strength comes from the nature of the pretext task, which does not involve negative samples.
75
+ - 5. The proposed framework is generic and applicable to other AI application domains, e.g., sound and medical image analysis. We leave this for future study.
2112.08025/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2112.08025/paper_text/intro_method.md ADDED
@@ -0,0 +1,142 @@
 
 
1
+ # Introduction
2
+
3
+ Knowledge graphs (KGs) structure factual information in the form of triples $(e_s, r ,e_o)$, where $e_s$ and $e_o$ correspond to entities in the real world and $r$ to a binary relation, e. g., *(Anna, born in, Paris)*. This knowledge representation leads to an interpretation as a directed multigraph, where entities are identified with nodes and relations with edge types. Each edge $(e_s, r ,e_o)$ in the KG encodes an observed fact, where the source node $e_s$ corresponds to the subject entity, the target node $e_o$ to the object entity, and the edge type $r$ to the predicate of the factual statement.
4
+
5
+ Some real-world information also includes a temporal dimension, e. g., the event *(Anna, born in, Paris)* happened on a specific date. To model the large amount of available event data that induce complex interactions between entities over time, temporal knowledge graphs (tKGs) have been introduced. Temporal KGs extend the triples to quadruples $(e_s, r, e_o, t)$ to integrate a timestamp or time range $t$, where $t$ indicates the time validity of the static event $(e_s, r, e_o)$, e. g., *(Angela Merkel, visit, China, 2014/07/04)*. Figure [1](#fig:tkg_example){reference-type="ref" reference="fig:tkg_example"} visualizes a subgraph from the dataset ICEWS14 as an example of a tKG. In this work, we focus on tKGs where each edge is equipped with a single timestamp.
6
+
7
+ <figure id="fig:tkg_example" data-latex-placement="t">
8
+ <img src="tkg_example.png" />
9
+ <figcaption>A subgraph from the dataset ICEWS14 with the entities <em>Angela Merkel, Barack Obama, France</em>, and <em>China</em>. The timestamps are displayed in the format yy/mm/dd. The dotted blue line represents the correct answer to the query <em>(Angela Merkel, consult, ?, 2014/08/09)</em>. Previous interactions between <em>Angela Merkel</em> and <em>Barack Obama</em> can be interpreted as an explanation for the prediction.</figcaption>
10
+ </figure>
11
+
12
+ One of the common tasks on KGs is link prediction, which finds application in areas such as recommender systems [@nectr.hildebrandt.2019], knowledge base completion [@convkb.nguyen.2018], and drug repurposing [@polo.liu.2021]. Taking the additional temporal dimension into account, it is of special interest to forecast events for future timestamps based on past information. Notable real-world applications that rely on accurate event forecasting are, e. g., clinical decision support, supply chain management, and extreme events modeling. In this work, we address link forecasting on tKGs, where we consider queries $(e_s, r, ?, t)$ for a timestamp $t$ that has not been seen during training.
13
+
14
+ Several embedding-based methods have been introduced for tKGs to solve link prediction and forecasting (link prediction with future timestamps), e.g., TTransE [@ttranse.leblay.2018], TNTComplEx [@tntcomplex.lacroix.2020], and RE-Net [@renet.jin.2019]. The underlying principle is to project the entities and relations into a low-dimensional vector space while preserving the topology and temporal dynamics of the tKG. These methods can learn the complex patterns that lead to an event but often lack transparency and interpretability.
15
+
16
+ To increase the transparency and trustworthiness of the solutions, human-understandable explanations are necessary, which can be provided by logical rules. However, the manual creation of rules is often difficult due to the complex nature of events. Domain experts cannot articulate the conditions for the occurrence of an event sufficiently formally to express this knowledge as rules, which leads to a problem termed the knowledge acquisition bottleneck. Generally, symbolic methods that make use of logical rules tend to suffer from scalability issues, which makes them impractical to apply to large real-world datasets.
17
+
18
+ We propose TLogic, which automatically mines cyclic temporal logical rules by extracting temporal random walks from the graph. We achieve both high predictive performance and time-consistent explanations in the form of temporal rules, which conform to the observation that the occurrence of an event is usually triggered by previous events. The main contributions of this work are summarized as follows:
19
+
20
+ - We introduce TLogic, a novel symbolic framework based on temporal random walks in temporal knowledge graphs. It is the first approach that directly learns temporal logical rules from tKGs and applies these rules to the link forecasting task.
21
+
22
+ - Our approach provides explicit and human-readable explanations in the form of temporal logical rules and is scalable to large datasets.
23
+
24
+ - We conduct experiments on three benchmark datasets (ICEWS14, ICEWS18, and ICEWS0515) and show better overall performance compared with state-of-the-art baselines.
25
+
26
+ - We demonstrate the effectiveness of our method in the inductive setting where our learned rules are transferred to a related dataset with a common vocabulary.
27
+
28
+ # Method
29
+
30
+ Let $[n] := \{1, 2, \dots, n\}$.
31
+
32
+ Let $\mathcal{E}$ denote the set of entities, $\mathcal{R}$ the set of relations, and $\mathcal{T}$ the set of timestamps.
33
+
34
+ A *temporal knowledge graph* (tKG) is a collection of facts $\mathcal{G} \subset \mathcal{E} \times \mathcal{R} \times \mathcal{E} \times \mathcal{T}$, where each fact is represented by a quadruple $(e_s, r, e_o, t)$. The quadruple $(e_s, r, e_o, t)$ is also called link or edge, and it indicates a connection between the subject entity $e_s \in \mathcal{E}$ and the object entity $e_o \in \mathcal{E}$ via the relation $r \in \mathcal{R}$. The timestamp $t \in \mathcal{T}$ implies the occurrence of the event $(e_s, r, e_o)$ at time $t$, where $t$ can be measured in units such as hour, day, and year.
35
+
36
+ For two timestamps $t$ and $\hat{t}$, we denote the fact that $t$ occurs earlier than $\hat{t}$ by $t < \hat{t}$. If additionally, $t$ could represent the same time as $\hat{t}$, we write $t \leq \hat{t}$.
37
+
38
+ We define for each edge $(e_s, r, e_o, t)$ an inverse edge $(e_o, r^{-1}, e_s, t)$ that interchanges the positions of the subject and object entity to allow the random walker to move along the edge in both directions. The relation $r^{-1} \in \mathcal{R}$ is called the inverse relation of $r$.
39
+
40
+ The goal of the *link forecasting* task is to predict new links for future timestamps. Given a query with a previously unseen timestamp $(e_s, r, ?, t)$, we want to identify a ranked list of object candidates that are most likely to complete the query. For subject prediction, we formulate the query as $(e_o, r^{-1}, ?, t)$.
41
+
42
+ A *non-increasing temporal random walk* $W$ of length $l \in \mathbb{N}$ from entity $e_{l+1} \in \mathcal{E}$ to entity $e_1 \in \mathcal{E}$ in the tKG $\mathcal{G}$ is defined as a sequence of edges $$\begin{align}
43
+ \begin{split}
44
+ ((e_{l+1}, r_l, e_l, t_l)&, (e_l, r_{l-1}, e_{l-1}, t_{l-1}), \dots, (e_2, r_1, e_1, t_1))\\
45
+ & \text{with}\;t_l \geq t_{l-1} \geq \dots \geq t_1,
46
+ \label{eq:temporal_random_walk}
47
+ \end{split}
48
+ \end{align}$$ where $(e_{i+1}, r_i, e_i, t_i) \in \mathcal{G}$ for $i \in [l]$.
49
+
50
+ A non-increasing temporal random walk complies with time constraints so that the edges are traversed only backward in time, where it is also possible to walk along edges with the same timestamp.
51
+
52
+ Let $E_i$ and $T_i$ for $i \in [l+1]$ be variables that represent entities and timestamps, respectively. Further, let $r_1, r_2, \dots, r_l, r_h \in \mathcal{R}$ be fixed.
53
+
54
+ A *cyclic temporal logical rule* $R$ of length $l \in \mathbb{N}$ is defined as $$\begin{equation*}
55
+ ((E_1, r_h, E_{l+1}, T_{l+1}) \leftarrow \wedge_{i=1}^l (E_i, r_i, E_{i+1}, T_i))
56
+ \label{eq:logical_rule}
57
+ \end{equation*}$$ with the temporal constraints $$\begin{equation}
58
+ T_1 \leq T_2 \leq \dots \leq T_l < T_{l+1}.
59
+ \label{eq:rule_time_constraints}
60
+ \end{equation}$$ The left-hand side of $R$ is called the rule head, with $r_h$ being the head relation, while the right-hand side is called the rule body, which is represented by a conjunction of body atoms $(E_i, r_i, E_{i+1}, T_i)$. The rule is called cyclic because the rule head and the rule body constitute two different walks connecting the same two variables $E_1$ and $E_{l+1}$. A temporal rule implies that if the rule body holds with the temporal constraints given by [\[eq:rule_time_constraints\]](#eq:rule_time_constraints){reference-type="eqref" reference="eq:rule_time_constraints"}, then the rule head is true as well for a future timestamp $T_{l+1}$.
61
+
62
+ The replacement of the variables $E_i$ and $T_i$ by constant terms is called grounding or instantiation. For example, a grounding of the temporal rule $$((E_1, \textit{consult}, E_2, T_2) \leftarrow (E_1, \textit{discuss by telephone}, E_2, T_1))$$ is given by the edges *(Angela Merkel, discuss by telephone, Barack Obama, 2014/07/22)* and *(Angela Merkel, consult, Barack Obama, 2014/08/09)* in Figure [1](#fig:tkg_example){reference-type="ref" reference="fig:tkg_example"}. Let rule grounding refer to the replacement of the variables in the entire rule and body grounding refer to the replacement of the variables only in the body, where all groundings must comply with the temporal constraints in [\[eq:rule_time_constraints\]](#eq:rule_time_constraints){reference-type="eqref" reference="eq:rule_time_constraints"}.
63
+
64
+ In many domains, logical rules are frequently violated so that confidence values are determined to estimate the probability of a rule's correctness. We adapt the standard confidence to take timestamp values into account. Let $(r_1, r_2, \dots, r_l, r_h)$ be the relations in a rule $R$. The body support is defined as the number of body groundings, i. e., the number of tuples $(e_1, \dots, e_{l+1}, t_1, \dots, t_l)$ such that $(e_i, r_i, e_{i+1}, t_i) \in \mathcal{G}$ for $i \in [l]$ and $t_i \leq t_{i+1}$ for $i \in [l-1]$. The rule support is defined as the number of body groundings such that there exists a timestamp $t_{l+1} > t_l$ with $(e_1, r_h, e_{l+1}, t_{l+1}) \in \mathcal{G}$. The confidence of the rule $R$, denoted by conf($R$), can then be obtained by dividing the rule support by the body support.
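+ Concretely, once body groundings have been enumerated (by whatever search procedure), the confidence is a simple ratio of counts; a purely illustrative Python helper:
+
+ ```python
+ def confidence(body_groundings, graph, head_relation):
+     """conf(R) = rule support / body support.
+
+     body_groundings: iterable of tuples (e_1, ..., e_{l+1}, t_1, ..., t_l)
+     graph:           set of quadruples (e_s, r, e_o, t)
+     """
+     body_support = 0
+     rule_support = 0
+     for g in body_groundings:
+         length = len(g) // 2                      # walk length l
+         e_1, e_last, t_last = g[0], g[length], g[-1]
+         body_support += 1
+         # Rule support: a head edge (e_1, r_h, e_{l+1}, t_{l+1}) exists with t_{l+1} > t_l.
+         if any(r == head_relation and es == e_1 and eo == e_last and t > t_last
+                for (es, r, eo, t) in graph):
+             rule_support += 1
+     return rule_support / body_support if body_support else 0.0
+ ```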
65
+
66
+ We introduce TLogic, a rule-based link forecasting framework for tKGs. TLogic first extracts temporal walks from the graph and then lifts these walks to a more abstract, semantic level to obtain temporal rules that generalize to new data. The application of these rules generates answer candidates, for which the body groundings in the graph serve as explicit and human-readable explanations. Our framework consists of the components rule learning and rule application. The pseudocode for rule learning is shown in Algorithm [\[alg:rule_learning\]](#alg:rule_learning){reference-type="ref" reference="alg:rule_learning"} and for rule application in Algorithm [\[alg:rule_application\]](#alg:rule_application){reference-type="ref" reference="alg:rule_application"}.
67
+
68
+ :::: algorithm
69
+ **Input**: Temporal knowledge graph $\mathcal{G}$.\
70
+ **Parameters**: Rule lengths $\mathcal{L} \subset \mathbb{N}$, number of temporal random walks $n \in \mathbb{N}$, transition distribution $d \in \{\mathrm{unif}, \exp\}$.\
71
+ **Output**: Temporal logical rules $\mathcal{TR}$.
72
+
73
+ ::: algorithmic
74
+ **return** $\mathcal{TR}$
75
+ :::
76
+ ::::
77
+
78
+ As the first step of rule learning, temporal walks are extracted from the tKG $\mathcal{G}$. For a rule of length $l$, a walk of length $l+1$ is sampled, where the additional step corresponds to the rule head.
79
+
80
+ Let $r_h$ be a fixed relation for which we want to learn rules. For the first sampling step $m=1$, we sample an edge $(e_1, r_h, e_{l+1}, t_{l+1})$, which will serve as the rule head, uniformly from all edges with relation type $r_h$. A temporal random walker then iteratively samples edges adjacent to the current object until a walk of length $l+1$ is obtained.
81
+
82
+ For sampling step $m \in \{2, \dots, l+1\}$, let $(e_s, \tilde{r}, e_o, t)$ denote the previously sampled edge and $\mathcal{A}(m,e_o,t)$ the set of feasible edges for the next transition. To fulfill the temporal constraints in [\[eq:temporal_random_walk\]](#eq:temporal_random_walk){reference-type="eqref" reference="eq:temporal_random_walk"} and [\[eq:rule_time_constraints\]](#eq:rule_time_constraints){reference-type="eqref" reference="eq:rule_time_constraints"}, we define $$\begin{align*}
83
+ &\mathcal{A}(m, e_o, t) := \\
84
+ &\begin{cases}
85
+ \{(e_o, r, e, \hat{t}) \mid (e_o, r, e, \hat{t}) \in \mathcal{G},\; \hat{t} < t\} &\;\text{if}\; m = 2,\\
86
+ \{(e_o, r, e, \hat{t}) \mid (e_o, r, e, \hat{t}) \in \tilde{\mathcal{G}},\; \hat{t} \leq t\} &\;\text{if}\; m \in \{3, \dots, l\},\\
87
+ \{(e_o, r, e_1, \hat{t}) \mid (e_o, r, e_1, \hat{t}) \in \tilde{\mathcal{G}},\; \hat{t} \leq t\} &\;\text{if}\; m = l+1,
88
+ \end{cases}
89
+ \end{align*}$$ where $\tilde{\mathcal{G}} := \mathcal{G}\setminus \{(e_o, \tilde{r}^{-1}, e_s, t)\}$ excludes the inverse edge to avoid redundant rules. For obtaining cyclic walks, we sample in the last step $m = l+1$ an edge that connects the walk to the first entity $e_1$ if such edges exist. Otherwise, we sample the next walk.
90
+
91
+ The transition distribution for sampling the next edge can either be uniform or exponentially weighted. We define an index mapping $\hat{m} := (l+1) - (m-2)$ to be consistent with the indices in [\[eq:temporal_random_walk\]](#eq:temporal_random_walk){reference-type="eqref" reference="eq:temporal_random_walk"}. Then, the exponentially weighted probability for choosing edge $u \in \mathcal{A}\left(m,e_{\hat{m}}, t_{\hat{m}}\right)$ for $m \in \{2, \dots, l+1\}$ is given by $$\begin{equation}
92
+ \mathbb{P}(u; m, e_{\hat{m}}, t_{\hat{m}}) = \frac{\exp(t_u - t_{\hat{m}})}{\sum\limits_{\hat{u} \in \mathcal{A}\left(m, e_{\hat{m}}, t_{\hat{m}}\right)}\exp(t_{\hat{u}} - t_{\hat{m}})}
93
+ \label{eq:exp_distribution}
94
+ \end{equation}$$ where $t_u$ denotes the timestamp of edge $u$. The exponential weighting favors edges with timestamps that are closer to the timestamp of the previous edge and probably more relevant for prediction.
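+ A small sketch of this exponentially weighted transition, assuming timestamps are encoded as numbers (e.g., days); the edges below are placeholders:
+
+ ```python
+ import math
+ import random
+
+ def sample_next_edge(candidates, t_prev):
+     """candidates: list of edges (e_s, r, e_o, t) in A(m, e_o, t_prev), all with t <= t_prev.
+     Samples an edge with probability proportional to exp(t - t_prev), so edges
+     closer in time to the previous edge are preferred."""
+     weights = [math.exp(t - t_prev) for (_, _, _, t) in candidates]
+     total = sum(weights)
+     return random.choices(candidates, weights=[w / total for w in weights], k=1)[0]
+
+ edges = [("a", "r1", "b", 3.0), ("a", "r2", "c", 7.0), ("a", "r1", "d", 9.0)]
+ nxt = sample_next_edge(edges, t_prev=10.0)
+ ```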
95
+
96
+ The resulting temporal walk $W$ is given by $$\begin{equation}
97
+ ((e_1, r_h, e_{l+1}, t_{l+1}), (e_{l+1}, r_l, e_l, t_l), \dots, (e_2, r_1, e_1, t_1)).
98
+ \label{eq:temporal_walk}
99
+ \end{equation}$$ $W$ can then be transformed to a temporal rule $R$ by replacing the entities and timestamps with variables. While the first edge in $W$ becomes the rule head $(E_1, r_h, E_{l+1}, T_{l+1})$, the other edges are mapped to body atoms, where each edge $(e_{i+1}, r_i, e_i, t_i)$ is converted to the body atom $(E_i, r_i^{-1}, E_{i+1}, T_i)$. The final rule $R$ is denoted by $$\begin{equation}
100
+ ((E_1, r_h, E_{l+1}, T_{l+1}) \leftarrow \wedge_{i=1}^l (E_i, r_i^{-1}, E_{i+1}, T_i)).
101
+ \label{eq:temporal_rule}
102
+ \end{equation}$$ In addition, we impose the temporal consistency constraints $T_1 \leq T_2 \leq \dots \leq T_l < T_{l+1}$.
103
+
104
+ The entities $(e_1, \dots, e_{l+1})$ in $W$ do not need to be distinct since a pair of entities can have many interactions at different points in time. For example, Angela Merkel made several visits to China in 2014, which could constitute important information for the prediction. Repetitive occurrences of the same entity in $W$ are replaced with the same random variable in $R$ to maintain this knowledge.
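+ The lifting of a concrete walk to an abstract rule can be sketched as follows; variable naming and the string-based relation inversion are illustrative simplifications, and the temporal constraints are kept implicit:
+
+ ```python
+ def invert(r):
+     """Invert a relation name, assuming inverse relations carry a '^-1' suffix."""
+     return r[:-3] if r.endswith("^-1") else r + "^-1"
+
+ def walk_to_rule(walk):
+     """walk[0] is the head edge (e_1, r_h, e_{l+1}, t_{l+1}); walk[1:] are the body
+     edges traversed backward in time.  Repeated entities map to the same variable."""
+     variables = {}
+     def var(entity):
+         return variables.setdefault(entity, "E%d" % (len(variables) + 1))
+
+     e_1, r_h, e_last, _ = walk[0]
+     head = (var(e_1), r_h, var(e_last))
+     body = []
+     # Reverse the body edges and invert their relations so that the atoms read
+     # (E_i, r_i^{-1}, E_{i+1}) in increasing temporal order.
+     for (e_src, r, e_dst, _) in reversed(walk[1:]):
+         body.append((var(e_dst), invert(r), var(e_src)))
+     return head, body
+
+ head, body = walk_to_rule([
+     ("Merkel", "consult", "Obama", "2014-08-09"),
+     ("Obama", "discuss_by_telephone^-1", "Merkel", "2014-07-22"),
+ ])
+ # head == ('E1', 'consult', 'E2'); body == [('E1', 'discuss_by_telephone', 'E2')]
+ ```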
105
+
106
+ For the confidence estimation of $R$, we sample from the graph a fixed number of body groundings, which have to match the body relations and the variable constraints mentioned in the last paragraph while satisfying the condition from [\[eq:rule_time_constraints\]](#eq:rule_time_constraints){reference-type="eqref" reference="eq:rule_time_constraints"}. The number of unique bodies serves as the body support. The rule support is determined by counting the number of bodies for which an edge with relation type $r_h$ exists that connects $e_1$ and $e_{l+1}$ from the body. Moreover, the timestamp of this edge has to be greater than all body timestamps to fulfill [\[eq:rule_time_constraints\]](#eq:rule_time_constraints){reference-type="eqref" reference="eq:rule_time_constraints"}.
107
+
108
+ For every relation $r \in \mathcal{R}$, we sample $n \in \mathbb{N}$ temporal walks for a set of prespecified lengths $\mathcal{L} \subset \mathbb{N}$. The set $\mathcal{TR}_r^l$ stands for all rules of length $l$ with head relation $r$ with their corresponding confidences. All rules for relation $r$ are included in $\mathcal{TR}_r := \cup_{l \in \mathcal{L}} \mathcal{TR}_r^l$, and the complete set of learned temporal rules is given by $\mathcal{TR} := \cup_{r \in \mathcal{R}} \mathcal{TR}_r$.
109
+
110
+ It is possible to learn rules only for a single relation or a set of specific relations of interest. Explicitly learning rules for all relations is especially effective for rare relations that would otherwise only be sampled with a small probability. The learned rules are not specific to the graph from which they have been extracted, but they could be employed in an inductive setting where the rules are transferred to related datasets that share a common vocabulary for straightforward application.
111
+
112
+ :::: algorithm
113
+ **Input**: Test query $q = (e^q, r^q, ?, t^q)$, temporal logical rules $\mathcal{TR}$, temporal knowledge graph $\mathcal{G}$.\
114
+ **Parameters**: Time window $w \in \mathbb{N} \cup \{\infty\}$, minimum number of candidates $k$, score function $f$.\
115
+ **Output**: Answer candidates $\mathcal{C}$.
116
+
117
+ ::: algorithmic
118
+ **return** $\mathcal{C}$
119
+ :::
120
+ ::::
121
+
122
+ The learned temporal rules $\mathcal{TR}$ are applied to answer queries of the form $q = (e^q, r^q, ?, t^q)$. The answer candidates are retrieved from the target entities of body groundings in the tKG $\mathcal{G}$. If there exist no rules $\mathcal{TR}_{r^q}$ for the query relation $r^q$, or if there are no matching body groundings in the graph, then no answers are predicted for the given query.
123
+
124
+ To apply the rules on relevant data, a subgraph $\mathcal{SG} \subset \mathcal{G}$ dependent on a time window $w \in \mathbb{N} \cup \{\infty\}$ is retrieved. For $w \in \mathbb{N}$, the subgraph $\mathcal{SG}$ contains all edges from $\mathcal{G}$ that have timestamps $t \in [t^q-w, t^q)$. If $w = \infty$, then all edges with timestamps prior to the query timestamp $t^q$ are used for rule application, i. e., $\mathcal{SG}$ consists of all facts with $t \in [t_{\mathrm{min}}, t^q)$, where $t_{\mathrm{min}}$ is the minimum timestamp in the graph $\mathcal{G}$.
125
+
126
+ We apply the rules $\mathcal{TR}_{r^q}$ by decreasing confidence, where each rule $R$ generates a set of answer candidates $\mathcal{C}(R)$. Each candidate $c \in \mathcal{C}(R)$ is then scored by a function $f: \mathcal{TR}_{r^q} \times \mathcal{E} \rightarrow [0,1]$ that reflects the probability of the candidate being the correct answer to the query.
127
+
128
+ Let $\mathcal{B}(R,c)$ be the set of body groundings of rule $R$ that start at entity $e^q$ and end at entity $c$. We choose as score function $f$ a convex combination of the rule's confidence and a function that takes the time difference $t^q - t_1(\mathcal{B}(R,c))$ as input, where $t_1(\mathcal{B}(R,c))$ denotes the earliest timestamp $t_1$ in the body. If several body groundings exist, we take from all possible $t_1$ values the one that is closest to $t^q$. For candidate $c \in \mathcal{C}(R)$, the score function is defined as $$\begin{equation}
129
+ f(R,c) = a \cdot \mathrm{conf}(R) + (1-a) \cdot \exp(-\lambda (t^q - t_1(\mathcal{B}(R,c))))
130
+ \label{eq:score_function}
131
+ \end{equation}$$ with $\lambda > 0$ and $a \in [0,1]$.
132
+
133
+ The intuition for this choice of $f$ is that candidates generated by high-confidence rules should receive a higher score. Adding a dependency on the timeframe of the rule grounding is based on the observation that the existence of edges in a rule becomes increasingly probable as the time difference between the edges decreases. We choose the exponential distribution since it is commonly used to model interarrival times of events. The time difference $t^q - t_1(\mathcal{B}(R,c))$ is always non-negative for a future timestamp value $t^q$, and with the assumption that there exists a fixed mean, the exponential distribution is also the maximum entropy distribution for such a time difference variable. The exponential distribution is rescaled so that both summands are in the range $[0,1]$.
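+ For a single candidate and rule, the score can be computed as follows (the values of $a$ and $\lambda$ below are placeholders; in practice they are hyperparameters):
+
+ ```python
+ import math
+
+ def candidate_score(conf_r, t_query, t_1, a=0.5, lam=0.1):
+     """f(R, c) = a * conf(R) + (1 - a) * exp(-lam * (t_query - t_1)),
+     with t_1 the earliest body timestamp of the grounding closest to t_query."""
+     return a * conf_r + (1 - a) * math.exp(-lam * (t_query - t_1))
+
+ score = candidate_score(conf_r=0.8, t_query=365, t_1=340)
+ ```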
134
+
135
+ All candidates are saved with their scores as $(c, f(R,c))$ in $\mathcal{C}$. We stop the rule application when the number of different answer candidates $|\{c \mid \exists R: (c, f(R,c)) \in \mathcal{C}\}|$ is at least $k$ so that there is no need to go through all rules.
136
+
137
+ For the ranking of the answer candidates, all scores of each candidate $c$ are aggregated through a noisy-OR calculation, which produces the final score $$\begin{equation}
138
+ 1 - \Pi_{\{s \mid (c, s) \in \mathcal{C}\}} (1 - s).
139
+ \label{eq:score_aggregation}
140
+ \end{equation}$$ The idea is to aggregate the scores to produce a probability, where candidates implied by more rules should have a higher score.
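+ The aggregation itself is a short loop over the scores collected for a candidate (scores are assumed to lie in $[0,1]$):
+
+ ```python
+ def noisy_or(scores):
+     """1 - prod(1 - s): candidates supported by more, and stronger, rules score higher."""
+     prod = 1.0
+     for s in scores:
+         prod *= 1.0 - s
+     return 1.0 - prod
+
+ final_score = noisy_or([0.7, 0.4, 0.2])   # 1 - 0.3 * 0.6 * 0.8 = 0.856
+ ```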
141
+
142
+ In case there are no rules for the query relation $r^q$, or if there are no matching body groundings in the graph, it might still be interesting to retrieve possible answer candidates. In the experiments, we apply a simple baseline where the scores for the candidates are obtained from the overall object distribution in the training data if $r^q$ is a new relation. If $r^q$ already exists in the training set, we take the object distribution of the edges with relation type $r^q$.
2201.08214/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="b6379316-3376-4eb6-825c-9c6dae71df5b" modified="2020-09-30T02:47:11.791Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Code/1.49.1 Chrome/83.0.4103.122 Electron/9.2.1 Safari/537.36" version="13.1.3" etag="1pryFbFkEseZkvE9Rhc9"><diagram id="PRyXCxjBIDa6B1xub-gX" name="Page-1"><mxGraphModel dx="560" dy="907" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="500" pageHeight="250" math="0" shadow="0"><root><mxCell id="0"/><mxCell id="1" parent="0"/><mxCell id="3" value="0.9" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="10" y="40" width="40" height="40" as="geometry"/></mxCell><mxCell id="4" value="2.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="10" y="80" width="40" height="40" as="geometry"/></mxCell><mxCell id="5" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;strokeColor=#000000;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="10" y="120" width="40" height="40" as="geometry"/></mxCell><mxCell id="6" value="-1.5" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="10" y="160" width="40" height="40" as="geometry"/></mxCell><mxCell id="7" value="1.2" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="50" y="160" width="40" height="40" as="geometry"/></mxCell><mxCell id="8" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;strokeColor=#000000;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="90" y="160" width="40" height="40" as="geometry"/></mxCell><mxCell id="10" value="0.7" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="130" y="160" width="40" height="40" as="geometry"/></mxCell><mxCell id="11" value="1.4" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="130" y="120" width="40" height="40" as="geometry"/></mxCell><mxCell id="12" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;strokeColor=#000000;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="130" y="80" width="40" height="40" as="geometry"/></mxCell><mxCell id="13" value="-0.9" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="130" y="40" width="40" height="40" as="geometry"/></mxCell><mxCell id="14" value="0.9" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="90" y="40" width="40" height="40" as="geometry"/></mxCell><mxCell id="15" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;labelBackgroundColor=none;fillColor=#ABABAB;strokeColor=#000000;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="50" y="40" width="40" 
height="40" as="geometry"/></mxCell><mxCell id="16" value="0.3" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="50" y="80" width="40" height="40" as="geometry"/></mxCell><mxCell id="17" value="0.8" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="90" y="80" width="40" height="40" as="geometry"/></mxCell><mxCell id="18" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;strokeColor=#000000;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="50" y="120" width="40" height="40" as="geometry"/></mxCell><mxCell id="19" value="1.2" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="90" y="120" width="40" height="40" as="geometry"/></mxCell><mxCell id="20" value="0.3" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="10" y="200" width="40" height="40" as="geometry"/></mxCell><mxCell id="21" value="-1.4" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="50" y="200" width="40" height="40" as="geometry"/></mxCell><mxCell id="22" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;strokeColor=#000000;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="90" y="200" width="40" height="40" as="geometry"/></mxCell><mxCell id="23" value="-0.2" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="130" y="200" width="40" height="40" as="geometry"/></mxCell><mxCell id="24" value="-0.2" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="170" y="160" width="40" height="40" as="geometry"/></mxCell><mxCell id="25" value="-0.8" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="170" y="120" width="40" height="40" as="geometry"/></mxCell><mxCell id="26" value="-1.6" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="170" y="80" width="40" height="40" as="geometry"/></mxCell><mxCell id="27" value="0.2" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="170" y="40" width="40" height="40" as="geometry"/></mxCell><mxCell id="28" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;strokeColor=#000000;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="170" y="200" width="40" height="40" as="geometry"/></mxCell><mxCell id="54" value="0.9" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="290" y="40" width="40" height="40" as="geometry"/></mxCell><mxCell id="55" 
value="2.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="290" y="80" width="40" height="40" as="geometry"/></mxCell><mxCell id="56" value="1.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="290" y="120" width="40" height="40" as="geometry"/></mxCell><mxCell id="57" value="-1.5" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="290" y="160" width="40" height="40" as="geometry"/></mxCell><mxCell id="58" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="330" y="160" width="40" height="40" as="geometry"/></mxCell><mxCell id="59" value="0.3" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="370" y="160" width="40" height="40" as="geometry"/></mxCell><mxCell id="60" value="0.7" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="410" y="160" width="40" height="40" as="geometry"/></mxCell><mxCell id="61" value="1.4" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="410" y="120" width="40" height="40" as="geometry"/></mxCell><mxCell id="62" value="0.1" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="410" y="80" width="40" height="40" as="geometry"/></mxCell><mxCell id="63" value="-0.9" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="410" y="40" width="40" height="40" as="geometry"/></mxCell><mxCell id="64" value="0.9" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="370" y="40" width="40" height="40" as="geometry"/></mxCell><mxCell id="65" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="330" y="40" width="40" height="40" as="geometry"/></mxCell><mxCell id="66" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="330" y="80" width="40" height="40" as="geometry"/></mxCell><mxCell id="67" value="0.8" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="370" y="80" width="40" height="40" as="geometry"/></mxCell><mxCell id="68" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="330" y="120" width="40" height="40" as="geometry"/></mxCell><mxCell id="69" value="1.2" 
style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="370" y="120" width="40" height="40" as="geometry"/></mxCell><mxCell id="70" value="0.3" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="290" y="200" width="40" height="40" as="geometry"/></mxCell><mxCell id="71" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;fontColor=#000000;gradientColor=none;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="330" y="200" width="40" height="40" as="geometry"/></mxCell><mxCell id="72" value="-1.3" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="370" y="200" width="40" height="40" as="geometry"/></mxCell><mxCell id="73" value="-0.2" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=none;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="410" y="200" width="40" height="40" as="geometry"/></mxCell><mxCell id="74" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="450" y="160" width="40" height="40" as="geometry"/></mxCell><mxCell id="75" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="450" y="120" width="40" height="40" as="geometry"/></mxCell><mxCell id="76" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="450" y="80" width="40" height="40" as="geometry"/></mxCell><mxCell id="77" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="450" y="40" width="40" height="40" as="geometry"/></mxCell><mxCell id="78" value="0.0" style="whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Times New Roman;fillColor=#ABABAB;fontColor=#000000;strokeColor=#000000;" vertex="1" parent="1"><mxGeometry x="450" y="200" width="40" height="40" as="geometry"/></mxCell><mxCell id="79" value="&lt;font style=&quot;font-size: 16px&quot;&gt;TRAINING&lt;/font&gt;" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontFamily=Times New Roman;fontSize=20;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="40" width="140" height="30" as="geometry"/></mxCell><mxCell id="84" value="&lt;font style=&quot;font-size: 16px&quot;&gt;EVALUATION&lt;/font&gt;" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontFamily=Times New Roman;fontSize=20;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="320" width="140" height="30" as="geometry"/></mxCell></root></mxGraphModel></diagram></mxfile>
2201.08214/main_diagram/main_diagram.pdf ADDED
Binary file (9.37 kB). View file
 
2201.08214/paper_text/intro_method.md ADDED
@@ -0,0 +1,132 @@
 
 
1
+ # Introduction
2
+
3
+ There have been considerable improvements to the quality of pre-trained contextualized representations in recent years [e.g., @petersDeepContextualizedWord2018; @devlinBERTPretrainingDeep2019; @t5]. These advances have sparked an interest in understanding what linguistic information may be lurking within the representations themselves [@poliakCollectingDiverseNatural2018; @zhang-bowman-2018-language; @rogers-etal-2020-primer *inter alia*]. One philosophy that has been proposed to extract this information is called probing, the task of training an external classifier to predict the linguistic property of interest directly from the representations. The hope of probing is that it sheds light onto how much linguistic knowledge is present in representations and, perhaps, how that information is structured. Probing has grown to be a fruitful area of research, with researchers probing for morphological [@tang-etal-2020-understanding-pure; @acs-etal-2021-subword], syntactic [@voita-titov-2020-information; @hall-maudslay-etal-2020-tale; @acs-etal-2021-subword], and semantic [@vulic-etal-2020-probing; @tang-etal-2020-understanding-pure] information.
4
+
5
+ <figure id="fig:heatmap_gender" data-latex-placement="t">
6
+ <embed src="images/04_bert_Number_top30.pdf" />
7
+ <figcaption>The percentage overlap between the top-30 most informative number dimensions in m-BERT for the probed languages. Statistically significant overlap, after Holm–Bonferroni family-wise error correction <span class="citation" data-cites="holmSimpleSequentiallyRejective1979"></span>, with <span class="math inline"><em>α</em> = 0.05</span>, is marked with an orange square. </figcaption>
8
+ </figure>
9
+
10
+ In this paper, we focus on one type of probing known as intrinsic probing [@dalviWhatOneGrain2019; @intrinsic], a subset of which specifically aims to ascertain how information is structured within a representation. This means that we are not solely interested in determining whether a network encodes the tense of a verb, but also in pinpointing exactly *which* neurons in the network are responsible for encoding the property. Unfortunately, the naïve formulation of intrinsic probing requires one to analyze all possible combinations of neurons, which is intractable even for the smallest representations used in modern-day NLP. For example, analyzing all combinations of 768-dimensional word representations would require us to train $2^{768}$ different probes, one for each combination of neurons, which far exceeds the estimated number of atoms in the observable universe.
11
+
12
+ To obviate this difficulty, we introduce a novel latent-variable probe for discriminative intrinsic probing. The core idea of this approach is that instead of training a different probe for each combination of neurons, we introduce a subset-valued latent variable. We approximately marginalize over the latent subsets using variational inference. Training the probe in this manner results in a set of parameters which work well across all possible subsets. We propose two variational families to model the posterior over the latent subset-valued random variables, both based on common sampling designs: Poisson sampling, which selects each neuron based on independent Bernoulli trials, and conditional Poisson sampling, which first samples a fixed number of neurons from a uniform distribution and then a subset of neurons of that size [@lohr2019sampling]. Conditional Poisson sampling offers the modeler more control over the distribution over subset sizes; they may pick the parametric distribution themselves.
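+ As a rough illustration of the two sampling designs (the inclusion probability and the distribution over subset sizes below are placeholders, and conditional Poisson sampling is shown in a simplified, uniform-within-size form):
+
+ ```python
+ import random
+
+ rng = random.Random(0)
+ dims = list(range(768))
+
+ def poisson_sample(dims, p=0.5):
+     """Poisson sampling: each neuron is included via an independent Bernoulli trial."""
+     return [d for d in dims if rng.random() < p]
+
+ def conditional_poisson_sample(dims, size_weights):
+     """Simplified conditional Poisson sampling: draw a subset size from a
+     parametric distribution, then a uniformly random subset of that size."""
+     sizes = list(range(1, len(dims) + 1))
+     k = rng.choices(sizes, weights=size_weights, k=1)[0]
+     return rng.sample(dims, k)
+
+ subset_a = poisson_sample(dims)
+ subset_b = conditional_poisson_sample(dims, size_weights=[1.0] * len(dims))
+ ```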
13
+
14
+ We compare both variants to the two main intrinsic probing approaches we are aware of in the literature ([5](#sec:results){reference-type="ref+label" reference="sec:results"}). To do so, we train probes for morphosyntactic properties across 6 languages (English, Portuguese, Polish, Russian, Arabic, and Finnish) from the Universal Dependencies (UD; @ud-2.1) treebanks. We show that, in general, both variants of our method yield tighter estimates of the mutual information, though the model based on conditional Poisson sampling yields slightly better performance. This suggests that they are better at quantifying the informational content encoded in m-BERT representations [@devlinBERTPretrainingDeep2019]. We make two typological findings when applying our probe. We show that there is a difference in how information is structured depending on the language, with certain language--attribute pairs requiring more dimensions to encode relevant information. We also analyze whether neural representations are able to learn cross-lingual abstractions from multilingual corpora. We confirm that they do, observing a strong overlap in the most informative dimensions, especially for number ([1](#fig:heatmap_gender){reference-type="ref+label" reference="fig:heatmap_gender"}). In an additional experiment, we show that our method supports training deeper probes ([5.4](#sec:results-deeper){reference-type="ref+label" reference="sec:results-deeper"}), though the advantages of non-linear probes over their linear counterparts are modest.
15
+
16
+ The success behind pre-trained contextual representations such as BERT [@devlinBERTPretrainingDeep2019] suggests that they may offer a continuous analogue of the discrete structures in language, such as the morphosyntactic attributes number, case, or tense. Intrinsic probing aims to recognize the parts of a network (assuming they exist) which encode such structures. In this paper, we will operate exclusively at the level of the neuron---in the case of BERT, this is one component of the 768-dimensional vector the model outputs. However, our approach can easily generalize to other settings, e.g., the layers in a transformer or filters of a convolutional neural network. Identifying individual neurons responsible for encoding linguistic features of interest has previously been shown to increase model transparency [@bauIdentifyingControllingImportant2019]. In fact, knowledge about which neurons encode certain properties has also been employed to mitigate potential biases [@vigInvestigatingGB2020], for controllable text generation [@bauIdentifyingControllingImportant2019], and to analyze the linguistic capabilities of language models [@lakretzEmergenceNumberSyntax2019].
17
+
18
+ To formally describe our intrinsic probing framework, we first introduce some notation. We define $\Pi$ to be the set of values that some property of interest can take, e.g., $\Pi = \{\prop{Singular}, \prop{Plural}\}$ for the morphosyntactic number attribute. Let $\calD = \{ (\pi^{(n)}, \vh^{(n)}) \}_{n=1}^N$ be a dataset of label--representation pairs: $\pi^{(n)} \in \Pi$ is a linguistic property and $\vh^{(n)} \in \R^d$ is a representation. Additionally, let $D$ be the set of all neurons in a representation; in our setup, it is an integer range. In the case of BERT, we have $D = \{1, \ldots, 768\}$. Given a subset of dimensions $C \subseteq D$, we write $\vh_C$ for the subvector of $\vh$ which contains only the dimensions present in $C$.
19
+
20
+ Let $\ptheta(\pin \mid \vhCn)$ be a probe---a classifier trained to predict $\pin$ from a subvector $\vhCn$. In intrinsic probing, our goal is to find the size $k$ subset of neurons $C \subseteq D$ which are most informative about the property of interest. This may be written as the following combinatorial optimization problem [@intrinsic]: $$\begin{equation}
21
+ \label{eq:optimization}
22
+ C^\star = \argmax_{\substack{C \subseteq D, \\ |C| = k}} \sum_{n=1}^N \log \ptheta\left(\pi^{(n)} \mid \vh^{(n)}_C\right)
23
+ \end{equation}$$ To exhaustively solve [\[eq:optimization\]](#eq:optimization){reference-type="ref+Label" reference="eq:optimization"}, we would have to train a probe $\ptheta\left(\pi \mid \vh_C\right)$ for every one of the exponentially many subsets $C \subseteq D$ of size $k$. Thus, exactly solving [\[eq:optimization\]](#eq:optimization){reference-type="ref+label" reference="eq:optimization"} is infeasible, and we are forced to rely on an approximate solution, e.g., greedily selecting the dimension that maximizes the objective. However, greedy selection alone is not enough to make solving [\[eq:optimization\]](#eq:optimization){reference-type="ref+label" reference="eq:optimization"} manageable, because we must *retrain* $\ptheta\left(\pi \mid \vh_C\right)$ for *every* subset $C \subseteq D$ considered during the greedy selection procedure; i.e., we would end up training $\mathcal{O}\left(k\,|D|\right)$ classifiers. As an example, consider what would happen if one used a greedy selection scheme to find the 50 most informative dimensions for a property on 768-dimensional representations. To select the first dimension, one would need to train 768 probes. To select the second dimension, one would train an additional 767, and so forth. After 50 dimensions, one would have trained 37893 probes. To address this problem, our paper introduces a latent-variable probe, which identifies a $\vtheta$ that can be used for any combination of neurons under consideration, allowing a greedy selection procedure to work in practice.
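+
+ To make this cost concrete, the sketch below spells out the naive greedy procedure (not our method); `train_probe` is a hypothetical callable standing in for training a probe on a given set of dimensions and returning its held-out log-likelihood.
+
+ ```python
+ # Minimal sketch of naive greedy dimension selection.  Every candidate
+ # evaluation is a full retraining, so selecting k of |D| dimensions costs
+ # O(k * |D|) probe trainings.
+ def greedy_select(k, num_dims, train_probe):
+     selected, trainings = [], 0
+     for _ in range(k):
+         best_dim, best_score = None, float("-inf")
+         for d in range(num_dims):
+             if d in selected:
+                 continue
+             score = train_probe(selected + [d])  # retrain from scratch
+             trainings += 1
+             if score > best_score:
+                 best_dim, best_score = d, score
+         selected.append(best_dim)
+     return selected, trainings
+ ```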
24
+
25
+ The technical contribution of this work is a novel latent-variable model for intrinsic probing. Our method starts with a generic probabilistic probe $\ptheta(\pi \mid C, \vh)$ which predicts a linguistic attribute $\pi$ given a subset $C$ of the hidden dimensions; $C$ is then used to subset $\vh$ into $\vhC$. To avoid training a unique probe $\ptheta(\pi \mid C, \vh)$ for every possible subset $C\subseteq D$, we propose to integrate a prior over subsets $p(C)$ into the model and then to marginalize out all possible subsets of neurons: $$\begin{align}
26
+ \label{eq:joint}
27
+ \ptheta(\pi \mid \vh) &= \sum_{C \subseteq D} \ptheta(\pi \mid C, \vh)\,p(C)
28
29
+ \end{align}$$ Due to this marginalization, our likelihood is *not* dependent on any specific subset of neurons $C$. Throughout this paper, we opt for a non-informative, uniform prior $p(C)$, but other distributions are also possible.
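+
+ A minimal sketch of this marginalization on a toy representation follows; for illustration it assumes the shared-parameter probe handles a subset $C$ by zeroing out the unselected dimensions, and it enumerates subsets exactly, which is only feasible for very small $|D|$.
+
+ ```python
+ import itertools
+ import numpy as np
+
+ def softmax(z):
+     z = z - z.max()
+     e = np.exp(z)
+     return e / e.sum()
+
+ def marginal_likelihood(h, W, b, label, num_dims):
+     """p(label | h) = sum_C p(label | C, h) p(C) with a uniform prior over
+     all 2^num_dims subsets C.  The probe is a toy linear classifier that
+     sees h with unselected dimensions zeroed out (an illustrative choice)."""
+     total, num_subsets = 0.0, 2 ** num_dims
+     for r in range(num_dims + 1):
+         for C in itertools.combinations(range(num_dims), r):
+             h_C = np.zeros_like(h)
+             h_C[list(C)] = h[list(C)]
+             total += softmax(W @ h_C + b)[label] / num_subsets
+     return total
+
+ # Example with |D| = 4 and 3 property values:
+ rng = np.random.default_rng(0)
+ W, b, h = rng.normal(size=(3, 4)), rng.normal(size=3), rng.normal(size=4)
+ print(marginal_likelihood(h, W, b, label=1, num_dims=4))
+ ```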
30
+
31
+ Our goal is to estimate the parameters $\vtheta$. We achieve this by maximizing the log-likelihood of the training data $\sum_{n=1}^N \log \sum_{C \subseteq D} \ptheta(\pi^{(n)}, C\mid \vhn)$ with respect to the parameters $\vtheta$. Unfortunately, directly computing this involves a sum over all possible subsets of $D$---a sum with an exponential number of summands. Thus, we resort to a variational approximation. Let $\qphi(C)$ be a distribution over subsets, parameterized by parameters $\vphi$; we will use $\qphi(C)$ to approximate the true posterior distribution. Then, the log-likelihood is lower-bounded as follows: $$\begin{align}
32
+ &\sum_{n=1}^N \log \sum_{C \subseteq D} \ptheta(\pi^{(n)}, C\mid \vhn) \label{eq:vb} \\
33
36
+ &\ge \sum_{n=1}^N\left( \expectq \sqr{\log \ptheta(\pii{n}, C \mid \vhi{n})} \hspace{-0.1cm} + \hspace{-0.1cm} \mathrm{H}(q)\right) \nonumber
37
+ \end{align}$$ which follows from Jensen's inequality, where $\mathrm{H}(\qphi)$ is the entropy of $\qphi$.[^1]
38
+
39
+ Our likelihood is general, and can take the form of any objective function. This means that we can use this approach to train intrinsic probes with any type of architecture amenable to gradient-based optimization, e.g., neural networks. However, in this paper, we use a linear classifier unless stated otherwise. Further, note that [\[eq:vb\]](#eq:vb){reference-type="ref+label" reference="eq:vb"} is valid for any choice of $\qphi$. We explore two variational families for $\qphi$, each based on a common sampling technique. The first (herein [Poisson]{.smallcaps}) applies Poisson sampling [@hajekAsymptoticTheoryRejective1964], which assumes each neuron to be subjected to an independent Bernoulli trial. The second ([C. Poisson]{.smallcaps}; @aires1999algorithms) corresponds to conditional Poisson sampling, which can be defined as conditioning a Poisson sample on a fixed sample size.
40
+
41
+ As mentioned above, exact computation of the log-likelihood is intractable due to the sum over all possible subsets of $D$. Thus, we optimize the variational bound presented in [\[eq:vb\]](#eq:vb){reference-type="ref+label" reference="eq:vb"} through stochastic gradient descent with respect to the model parameters $\vtheta$ and the variational parameters $\vphi$, a technique known as stochastic variational inference [@svi]. However, one final trick is necessary, since the variational bound still includes a sum over all subsets in the first term: $$\begin{align}
42
+ \gradTheta \expectq &\sqr{\log \ptheta(\pii{n}, C \mid \vhi{n})} \\
43
+ &\,\,= \expectq \sqr{ \gradTheta \log \ptheta(\pii{n}, C \mid \vhi{n}) } \nonumber \\
44
+ &\,\,\approx \frac{1}{M}\sum_{m=1}^M \sqr{ \gradTheta \log \ptheta(\pii{n}, C^{(m)} \mid \vhi{n}) } \nonumber
45
+ \end{align}$$ where we take $M$ Monte Carlo samples to approximate the sum. In the case of the gradient with respect to $\vphi$, we also have to apply the REINFORCE trick  [@Williams1992SimpleSG]: $$\begin{align}
46
+ &\gradPhi \expectq \sqr{\log \ptheta(\pii{n}, C \mid \vhi{n})} \\
47
+ &\,\,= \expectq \sqr{\log \ptheta(\pii{n}, C \mid \vhi{n}) \gradPhi \log \qphi(C)} \nonumber \\
48
+ &\,\,\approx \frac{1}{M}\sum_{m=1}^M \sqr{\log \ptheta(\pii{n}, C^{(m)} \mid \vhi{n}) \gradPhi \log \qphi(C^{(m)})} \nonumber
49
+ \end{align}$$ where we again take $M$ Monte Carlo samples. This procedure leads to an unbiased estimate of the gradient of the variational approximation.
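+
+ The sketch below illustrates the trickier of the two estimators, the score-function (REINFORCE) gradient with respect to the variational parameters, for the simpler of the two families described next (Poisson sampling, where neuron $d$ is kept with probability $\sigma(\phi_d)$); `log_p` is a hypothetical callable standing in for the probe's log-joint on a sampled subset.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+
+ def sample_subsets(phi, M):
+     """Draw M subset masks: neuron d is kept independently w.p. sigmoid(phi_d)."""
+     p = 1.0 / (1.0 + np.exp(-phi))
+     return (rng.random((M, phi.size)) < p).astype(float)
+
+ def reinforce_grad_phi(phi, log_p, M=64):
+     """Monte Carlo score-function estimate of grad_phi E_q[log p(pi, C | h)];
+     `log_p(mask)` is a hypothetical stand-in for the probe's log-joint."""
+     p = 1.0 / (1.0 + np.exp(-phi))
+     grad = np.zeros_like(phi)
+     for mask in sample_subsets(phi, M):
+         grad += log_p(mask) * (mask - p)   # (mask - p) = grad_phi log q(C)
+     return grad / M
+ ```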
50
+
51
+ We consider two choices of variational family $\qphi(C)$, both based on sampling designs [@lohr2019sampling]. Each defines a parameterized distribution over all subsets of $D$.
52
+
53
+ Poisson sampling is one of the simplest sampling designs. In our setting, each neuron $d$ is given a unique non-negative weight $w_d = \exp(\phi_d)$. This gives us the following parameterized distribution over subsets: $$\begin{equation}
54
+ \label{eq:coins}
55
+ \qphi(C) = \prod_{d \in C} \frac{w_d}{1+w_d} \prod_{d \not\in C} \frac{1}{1 + w_d}
56
+ \end{equation}$$ The formulation in [\[eq:coins\]](#eq:coins){reference-type="ref+Label" reference="eq:coins"} shows that taking a sample corresponds to $|D|$ independent coin flips---one for each neuron---where the probability of heads is $\frac{w_d}{1+w_d}$. The entropy of the Poisson sampling distribution may be computed in $\mathcal{O}\left(|D|\right)$ time: $$\begin{equation}
57
+ \label{eq:poisson-entropy}
58
+ \ent(\qphi) = \log Z - \sum_{d = 1}^{\SetSize{D}} \frac{w_d}{1 + w_d} \log w_d
59
+ \end{equation}$$ where $\log Z = \sum_{d=1}^{\SetSize{D}} \log (1 + w_d)$. The gradient of [\[eq:poisson-entropy\]](#eq:poisson-entropy){reference-type="ref+label" reference="eq:poisson-entropy"} may be computed automatically through backpropagation. Poisson sampling automatically modulates the size of the sampled set $C \sim \qphi(\cdot)$, and we have the expected size $\mathbb{E}\left[|C|\right] = \sum_{d=1}^{|D|} \frac{w_d}{1+w_d}$.
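+
+ A small sketch of these quantities (subset log-probability, entropy, and expected subset size) for the Poisson family follows; it directly implements the formulas above.
+
+ ```python
+ import numpy as np
+
+ def poisson_inclusion_probs(phi):
+     """Per-neuron weights w_d = exp(phi_d) and inclusion probabilities w_d / (1 + w_d)."""
+     w = np.exp(phi)
+     return w, w / (1.0 + w)
+
+ def poisson_log_q(mask, phi):
+     """log q(C) for a 0/1 mask over neurons (|D| independent coin flips)."""
+     _, p = poisson_inclusion_probs(phi)
+     return float(np.sum(mask * np.log(p) + (1.0 - mask) * np.log(1.0 - p)))
+
+ def poisson_entropy(phi):
+     """H(q) = log Z - sum_d (w_d / (1 + w_d)) log w_d, with log Z = sum_d log(1 + w_d)."""
+     w, p = poisson_inclusion_probs(phi)
+     return float(np.sum(np.log1p(w)) - np.sum(p * np.log(w)))
+
+ def poisson_expected_size(phi):
+     """E[|C|] = sum_d w_d / (1 + w_d)."""
+     _, p = poisson_inclusion_probs(phi)
+     return float(p.sum())
+ ```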
60
+
61
+ We also consider a variational family that factors as follows: $$\begin{equation}
62
+ \qphi(C) = \underbrace{\qphiC(C \mid |C| = k)}_{\text{Conditional Poisson}}\,\qphik(k)
63
+ \end{equation}$$ In this paper, we take $\qphik(k) = \mathrm{Uniform}\left(D\right)$, but a more complex distribution, e.g., a Categorical, could be learned. We define $\qphiC(C \mid |C| = k)$ as a conditional Poisson sampling design. Similarly to Poisson sampling, conditional Poisson sampling starts with a unique positive weight associated with every neuron, $w_d = \exp(\phi_d)$. However, an additional cardinality constraint is introduced. This leads to the following distribution: $$\begin{equation}
64
+ \qphiC(C) =\mathbbm{1}\left\{|C| = k\right\} \frac{\prod_{d \in C} w_d}{\ZCP}
65
+ \end{equation}$$ A more elaborate dynamic program which runs in $\mathcal{O}\left(k\,|D| \right)$ may be used to compute $\ZCP$ efficiently [@aires1999algorithms]. We may further compute the entropy $\mathrm{H}(\qphi)$ and its gradient in $\mathcal{O}\left(|D|^2 \right)$ time using the expectation semiring [@eisner-2002-parameter; @li-eisner-2009-first]. Sampling from $\qphiC$ can be done efficiently using quantities computed when running the dynamic program used to compute $\ZCP$ [@Kulesza_2012].[^2]
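+
+ As a sketch, the $\mathcal{O}\left(k\,|D|\right)$ dynamic program for the normalizer is the standard recursion for elementary symmetric polynomials of the weights; a subset's log-probability then follows directly. The version below is numerically naive (no log-space rescaling) and is meant only for illustration.
+
+ ```python
+ import numpy as np
+
+ def conditional_poisson_normalizer(phi, k):
+     """Z_CP = sum over subsets C with |C| = k of prod_{d in C} w_d, i.e. the
+     k-th elementary symmetric polynomial of w = exp(phi); O(k * |D|) DP."""
+     w = np.exp(phi)
+     e = np.zeros(k + 1)
+     e[0] = 1.0
+     for w_d in w:
+         # iterate sizes in decreasing order so each weight is used at most once
+         for j in range(k, 0, -1):
+             e[j] += w_d * e[j - 1]
+     return e[k]
+
+ def conditional_poisson_log_q(mask, phi, k):
+     """log q(C) under conditional Poisson sampling with fixed subset size k."""
+     if int(mask.sum()) != k:
+         return float("-inf")
+     return float((mask * phi).sum()) - float(np.log(conditional_poisson_normalizer(phi, k)))
+ ```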
66
+
67
+ # Method
68
+
69
+ The main question we ask is how the performance of our models compares to existing intrinsic probing approaches. To investigate this research question, we compare the performance of the [Poisson]{.smallcaps} and [C. Poisson]{.smallcaps} probes to those of @dalviWhatOneGrain2019 and @intrinsic. Refer to [4.3](#sec:how-to-compare){reference-type="ref+label" reference="sec:how-to-compare"} for a discussion of the limitations of our method.
70
+
71
+ ::: {#tab:model_perf}
72
+ +:-------------------------+-----:+-----:+------:+------:+------:+
73
+ | | Number of dimensions |
74
+ +--------------------------+------+------+-------+-------+-------+
75
+ | | $10$ | $50$ | $100$ | $250$ | $500$ |
76
+ +--------------------------+------+------+-------+-------+-------+
77
+ | *vs.* @intrinsic        |      |      |       |       |       |
78
+ +--------------------------+------+------+-------+-------+-------+
79
+ | [C. Poisson]{.smallcaps} | 0.50 | 0.58 | 0.70 | 0.99 | 1.00 |
80
+ +--------------------------+------+------+-------+-------+-------+
81
+ | [Poisson]{.smallcaps}    | 0.21 | 0.49 | 0.66  | 0.98  | 1.00  |
82
+ +--------------------------+------+------+-------+-------+-------+
83
+ | *vs.* @dalviWhatOneGrain2019 | |   |       |       |       |
84
+ +--------------------------+------+------+-------+-------+-------+
85
+ | [C. Poisson]{.smallcaps} | 0.99 | 1.00 | 1.00 | 1.00 | 0.98 |
86
+ +--------------------------+------+------+-------+-------+-------+
87
+ | [Poisson]{.smallcaps}    | 0.95 | 0.99 | 1.00  | 1.00  | 0.97  |
88
+ +--------------------------+------+------+-------+-------+-------+
89
+
90
+ : Proportion of experiments where [Poisson]{.smallcaps} and conditional Poisson ([C. Poisson]{.smallcaps}) beat the benchmark models of @intrinsic and @dalviWhatOneGrain2019 in terms of mutual information. For each of the subset sizes, we sampled 100 different subsets of dimensions at random.
91
+ :::
92
+
93
95
+
96
+ <figure id="fig:qda-comparison" data-latex-placement="h">
97
+ <div class="center">
98
+ <embed src="images/02_mi.pdf" />
99
+ <embed src="images/02_accuracy.pdf" />
100
+ </div>
101
+ <figcaption>Comparison of the Poisson, C. Poisson, <span class="citation" data-cites="dalviWhatOneGrain2019"></span> and <span class="citation" data-cites="intrinsic"></span> probes. We use the greedy selection approach in <a href="#eq:optimization" data-reference-type="ref+Label" data-reference="eq:optimization">[eq:optimization]</a> to select the most informative dimensions, and average across all language–attribute pairs we probe for.</figcaption>
102
+ </figure>
103
+
104
+ In general, [C. Poisson]{.smallcaps} tends to outperform [Poisson]{.smallcaps} at lower dimensions; however, [Poisson]{.smallcaps} tends to catch up as more dimensions are added. Our results suggest that both variants of our latent-variable model from [3](#sec:method){reference-type="ref+label" reference="sec:method"} are effective and generally outperform the baselines, as shown in [1](#tab:model_perf){reference-type="ref+label" reference="tab:model_perf"}. The baseline of @intrinsic tends to perform similarly to [C. Poisson]{.smallcaps} when we consider subsets of 10 dimensions, and it outperforms [Poisson]{.smallcaps} substantially. However, for subsets of size $50$ or more, both [Poisson]{.smallcaps} and [C. Poisson]{.smallcaps} are preferable. We believe that the robust performance of the probe of @intrinsic in the low-dimensional regime can be attributed to its ability to model non-linear decision boundaries [@murphyMachineLearningProbabilistic2012 Chapter 4].
105
+
106
+ The trends above are corroborated by a comparison of the mean mutual information ([\[tab:model_mean_mi\]](#tab:model_mean_mi){reference-type="ref+label" reference="tab:model_mean_mi"}, top) achieved by each of these probes for different subset sizes. However, in terms of accuracy (see [\[tab:model_mean\]](#tab:model_mean){reference-type="ref+label" reference="tab:model_mean"} in [11](#sec:app-accuracy){reference-type="ref+label" reference="sec:app-accuracy"}), while both [Poisson]{.smallcaps} and [C. Poisson]{.smallcaps} generally outperform @dalviWhatOneGrain2019, the probe of @intrinsic tends to achieve higher accuracy than our methods. Notwithstanding, its performance in terms of mutual information is not stable and can yield low or even negative mutual information estimates across all subsets of dimensions. Adding a new dimension can never decrease the mutual information, so the observable decreases occur because the generative model deteriorates upon adding another dimension, which validates @intrinsic's claim that some dimensions are not adequately modeled by the Gaussian assumption. While these results suggest that the probe of @intrinsic may be preferable when performing a comparison based on accuracy, its instability in terms of mutual information suggests that this edge in terms of accuracy comes at a hefty cost in terms of calibration [@guo2017calibration].[^8]
107
+
108
+ Further, we compare the [Poisson]{.smallcaps} and [C. Poisson]{.smallcaps} probes to a baseline that is re-trained for *every* subset under consideration. This baseline is expected to be the highest performing, and indeed, this expectation is confirmed by the results in [\[tab:model_mean_mi\]](#tab:model_mean_mi){reference-type="ref+label" reference="tab:model_mean_mi"} (bottom). The difference between our probes' performance and this baseline's performance can be seen as the cost of sharing parameters across all subsets of dimensions, and an effective intrinsic probe should minimize this cost.
109
+
110
+ We also conduct a direct comparison of [Poisson]{.smallcaps}, [C. Poisson]{.smallcaps}, and the probe of @intrinsic when used to identify the most informative subsets of dimensions. The average MI reported by each model across all morphosyntactic language--attribute pairs is presented in [2](#fig:qda-comparison){reference-type="ref+label" reference="fig:qda-comparison"}. On average, the probe of @intrinsic offers performance comparable to our probes at low dimensionalities for both MI and accuracy, though the latter tend to yield a slightly higher (and thus a tighter) bound on the mutual information. However, as more dimensions are taken into consideration, our models vastly outperform it. [Poisson]{.smallcaps} and [C. Poisson]{.smallcaps} perform comparably at high dimensions, but [C. Poisson]{.smallcaps} performs slightly better for 1--20 dimensions. [Poisson]{.smallcaps} outperforms the probe of @intrinsic at high dimensions, and [C. Poisson]{.smallcaps} outperforms it for all dimensions considered. These effects are less pronounced for accuracy, which we believe to be due to accuracy's insensitivity to a probe's confidence in its prediction.
111
+
112
+ We compare the performance of our probe for each attribute for all available languages in order to better understand the relatively high variance across results (see [\[tab:model_mean_mi\]](#tab:model_mean_mi){reference-type="ref+label" reference="tab:model_mean_mi"}). In [3](#fig:gen-comparison){reference-type="ref+label" reference="fig:gen-comparison"} we plot the average mutual information for gender and observe that languages with two genders present (Arabic and Portuguese) achieve higher performance than languages with three genders (Russian and Polish), which is an intuitive result due to the increased task complexity. Further, we see that the slopes for both Russian and Polish are flatter, especially at lower dimensions. This implies that the information for Russian and Polish is more dispersed and more dimensions are needed to capture the typological information.
113
+
114
+ <figure id="fig:gen-comparison" data-latex-placement="t">
115
+ <div class="center">
116
+ <embed src="images/01_Gender_and_Noun_Class.pdf" />
117
+ </div>
118
+ <figcaption>Comparison of the average mutual information for gender in m-BERT for each of the available languages. We use the greedy selection approach in <a href="#eq:optimization" data-reference-type="ref+Label" data-reference="eq:optimization">[eq:optimization]</a> to select the most informative dimensions, and average across all language–attribute pairs we probe for.</figcaption>
119
+ </figure>
120
+
121
+ We compare the most informative m-BERT dimensions recovered by our probe across languages and find that, in many cases, the same set of neurons may express the same morphosyntactic phenomena across languages. For example, we find that Russian, Polish, Portuguese, English and Arabic all have statistically significant overlap in the top-30 most informative neurons for number ([1](#fig:heatmap_gender){reference-type="ref+label" reference="fig:heatmap_gender"}). Similarly, we observe the presence of statistically significant overlap for gender ([5](#fig:heatmap-gender-case){reference-type="ref+label" reference="fig:heatmap-gender-case"}, left). This effect is particularly strong between Russian and Polish, where we additionally find statistically significant overlap between the top-30 neurons for case ([5](#fig:heatmap-gender-case){reference-type="ref+label" reference="fig:heatmap-gender-case"}, right). These results might indicate that m-BERT may be leveraging data from other languages to develop a cross-lingually entangled notion of morpho-syntax [@intrinsic], an effect that may be particularly strong between typologically similar languages.[^9]
122
+
123
+ <figure id="fig:nonlinear-mi" data-latex-placement="h">
124
+ <embed src="images/03_boxplots_all_mi.pdf" style="width:90.0%" />
125
+ <figcaption>Comparison of a linear probe to non-linear MLP-1 and MLP-2 probes for selected language-attribute pairs. For each of the subset sizes shown on the <span class="math inline"><em>x</em></span>-axis, we sampled 100 different subsets of dimensions at random.</figcaption>
126
+ </figure>
127
+
128
+ Multiple papers have promoted the use of linear probes [@tenneyWhatYouLearn2018; @liuLinguisticKnowledgeTransferability2019], in part because they are ostensibly less likely to memorize patterns in the data [@zhang-bowman-2018-language; @hewittDesigningInterpretingProbes2019], though this is subject to debate [@voita-titov-2020-information; @pimentelInformationtheoreticProbingLinguistic2020]. Here we verify our claim from [3](#sec:method){reference-type="ref+label" reference="sec:method"} that our probe can be applied to any kind of discriminative probe architecture as our objective function can be optimized using gradient descent.
129
+
130
+ We follow the setup of @hewittDesigningInterpretingProbes2019, and test MLP-1 and MLP-2 probes alongside a linear probe. The MLP-1 and MLP-2 probes are multilayer perceptrons (MLP) with one and two hidden layer(s), respectively, and Rectified Linear Unit [ReLU; @nairRectifiedLinearUnits2010] activation function.
131
+
132
+ In [4](#fig:nonlinear-mi){reference-type="ref+label" reference="fig:nonlinear-mi"}, we can see that our method not only works well for deeper probes, but that these deeper probes also outperform the linear probe in terms of mutual information. However, at higher dimensionalities, the advantage of a deeper probe diminishes. We also find that the difference in performance between MLP-1 and MLP-2 is negligible.
2202.07028/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-11-09T19:41:29.407Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.0 Safari/605.1.15" etag="GTgYKWwXH7huEKk2F2om" version="15.7.0" type="google" pages="2"><diagram name="VLN BERT Diagram" id="AxP9xnV08Cqfdzi4QSMa">7V1bd5s6Fv41WavzEBa6IInHNGnPzKxmmjNuezqP2CaxE9v4YNw4/fUjcTMgcbPB4Bh3rTSWBcHsb3/sm7au0O1y94drrWf3ztReXEF9urtCd1cQAkwM/p8YeQtGmIGDgSd3Pg0n7QdG8992OKiHo9v51N6kJnqOs/Dm6/TgxFmt7ImXGrNc13lNT3t0Fum/uraebGlgNLEW8uhf86k3C0cBMfcf/NOeP83CP80gDT5YWvHkYGAzs6bOazDkfzn06Qrduo7jBb8td7f2Qty86L78gN8p+hPfvD4/f7RG31+2d+7H6+C2fK5zSPwVXHvlHXzqB7r7czV5vn8ZL//+Ofp6//tu8/sakvC7eW/RDbOn/P6Fbx3XmzlPzspafNqPfnSd7Wpqi9Pq/N1+zhfHWfNBwAefbc97C8FgbT2HD8285SL81N7NvZ/icI0a4dv/JT6624Wn9t+8RW9WnvsWHIU5BsMBcdy1rukQRSP7o/13qcMfbHe+tD3bDQcfnZUXXiUQp9x4rvMS40ScMrg94p5k4FQiiwgzztad2AXzYKgSlvtkewWCAjhGHFdV2+Hfwn3jB7r2wvLmv9JXZ4U68xTP2+OC/xJCowYCw6v8ZS224V+6gmThhXdQqKUVfkvy91bow8cftju1VtZ+IAWyaFAcfb3xBXDDJwC63iWPIE/i//stZ4vrL9YbFxvUv7nWavPouEvxLrgG/pWCywjmS4je41Ug7HU29+zROrjgV056aWzKEHicLxa3zsJx/bOhx8dHOJnw8V+268050dws5k8r/tlyPp36+hGcInHIlIyJQeJLqwoe8QfsXUKysvjDTw0Ssm1I0hiHJP26pzxCqBZy+SzBd9GRjWOmmFlWzkqmkuftch1NsNxJNBIqKDyWbA7iGiPJNHtaUZOMgOFnazlfiIFIBdI0g3yAWK53Ix5u+zvhj32ei3scnngazZgsrM1mPgkGwylAwtnUfrS2vj7IELYXY/9MEWRrQ7GUx6jMY8p5ppJQu+I1WsZr5ZQFVJT1YK0cl+Ngwmd8HW9s9xf/Ms5q0wFjhfDKJascyLbFVCZDGqMpsgK6LpEViGgpSVX8Gd8ODFjvqAoeaBfRrF0E4pG6dtFl8phZkccQ6BWRmV0aaHccORMuqxXsyiKzSV2LjJpjXW+T5yAVzk2S5yCSeQ5SKHS9FatM7fAZ3Tp8B/EaoileK7PCeurqAYWv93j/ou/u/vpjPJrc/xv+rf/6NrLCiV1RSXSZjRtFV8bH2y+jK4Mfpz9s/RMt5hMOK/4l1mt+RqhrmsZ/8omjTw/BxJGYkpjhrMQdnNl+xGi78jrxAg+0qZIQBG16g5iVGljIgCpvELK2eAdIAuk/72ACsvaUbp4p+eCqhg3plHxwl4bMEGkq5xYjHWnilo6CWxTOW2sWDSj23vrILClWSYSYCqJKHbMHkdlDKYxOo9TRVR5HHpFqBwdu1taqEqFgFaH8jk+zHUdjXoJMEsPJUf9vSsPNMc/U2sziuceZOm3xDNAzREMURoyKZ4y24tlQha6MFJ64GNZH3pI4FWqNo9PqhbcqS8mGws887Z1ShVW7s8ZTaK8onFjY5YDt7C73LmqZeBDuH4vqR2FKJu1EG8MZ+1CjLhlM7YQa45x6ae63atIkmtiTYGNk8vUGeWBA3mUgL3qyJZ4so2xwKqpAimwxIJtlXn2rSi9/SlUG0wnsrRrPryhgrWGM9i+YDiExDSVfQC4vQEADqZf8KEQofRZE2oIJkGDy7g1mkXBI5xsQH2GyoMBJrRQkR5N/KJwilucUvReFPMAB4hpFEiqJ08JFRAM0oUtGvzUSVYIBGGCQVWuMOTGz+GUWo6DfIJCDu37CJyNcfmc8lf+XEYVCOlYolYntJ4YKxKWCTBpUFVHTmtyZSDbsxQbTcse6phA2JhpJvbAs7SgB3Lx0izPL/S74u9yCl0jbm7Pww0MfnLlvkIeAphFe4xLVjMkR+CLhURksxpdxeJoAFaKz9SzBYQVdtYtPI7BSJZqUkm4BS+UVDkdC7DiiqhDIfW/eAZG9A1U1Ejipa5BffavIi+S591B273/muPeDdSnBAgsrQ7IfY4hk3P5+W5fsxHCCA5yycGKmRmHidc5wyi+pbQdOuXnhi4UTZUxjLN/3PSc4YTlyPfi+eXKHTCOy3GLfN65c7Ivriy8w3gx0U1cFnGU1PG3AGcsB568HpIhW7z5FVF/iwMRlEWkiU2lfCVmOSH9VPOAHn0GGgU7LQtJJGPQbBUNIuobgMSqLScOePZiNy3swE1heDnjaQA+WA267gWqriBJxUzc3PGOclwMkR/tUIBiCKnJQBWowN6hyZiCQY3QqEAyhEDkUAjSUGwo5MxDIkbXB5soNhWQEl5Z7/0IhhuxW3UxE75NIpcdupM6flmN7Op2vngbRqyt/RP1XSt6qNTCmwpJDbVlyxjk3dXq3NR5tl24YDGcjfSyDsKA0QCreUJwqfaIWq0DUAKYDgC8PwEDXZQSDwxDMz5WFMDgthHu37muA8AkgDKJn77EULM6kdczCqtx6FtMq8SbQlwb4Yau6jkBHuUYdWsa3VytCWEKxdM00WaFyiTfZziMNo7jiYtdjDQ5qaMzU41fG6+FkruvSp7VNkUx7LgPpWhTSPpEeELkoYNCDQQ8S3r+hoQI9QBpjR+sBhVnbyMj6j22rQXFvqKH/5mAm1TCTqJa10CFj2qHa4Z8QQzN+pdfFomwgre1VDHqhqvSo2VH3qxGi4Flp0yLYadciIpcg1O63qO5mNl/YG0+wAtRvZ/bkpaU+Zo0kVkS7Mzaxa7Y7GzMDG/Uf7zWSb4ZYfplSeQrlkDtgCiOgtc76VC43HHmWJ+TMJWothXyCn3LQ/ZZLxd55G0nkQ8ydf4JpdpEMirv4JrdSUEh7n4ppXt4KryBB+v5DOZ/v27GUYqMnZdJXamVX17DZmy1J2wY0bduc4OlTOLEnLVZo0yVTp2/eVb+pZWb7FNXaOIOb90ynxIQo+KmgAKqZhmFSzDCkhEZtkpoXUe+ycCn3/HL7L1XlhkjHSrkhmljKDWVAbh6DfUukDT3ALg+DcjXZSFVNdj1UlspCMjUDYYAwAQZiOJNbQvxJZkJC+CMNURMbyuZCGOj8POFPXdFMns8xqU6wyUwDUNaeySKnk76On8VmkpL789m2vK1rD96PusBf5F2IjwcfFZlYF8Mp+0auMiSceBMWkm4qbCRdowQalBFAkNGeGcv
k1MqIi8geIFELEmZmMQFiSIvEehKHWN3yvThh0KMoaBO9XHCBu9pI9DS/RUuV4KkaP407q0UX2c1uEbf//dzRvhCG+FfEK1KgNCgEDr9ZYjwsF2+PQBAt35lUuUC4vf0iij2XfpIH0VmKPjQdlTBI9/vQFOxtVcHtONmGx4WXeVELB0G2R1SsmMmlg0y5cVQTiwfVkpAXFBRI4r37bQQVNWwCRrZPsyS+jhYCqUWrynxermhpUfOkcxNtrSzCexctNWlBI6NzE628mntYvpcneCA6GaWlTVRr9ijQkCJa0sQyPbURLKtj/41gncT1pKER7BuxfbOCm19/RakGsZ4TlgNmOu0Y9eCoXw+NtPTG7GJTCIYTBaat1bqpieb8t5GqH5fP7OWlssC7yUSrRdS7tUgJXhmygBXcccVGQIUTK2cB84DcPAaH3agGDHaMQVhhIc/luDv8yVOQXBYF9f1MLqtFW6u353sXLcOmWH51dp6s0gmSlfZdtSSRpHq4AxOVRxs0I2+oUZRYigkleZ84M1yyKWov/dq0T1tnM/DW19XkZ3wrFzafIINTdJUJ5f7wL6GAVtiN6FauBLHW64X9j0H1lXlclralxGppDKXFp8msLjpxUcjZLI07i6IQ5T2uuqKOdqn5pevpGioKwcpFd18euikKscHUsGkRx0hFISahyCItkoZJuSmXWDubthaJIralMhea8NqUQCnesK6XhHG4rdA6YVTeaDpfabtijCo1BvX6cJTf+UMfAkdKrGK5vyyJE3V+yfSHi9sb1s1myBuewRNvOAjPsc7szOwRUHWRZZC07oxfTmWSKOtUfYSLtf/6f2zv1XFfujFQuHnCpriWgcLgGJFWDZRsr3x/o3vJKkFttctXE0dxQqNt4kjQRln2IiYOcHWwXdJNDENhlvQwhlGwQebaWuUzwmMYJxCcEHeEENXeofqFzJIeF3P9n1lSQSpS+TBazCd2tMyKfz39Joye7Ffa+deYwy39Caokn2Ct8QzCmpEu26BY9n4Aoqra2iYWUKqZRo6Af19PLY/fHyk4dpEdRppHAqFAM9MNZQwTaIxJYCBEI1QGAyZaa1VhULZTPnxz+a++gqc1/qbvwdJT6DUhUnsgBhR6rQprHKDV/K3rCBre+yn828/unaktZvwf</diagram><diagram name="Model Diagram" id="T0sEP8CJCLNM5P5JiQsw">7V1bl5s4Ev41/dgcdIfHvqSze7ZzprOdnd2ZN2ywTWKDB+Oke3/9SAYMSLLBFy5u44ekEUKA6qtPpVJRukEPi7fPkbOcfQldb34DTfftBj3eQAgIBvw/UfKelNgIJQXTyHfTSnnBq/9/Ly0009K173qrUsU4DOexvywXjsMg8MZxqcyJovBXudoknJfvunSmnlLwOnbmaul/fTeepaWA2vmJf3j+dJbe2oIsObFwssrpm6xmjhv+KhShTzfoIQrDOPlr8fbgzUXnZf3yewT//Ar+nLx9W//1SL5O7dn75DZp7OmQS7avEHlBfN6mYdL0T2e+TvvrBtI5v8n9JOT34v3tjJMT9K+1eNH7373IdQInLxA9E7/Py7XE1berDRjueAX+8G/FK+hU/P9lzWFw++y8exGv8y1ygtUkjBbiKHkG/jLJYyT1UzFsbwajcB24nng/wE//mvmx97pMHvgXRzMvm8WLeXp6FUfhjy0GkHhIfz5/COdhtGkNTSYTOB7z8p9eFPscQXdzfxrwcwvfdcUt0yYKl7h0RAndPpq40HuTcFchNLBFEldBL1x4cfTOr0tbITQFX6p9GOPk+FeOZUqZQZLSWQHI2ZVOqkDTbeM5SPgfKU4OwAxS5OC5XOXSwyAMRFflojH50ff1YplVcKJxVpKQBeDvfx9G8SychoEzfw7DZSqy714cv6e1nHUclgXqvfnx/8QN+NsnR38Uzjy+pffeHLxnBwHvgcJF4vCP4rn8ss1Rdp2A4ZOz8OeiIFOBpDh7C7QBiBPFd4K18p7YlD35oo/Tht2sxnjurFb+OClMqwAFZ643cdYbfVAh7M1Hm5YyyG6hKERyBBC5WMN1lKq8vl4KQP5WU29fe1QP7MibO7H/s/xwZ8coVnjtxQnCiEtwzIt/G6286Cd/jDBYNUkqKQJ28skOVDVFJrZVJhNgQoVMQEYbrVAJ6R2VwKO4BBNWZJNb0zABq6CUzdGLF/m8K71o4Bktf9TgmYIt1gHR0C4NqEeOnDGXVQC7spg8eqjFxOyRaTZJcpAZjJR4DiJT4TnIYKtGE6vCSSUEANVB4IbcPzy/3hB+nfk697kk+cMH4p/lci4OJlG4EGo0Sw58oa9cZcj966eXzWWtA+fIUbHIgaBJkxtbJfRQSwEPIu2Cxxpmaf2epZGyYcVZSIOZNg0re69htRnNP+UFw2ztw1pRmRuu0owi8EQzanMp7yvnvVBhGfpBvCq0/CIKcs2xJM2hpilhP2kx14Ttox2vHFmfNM2nAOv4dJnda7VeZmVOgUsLxXnNUVYUl2qO5Jpn5WLXWc22dU8b09ua0hKNqacjXtIU8QKg9HnXU9qBb9vkW1iTbwHuctqaPWZ/KDAcKPA4CgQAbWcjvWHB/WsEOZ11ZYQWKDEnyApSBCVKzBlyIMVKUqy7ZtCNEWpjyQgFpKgB1fXtNoxWdUnj1ZsuuIC4yqTUN4oy1vMXzsbPE05yV5CkkJyGYt3cW+IzDcU5KbWNxc2jfZynY9eymjfHioAaSPLkmCotWlC4/BVmRI0xI9EMvJJkpryLljs7Jg2UcEZZdfMsHSb7MLofQ3Su9e68kiVL4ATUEr0UOutl1rf5ygFDc0kmzYy3aY18sDU7G2yzxdLK0bbuyhnodIkeWD1DHhiQdyXIsw8ZWczqkaU2ADqaP+4YczJz1sAY5T9YNposAxV/QI0/Q8AApZ/GnELlVhBtSLRQ59r94A4AseJdnv8jXmKpggKtWhbZ212Nlu0VEjC5mtCCnuGyxBA1ACsoCOm3munch1crWxtjTqHW9mfvF22/JYsUyRqGoQi3H86DmqhpTO6WaZhmLjZYljs2DY2wMTVo6YdVaWceifNLF/fM5j5kdWpwqOZy3GHRHeYnVRybLMPr9msDyThIDPv0KgmLZ/BxwhrOsY9mvVHVetOFK4JWTbeDXG8ffXinWNC8MoBvpSXNkPo9vOviUK9XspZtMFj4XbJkdUGiVytZZlmGZe02yS9Ksqq7ajDJd8kdWgZV5bY1ybeREn2xyNEVOqyAaZs6j5Wqhu16rNDgsSqtoNu4ymNFVfrrKYmiwWNVkq3JqlxWRdn2W7SDy+oAwWNU5bOCPRsh1cCnDz9CUlgdjNOuSwAd5Jr56PxJRcjtTpcAuajpBRqcPSWXADTgTpfAhUl2cPaUXALAQDtdAhcmWdXZM5g8O10CkuDKcu+hS6BGeJJ2Aasg57IEjotWO2H9q3rN8NhPzfKFQ0qtm2LAnm1nxzu+4+AHcrqVpF8bXKdL6lXHw+3IYVIzKvjEZT9gmtL3Q0AyI5MXVdb91C8jWNluRVa7C4i4
hjftalXnRLBXgxh2CmJAmAElcjfxcTjWNgZxu1hWvZF3Y5GyTP3a5yUKR2ois2H0F4wE5Jm0pYvK0WU9a+wbIKz6Iv8pOtoZxHv4UoKUIJNg1VPSrnBVb+R/lq6j/UrvNeblavGnxchzXT+YDjLXyRxbEjEjaJhqkiad1HNj//xyV92UO+X+7ATT9eYDzUH0B4newjKb84kaM7c/0LHu9y+f5RBeWQ2uEy1PYmF5NVue9tS1O4lVbqjtAExMBwBfH4D5/F9FMDgOwYovAWPQLoR790HzAOEWIAwIPQ8Fi5aMjlm4Rijn1XqxeuQA7srVRRgxLFuxenOnl2GaytmDTRGm5JQysmiRtvRgWAgZ9GDf50qQGGiPHiDDsk7WAwZl24jI88eG1SDrrR5ZNMMuCRdrJjFDttChZRnHasemQQzt7a+cPALJi+RNq8r+7Ktd5r0GtTWmr+g+Cco1FstJOrpXZwnsNHVq9pgFq+SLP/dWsRABNB9m3viHSI4v4fCgENWzBFqJZPnW2DswWf7IIpg0mZmPImYwKTEfVRdtANQYDs1tRLQ/YemlJMzvK3V0FLJQTTm1E5M288E9MIGUbcls2bZUV62yIAPzsxd4kROHPeEyD7jEYwdxmU0Zcprc+MMm8sf7WS7YDqlMjdT/FPz0ozAQyWJ7IMvO0vQowmLdC6tG8P1q5izFn0lO35I85D6MxdiyLX12Rt78JVz5G4VGj6MwjsMFrzAXJ+6d8Y/pRvrl3XX47yZNIHy3WiZb8goudrKDif8m8JLmGEaPszgWe/neiZ6AT2M3MA1/HAYTn+MqMsZi+6kn14kd/p8oX4kE7dHaj9/FTqJPRDiVnu5EiuJbAC1jGUxrAQ2adYHWFJ44UmAJTsQm263HioHjlm6zqLz0/KBS4/6HSJPjIk2k6SwDBmWKgFuONCHq8sAQUHLsZIQgQ07oovkOvW0J93D7rq0FX7LfG9kFIZ9vFCcl4OP6M+CJk4vTdjDd715uDWwa5+mJSZprYA93hL1j1elEzFZv2s1OhOKOeS6SAvdwRrIV81y1JUgqWtrhuj7XjJkelCTkYrYl2DtKEinIWpehjpgGtkxGbYiSfzUjJjNsQmyGLQwZZVleo/MzmiqRYU+DfmaWrzmU0tr8dSSQz4/B/Q7lYXeDAYPNY/CgzCkf/ft/Qm2DIMxNEQoIsrAUL4j44GRDSvkohZiNicaPAwwMTN5O+q+p8+oAw2YmxbZlE8Cs5ox21cn72+i78M8NHoBDMyKJaDq6QcQGF5LLx8Ilo0XnGzBIwewxbY3hYxqMQsIsCigiDU7lVG/yKxfR4BY6fEsN+Tt2ZGSC7c4tRFW/7r/ELqzCk88H0tOmIbUnyxXLdC7xLBfvk76yTGfBEaJNLtMBJE8aiSbXZKsLP1R14n5de5uHdoQoZYVNviUctFW/EMOHdtvOKbw8tGObGLZ5KRSui3luYGNm3umajZkf/v3UyDbK1YFKkwk8MFDJpSNKmmQNJCUzwdknvlUJahsjjeyBugHHFz4Z82+fnXcR3WZ+i5xgNQmjhTgaIJNcIpkNUBPXhtpFzLm9lZeQPBPIcR5bLS2mz7Sgblm+sQSaDB4iiY8+Gado354agMi7Dqpq1KNsfExN+XHNomX7NtW4NNEOLrRiKkWb7dlV49JEq7rQhhyauwQPxLYaZWlTXeJMBgykmT01liuTfcxN4vcKg0jODJ1106PVWNa7hAw3w0rYSSthrO4232zHNt9VQD4/Boe95gcMdo3BGtkMrseU5CPPntVY8VXxBa3GWjq/3NWK1sLCG9/7WQI/jELh4Nye+8zlMPsSup6o8Tc=</diagram></mxfile>
2202.07028/main_diagram/main_diagram.pdf ADDED
Binary file (42.3 kB). View file
 
2202.07028/paper_text/intro_method.md ADDED
@@ -0,0 +1,96 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ As autonomous agents (*e.g.*, robots) become more integrated into our daily life, it is increasingly important to develop autonomous agents that can understand natural language commands and carry out the corresponding tasks. To facilitate such a goal, various benchmarks have been proposed in the realm of robot instruction following such as vision-and-language navigation (VLN) [1, 5, 7, 12, 19, 41, 42, 55, 64], together with a number of novel algorithms that consistently push forward the state of the art [30,31,50,54]. Specifically, to succeed in VLN, an agent must comprehend the language instruction, ground it into the partially-
4
+
5
+ Goal: "Put a hot potato on the counter to the right of the sink"
6
+
7
+ <span id="page-0-0"></span>![](_page_0_Figure_10.jpeg)
8
+
9
+ Figure 1. **Illustration of our M-TRACK approach**. We show an ALFRED task [46], which consists of an overall goal (text on the top) and six subtasks (text below each image). The blue/red text boxes within each image are our extracted navigation/interaction milestones from the subtask instructions. An agent needs to reach the milestone of the current subtask (*e.g.*, reaching proximity to the target object for navigation milestones, or having interacted with the target objects for interaction milestones; green masks for target objects) before it can proceed to the next subtask.
10
+
11
+ observable environment with only visual perception, and plan and perform navigation and interaction actions in the environment to complete the task.
12
+
13
+ One critical challenge in VLN arises when the task horizon becomes substantially longer [46]. That is, a task is so complex that it essentially consists of multiple "subtasks" that need to be completed *sequentially* to fulfill the whole task. For example, in Figure 1 the task "put a hot potato on the counter to the right of the sink" can be decomposed into six subtasks. Moreover, the subtask "heat the potato" must be carried out before the subtask "put the potato on the counter"; otherwise, the final task is doomed to fail no matter how accurate the subsequent planning is. Such a sequential dependency requires the agent to closely monitor
14
+
15
+ <span id="page-1-0"></span>its progress and ensure it is staying on the right track when carrying out a long-horizon task.
16
+
17
+ At first glance, this challenge may seem trivial if the language instruction is detailed enough (like in [Figure 1\)](#page-0-0), such that it already defines the subtasks and their order. However, as shown in the literature [\[4,](#page-8-5) [20,](#page-8-6) [30,](#page-9-3) [47,](#page-9-8) [54,](#page-9-6) [61\]](#page-10-1) and our experiments, an agent fed with detailed instructions still frequently skips subtasks, or wanders around within a subtask even when it is already completed. In essence, what an agent truly struggles with is the lack of awareness of *where it currently is in the long subtask sequence and how much progress it has made within a subtask*.
18
+
19
+ To address this issue, we propose to equip VLN agents with an *explicit task tracker*, which keeps track of the agent's progress within a subtask and guides it on when to move on to the next. Concretely, we propose the concept of *milestone*, which renders the necessary condition of completing a subtask. Namely, for a subtask to be considered as completed, the milestone must be reached. Take the subtask *"take a cold potato out of the fridge"* in [Figure 1](#page-0-0) as an example. To complete it, the necessary condition is that the agent must see the potato and the fridge, be close enough to them, and perform an interaction action with the potato. We argue that by explicitly extracting such milestones from the instructions and *grounding* them to the environment state, we can systematically determine if the agent should continue working on the current subtask or proceed to the next.
20
+
21
+ To this end, we propose the *milestone-based task tracker* (M-TRACK), which consists of two components: *milestone builder* and *milestone checker*. The milestone builder extracts the milestone (*i.e*., the necessary completion condition) of each subtask from the corresponding language instruction. We model it as a named entity recognition problem and train a BERT-CRF tagger [\[9,](#page-8-7) [48\]](#page-9-9) to accurately extract both the target objects and their action type (*i.e*., navigation or interaction). The milestone checker then tries to *ground* (*i.e*., identify and localize) the extracted target objects in the perceived environment using an object detection model [\[14\]](#page-8-8) and checks if the agent is close enough to them and/or is about to interact with them — to decide if the agent is completing the current subtask and ready to move on. It is worth noting that our M-TRACK only needs to access the language instructions, the visual input to the agent, and the agent's action, not any internal states of the agent. Thus, it is *model-agnostic* and can be easily integrated with any agent model with minimal changes.
22
+
23
+ *How can* M-TRACK *interact with the agent to affect its action (*e.g*., to not skip a subtask)?* We propose two simple yet effective ways. First, at any time step, we feed the agent with only the part of the instructions that corresponds to the current subtask determined by the milestone tracker. This explicitly guides the agent to focus on the current subtask. Second, and more importantly, we apply the milestone checker *proactively* — before the agent executes its predicted action — to reject actions that will lead to subtask failures. For instance, we reject the action of taking a *"sponge"* if the milestone object is *"fork"* [\(Figure 2\)](#page-2-0).
24
+
25
+ We validate M-TRACK on ALFRED [\[46\]](#page-9-7), a recently released large-scale VLN dataset for common household tasks. The tasks in ALFRED are considered long-horizon because on average each task needs 50 actions to complete. In contrast, another popular dataset R2R [\[1\]](#page-8-0) needs only 5. We integrate M-TRACK into two baseline VLN models LSTM [\[46\]](#page-9-7) and VLNBERT [\[18\]](#page-8-9), and demonstrate notable and consistent performance gains. When tested in seen environments, M-TRACK leads to 16%–57% relative improvement in success rate. In more challenging unseen environments, the relative gain increases to 33%–52%. Our ablation studies and qualitative results further verify that the improvement indeed comes from agents able to better follow the sequence of subtasks and stay on the right track.
26
+
27
+ # Method
28
+
29
+ A VLN task is generally defined as follows: given a language instruction I, an agent needs to infer and perform a sequence of actions $\{a_0, a_1, \cdots, a_t, \cdots\}$ in the environment E to complete the task. In datasets like ALFRED [46], the instruction I is composed of a high-level instruction $I_H$ and a list of low-level instructions $I_L$ , as exemplified in
30
+
31
+ Figure 1. A VLN task can thus be represented by a tuple (I, E, G), in which G is the goal test of the task.
32
+
33
+ For an agent to perform the task, it will be placed in the environment E and have a certain pose at time step t, from which it can receive a visual input $v_t$ . Based on $v_t$ and the instruction I, the agent then predicts an action $a_t$ , which can either be a navigation one that changes the agent's pose (e.g., MoveAhead) or an interaction one that interacts with the environment (e.g., PickupObject). The agent also needs to predict a binary mask for the target object if it predicts an interaction action. Both types of actions can potentially change the visual input $v_{t+1}$ of the next time step. The agent will stop when it believes the task has been completed. The final state of the environment is then compared with the goal state G to determine task completion.
34
+
35
+ Following ALFRED, we discretize an agent's action space into 5 navigation action (MoveAhead, RotateRight, RotateLeft, LookUp, and LookDown), 7 interaction actions (PickupObject, PutObject, OpenObject, CloseObject, ToggleOnObject, ToggleOffObject, and SliceObject), and 1 stop action (Stop).
36
+
37
+ **Agent model.** Without loss of generality, we define an agent model as $a_t = f(v_t, I_t, h_t)$ , where $h_t$ is the memory from the previous time steps (e.g., the hidden state of an LSTM). $a_t$ is a tuple (action, object mask); the mask is null for stop and navigation actions. $I_t$ is the instruction input at time t, which can be the entire I or a portion of it.
38
+
39
+ For long-horizon VLN tasks, an agent needs to complete multiple subtasks, usually in a specific order, to complete the whole task. More specifically, each low-level instruction in $I_L$ can be seen as a subtask. The agent then has to decide, often implicitly, which subtask it is doing at each time step and when to move on to the next subtask, which itself is
40
+
41
+ <span id="page-3-4"></span>a challenging problem for the agent. To address that, we introduce an auxiliary module, *milestone-based task tracker* (M-TRACK), to explicitly and interactively guide the agent to make such decisions (see [Figure 2](#page-2-0) for an overview).
42
+
43
+ Next, we first introduce the design of M-TRACK ([§4.1\)](#page-3-2), followed by how to integrate it with agent models ([§4.2\)](#page-4-1). We then introduce two base agent models ([§4.3\)](#page-4-0) and how to train the base models with reinforcement learning ([§4.4\)](#page-5-1).
44
+
45
+ The core functionality of M-TRACK is to decide when an agent should move on to the next subtask. On the surface, this may be done simply by training a (binary) classifier, which takes all the language/visual signals as input. Doing so, however, does not exploit the fact that the (sub)tasks are compositional, composed of entities (*e.g*., objects) that are identifiable and localizable both in the environment and in the instruction. Leveraging the compositional nature of the (sub)tasks has multiple advantages. First, it reduces the input space for making the decision from the space of language/visual signals to that of discrete entities. Second, it makes the decision rule systematic and explainable: we can make the decision by directly comparing the entities detected in both modalities. Both of them could improve the generalizability of the decision function.
46
+
47
+ We design M-TRACK to explicitly consider the compositional nature of (sub)tasks. Specifically, we introduce the concept of *milestone*, which is the *necessary condition for completing a subtask*, *i.e*., an agent must reach the milestone in order for the corresponding subtask to be considered as completed. For example, if the subtask is *"move to the mug"*, then the agent must navigate to the mug, see it, and be close enough to it. If the subtask is *"pick up the mug"*, then the agent must see the mug and be close enough to it so that it can then interact with it. These two examples render the key ingredients of a milestone, which are its target entities and its type (navigation or interaction). Meanwhile, we say an agent has reached a milestone only when it can perceive (see) the target entities, is already close to them, and is doing the right type of action with them.
48
+
49
+ To this end, we represent a milestone by a tuple (type, target), and decompose our M-TRACK into two components: 1) a *milestone builder* which constructs milestones from the low-level instructions IL, and 2) a *milestone checker* that checks if a milestone has been reached by an agent.
50
+
51
+ We generate the milestone of a subtask according to its corresponding low-level instruction in $I_L$ using named entity recognition [\[9\]](#page-8-7). For example, given an instruction *"Turn to the left and face the toilet"*, the milestone builder should output the tag (navigation, toilet). For the instruction *"Pick the*
52
+
53
+ <span id="page-3-3"></span>
54
+
55
+ | Target Type | Val Seen | Val Unseen |
56
+ |-------------|----------|------------|
57
+ | Navigation | 90.16 | 90.62 |
58
+ | Interaction | 96.85 | 97.17 |
59
+
60
+ Table 1. F1 score of milestone builder on ALFRED validation.
61
+
62
+ *soap up from the back of the toilet"*, the milestone builder should output (interaction, soap).
63
+
64
+ For an interaction milestone, it should contain the target objects that the agent is going to newly interact with in the current subtask. For instance, if the subtask is *"Put down the potato on the counter"* [\(Figure 1\)](#page-0-0), the agent is supposed to already be holding a potato (from previous subtasks). Thus, *"potato"* is not a milestone target for the current subtask but *"counter"* should be. For a subtask that has multiple objects to be interacted with, the builder is designed to tag all of them. For instance, in the subtask *"Grab a potato from the fridge"* [\(Figure 1\)](#page-0-0), the agent needs to 1) open the fridge, 2) pick up the potato, and 3) close the fridge. In this case, the builder tags both the potato and the fridge as the targets for an interaction milestone. In cases that the builder does not extract any target from the current subtask, it will merge the current subtask with the next one and use the milestone extracted from the next subtask.
65
+
66
+ Without loss of generality, we adopt a BERT-CRF model [\[9,](#page-8-7) [48\]](#page-9-9) for the milestone builder, and train it with data derived from the ALFRED training data. Training data is prepared using the metadata from the ALFRED simulator. More details are in the supplementary materials. We show that our milestone builder reaches a fairly high F1 score (see [Table 1\)](#page-3-3). More analysis will be discussed in [§5.3.2.](#page-6-0)
67
+
68
+ We introduce a milestone checker that determines if an agent has reached a milestone (see [Figure 2](#page-2-0)). Specifically, we design it to be *explicit*: we directly estimate the state of the agent/environment from the visual input and compare it with the milestone. A navigation milestone is reached if the target object is detected in the visual input and located within a reachable distance to the agent (1.5 meters in ALFRED). An interaction milestone is reached with an extra condition: the agent has to interact with the target.
69
+
70
+ State estimation. We train an object detector using data from the ALFRED simulator that can not only localize and identify all 116 ALFRED object classes but also estimate their reachability (*i.e*., within 1.5m or not). We build upon the Mask R-CNN model [\[14\]](#page-8-8) and introduce an additional binary classification head for the reachability of each detected object. The ground-truth labels for reachability are obtained from the ALFRED simulator for training.
71
+
72
+ Milestone checking. As mentioned earlier, to reach either a navigation or an interaction milestone, the target objects <span id="page-4-5"></span>must be detected and located within a reachable distance. To check this, we compare the target object names, which are extracted from the language instruction (e.g., "kitchen island"), to the class labels (e.g., countertop) of the objects detected by Mask R-CNN, essentially a symbol grounding task. We only consider the detected objects that are estimated to be reachable. We apply an off-the-shelf word similarity tool based on Wordnet [10] with WUP [56] similarity from NLTK [29] to match the target names with the object labels. The reachable object whose label has the highest similarity (above a threshold) to a milestone target is considered as the grounded instance of that target; the target is then marked as a success.
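+
+ A minimal sketch of this grounding step with NLTK's WordNet interface follows; the 0.8 threshold is an illustrative placeholder, since the paper only states that a similarity threshold is used.
+
+ ```python
+ # Ground a milestone target name to reachable detected labels via WordNet WUP
+ # similarity (requires `nltk.download("wordnet")` beforehand).
+ from nltk.corpus import wordnet as wn
+
+ def wup(name_a, name_b):
+     """Maximum WUP similarity over noun synsets of the two names."""
+     best = 0.0
+     for sa in wn.synsets(name_a.replace(" ", "_"), pos=wn.NOUN):
+         for sb in wn.synsets(name_b.replace(" ", "_"), pos=wn.NOUN):
+             sim = sa.wup_similarity(sb)
+             if sim is not None and sim > best:
+                 best = sim
+     return best
+
+ def ground_target(target_name, reachable_labels, threshold=0.8):
+     """Return the reachable detected label most similar to the milestone
+     target, or None if no label clears the (illustrative) threshold."""
+     if not reachable_labels:
+         return None
+     score, label = max((wup(target_name, lab), lab) for lab in reachable_labels)
+     return label if score >= threshold else None
+ ```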
73
+
74
+ For interaction milestones, we need to further check if the agent is interacting/has interacted with the target. As defined in §3, an interaction action is a tuple of (action, object mask); the object mask is simply a binary map over the input image. To determine if the agent's action is for the milestone target, we calculate the intersection-over-union (IoU) score between the object mask and the milestone target (provided by Mask R-CNN): if the IoU score is over a certain threshold (0.5), it is considered matched with the target object of the milestone. For an interaction milestone with multiple targets, the agent has to perform multiple interaction actions to interact with all of them. We keep a checklist of all the milestone targets. The milestone is reached after all the targets have been interacted with.
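+
+ The mask-matching test reduces to a simple intersection-over-union computation over binary masks; a sketch with the 0.5 threshold mentioned above is given below.
+
+ ```python
+ import numpy as np
+
+ def mask_iou(pred_mask, target_mask):
+     """Intersection-over-union of two binary masks of the same shape."""
+     pred, target = pred_mask.astype(bool), target_mask.astype(bool)
+     union = np.logical_or(pred, target).sum()
+     if union == 0:
+         return 0.0
+     return float(np.logical_and(pred, target).sum() / union)
+
+ def matches_milestone_target(pred_mask, target_mask, threshold=0.5):
+     """True if the predicted interaction mask overlaps the grounded milestone
+     target enough (IoU above the 0.5 threshold used in the paper)."""
+     return mask_iou(pred_mask, target_mask) > threshold
+ ```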
75
+
76
+ The discussion of M-TRACK so far is detached from the agent. The next question is, how can M-TRACK affect an agent's actions, e.g., to prevent it from skipping a subtask? We propose two simple yet effective ways. First, at any time step, we feed the agent with only the instruction of the current subtask determined by M-TRACK. This explicitly guides the agent to focus on the current subtask. Specifically in ALFRED, we feed the concatenation of $I_H$ and the one sentence in $I_L$ for the current subtask as opposed to the entire $I_L$ . We do so starting from the beginning of a task, when the first sentence of $I_L$ is guaranteed to be the first subtask. We then proceed to the next sentence only after the current subtask is marked as completed by M-TRACK. The use of M-TRACK frees the agent from solely relying on its internal mechanism like attention and hidden states to decide subtask switching.
77
+
78
+ Second, we apply the milestone checker *proactively* for interaction milestones — before the agent executes its predicted action. This can prevent an agent from interacting with a wrong object, as opposed to trying to correct the mistake after it has happened. For example, if the milestone is (interaction, fork) but the agent's predicted binary mask for interaction does not overlap with the grounded instance of fork in the image, M-TRACK will reject the agent's action
79
+
80
+ <span id="page-4-3"></span>![](_page_4_Figure_5.jpeg)
81
+
82
+ Figure 3. Architecture of VLN BERT with M-TRACK.
83
+
84
+ by asking it to select another (action, object mask) tuple (Figure 2). This saves the agent from having to generate an action sequence for recovery, for example, to put the incorrectly picked-up object back down.
85
+
86
+ In our implementation, if the first interaction action is rejected, we move on to the next action in the agent's top N list (e.g., from a softmax classifier). We iterate over the N actions until we find an interaction action whose mask matches with the milestone target or we find a navigation action instead (e.g., when the right object is not in sight). If none of those happens, the agent will take its top ranked navigation action. We set N to be 5 in the experiments.
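+
+ This proactive check can be summarized by the sketch below; `ranked_actions` is a hypothetical list of the agent's (action, mask, is_interaction) candidates ordered by confidence, and `matches_target` is the IoU test from above.
+
+ ```python
+ def select_action(ranked_actions, target_mask, matches_target, n=5):
+     """Sketch of proactive milestone checking over the agent's top-n candidates.
+     Interaction actions whose mask misses the grounded milestone target are
+     rejected; navigation actions are always accepted."""
+     for action, mask, is_interaction in ranked_actions[:n]:
+         if not is_interaction:
+             return action, None                    # navigation: accept as-is
+         if target_mask is not None and matches_target(mask, target_mask):
+             return action, mask                    # interaction hits the target
+     # Fall back to the agent's top-ranked navigation action, if any.
+     for action, mask, is_interaction in ranked_actions:
+         if not is_interaction:
+             return action, None
+     return ranked_actions[0][0], ranked_actions[0][1]
+ ```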
87
+
88
+ Recently, Transformer-based models are becoming increasingly popular for VLN tasks [40,40,49,61]. Following this line of work, we build upon the VLN BERT [18] model, which introduces the concept of a recurrent state vector into the Transformer architecture. Since VLN BERT was designed for the R2R dataset, which contains mostly short-horizon navigation tasks, we adapt it for ALFRED with a series of modifications. Input-wise, we utilize a pre-trained vision encoder to extract a scene feature from 8 panoramic views and also object features from each view as our visual input. For action prediction, unlike VLN BERT that only deals with navigation actions, we employ a pointer network [52] to choose between navigation, interaction, and stop actions: if the pointer network chooses a scene feature, the agent outputs the navigation actions needed to navigate to
89
+
90
+ <span id="page-4-2"></span><sup>&</sup>lt;sup>1</sup>For simplicity, we use the same Mask R-CNN model that is used in our milestone checker, but it is not necessary.
91
+
92
+ <span id="page-5-4"></span>that scene; if it chooses an object feature, the agent outputs the mask for that object and additionally uses an MLP to predict the interaction action type; if it chooses a stop feature (added to the list of visual features as an all-zero vector), the agent outputs Stop. The MLP takes the concatenation of the chosen object feature and the updated state embedding as input. The architecture as well as its integration with M-TRACK is illustrated in Figure 3, and more implementation details are provided in the supplementary materials.
93
+
94
+ To further show the model-agnostic nature of our M-TRACK, we use the LSTM baseline introduced in ALFRED [46], and extend the architecture with the same pre-trained vision encoder used in VLN BERT. Furthermore, to leverage the power of the pre-trained vision encoder, we follow [40,47] and ask our agent to select an object from the detected objects instead of directly predicting a binary mask. The corresponding pixel mask is retrieved from the selected object. Refer to supplementary materials for details.
95
+
96
+ As shown in the ALFRED paper [46], base models like the LSTM perform rather poorly on ALFRED when simply trained with behavior cloning. Prior studies on other VLN tasks have demonstrated the importance of reinforcement learning (RL) [18,50,63], but its effectiveness has not been validated on ALFRED. We train the models with a combination of behavior cloning (using the cross-entropy loss between the predicted action sequence and the ground truth), object feature selection loss (for interaction actions), and RL. We apply the A2C algorithm [37] which, at time t, samples an action $a_t$ according to the agent's predicted log probability distribution $\log(p_t^a)$, and measures the advantage for that action $adv_t$ with a critic network and a reward. We consider four different types of reward: 1) the straight-line distance between the agent and the current navigation/interaction target, 2) the interaction action matching with the ground-truth interaction action, which we can compute from the environment state, 3) the visibility of the target, i.e., whether the target is reachable (within 1.5m in ALFRED) and is in sight of the agent, and 4) the final task success. Following VLNBERT [18], we combine the behavior cloning loss, the cross-entropy loss for object selection, and A2C.
2205.12247/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2205.12247/paper_text/intro_method.md ADDED
@@ -0,0 +1,76 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Pre-trained Language Models (PLMs) [@peters-etal-2018-deep; @radford2019language; @devlin-etal-2019-bert; @brown2020language] are increasingly used in various Natural Language Processing (NLP) applications. Pre-trained on large-scale text corpora, they are shown to store relational knowledge [@petroni-etal-2019-language; @jiang-etal-2020-know; @kassner-etal-2021-multilingual], e.g., commonsense knowledge [@zhou2020evaluating; @lin-etal-2020-birds; @nguyen2021advanced; @zhou-etal-2021-rica]. They have been used to construct knowledge bases while requiring limited human effort for rule creation and validation [@bosselut-etal-2019-comet; @zhou-etal-2022-prix].
4
+
5
+ <figure id="fig-intro" data-latex-placement="t">
6
+ <embed src="gdprompt_intro.pdf" />
7
+ <figcaption>Examples of prompts and gold answers in <span class="smallcaps">GeoMLama</span>. For each concept (e.g., color of wedding dress), there are multiple masked multilingual prompts (<span style="color: orange">English</span>, <span style="color: orange">Hindi</span>, <span style="color: orange">Swahili</span>, etc.) with specified country information <span style="background-color: .2, 1, 1">[X]</span> querying geo-diverse knowledge about the concept. We test multilingual PLMs by examining the extent to which masked word predictions align with the gold answers in <span style="background-color: 0, 1, 0">[MASK]</span> columns.</figcaption>
8
+ </figure>
9
+
10
+ However, *do PLMs store geo-diverse commonsense knowledge?* Geo-diverse commonsense [@yin-etal-2021-broaden] is a collection of commonsense locally shared by people from certain regions but that may not apply in other regions due to cultural and geographic differences. For instance, the color of the bridal outfit in an American wedding is white, while it is normally red in traditional Chinese and Indian weddings. PLMs which are unaware of geo-diverse knowledge may show a disparity in performance on test data associated with different regions. This may disadvantage users in certain regions and further amplify bias in AI applications, for example by eventually leading to the construction of Western-centric knowledge bases.
11
+
12
+ In this paper, we concentrate on evaluating *multilingual* PLMs [@devlin-etal-2019-bert; @conneau2019cross; @conneau-etal-2020-unsupervised]. Studying geo-diversity naturally involves multilinguality. People in different regions may speak different languages, and it is natural to assume that geo-specific knowledge is better represented in its native language. Moreover, pre-trained on a collection of multilingual corpora, multilingual PLMs accumulate the knowledge from various languages. Therefore, we posit that knowledge in multilingual PLMs is more diverse than that in models trained on a single language.
13
+
14
+ Centered around multilingual PLMs, we follow the original knowledge probing task LAnguage Model Analysis ([Lama]{.smallcaps}) [@petroni-etal-2019-language] and introduce a new *geo-diverse* probing benchmark [GeoMLama]{.smallcaps}. As shown in Figure [1](#fig-intro){reference-type="ref" reference="fig-intro"}, given a masked geo-diverse prompt with a particular country name [\[X\]]{style="background-color: .2, 1, 1"}, such as "*In traditional* [\[X\]]{style="background-color: .2, 1, 1"} *weddings, the color of wedding dress is usually* [\[MASK\]]{style="background-color: 0, 1, 0"}.", and a corresponding candidate answer list, {"*red*", "*white*", "*black*", "*blue*", \...}, multilingual PLMs are required to predict the masked word [\[MASK\]]{style="background-color: 0, 1, 0"} from the candidate list.
15
+
16
+ The characteristics of [GeoMLama]{.smallcaps} are summarized as follows. 1) *Diverse answers across countries*: Each prompt is designed around a geo-diverse concept (e.g., the color of the traditional wedding dress in Figure [1](#fig-intro){reference-type="ref" reference="fig-intro"}), and the gold answers for the masked word differ across countries. 2) *Broad coverage of geo-diverse concepts*: [GeoMLama]{.smallcaps} encompasses comprehensive geo-diverse topics including habits and personal choices, cultures and customs, policies and regulations, and geography. 3) *Coverage of multiple countries and languages*: [GeoMLama]{.smallcaps} involves knowledge about the United States, China, India, Iran, and Kenya, and is constructed in the native languages of the five countries: English, Chinese, Hindi, Persian, and Swahili. Overall, there are 3,125 prompts in our benchmark.
17
+
18
+ We perform in-depth probing analysis on 11 multilingual PLMs, including mBERT [@devlin-etal-2019-bert], XLM [@conneau2019cross], XLM-R [@conneau-etal-2020-unsupervised], mT5 [@xue-etal-2021-mt5], and XGLM [@lin2021few]. In general, we observe that multilingual PLMs significantly outperform random guess, suggesting that multilingual PLMs are capable of storing geo-diverse commonsense to some extent. We then conduct fine-grained investigation across three dimensions.
19
+
20
+ We first study the correlation between model performance and *model size*. Contrary to our intuition, we notice that the largest models do not necessarily have the best performance on our benchmark. We further study *the best language to probe the knowledge about a particular country*. Surprisingly, we find that the best language is not the native language of the given country (e.g., English is not the best language to probe knowledge about the US). We also explore *the knowledge that can be most accurately probed by a particular language*. Similarly, we find that the most accurately probed knowledge is not the one about the indigenous country of the language (e.g., the country for which Chinese prompts provide the most accurate predictions is not always China). Lastly, we find evidence of reporting bias that might explain such observations.
21
+
22
+ # Method
23
+
24
+ @petroni-etal-2019-language introduce the LAnguage Model Analysis (LAMA) setup to probe knowledge stored in pre-trained language models using masked templates. Without any additional fine-tuning, given a masked prompt, models are required to recover the masked tokens with the entities that have the highest probability in the prompt context. Following the LAMA probe, on [GeoMLama]{.smallcaps}, we study whether models are capable of selecting the most appropriate answers from the answer candidate list according to the given geo-diverse prompts.
25
+
26
+ @kassner-etal-2021-multilingual follow LAMA probe to investigate entity knowledge in multilingual BERT only. In this work, we probe a diverse set of language models on *geo-diverse commonsense knowledge* by scoring answer candidates and calibrating the score of each candidate.
27
+
28
+ We score answer candidates based on the log likelihood of generating them given the prompt. Different model families have their own inference methods for obtaining these scores. In the following, we introduce the probing method for masked language models. Details of the probing methods for autoregressive and encoder-decoder language models are shown in Appendix [9](#eval-main){reference-type="ref" reference="eval-main"}. []{#p_entity label="p_entity"}
29
+
30
+ Given an answer candidate $e$ (e.g., "*chopsticks*") that is tokenized into subtokens $e_1, e_2, ..., e_L$ (e.g., "*chop*", "*stic*", "*ks*") such that $e_i \in V$, where $V$ is the vocabulary and $t$ is the prompt (e.g., "*In China*, *people usually eat food with* \[$\mathrm{MASK}_1$\]\...\[$\mathrm{MASK}_L$\]."), we assign a score $l_e$ based on the log probability of recovering the answer candidate $e$ in the masked prompt. Formally, $l_e$ is defined as
31
+ $$l_e = \frac{1}{L}\sum_{i=1}^{L} \log p\big([\mathrm{MASK}_i]=e_i \mid [\mathrm{MASK}_{<i}]=e_{<i},\, t\big). \tag{1}$$
36
+
37
+ According to Eq. [\[eq:prob\]](#eq:prob){reference-type="eqref" reference="eq:prob"}, we perform $L$ forward passes, each of which obtains the conditional probability of generating one subtoken. To illustrate, the $i^{th}$ forward pass computes $p([\mathrm{MASK}_i] = e_i \mid \text{``\textit{In China, people usually eat food with} } e_1 \, e_2 \ldots e_{i-1} \, [\mathrm{MASK}_i] \ldots [\mathrm{MASK}_L] \text{''})$.
38
+
39
+ We further normalize the sum of log likelihoods by the number of subtokens $L$ to reduce the effect of candidate length. The other model families discussed in Appendix [9](#eval-main){reference-type="ref" reference="eval-main"} adopt the same normalization strategy.
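+
+ As a rough illustration of this probing procedure (a minimal sketch, not the authors' released code), the per-candidate score $l_e$ could be computed with a HuggingFace masked LM as follows; the checkpoint name, prompt, and candidate list are illustrative assumptions:
+
+ ```
+ # Hypothetical sketch of Eq. (1): mask the candidate's L subtokens, then reveal
+ # them left to right, accumulating the log-probability of each revealed subtoken.
+ import torch
+ from transformers import AutoTokenizer, AutoModelForMaskedLM
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
+ model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")
+ model.eval()
+
+ def score_candidate(prompt_template: str, candidate: str) -> float:
+     """Return l_e: the length-normalised log-likelihood of `candidate` in the prompt."""
+     cand_ids = tokenizer(candidate, add_special_tokens=False)["input_ids"]
+     L = len(cand_ids)
+     # Replace the single placeholder with L mask tokens.
+     text = prompt_template.replace("[MASK]", " ".join([tokenizer.mask_token] * L))
+     enc = tokenizer(text, return_tensors="pt")
+     mask_positions = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
+     total_logprob = 0.0
+     for i, cand_id in enumerate(cand_ids):            # one forward pass per subtoken
+         with torch.no_grad():
+             logits = model(**enc).logits
+         log_probs = torch.log_softmax(logits[0, mask_positions[i]], dim=-1)
+         total_logprob += log_probs[cand_id].item()
+         enc["input_ids"][0, mask_positions[i]] = cand_id   # unmask e_i for the next pass
+     return total_logprob / L                          # normalise by the number of subtokens
+
+ prompt = "In China, people usually eat food with [MASK]."   # illustrative prompt
+ candidates = ["chopsticks", "hands", "spoons", "knives"]
+ scores = {c: score_candidate(prompt, c) for c in candidates}
+ ```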
40
+
41
+ The way to score answer candidates $e \in \mathcal{E}$ (e.g., "*chopsticks*" $\in$ {"*chopsticks*", "*hands*", "*spoons*", "*knives*"}) given the prompt $t$ for a country $C$ (e.g., "*In China, people usually eat food with* \[MASK\].") is illustrated in §[\[p_entity\]](#p_entity){reference-type="ref" reference="p_entity"}. However, this scoring mechanism is likely to be biased towards statistical correlations learned during pre-training [@zhao2021calibrate] whilst ignoring the country-specific information present in the prompt. For instance, the model might choose "*knives*" over "*chopsticks*" because "*knives*" may occur more often than "*chopsticks*" in the pre-training corpora. Hence, we calibrate models with the prior probability of answer predictions in the absence of any country information. The final score given to each answer in the answer candidate set is given by
42
+ $$s_e = l_e - l'_e, \tag{2}$$
43
+ where $l'_e$ is obtained using the same approach as $l_e$, but the input prompt for calculating $l'_e$ is the one without country information (e.g., "*People usually eat food with* \[MASK\]." without "*In China,*").
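+
+ For concreteness, the calibration in Eq. (2) then amounts to subtracting the score obtained from a country-agnostic prompt; below is a minimal sketch reusing the hypothetical `score_candidate` helper from the sketch above (the prompts are illustrative, not from the benchmark release):
+
+ ```
+ # Hypothetical sketch of Eq. (2): remove the prior preference l'_e by scoring the
+ # same candidates under a prompt with the country information stripped out.
+ neutral_prompt = "People usually eat food with [MASK]."   # no "In China,"
+ calibrated = {
+     c: score_candidate(prompt, c) - score_candidate(neutral_prompt, c)
+     for c in candidates
+ }
+ prediction = max(calibrated, key=calibrated.get)           # highest s_e wins
+ ```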
45
+
46
+ <figure id="size-summ" data-latex-placement="htbp">
47
+ <figure>
48
+ <embed src="xlm-r-size-summ.pdf" />
49
+ <figcaption>mBERT, XLM, XLM-R family.</figcaption>
50
+ </figure>
51
+ <figure>
52
+ <embed src="mt5-size-summ.pdf" />
53
+ <figcaption>mT5 family.</figcaption>
54
+ </figure>
55
+ <figure>
56
+ <embed src="xglm-size-summ.pdf" />
57
+ <figcaption>XGLM family.</figcaption>
58
+ </figure>
59
+ <figcaption>Multilingual PLMs’ performance on probing knowledge about the studied countries averaged over all languages. Complete results are shown in Appendix <a href="#results-w" data-reference-type="ref" data-reference="results-w">11</a>.</figcaption>
60
+ </figure>
61
+
62
+ <figure id="lang-summ" data-latex-placement="htbp">
63
+ <figure>
64
+ <embed src="xlm-r-lang-summ.pdf" />
65
+ <figcaption>mBERT, XLM, XLM-R family.</figcaption>
66
+ </figure>
67
+ <figure>
68
+ <embed src="mt5-lang-summ.pdf" />
69
+ <figcaption>mT5 family.</figcaption>
70
+ </figure>
71
+ <figure id="4c">
72
+ <embed src="xglm-lang-summ.pdf" />
73
+ <figcaption>XGLM family.</figcaption>
74
+ </figure>
75
+ <figcaption>Multilingual PLMs’ performance averaged over countries when using multilingual prompts. “<em>en</em>”, “<em>zh</em>”, “<em>hi</em>”, “<em>fa</em>”, and “<em>sw</em>” denote English, Chinese, Hindi, Persian, and Swahili. Complete results are shown in Appendix <a href="#results-w" data-reference-type="ref" data-reference="results-w">11</a>.</figcaption>
76
+ </figure>
2205.14345/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-01-23T11:25:55.869Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" etag="UHpbdmR8DDO-O9LEbR8C" version="16.2.4" type="device"><diagram id="g_g1V-RHdnXoKsIf5rM6" name="Page-1">7V1bc9u4Ff41nmkf6sEdxGOcy3bayUyn2e5mHxWJttWVRY+sJE5/fcmIpAkcmoYYEgcRlJeYoARS+D4cHJwbLvjru8dfdov72/fFKt9cMLJ6vOBvLhgTWpDyv6rl26GFCaoPLTe79erQRp8aPqz/l9eN9RdvPq9X+YP1wX1RbPbre7txWWy3+XJvtS12u+Kr/bHrYmM/9X5xk4OGD8vFBrb+vl7tb+tWqszTjb/n65vb+tEZq3/f3aL5cP1LHm4Xq+Jrp4m/veCvd0WxP/x19/g631Sj14zL4Xvvnrnbvtgu3+59vnDzcfnPXz68+v0x//Jv9fbNr/94/3b5Nyrr3/Flsflc/+S/LP5av/H+WzMM+/yxfMjV7f5uUzbQ8s+H/a74M39dbIpd2bIttuUnr67Xm43TtNisb7bl5bJ8zbxsv/qS7/brcoBf1Tfu1qtV9Zirr7frff7hfrGsnvm15FPZtis+b1d59QtI1X2x3dcUkdV1/eplh/njs6NC27EuWZoXd/l+9638SP0FKWt4aoJSXV9/fUJb1U23HZx53bao+XXT9vwEQflHjcIxiDRP6yLyKSFEqJYiNkgohGSZECRMCWeWZOiQSAAJBYCUHZXrxHPD1sFp8XB/WDyu14/VQFoYXTC+WuTZ9RIAWt5Ryyz/dN3eadYIOdPcIMQBQkEgsh4gsvmAUAAIfvpAGGrjwHpkVGAcNMCBnT4OlJjogMgAEGImIOrVAxkD7SwOPOPYEEClVp40BK44wodAQy1WJyiOIgAC6q4qASBadTQeIBgAIjt9IFzJJIzExoEDHMzp4wAkUwRACADEnHu462yZL3uR+JRJ8X2ccURTBEjAzXRrtTzlOSF4dEjA3TRNwK7hbiEUxd7FabidpnNNiTj3EBFgAHfSdEabRjQLBOHRIQE31HQuK18cs4FSGRsITc/dVXouy1JM08FdpSNAAu6pT9zARKUxsYEA99M0AcuGcuSSZuhAwA01ndHWF4tcynh0QMANNZ3LtBGHWJLOVrrFBA+DHr/0XGa+ODBw924RYAA30RCB7epVFR5WXi03i4eH9dIeeDumohyK3beP9a3vF39077x5tK6+NVeP6/3Hzt9/NB2Ufz99pbpovjHFmnP4qfkKxLU5gDKyX+xu8v3QSB6c+xD6DrSyB9qmbZdvFvv1F/tF+vCun/CvYl2+YmcjROmlplKzTEsmmWAW0UoKXbJMME2rQEMtmy1784CH4vNumdd9PnEKPMZY/ahm7WocB6T7DjxzfsVhDMFDvhO3HbAf4TK0Q0zFZTKGyzRSLh/AHuSy9Ga9xmW9NpfGtKznNh+ZJpeEtKxX4zhPiVCXUrWkt2cWZ/xScDzOQ7vPWX6P47zy5nyGyfmMdkld/rPpKOy7bBznM2Gz2nmIweU8tLBNJucv5SjWH7526rw3mLw33FYuuMN7SzFpTa5HE586/VhPEUR030ErEZb5Bpo1z8wfz3zty3xNLjCZ78bnKGc76K29ux0ZVmpPT/+ywGyGpuHJ2JyYvu7P5GcMFIH0derGv46lMuwJmctsNi4npoc30bAeXGaoXCZubNJoLoOejKO9zM5e6JY46xUhGMxxpTEdUqmFon22vqOVjmxIo5ZU2Rp1YOJDN9CZ+OOJ720sP8SzohGfZb0GjIb4ltlwNPEpVVY/DbUa5jPZfQfXIj8786HzbTLmp6V8H8F6iSvu2aC419OIezMo7hmuuD97O6ciPfMmPapfNDNDhmwpRdezQ0d6iLQeEvSKdH1U4Tk/o1c0QRXHn/e4nlEqBoW9mUTYUzJoLpcCV9rP5xsdy/yfl/fcm/eo3lFK5RDvJZmI92yQ9xKX99H5R39i5d6f96je0XKzOch7OhHv+SDvFSbvWVPvJx7e/9SajncMWIbqHaXlBnyI+Wwi5rsBBvZTNC7zz57U8KxH9qRmg6znE7FeDrI+w2U9m431qVlzvGPADplkeKw3g6wXE7FeDbLe4LJ+Rl9tYrLen/W4flpGBlkvJ2K9HmK9Irisn89Rm5isb3KBPViP7KQdjE6Q00QnUDIYnqBQwxMYOTtpw7Me10nLBp20chonLSWDXlqF6qVlxMdLe2q1Cih3q/M2jMLKDWYEeg5/Wz+Uo70CcPx4OfdNfl31ME0xdzEVKAKUTIZVz2jQau6M+Hi1Tq2ABJgdXKDPDuhlebfY35bdpzs9uEKfHtTHCXBCJSXAzEA/5qCtwtiB4D/bL4mvHG21E8SpAe2lCdTLLNWY6CYINOElcA4INU4VKGbQgegpq3z6QDDmnsiCDwQ0dCRwBAIQTQK7UmBbFjCtIxAYl9EBATffCRyBQI1zaJdE3+dRuN9O4AwEIJoiAAJuuOfUX2MxfQDZhI9EU70zsTMQmKTRIQG32imcgQCWCYV9lF27cUisAD9YJyJAAu6uU6j9DqRTBEj0FFlOYDdBtVtqmaAj0VNqOYG644y4BZfxkTinIE8T0tEury+GdNT2VqyQDkaUnYNsMZIJu9iEU7fKN6KDEdLtR7qFmbtvIMJWF2pNeecopoCUR43dKxeXinAkk5pKbtoTwtooiiojXlcHo1SU5E1QxfGkz6yO7CgmLrvv0EazBGP9uTLzVKwX3qxHjd0rX9SOqrNJb6dONtasoznPrDq4wl5NOO27GYzyMyYeJybo/SmPWl7lBTZaeTRyNOUH55XgmIdOtOEaZzH/w5z3Tb2st65oIdrGqsAvmcNH3T0tQrankhwdo236q+y3GZ5WVaHgtJ8x4zi5XPtjqI9aXYURJ//SZr6bSjOyBj/NhvMvFa7AZ2fmT8h83xzM2v2Lp9sz25IoiWMi9Ndm3J6k09PsDD5nEAdnL0OtkVKVtJ2IvaCn4Ow9l2yeksG+J0c862oJpXQ78fSKjWQw7EmHZvCMWb0JMjjzZjBDZXDm+CKVm+nhrx+7PbnuytkZfHZiTsTe5pMe7MWtxuCeHaXEaPnr9mRCszcSf+QUXDxeh0ZiL25VBeNordrlnL/s5QO26IzhmibiK3j8EysWjYLpQW7c+BIhXXVgrD/d7Ulzp6fZGQw9iCjhzEa/IVr3UmyOMDXjjDt+imqzuqeVolodrmwBgZ8ZKaCjpUeon9qM4MQtvIIPBANAoKSoIosm/MxIAc3XKEHl795dSR0slJkzHh0Q0BKLkqJawsDNm2AzQmc2EPhZYAIaFFFSVJFFUwRAQLsYToqq0q+uzDs02RQBEtDGg5OiGniVcKtBRYAENErgpKgiLxP46XiiZ3M9417ieemEvE7gI9G
8EHqKKrJ0igCJngT6BFJUn9w30SRGSri/xklRDbxOGBYdEvPFhyXm3W3M2S+7EARufJgRAVJUsyzaFFU5Y1DZGMrPEBI5oHQhUR41oIwTGSRF1ah4U1TlfIFosQj6QKxvvDQerGe4rGfzp6hy2puFGkWKqpwvci0WQR9ItzmC8qjhES+wcZoU1eF5hZyiKueLd4tFzIfiPPPmPG79DW3sFFVh83GiFNVKMXp+24CcoSrTCIYLpeD4Mx+3BofRATJUmR6qSICcoCpnrMIREfFDiXzuTXzUShycuJWzx6b4wZ5Cp/gp6B6ZjMGRKOkDjhck9qLW1OCgyPVo9oKegrM3jdIYA26SSRnsXQlJ4JbGyNxCyGMTVGFPoRNUG33mxBkcSnX2Lu4icEtcaMcTOTpBFfYUOkFVnV2YU7HXO71aovpzGEiKHpugCnsKnaCqIvFGTqEL4CaoHsFehsteN/NubIIq0zTeBFWVRuWLUJs779otEjW6hHPuqgMjvemgp+AJqqbnSAtI4VNLteDKPbqz8aOiBakZKEp6ZMnJA8E1OhDQo55AXiQAQhh0IKCbN4W8SC2jE02RnBiJPCMURQei58TIFJLA3CmBvkjwxpqfWOoRmBOaoyPRU08ihdQjd06gr9ecMIAExOHn3wwHsrIbbz+RQQ1QEYQNxZ3zTE4Sd84NH4g7F9Igxp1zMr11/hQY7O0nMriRJsYMWBi5EFPEkAuSDUTtCqrxYsg5SaMCdSjae9dQN6gOfm7EQGShNKLvyIqjxfag9V4pjWi95+RsvZ+S997We4MaFiC468wnaqRMBz1Jp6fZGTxj9k9EDA4luX2dq/X2H01yK/PMLvxo8Qx6ykIzeMba1RExOJQM9k1Wrs0meDLYCcnWZqSCAXrKRGgtAlrHE6gayN2gTnQ3BSc91vEUitW557/gI0F7rOMJlEgDcwLfT9EMe2JFiAAS+Nbxxj104opOmCyKdtF9WdGhuKq6G3ur2VhVHfSkA6vqdMZjFo9j8JRxvMfMgikZ3KyRHgzGrSuUufXczOjNZjZgCzQM1RZIPWzgD7eL++rP5efd5tvVbrH8swLupTXzifK0Z7HbFfsSnWJbXhu4P2RTLYdVwQ5pjXejGXRYxPWlhDwSbkjqhGsiNMD+ulv8t1Qxit06fwAAlCOwB9N8W2xzRybUTR0Q3GH/fl13XF1Xw7teLjavNuubCou79WpV3bxa1A3Lcqjz3URwuMUqqdKXTAI8BO+Z1m4WyIRoeBgTmynwsN7ebPJa1L80A9zB71EBIXjP5wdNgYB0hJoQUD/UPaOvjh/98nJXFPuuvCpH6PZ9scqrT/wf</diagram></mxfile>
2205.14345/paper_text/intro_method.md ADDED
@@ -0,0 +1,184 @@
1
+ # Introduction
2
+
3
+ A plethora of real-world problems fall under the broad category of combinatorial optimisation (CO) (vehicle routing and scheduling [\(Korte and Vygen](#page-7-0) [2012\)](#page-7-0); protein folding [\(Perdomo-Ortiz et al.](#page-8-0) [2012\)](#page-8-0); fundamental science [\(Barahona](#page-7-1) [1982\)](#page-7-1)). Many CO problems can be formulated as mixed integer linear programmes (MILPs) whose task is to assign discrete values to a set of decision variables, subject to a mix of linear and integrality constraints, such that some objective function is maximised or minimised. The most popular method for finding exact solutions to MILPs is branch-and-bound (B&B) [\(Land and Doig](#page-7-2) [1960\)](#page-7-2); a collection of heuristics which increasingly tighten the bounds in which an optimal solution can reside (see Section [3\)](#page-2-0)).
4
+
5
+ \*Work undertaken during internship at InstaDeep. Corresponding email: cwfparsonson@gmail.com Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
6
+
7
+ Among the most important of these heuristics is *variable selection* or *branching* (which variable to use to partition the chosen node's search space), which is key to determining B&B solve efficiency [\(Achterberg and Wunderling](#page-7-3) [2013\)](#page-7-3).
8
+
9
+ State-of-the-art (SOTA) learning-to-branch approaches typically use the imitation learning (IL) paradigm to predict the action of a high quality but computationally expensive human-designed branching expert [\(Gasse et al.](#page-7-4) [2019\)](#page-7-4). Since branching can be formulated as a Markov decision process (MDP) [\(He, Daumé, and Eisner](#page-7-5) [2014\)](#page-7-5), reinforcement learning (RL) seems a natural approach. The long-term motivations of RL include the promise of learning novel policies from scratch without the need for expensive expert data, the potential to exceed expert performance without human design, and the capability to maximise the performance of a policy parameterised by an expressivity-constrained deep neural network (DNN).
10
+
11
+ However, branching has thus far proved largely intractable for RL for reasons we summarise into three key challenges. (1) *Long episodes:* Whilst even random branching policies are theoretically guaranteed to eventually find the optimal solution, poor decisions can result in episodes of tens of thousands of steps for the 500 constraint 1000 variable MILPs considered by [Gasse et al.](#page-7-4) [2019.](#page-7-4) This raises the familiar RL challenges of reward sparsity [\(Trott et al.](#page-8-1) [2019\)](#page-8-1), credit assignment [\(Harutyunyan et al.](#page-7-6) [2019\)](#page-7-6), and high variance returns [\(Mao et al.](#page-8-2) [2019\)](#page-8-2). (2) *Large state-action spaces*: Each branching step might have hundreds or thousands of potential branching candidates with a huge number of unique possible sub-MILP states. Efficient exploration to discover improved trajectories in such large state-action spaces is a well-known difficulty for RL [\(Agostinelli et al.](#page-7-7) [2019;](#page-7-7) [Ecoffet et al.](#page-7-8) [2021\)](#page-7-8). (3) *Partial observability*: When a branching decision is made, the next state given to the brancher is determined by the next sub-MILP visited by the node selection policy. Jumping around the B&B tree without the brancher's control whilst having only partial observability of the full tree makes the future states seen by the agent difficult to predict. [Etheve et al.](#page-7-9) [2020](#page-7-9) therefore postulated the benefit of keeping the MDP within a sub-tree to improve observability and introduced the SOTA fitting for minimising the sub-tree size (FMSTS) RL branching algorithm. However, in order to achieve this,
12
+
13
+ <span id="page-1-0"></span>![](_page_1_Figure_0.jpeg)
14
+
15
+ Figure 1: The proposed retro branching approach used during training. Each node is labelled with: Top: The unique ID assigned when it was added to the tree, and (where applicable); bottom: The step number (preceded by a '#') at which it was visited by the brancher in the original Markov decision process (MDP). The MILP is first solved with the brancher and the B&B tree stored as usual (forming the 'original episode'). Then, ignoring any nodes never visited by the agent, the nodes are added to trajectories using some 'construction heuristic' (see Sections [4](#page-3-0) and [6\)](#page-4-0) until each eligible node has been added to one, and only one, trajectory. Crucially, the order of the sequential states within a given trajectory may differ from the state visitation order of the original episode, but all states within the trajectory will be within the same sub-tree. These trajectories are then used for training.
16
+
17
+ FMSTS had to use a depth-first search (DFS) node selection policy which, as we demonstrate in Section [6,](#page-4-0) is highly sub-optimal and limits scalability.
18
+
19
+ In this work, we present *retro branching*, a simple yet effective method to overcome the above challenges and learn to branch via reinforcement. We follow the intuition of [Etheve](#page-7-9) [et al.](#page-7-9) [\(2020\)](#page-7-9) that constraining each sequential MDP state to be within the same sub-tree will lead to improved observability. However, we posit that a branching policy taking the 'best' actions with respect to only the sub-tree in focus can still provide strong overall performance *regardless of the node selection policy used*. This is aligned with the observation that leading heuristics such as strong branching (SB) and pseudocost branching (PB) also do not explicitly account for the node selection policy or predict how the global bound may change as a result of activity in other sub-trees. Assuming the validity of this hypothesis, we can discard the DFS node selection requirement of FMSTS whilst retaining the condition that sequential states seen during training must be within the same sub-tree.
20
+
21
+ Concretely, our retro branching approach (shown in Figure [1](#page-1-0) and elaborated on in Section [4\)](#page-3-0) is to, during training, take the search tree after the B&B instance has been solved and *retrospectively* select each subsequent state (node) to construct multiple trajectories. Each trajectory consists of sequential nodes within a single sub-tree, allowing the brancher to learn from shorter trajectories with lower return variance and more predictable future states. This approach directly addresses challenges (1) and (3) and, whilst the state-action space is still large, the shorter trajectories implicitly define more immediate auxiliary objectives relative to the tree. This reduces the difficulty of exploration since shorter trajectory returns will have a higher probability of being improved upon via stochastic action sampling than when a single long MDP is considered, thereby addressing (2). Furthermore, retro branching relieves the FMSTS requirement that the agent must be trained in a DFS node selection setting, enabling more sophisticated strategies to be used which are better suited for solving larger, more complex MILPs.
22
+
23
+ We evaluate our approach on MILPs with up to 500 constraints and 1000 variables, achieving a 3-5× improvement over FMSTS and coming within ≈ 20% of the performance of the SOTA IL agent of [Gasse et al.](#page-7-4) [\(2019\)](#page-7-4). Furthermore, we demonstrate that, for small instances, retro branching can uncover policies superior to IL; a key motivation of using RL. Our results open the door to the discovery of new branching policies which can scale without the need for labelled data and which could, in principle, exceed the performance of SOTA handcrafted branching heuristics.
24
+
25
+ # Method
26
+
27
+ We used the same GCN architecture as Gasse et al. 2019 to parameterise our DQN value function with some minor modifications which we found to be helpful. Firstly, we replaced the ReLU activations with Leaky ReLUs which we inverted in the final readout layer in order to predict the negative Q-values of our MDP. Secondly, we initialised our linear layer weights and biases with a normal distribution ( $\mu=0,\sigma=0.01$ ) and all-zeros respectively, and our layer normalisation weights and biases with all-ones and all-zeros respectively.
28
+
29
+ Thirdly, we removed a network forward pass in the bipartite graph convolution message-passing operation, which we found to be unhelpfully computationally expensive. For clarity, Figure [6](#page-11-0) shows a high-level overview of the neural network architecture. For a full analysis of the benefit of using GCNs for learning to branch, refer to [Gasse et al.](#page-7-4) [2019.](#page-7-4)
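+
+ As a rough sketch of these initialisation choices in PyTorch (not the released implementation; the layer width is a placeholder and the inverted final activation is omitted), the set-up might look as follows:
+
+ ```
+ # Illustrative initialisation: linear weights ~ N(0, 0.01), linear biases zero,
+ # LayerNorm weights one and biases zero, with Leaky ReLU activations.
+ import torch.nn as nn
+
+ def init_module(module: nn.Module) -> None:
+     if isinstance(module, nn.Linear):
+         nn.init.normal_(module.weight, mean=0.0, std=0.01)
+         nn.init.zeros_(module.bias)
+     elif isinstance(module, nn.LayerNorm):
+         nn.init.ones_(module.weight)
+         nn.init.zeros_(module.bias)
+
+ hidden_dim = 64                      # placeholder width
+ readout = nn.Sequential(
+     nn.Linear(hidden_dim, hidden_dim),
+     nn.LayerNorm(hidden_dim),
+     nn.LeakyReLU(),
+     nn.Linear(hidden_dim, 1),        # one (negative) Q-value per branching candidate
+ )
+ readout.apply(init_module)
+ ```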
30
+
31
+ The key performance criterion to optimise for any branching method is the reduction of the overall B&B solving time. However, accurate and precise solving time and primal-dual integral over time comparisons are difficult because they are hardware-dependent. This is particularly problematic in research settings where CPU/GPU resources are often shared between multiple researchers and therefore hardware performance (and consequently solving time) significantly varies. Consequently, as in other works [\(Khalil et al.](#page-7-16) [2016;](#page-7-16) [Gasse et al.](#page-7-4) [2019;](#page-7-4) [Etheve et al.](#page-7-9) [2020\)](#page-7-9), we report and optimise for the number of B&B tree nodes, as this is hardware-independent and, in the context of prior work, can be used to infer the solving time.
32
+
33
+ Specifically, we use the same GCN-based architecture as [Gasse et al.](#page-7-4) [2019](#page-7-4) for all ML branchers, so all ML approaches have the same per-step inference cost. Therefore, the relative difference in the number of tree nodes corresponds directly to the relative wall-clock time on equal hardware. When the per-step inference process is different (as for our non-ML baselines, such as SB), the number of tree nodes is not an adequate proxy for solving time. However, [Gasse et al.](#page-7-4) [2019](#page-7-4) have already demonstrated that the GCN-based branching policies of IL outperform other branchers such as SB in solving time. As this ML speed-up has already been established, in this manuscript we focus on improving the per-step ML decision quality using RL rather than further optimising the network architecture, or otherwise, for speed, which we leave to future work.
34
+
35
+ However, empirical solving times are of interest to the broader optimisation community. Therefore, Table [3](#page-10-3) provides a summary of the solving times of the branching agents on the large 500×1000 set covering instances under the assumption that they were run on the same hardware as [Gasse et al.](#page-7-4) [2019.](#page-7-4)
36
+
37
+ <span id="page-10-3"></span>
38
+
39
+ | Method | Solving time (s) |
40
+ |-----------|------------------|
41
+ | SB | 33.5 |
42
+ | IL | 2.1 |
43
+ | Retro | 2.5 |
44
+ | FMSTS-DFS | 12.2 |
45
+ | FMSTS | 7.6 |
46
+ | Original | 35.8 |
47
+
48
+ Table 3: Inferred mean solving times of the branching agents on the large 500 × 1000 set covering instances under the assumption that they were run on the same hardware as [Gasse](#page-7-4) [et al.](#page-7-4) [2019.](#page-7-4)
49
+
50
+ <span id="page-10-2"></span>As described in Section [5,](#page-4-1) we used 100 MILP instances unseen during training to evaluate the performance of each branching agent. This is in line with prior works such as [Khalil et al.](#page-7-16) [2016](#page-7-16) who used 84 instances and [Gasse et al.](#page-7-4) [2019](#page-7-4) who used 20. To ensure that 100 instances are a large enough data set to reliably compare branching agents, we also ran the agents on 1000 large 500 × 1000 set covering instances. The relative performance of each branching agent was approximately the same as when evaluated on 100 instances, with Retro scoring 65.3 nodes, FMSTS 250 (3.8× worse than Retro), IL 55.4 (17.8% better than Retro), and SB 43.3. To save evaluation time and hardware demands, to make it easier for future research to build on and compare against our work, and for clarity in the per-instance Retro-IL comparison of Figure [3d](#page-5-0), we report the results for the 100 evaluation instances in the main paper, in the knowledge that the relative performances are unchanged as the data set is scaled to a larger size.
51
+
52
+ For all non-DFS branching agents we used the same [SCIP](#page-8-10) [2022](#page-8-10) B&B parameters as [Gasse et al.](#page-7-4) [2019,](#page-7-4) as summarised in Table [4.](#page-10-4)
53
+
54
+ <span id="page-10-4"></span>
55
+
56
+ | SCIP Parameter | Value |
57
+ |--------------------------|-------|
58
+ | separating/maxrounds | 0 |
59
+ | separating/maxroundsroot | 0 |
60
+ | limits/time | 3600 |
61
+
62
+ Table 4: Summary of the [SCIP](#page-8-10) [2022](#page-8-10) hyperparameters used for all non-DFS branching agents (any parameters not specified were the default [SCIP](#page-8-10) [2022](#page-8-10) values).
63
+
64
+ <span id="page-10-0"></span>We found it useful to add 20 features to the variable nodes in the bipartite graph in addition to the 19 features used by [Gasse et al.](#page-7-4) [2019.](#page-7-4) These additional features are given in Table [5;](#page-11-1) their purpose was to help the agent to learn to aggregate over the uncertainty in the future primal-dual bound evolution caused by the partially observable activity occurring in subtrees external to its retrospectively constructed trajectory.
65
+
66
+ <span id="page-10-1"></span>[Etheve et al.](#page-7-9) [\(2020\)](#page-7-9) did not open-source any code, used the paid commercial [CPLEX](#page-7-29) [\(2009\)](#page-7-29) solver, and experimented with proprietary data sets. Furthermore, they omitted comparisons to any other ML baseline such as [Gasse et al.](#page-7-4) [\(2019\)](#page-7-4), further limiting their comparability. However, we have done a 'best effort' implementation of the relatively simple FMSTS algorithm, whose core idea is to set the Q-function of a DQN agent as minimising the sub-tree size rooted at the current node and to use a DFS node selection heuristic. To replicate the DFS setting of [Etheve et al.](#page-7-9) [\(2020\)](#page-7-9) in [SCIP](#page-8-10) [\(2022\)](#page-8-10), we used the parameters shown in Table [6.](#page-13-0) We will release the
67
+
68
+ <span id="page-11-0"></span>![](_page_11_Figure_0.jpeg)
69
+
70
+ Figure 6: Neural network architecture used to parameterise the Q-value function for our ML agents, taking in a bipartite graph representation of the MILP and outputting the predicted Q-values for each variable in the MILP.
71
+
72
+ <span id="page-11-1"></span>
73
+
74
+ | Variable Feature | Description |
75
+ |----------------------------------|--------------------------------------------|
76
+ | db_frac_change | Fractional dual bound change |
77
+ | pb_frac_change | Fractional primal bound change |
78
+ | max_db_frac_change | Maximum possible fractional dual change |
79
+ | max_pb_frac_change | Maximum possible fractional primal change |
80
+ | gap_frac | Fraction primal-dual gap |
81
+ | num_leaves_frac | # leaves divided by # nodes |
82
+ | num_feasible_leaves_frac | # feasible leaves divided by # nodes |
83
+ | num_infeasible_leaves_frac | # infeasible leaves divided by # nodes |
84
+ | num_lp_iterations_frac | # nodes divided by # LP iterations |
85
+ | num_siblings_frac | Focus node's # siblings divided by # nodes |
86
+ | is_curr_node_best | If focus node is incumbent |
87
+ | is_curr_node_parent_best | If focus node's parent is incumbent |
88
+ | curr_node_depth | Focus node depth |
89
+ | curr_node_db_rel_init_db | Initial dual divided by focus' dual |
90
+ | curr_node_db_rel_global_db | Global dual divided by focus' dual |
91
+ | is_best_sibling_none | If focus node has a sibling |
92
+ | is_best_sibling_best_node | If focus node's sibling is incumbent |
93
+ | best_sibling_db_rel_init_db | Initial dual divided by sibling's dual |
94
+ | best_sibling_db_rel_global_db | Global dual divided by sibling's dual |
95
+ | best_sibling_db_rel_curr_node_db | Sibling's dual divided by focus' dual |
96
+
97
+ Table 5: Descriptions of the 20 variable features we included in our observation in addition to the 19 features used by [Gasse et al.](#page-7-4) [2019.](#page-7-4)
98
+
99
+ full re-implementation to the community along with our own code.
100
+
101
+ **Retrospective Trajectory Construction.** Algorithm [1](#page-12-1) shows the proposed 'retrospective trajectory construction' method, whereby fathomed leaf nodes not yet added to a trajectory are selected as the brancher's terminal states and paths to them are iteratively established using some construction method.
102
+
103
+ <span id="page-12-1"></span>Algorithm 1: Retrospectively construct trajectories.
104
+
105
+ ```
106
+ Input: B&B tree T from solving MILP
107
+ Output: Retrospectively constructed trajectories
108
+ Initialise: nodes_added, subtree_episodes = [Troot−1], []
109
+ // Construct trajectories until all valid node(s) in T added
110
+ while True do
111
+ // Root trajectories at highest level unselected node(s)
112
+ subtrees = []
113
+ for node in nodes_added do
114
+ for child_node in Tnode.children do
115
+ if child_node not in nodes_added then
116
+ // Use depth-first-search to get sub-tree
117
+ subtrees.append(dfs(T , root=child_node))
118
+ end if
119
+ end for
120
+ end for
121
+ // Construct trajectory episode(s) from sub-tree(s)
122
+ if len(subtrees) > 0 then
123
+ for subtree in subtrees do
124
+ subtree_episode = construct_path(subtree) // e.g. via Algorithm 2
125
+ subtree_episode[−1].done = True
126
+ subtree_episodes.append(subtree_episode)
127
+ for node in subtree_episode do
128
+ nodes_added.append(node)
129
+ end for
130
+ end for
131
+ else
132
+ // All valid nodes in T added to a trajectory
133
+ break
134
+ end if
135
+ end while
136
+ ```
137
+
138
+ **Maximum Leaf LP Gain.** Algorithm [2](#page-12-2) shows the proposed 'maximum leaf LP gain' trajectory construction method, whereby the fathomed leaf node with the greatest change in the dual bound ('LP gain') is used as the terminal state of the trajectory.
139
+
140
+ <span id="page-12-0"></span>As well as performance being limited to that of the expert imitated, IL methods have the additional drawback of requiring an expensive data labelling phase. Figure [7](#page-12-3) shows how the explore-then-strong-branch labelling scheme of [Gasse et al.](#page-7-4) [2019](#page-7-4) scales with set covering instance size (rows × columns) and how this becomes a hindrance for larger instances. Although an elaborate infrastructure can be developed to try to
141
+
142
+ ```
143
+ Input: Sub-tree S
144
+ Output: Trajectory SE
145
+ Initialise: gains = {}
146
+ for leaf in S.leaves do
147
+ if leaf closed by brancher then
148
+ gains.leaf = |Sroot.dual_bound − Sleaf.dual_bound|
149
+ end if
150
+ end for
151
+ terminal_node = max(gains)
152
+ SE = shortest_path(source=Sroot, target=terminal_node)
153
+ ```
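+
+ For illustration only, a runnable (hypothetical) version of this construction using networkx might look as follows; node attribute names such as `dual_bound` and `closed_by_brancher` are assumptions, not the paper's code:
+
+ ```
+ # Sketch of the maximum-leaf-LP-gain rule: pick the brancher-closed leaf with the
+ # largest dual-bound change and return the root-to-leaf path as the trajectory.
+ import networkx as nx
+
+ def max_leaf_lp_gain_trajectory(subtree: nx.DiGraph, root) -> list:
+     gains = {}
+     for node, data in subtree.nodes(data=True):
+         if subtree.out_degree(node) == 0 and data.get("closed_by_brancher", False):
+             gains[node] = abs(subtree.nodes[root]["dual_bound"] - data["dual_bound"])
+     terminal_node = max(gains, key=gains.get)
+     return nx.shortest_path(subtree, source=root, target=terminal_node)
+ ```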
154
+
155
+ <span id="page-12-3"></span>![](_page_12_Figure_10.jpeg)
156
+
157
+ Figure 7: How the explore-then-strong-branch data labelling phase of the strong branching imitation agent scales with set covering instance size (rows × columns) using an Intel Xeon ES-2660 CPU and assuming 120 000 samples are needed for each set.
158
+
159
+ label large instances at scale [\(Nair et al.](#page-8-6) [2021\)](#page-8-6), ideally the need for this should be avoided; a key motivator for using RL to branch.
160
+
161
+ <span id="page-13-0"></span>
162
+
163
+ | SCIP Parameter | Value |
164
+ |--------------------------------------------|---------------|
165
+ | separating/maxrounds | 0 |
166
+ | separating/maxroundsroot | 0 |
167
+ | limits/time | 3600 |
168
+ | nodeselection/dfs/stdpriority | 1 073 741 823 |
169
+ | nodeselection/dfs/memsavepriority | 536 870 911 |
170
+ | nodeselection/restartdfs/stdpriority | −536 870 912 |
171
+ | nodeselection/restartdfs/memsavepriority | −536 870 912 |
172
+ | nodeselection/restartdfs/selectbestfreq | 0 |
173
+ | nodeselection/bfs/stdpriority | −536 870 912 |
174
+ | nodeselection/bfs/memsavepriority | −536 870 912 |
175
+ | nodeselection/breadthfirst/stdpriority | −536 870 912 |
176
+ | nodeselection/breadthfirst/memsavepriority | −536 870 912 |
177
+ | nodeselection/estimate/stdpriority | −536 870 912 |
178
+ | nodeselection/estimate/memsavepriority | −536 870 912 |
179
+ | nodeselection/hybridestim/stdpriority | −536 870 912 |
180
+ | nodeselection/hybridestim/memsavepriority | −536 870 912 |
181
+ | nodeselection/uct/stdpriority | −536 870 912 |
182
+ | nodeselection/uct/memsavepriority | −536 870 912 |
183
+
184
+ Table 6: Summary of the [SCIP](#page-8-10) [2022](#page-8-10) hyperparameters used for the DFS FMSTS branching agent of [Etheve et al.](#page-7-9) [2020](#page-7-9) (any parameters not specified were the default [SCIP](#page-8-10) [2022](#page-8-10) values).
2206.12680/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2022-01-22T17:46:32.253Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.1.8 Chrome/87.0.4280.88 Electron/11.1.1 Safari/537.36" etag="xtKReiE5Cu7nkDcBO9G9" version="14.1.8" type="device"><diagram id="fzPrenOi0wzJYP_gDLPx" name="第 1 页">7V1Zc9tGEv41ejRq7uPRV5JK1aZ2N5uNnTdaYiTGlOii6FjeX78DiQMSA4gYjDAzDYB6SMyrKU1/3dN3X9C3tw8/bhdfbv6xuVquLwi6erig7y4IwVhx87/yme9Pzwj7xPV2dbV/0+GJX1f/W+6fRPtnv66ulve1N+42m/Vu9aX+5OXm7m55uas9t9huN9/qb/tzs65/65fF9bLxxK+Xi3Xz2d9XV7ubp2cJpeLwwk/L1fWN/WrB2dMrnxaXn6+3m693+y+829wtn165XVg6+z/y/mZxtfl29IX0/QV9u91sdk//un14u1yX52qP7OlzPzzzavU7b5d3O58P/PavDz/999PPH+9eb/ntH8u3f/1y/f4V4eSJzt+L9df9aex/3d13ezyGjuGEefDm281qt/z1y+KyfOWbAYN5bnH/5Yknf64elua73vy5Wq/fbtab7eOn6RVfqitmnr/fbTefl0evKPKJClG9Yo+elM/Yw0LmQfMv3f/xfy+3u+WDe6QGpsvN7XK3/W7esn9Vqj0X9giVdI/Qbwd+0/1bbo44bZ9b7BF2XVE+nLT5x/6wex08neXBK5T94Nk8D15mP3g+i4NXHJyqEbM8eACqRs7z4POrGjWLg9cEnKrRszz4/KpGoHkefHZVI7DHwd9dvS7dVPPocr24v19dPh7WYrtre9o5XYS0Rqj1dA3dH1bl7/sOW4JHjx/d0pKTj4/Me/eON7HvPXpsOLL9/mHPp8cHH8sHBbcP3z0cv/juu330sNp9sN9g/n30KfPo8KHygf2MHybuN1+3l8uTumZ/0OZPuV7ufCzP5VUtDtBE2RGKeAuK7HPb5XqxW/1djx60QWv/Df/crMxfd7gvsaO2GSu41IefOsGnk9jTOHbwXbLCIWuVkiX0dFANQo+4rw7hJaLgE0sALwqDwZN6w1OAhqciBp4qEJOuadhCKzosfSIt84El94alggVLHUVratqLbHSw+kSn5gNW5g1WCQusjg7VaDgd2kYrOix9HOr5wFL6wtKaZFBgqVugNIAORQ5ZnNry9Ak7gIfnwQk7+F0fL47drtNO2MHx+nhxcMkiO2H+Vi6DJQrExWwUJ6yDbGzBkD5hIfCCMRhYxVjNCaHbUGSdKuaoW3+wniSrEmtxOYlQ2mBg1SM1MjTThRTDILSNlmKF1vk06jnIFRaDpaBAKlUDWFIEgrSFlkJ5QXoOeYWFvIBFYsVwIG2hlRekVPtURxxA+lhaOQxkgLCXNyKaKIy3nYSis7JfeGiCrGzkOUJZ2UUoOiv7hVKmx0oxlFR2EorMSob6Of97Vrbe6FNh7mTklKF+zvL05NS99Kog9Etvzwah6Kzs51JOj5XSreoKlcpOQtFZ2c/xmh4r3UsvWCo7CUVn5Tz6RHijM2r/OFdFJUPz6BMRbrle/oOfR5+IdLLV2Yu3GZpHn4irapTKjvh59Im4qgbAwYMt2DjYYtWrx4+HCUkz4huSrm5DIPYdd6EU7HW5dodIbN/heZgZnEG77bCPmTFh2Ue+1YOVlgAi+6LRYBvq27mYTO2m436pp+lBUI0VgkgXClNzDUnENCasjiNNCi7LwxeMScJEIDxdnKcO3FtxmCs8e1hHsIpKBdaFMbarH1zHkQzVlyfJSp4anll7VPY10vjiuELau1HV/tunUfU02E8LSnpRwLA0tSSk0JwwxTXWhNrfrrrxdcEZZogYEWZUyEDBILJABEmCCOfafE1dMKThMCKYCWMEa4qd6yC6mPSzc6eXC8DIHYUTmkrvphSdmXMvVsKo0RYczMwuStGZOfdyJYwHk8xOStGZGVSwNOkqlynJKu3nC01QVt27Lzil3k0pOjPnXrSEsZsDC5bMTkrRmUlmz8zBJLOTUnRmzmMcLUZO6Dl/1QWdR50Rxo1hhdlPfh71Lg3M50//03kUvDQwD+DkBxnkUnOiXppX6A61DhVMZb7jhKrrEIqlgRoDR4N9APcCTF13QQP99QEzWyhbZot65/4prHB+w0KtVFl/W1c6lFJHjOz3zRKBPXQgrGbopg4MzyY8ZxAlQ+Agk07CbuGglOaAI6Mqh2f8OrBxd4brwNT+PusXvEllBz43HiLqnew7wqxCLhQ8IgdFL8jiuM558js5qHFuIncy9Z1LVtmPUBH4ggioO48vuUacSRwOgWt/YjOvxu0h/rDCElqhAqFDNa5T4yV4gdmhGjewxqvMWhZUYawpRsIQkvWbiupyiBHTnJZ1kpgm1hp5ByNnr0/sEdAAZjxhAxzBnx9+GXyRnaSbPNzGZt5pR70LaIEpV2NblVXfdVSGVpC3EUteN26/7wzFsYXesDxZzI0HKeY2GD1ZzS2yVnNzsMOJE4HXO2rHoF30RBeEcs2xVlSZH0cLPtqP5k0YG0vV3VzgD16KCikIplwQKqlwzAnMCy2kEsZgEkTQtJO1mdeW7ymD19tKhQfeodJuRjsXSB60K6nTJcgANJt2VWABOvzAYo8xxBoUCJVbaYoPo4R7D3R38dxCKzrYIgxpe5bbQHhI3WlToamCTkLRudev+3Qa3BtqQGInoejc85m6MjHusaFkr5NQdO4FtbT27bWCzs8JSWOErlbw3HM3dIWmLDsJRedehDZW6NxjeiDZ6yQUnXsRBu2D595QstdJKDb3LHomXi1A3Yq37E072icOO/6DZ+5a6vwHT+Zx8I3KzNzlMXoerZmuqsnfK6XnURHmqhoAB98vSAAmXfEsMzozFNzOdOvOUFjLA4hJR130BLtTXcX10U26kc6lDUddpeE8UEdAoc61/4PTYszNbaQOqfXcgjgB1PXRdbBKshq6LjiJ0tVMGR11Iy1bfYmu80cdbF0X3HXW0HXJgyZga1H79J29BIPIG4OwAnfU3dYQnMJy4xqJ71vecynmBO7b6hb1QB2s6icXdeHhYtpBKDrq5hG1pO60yOzBMzKPLS+iMR8m88Fz3K/I6jwk/gUKHnk3EAJb5yFR2UBY/fB6a1/4cmY3hh8vt/vj1V+//H7/ZvXb+9cfL7/d/P7jNfn5VVPZ//B1vf7+6nJzd2d0idEiriQYMd+Vuma9ur4rYW8wsDQ4f1NqgNXlYv16/8Lt6uqq/Myb7fJ+9b/Fp8fPl+j5Uv4tj38df3PB35W0vu429wck/rm5q4Cp949/WNyu1uUZ/Wd1u7w3v9Evy2/mv//e3C7u7Ef2v+NzGqkFsc8qKex0HuuWnBZuQ9sQWqqVUaTBqH+v7q7nyZ2GsdXC
naTMaV4gvxq5nSdzBAbGnGZq7MftaqZ6TTFgzGnavO8fvmzuzJ+zWqzHyCP6Zr34tFy/WVx+vn4M1Vg7cF8V9fTqZnu13DqvDHJtefAX46RuDY9QBD/Q7G0rFN31K6cMyx4tUl0RJe8OqcyhKcYj1MaPlKlDVXnmLvLkcHdApWYpGUpOOwlFZ2pQ1XymjTNnyQ1kMtwdUblv2PAhb5nD8Iz3S0BOmanEraAJFdROQrGZar//zNTG1RgsqZ2EojMVaMLs83J3eXPRZyByj9GM4DakCDI/LhAEjgtAGwCicsEd0Z47ockE0G6AlBopf2uAAJrPT6mRAHBhpKXyQyXc5f7P7U64C6imopuMDd8o5Fotqfs2xEgr6IcCo/AeKyl8y+lz+y3B7RzELaxPHRsUeQvrs4Oxh2b0rTXNrhmDE0pds/uigzFvvX32ujjhD8axaMbg5o+GZkwd0bE1efA0Y46lQ9Yu9IAmAQrNxjas4ISeG3FJfWnLmc92tlexBxh964dzgzE88u0WqSbXk8QDjBOLcDQaE3MH+2TW7oVRaQSw15NbQRCsEXCHCRZdI/Qr9pseGP0DHGCjbUwXCh82Zdn1mxabmhRcHlZlBQ7XNpdXgfVhU5az70XIgpFsm7LkSIf6ZFCpUFEs6uveKsVkUTzQwjda3/cmQC18k3lTH8PEVGrGXBrNDNVMMHZCfV2b04MYHIk+TTe9ETH3LIn3EASw6hfzcofbUb9sfZNR+HK403ST74mTM8+h9IAq1BwK4c7CuDgb43h9YZy7GjbvxjgJdtpRIhgTXxjbkBc8GCNndZwDsIF2x+H66jjpCkvW3XH2sGcLY28bFzCMByqvwMjZIVdf0Zl7iVzWNM68Rs742yhwpUKctlHIMDaKOG2jyEQ2SvugGqBdFk2QvqiDXzYwlS6/1H7uZJbnnjKv137uQDspYp+7zH3uQHsnBj531diWk/vcgXZLRD73/HrGJ0kwwXPPrmd8gtzjP3d3iXJ+PeMTsZ3euefXMz4hxgmee249Y38feDGxPuvqq5hFFZboF7PAPWIWLZBwAgL1qMNJd/U46HDS3vSAWJpBt8qtIWOs4PIoO1wn6BtcUO5qCBQtONwuB2ALp/vIwTDYJL7Y5MCxqYjBpgoEpGsRttCKjUmf2MpMMNlScnbSboaDSR1FX2q3m+o02dhIBVtknh6pLemEk541HKS6k5nRcNqzjVZsTIKtNU+PyZatCie9MDiY1C04GkB7uq1RbsFDbGyCrSAP87oOjtbHi2M/qyNTXHlaHy8OPlhMr8vbsqXQ5IC4gI3idXWQjS0VYGfxpNfYLaU6I7EihG6DUDWBIrCyrIOsSqy/wZagp0eqGqttoZkuZL2xLBiebbQUK7TOpkvBVp6nR6h3rJUAQ6hUDVRJEYjQFloKZUUo2KJywNEtcBFXMRxCW2jlRSjtlxMIHcfdwAsY7vJG7DKw2rqTUGxOkrlzspHNCOVkF6HYnOwXaZ4eJ8VQMtlJKDYn+8Vnh9s9A5e305HSNPu/AHOyEbce6OaMOMSrnZNp1n7B5aR0K7ZCZbKTUGxOBu36mhAn3QsvWCY7CcXm5DxqVLlH/1zSWkk6jxpV4Zbi5T53a5hO/NxlY49B5tpgNo+eSlfPpFxK0n7uPtGK8Z+7q2fyn/s8evt4xoVU7efeLxMwPcsUI6Zrce56/374HvCedCPbrdznHp82n9EpfgS7ln3pxubz3PMmGEeS5550Y/PZx06ZVwR3LhI+9yxMx80avle8J93YfA7K0UyJz1jGkeeedGPzee75mo6bNVyee9KNzGe4+xeGrrnqLqVSwCCoTmsEfCiO6l2c6maeWmjFBl5MBYNHoGCoG1QP9QE6CcXmZMyE8Cg4OVRCuJNQbE7GTAiPgZNsKJnsJBSbk/2K/V/qiI+Bt9OR0pjB8VFw0u1RDLXFOwlF5mTP8ePT4yRz99iFymQnodicjBngHgUnh5LJTkKROannkfIV0Mb9abjFiMl3ITUm+YTGmroIxZalICt0kjwd6qLLfM/pmLbnuDhKhpLSTkKxFyXYG3YUWduz4AYyGW4JRu77NTyX00UoOlPJmanVxpuBBLWTUHSmwi2lyH3HBktqJ6Hoy4jAThPuvWluDMAxmrFAiAksEZIaKRsztTJNaUEEQwJrTUW1v7i3/iey0Jobv11rgjS2B1KVr+NCUq6pQkgYBqVNsCqw01Q8ETeGsBelJc4wlZpoWm63FC0441JioonQdqVt7zg1L3GmBEESCUGJbUuo4YyVBQPIiBRNizNr2I52afwYcMb4SZxJrmo4I2E4Y9LgjCKheTl6B0vlVJwgUUiJKCpr1RSKt6z1mVlccEvScnswo40Q4p4TPqfM1KlEGnqOwpwyS6cTI+w5PfIcIxyl5J6j+9OLEdrvPzN1QjFCwIMoc9+x440R9pzHBC9iMy7gnGOErzDgxuzcN/54feqwLuxJMnUqljng9urc9/14feqwXuqzTz0uyT1HrafnU/Nz1Hp6PjU/x60n6FOD3as4S5+6zCAfebs4DFWPPjXDWAomEVfa9i0fZ8MpJ/u3MJoYcmCXFk4ScuanBjlVR5wTXwncXkzYURQHY9sk1hoqYm75RXTA5d3s9vIKnBECjmCBJDfIUow5M/dK7cO5eZUTYyiSwN1XHYirKVLrSCYDHNjSwueGBk1I32Fnk4sM3VBJUAeh2DDymuJ+vihjAUfpgYDTIBQdOOQMnJTAYeWFJznVjJnriCJ3ra0xsZi5kRQTSlMaaNQTBNmoZ/0C8GfIvdDGKsEgy2EQnBvDXmJ5EnI0DHKPwOZK7XGFCAUFubyhi2HM+vvPy93lzfjwh53Sd4XCIOYSkgIlRtG5qCQpcIxKIUYrUUFEqTioE44oI1el2yiEwIQHju80TDXqzyhHwjTRErshD1kgY5PxshuIEpUYb2DXzE8Tb6jEm5CIaYkUR3UoSE4KLZDBGjVQ0CSw/wyb21hiToWhYmwz7XyLwuYaZUQwcxEryWhqxIGNRkwYcVbDcew6A/IICxy9BHGVhhOCutPQjRrFiLGy3U0IlRhxPdfWDIw488t/qABmHnwsIVRw+/Ddwx5ST4++20cPq92Ho38ffco8OnyofFB9ZobBXXcVVvg8/8zpbHGeOJASOOdq4ldTjpE8MejEH28TNLuuNQM2eQKmt5y7hTfBm8ldYUvs62YOmOTFn13N2ok/C1Qw+BNuXiJ4C3fX3R0bfxMu/OjGHxkt/pAuFKZGD5ZeNLZOcpXcIgWX5dELxsqIdCA2XZAnLjafchhwwLsZ2gogYy4WQqDqxzH3ZKimPElWutZvbGxmDRlOx50eTg4YNB0tCSk0J0xxjTWhbnaG6IIzzJDx3hmjItC5Esa5QgRJggjnxr9ysoHS8BcRzARnSFOceMpWVtt2RjJi7+Xx2TESoeKg0hF3UuahgSwu81rV1omc+EIGjN2NDNabzrWRwZYx921PHNBeRHnsRQs5MKKNG7V7ofUIGIujMiuB3MKaMkxZlbw0ftPhRP23f3346b+ffv5493rLb/9Yvv3rl+v
3r5hXtw4IWW9Ao0X8T8h6I+SWefuKnUz6MvOi1qEct9AIDSXr9pqEK+uNMOpQsl72EDwVWypCpU1YJLrWvbotQYj6sNd6WwYqragP4kmEifoJYR66c6Nb8PcOXKehD09DIMckDx9V00Bn4rCkVyP9BLQAYtC0wCDVAYNrgRTGvfSVewFO7htL+0LlvoHHxKlaMZPbH7mtUtkN/aBBUpNw6r3lHlqBRlPug+97V+4bcYTY971Pre4U5P65c84m90ETi6Yg9xZynXIPL+rn2vnhA7NcOz/1HB4x4a6Q4RAIzuLUqkwpHUpjnLSr4AVmh9KY0JpWxFFBFcaaYiQMIafxl+pCa800p2XdAk7bhSnyNpfkzYVajTjCEAk2qBFcH35Omz3+ivQk3dRulL0fpqhWx6ctMRbHzXhOtL1skqOaUiWIENU8n4DGqLJ0q466xPVacsKdJSP0Dk+iruwXqRpA1Qva8U6WRomcpVGSTBeO3unLbu8G3hVNdFlroDnWiiql3NblR7PPvAljY2DiwFlFGFFUSEEw5YIYGRCOIYC5kQ+pBKPmTSLxSkM554Ypb+MSInKHqsZBhBXGy6/0KnG0NzLoTKBXW6txiBykKCTlmD/igNW/eOcYhe2nYS9+B+nZXHPqDpYJ1JDGsy7k854SYoXWjVdfDEHzcLvZ7I7fvl18ufnH5mpZvuP/</diagram></mxfile>
2206.12680/paper_text/intro_method.md ADDED
@@ -0,0 +1,163 @@
 
1
+ # Introduction
2
+
3
+ Decentralized stochastic gradient descent (D-SGD) facilitates simultaneous model training on a massive number of workers without a central server (Lopes & Sayed, 2008; Nedic & Ozdaglar, 2009b). In D-SGD, every worker only communicates with its directly connected neighbors through "gossip communication" (Xiao & Boyd, 2004; Lian et al., 2017; Koloskova et al., 2020).
4
+
5
6
+
7
+ The communication intensity is controlled by the communication topology. This decentralized nature eliminates the requirement for an expensive central server dedicated to heavy communication. Surprisingly, existing theoretical results demonstrate that the massive models on the edge converge to a unique steady model, the consensus model, even without the control of a central server (Lu et al., 2011; Shi et al., 2015; Lian et al., 2017). Compared with centralized synchronized SGD (C-SGD) (Dean et al., 2012; Li et al., 2014), D-SGD can achieve the same asymptotic linear speedup in convergence rate (Lian et al., 2017). In this way, D-SGD provides a promising distributed machine learning paradigm with improved privacy (Nedic, 2020), scalability (Lian et al., 2017; Kairouz et al., 2021), and communication efficiency (Ying et al., 2021b).
8
+
9
+ To date, the theoretical research on D-SGD has mainly focused on its convergence (Nedic & Ozdaglar, 2009b; Lian et al., 2017; Koloskova et al., 2020; Alghunaim & Yuan, 2021), while the understanding of the generalizability (Mohri et al., 2018; He & Tao, 2020) of D-SGD is still premature. A large amount of empirical evidence has shown that D-SGD generalizes well on well-connected topologies (Assran et al., 2019a; Ying et al., 2021a). Meanwhile, empirical results by Assran et al. (2019b), Kong et al. (2021) and Ying et al. (2021a) demonstrate that for ring topologies, the validation accuracy of the consensus model learned by D-SGD decreases as the number of workers increases. Thus, a question is raised:
10
+
11
+ How does the communication topology of D-SGD impact its generalizability?
12
+
13
+ This paper answers this question. We prove a topology-aware generalization error bound for the consensus model learned by D-SGD, which characterizes the impact of the communication topology on the generalizability of D-SGD. Our contributions are summarized as follows:
14
+
15
+ • Stability and generalization bounds of D-SGD. This work proves the algorithmic stability (Bousquet & Elisseeff, 2002) and generalization bounds of vanilla D-SGD in the non-convex non-smooth setting.
16
+
17
+ <sup>&</sup>lt;sup>1</sup>College of Computer Science and Technology, Zhejiang University <sup>2</sup>Shanghai Institute for Advanced Study of Zhejiang University <sup>3</sup>JD Explore Academy, JD.com Inc. <sup>4</sup>School of Computer Science and Technology, University of Science and Technology of China <sup>5</sup>Institute of Artificial Intelligence, Hefei Comprehensive National Science Center <sup>6</sup>School of Computer Science, Wuhan University <sup>7</sup>Zhejiang University City College. Correspondence to: Fengxiang He <fengxiang.f.he@gmail.com>.
18
+
19
+ In Section 4, we present an $\mathcal{O}(N^{-1}+m^{-1}+\lambda^2)$ distributed on-average stability bound (see Corollary 2), where $1-\lambda$ denotes the spectral gap of the network, a measure of the connectivity of the communication topology $\mathcal{G}$. These results suffice to derive an $\mathcal{O}(N^{-(1+\alpha)/2}+m^{-(1+\alpha)/2}+\lambda^{1+\alpha}+\phi_S)$ generalization bound in expectation for D-SGD (see Theorem 4). Our error bounds are non-vacuous even when the worker<sup>1</sup> number is sufficiently large or the communication graph is sufficiently sparse. The theory can be directly applied to explain why consensus distance control in the initial phase of training can ensure better generalization.
20
+
21
+ • Communication topology and generalization of D-SGD. Our theory shows that the generalizability of D-SGD has a positive relationship with the spectral gap $1-\lambda$ of the communication topology $\mathcal{G}$. Besides, we prove that the generalizability of D-SGD decreases when the worker number increases for the ring, grid, and exponential graphs. We conduct comprehensive experiments with VGG-11 (Simonyan & Zisserman, 2014) and ResNet-18 (He et al., 2016b) on CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009) and Tiny-ImageNet (Le & Yang, 2015) to verify our theory.
22
+
23
+ To the best of our knowledge, this work offers the first investigation into the topology-aware generalizability of vanilla D-SGD. The closest work in the existing literature is by Sun et al. (2021), which derives $\mathcal{O}(N^{-1}+(1-\lambda)^{-1})$ generalization bounds for projected D-SGD based on uniform stability (Bousquet & Elisseeff, 2002). They show that the decentralized nature hurts the stability, and thus undermines generalizability. Compared with the results by Sun et al. (2021), our work makes two contributions: (1) we analyze the vanilla D-SGD, which is capable of solving optimization problems on unbounded domains, rather than the projected D-SGD; and (2) our stability and generalization bounds are non-vacuous, even in the cases where the spectral gap $1-\lambda$ is sufficiently close to 0, which characterizes the cases where the worker number is sufficiently large or the communication graph is sufficiently sparse.
24
+
25
+ # Method
26
+
27
+ **Supervised learning.** Suppose $\mathcal{X} \subseteq \mathbb{R}^{d_x}$ and $\mathcal{Y} \subseteq \mathbb{R}$ are the input and output spaces, respectively. We denote the training set as $\mathcal{S} = \{z_1, \ldots, z_N\}$, where $z_\zeta = (x_\zeta, y_\zeta)$, $\zeta = 1, \ldots, N$, are sampled independent and identically distributed (i.i.d.) from an unknown data distribution $\mathcal{D}$ defined on $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$.
28
+
29
+ The goal of supervised learning is to learn a predictor (hypothesis) $g(\cdot; \mathbf{w})$, parameterized by $\mathbf{w} = \mathbf{w}(z_1, z_2, \ldots, z_N) \in \mathbb{R}^d$, to approximate the mapping between the input variable $x \in \mathcal{X}$ and the output variable $y \in \mathcal{Y}$, based on the training set $\mathcal{S}$. Let $c: \mathcal{Y} \times \mathcal{Y} \mapsto \mathbb{R}^+$ be a loss function that evaluates the prediction performance of the hypothesis $g$. The loss of a hypothesis $g$ with respect to (w.r.t.) the example $z_\zeta = (x_\zeta, y_\zeta)$ is denoted by $f(\mathbf{w}; z_\zeta) = c(g(x_\zeta; \mathbf{w}), y_\zeta)$, in order to measure the effectiveness of the learned model. Then, the empirical and population risks of $\mathbf{w}$ are defined as follows:
30
+
31
+ $$F_{\mathcal{S}}(\mathbf{w}) = \frac{1}{N} \sum_{\zeta=1}^{N} f(\mathbf{w}; z_{\zeta}), \ F(\mathbf{w}) = \mathbb{E}_{z \sim D}[f(\mathbf{w}; z)].$$
32
+
33
+ **Distributed learning.** Distributed learning jointly trains a learning model $\mathbf{w}$ on multiple workers (Shamir & Srebro, 2014). In this framework, the $k$-th worker ($k = 1, \ldots, m$) can access $n_k$ independent and identically distributed (i.i.d.) training examples $\mathcal{S}_k = \{z_{k,1}, \ldots, z_{k,n_k}\}$, drawn from the
34
+
35
+ <sup>1</sup>Throughout this work, we use the term *worker* to represent the local model.
36
+
37
+ ![](_page_2_Figure_1.jpeg)
38
+
39
+ Figure 1. Illustration of some commonly-used communication topologies.
40
+
41
+ data distribution $\mathcal{D}$. If we set $n_k = n$, the total sample size is $N = nm$. In this case, the global empirical risk of $\mathbf{w}$ is
42
+
43
+ $$\hat{F}(\mathbf{w}) = \frac{1}{m} \sum_{k=1}^{m} F_{\mathcal{S}_k}(\mathbf{w}),$$
44
+
45
+ where $F_{\mathcal{S}_k}(\mathbf{w}) = \frac{1}{n} \sum_{\zeta=1}^n f(\mathbf{w}; z_{k,\zeta})$ denotes the local empirical risk on the $k$-th worker and $z_{k,\zeta} \in \mathcal{S}_k$ ($\zeta = 1, \ldots, n$) are the local samples.
46
+
47
+ The goal of D-SGD is to learn a consensus model $\overline{\mathbf{w}} = \frac{1}{m} \sum_{k=1}^{m} \mathbf{w}_k$ on $m$ workers, where $\mathbf{w}_k$ denotes the local model on the $k$-th worker. For any $k$, let $\mathbf{w}_k^{(t)} \in \mathbb{R}^d$ be the $d$-dimensional local model on the $k$-th worker in the $t$-th iteration, while $\mathbf{w}_k^{(1)} = 0$ is the initial point. We denote $\mathbf{P}$ as a doubly stochastic gossip matrix that characterizes the underlying topology $\mathcal{G}$ (see Definition A.5 and Figure 1). The intensity of gossip communications is measured by the spectral gap (Seneta, 2006) of $\mathbf{P}$, i.e., $1 - \max\{|\lambda_2|, |\lambda_m|\}$, where $\lambda_i$ ($i = 2, \ldots, m$) denotes the $i$-th largest eigenvalue of $\mathbf{P}$ (see Definition A.6). The vanilla Adapt-While-Communicate (AWC) version of D-SGD without projection operations updates the model on the $k$-th worker by
48
+
49
+ $$\mathbf{w}_{k}^{(t+1)} = \sum_{l=1}^{m} \mathbf{P}_{k,l} \mathbf{w}_{l}^{(t)} - \overbrace{\eta_{t} \nabla f(\mathbf{w}_{k}^{(t)}; z_{k,\zeta_{t}}^{(t)})}^{Computation}, \quad (1)$$
50
+
51
+ where $\{\eta_t\}$ is a sequence of positive learning rates, $\nabla f(\mathbf{w}_k^{(t)}; z_{k,\zeta_t}^{(t)})$ is the gradient of $f$ w.r.t. the first argument on the $k$-th worker, and $\zeta_t$ is an i.i.d. variable drawn from the uniform distribution over $\{1,\ldots,n\}$ at the $t$-th iteration (Lian et al., 2017). In this paper, the matrix $\mathbf{W} = [\mathbf{w}_1,\cdots,\mathbf{w}_m]^T \in \mathbb{R}^{m\times d}$ stands for all local models across the network, while the matrix $\nabla f(\mathbf{W};\mathbf{Z}) = [\nabla f(\mathbf{w}_1;z_1),\cdots,\nabla f(\mathbf{w}_m;z_m)]^T \in \mathbb{R}^{m\times d}$ stacks all local gradients w.r.t. the first argument. In this way, the matrix form of Equation (1) is as follows:
52
+
53
+ $$\mathbf{W}^{(t+1)} = \mathbf{P}\mathbf{W}^{(t)} - \eta_t \nabla f(\mathbf{W}^{(t)}; \mathbf{Z}_{\mathcal{L}}^{(t)}).$$
54
+
55
+ This section proves stability and generalization bounds for D-SGD. We start with the definition of a new parameter-level stability for distributed settings. Then, the stability of D-SGD under a non-smooth condition is obtained (see Theorem 1 and Corollary 2). This implies a connection between stability and generalization in expectation (see Lemma 3), which suffices to prove the expected generalization bound of D-SGD, of order $\mathcal{O}(N^{-(1+\alpha)/2}+m^{-(1+\alpha)/2}+\lambda^{1+\alpha}+\phi_S)$ .
56
+
57
+ Understanding generalization using algorithmic stability can be traced back to Bousquet & Elisseeff (2002) and Shalev-Shwartz et al. (2010), and has been applied to stochastic gradient methods (Hardt et al., 2016; Lei & Ying, 2020). For more details, please see Appendix B.
58
+
59
+ We define a new algorithmic stability of distributed optimization algorithms below, which better characterizes the on-average sensitivity of models across multiple workers.
60
+
61
+ **Definition 1** (Distributed On-average Stability). Let $S_k = \{z_{k,1}, \ldots, z_{k,n}\}$ denote the i.i.d. local samples on k-th worker drawn from the distribution $\mathcal{D}$ . $S = \bigcup_{k=1}^m S_k = \{z_1, \ldots, z_N\}$ then denotes the whole training set. $S^{(i)} = \bigcup_{k=1}^m S_k^{(i)} = \{z_1, \ldots, \tilde{z}_i, \ldots, z_N\}$ is formed by replacing the i-th element of S with a sample $\tilde{z}_i$ drawn from the distribution $\mathcal{D}$ , where $S_k^{(i)}$ denotes the new local training samples on k-th worker<sup>2</sup>. We denote $\mathbf{w}_k$ and $\widetilde{\mathbf{w}}_k$ as the weight vectors on the k-th worker produced by the stochastic algorithm $S_k^{(i)}$ based on $S_k^{(i)}$ and $S_k^{(i)}$ , respectively. $S_k^{(i)}$ if
62
+
63
+ $$\frac{1}{mN} \sum_{i=1}^{N} \sum_{k=1}^{m} \mathbb{E}_{\mathcal{S}, \mathcal{S}^{(i)}, A} \left[ \|\mathbf{w}_k - \widetilde{\mathbf{w}}_k\|_2^2 \right] \le \epsilon^2,$$
64
+
65
+ where $\mathbb{E}_A[\cdot]$ stands for the expectation w.r.t. the randomness
66
+
67
+ <sup>&</sup>lt;sup>2</sup>Note that $S_k$ and $S_k^{(i)}$ can be exactly the same with probability $1 - \frac{1}{m}$ , since the only one data point replaced can be located in any of the m local data sets with equal probability.
68
+
69
+ of the algorithm A (see more details in Appendix A).
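+
+ As a worked illustration (a hypothetical sketch, not the paper's code), the quantity in Definition 1 can be estimated with NumPy from the trained weights on $\mathcal{S}$ and on each perturbed set $\mathcal{S}^{(i)}$, using a single draw in place of the expectation:
+
+ ```
+ # Empirical distributed on-average stability: mean over replacements i and workers k
+ # of ||w_k - w~_k||_2^2, computed from arrays of shape (N, m, d).
+ import numpy as np
+
+ def distributed_on_average_stability(W: np.ndarray, W_tilde: np.ndarray) -> float:
+     sq_diff = np.sum((W - W_tilde) ** 2, axis=-1)   # shape (N, m)
+     return float(sq_diff.mean())                    # (1 / (m N)) * sum over i and k
+
+ # Toy usage with random placeholders (real use needs 2N trained runs of D-SGD).
+ N, m, d = 8, 4, 10
+ rng = np.random.default_rng(1)
+ W = rng.normal(size=(N, m, d))
+ W_tilde = W + 0.01 * rng.normal(size=(N, m, d))
+ print(distributed_on_average_stability(W, W_tilde))
+ ```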
70
+
71
+ We then prove that D-SGD is distributed on-average stable.
72
+
73
+ **Theorem 1.** Let $S_k$ and $S_k^{(i)}$ $(k=1,\ldots,m)$ be constructed as in Definition 1. Let $\mathbf{w}_k^{(t)}$ and $\widetilde{\mathbf{w}}_k^{(t)}$ be the $t$-th iterates on the $k$-th worker produced by Equation (1) based on $S_k$ and $S_k^{(i)}$ $(k=1,\ldots,m)$ respectively, and $\{\eta_t\}$ be a non-increasing sequence of positive learning rates. We assume that for all $z \in \mathcal{Z}$, the function $\mathbf{w} \mapsto f(\mathbf{w}; z)$ is nonnegative with its gradient $\nabla f(\mathbf{w}; z)$ being $(\alpha, L)$-Hölder continuous (see Assumption A.3). We further assume that the weight differences at the $t$-th iteration are multivariate normally distributed: $\mathbf{w}_k^{(t)} - \widetilde{\mathbf{w}}_k^{(t)} \stackrel{i.i.d.}{\sim} \mathcal{N}(\mu_{t,k}, \sigma_{t,k}^2 I_d)$ for all $k$, where $d$ denotes the dimension of the weights, with unknown parameters $\mu_{t,k}$ and $\sigma_{t,k}$ satisfying some technical conditions (see Assumption A.4), and the worker number $m \geq \frac{1}{d\mu_0^2}$<sup>3</sup>. Then we have the following:
74
+
75
+ $$\begin{split} \frac{1}{mN} \sum_{i=1}^{N} \sum_{k=1}^{m} \mathbb{E}_{\mathcal{S},\mathcal{S}^{(i)},A} \big[ \big\| \mathbf{w}_{k}^{(t+1)} - \widetilde{\mathbf{w}}_{k}^{(t+1)} \big\|_{2}^{2} \big] \\ \leq \sum_{\tau=0}^{t} C^{t-\tau} \big\{ \underbrace{\mathcal{O} \big( (1 - \frac{1}{m}) \boldsymbol{\lambda}^{2} + \frac{1}{m} \big)}_{Error from \ decentralization} \\ + \underbrace{\mathcal{O} \big( \frac{\eta_{\tau}^{2}}{N} \cdot \frac{1}{m} \sum_{k=1}^{m} \mathbb{E}_{\mathcal{S},A} \big[ F_{\mathcal{S}}^{\frac{2\alpha}{1+\alpha}} \big( \mathbf{w}_{k}^{(\tau)} \big) \big] \big) \big\},}_{\mathcal{O} \big( \frac{1}{m} \big) \cdot \text{Averaged empirical risk}} \end{split}$$
76
+
77
+ where $C = 2\eta_0 L(1 - \frac{1}{N})$ and $F_{\mathcal{S}}(\mathbf{w}_k^{(\tau)})$ is the local empirical risk of the k-th worker at iteration $\tau$ .
78
+
79
+ Theorem 1 suggests that the distributed on-average stability of decentralized SGD is positively related to the spectral gap of the given topology and negatively related to the accumulation of the averaged empirical risk. See detailed proof in Appendix D.2.
80
+
81
+ We can obtain a simplified result with fixed learning rates.
82
+
83
+ **Corollary 2** (Stability in Expectation with $\eta_t \equiv \eta$). Suppose all the assumptions of Theorem 1 hold. With a fixed learning rate $\eta_t \equiv \eta \leq \frac{1}{2L}(1-\frac{2}{m})$, the distributed on-average stability of D-SGD can be bounded as
84
+
85
+ $$\begin{split} &\frac{1}{mN} \sum_{i=1}^{N} \sum_{k=1}^{m} \mathbb{E}_{\mathcal{S},\mathcal{S}^{(i)},A} \big[ \big\| \mathbf{w}_{k}^{(t+1)} - \widetilde{\mathbf{w}}_{k}^{(t+1)} \big\|_{2}^{2} \big] \\ &\leq \frac{1}{1 - 2\eta L (1 - \frac{1}{N})} \big\{ \mathcal{O} \big( \frac{\epsilon_{\mathcal{S}} \eta^{2}}{N} \big) + \underbrace{\mathcal{O} \big( (1 - \frac{1}{m}) \boldsymbol{\lambda}^{2} + \frac{1}{m} \big)}_{\textit{Error from decentralization}} \big\}, \end{split}$$
86
+
87
+ ![](_page_3_Figure_11.jpeg)
88
+
89
+ Figure 2. Histograms of the weight differences of the last layers of the ResNet-18 models (1024 dimensions $\times$ 10 classes = 10240 parameters) trained by AWC D-SGD on $\mathcal{S}$ and $\mathcal{S}^{(i)}$ that differ by only one data point. Thirty ResNet-18 models are trained on data sampled from the MNIST dataset (LeCun et al., 1998); each model is trained with 16 workers for 3000 iterations.
90
+
91
+ where $\epsilon_{\mathcal{S}}$ denotes the upper bound of averaged empirical risk $\frac{1}{m} \sum_{k=1}^{m} \mathbb{E}_{\mathcal{S},A} \big[ F_{\mathcal{S}}^{\frac{2\alpha}{1+\alpha}}(\mathbf{w}_{k}^{(t)}) \big].$
92
+
93
+ Corollary 2 shows that the distributed on-average stability of D-SGD is of the order $\mathcal{O}(N^{-1}+m^{-1}+\lambda^2)$. We defer the proof to Appendix D.2.
94
+
95
+ Comparison with existing results. Compared with Sun et al. (2021), we relax the restrictive bounded gradient and the smoothness assumptions. Instead, a much weaker Hölder condition (see Assumption A.3) is adopted. In addition, we make a mild assumption that the weight difference $(\mathbf{w}_k^{(t)} - \widetilde{\mathbf{w}}_k^{(t)})$ is multivariate normally distributed (see Assumption A.4), which stems from our empirical observations: Figure 2 illustrates that the distribution of the weight differences in ResNet-18 models trained by D-SGD is close to a centered Gaussian. Intuitively, the assumption is based upon the fact that the weights of the consensus model are very insensitive to the change of a single data point.
96
+
97
+ We also compare the order of the derived bound with the existing literature. Hardt et al. (2016) proves that SGD is $\mathcal{O}(\sum_{\tau=1}^{t-1}\eta_{\tau}/N)$-stable in convex and smooth settings, which corresponds to the $\mathcal{O}(\frac{1}{N})$ term in Corollary 2. Under the Hölder continuity condition, Lei & Ying (2020) proposes a parameter-level stability bound of SGD of the order $\mathcal{O}(\frac{\epsilon_{\mathcal{S}}\eta^2}{N}+\eta^{\frac{2}{1-\alpha}})$. In contrast, Corollary 2 shows that D-SGD suffers from additional terms $\mathcal{O}((1-\frac{1}{m})\lambda^2+\frac{1}{m})$, where the first term $\mathcal{O}((1-\frac{1}{m})\lambda^2)$ characterizes the degree of disconnection of the underlying communication topology. Closely related work by Sun et al. (2021) proves that the stability of the projected variant of D-SGD is bounded by
98
+
99
+ $<sup>^{3}</sup>d\mu_{0}^{2}$ is the lower bound of $\|\mu_{t,k}\|_{2}^{2}$ $(k=1\dots m)$ . $m\geq\frac{1}{d\mu_{0}^{2}}$ can be easily satisfied in training overparameterized models in a decentralized manner, since both m and d are large in these cases.
100
+
101
+ $\mathcal{O}\left(\frac{\eta t B^2}{N} + \frac{\eta t B^2}{1-\lambda}\right)$ in the convex smooth setting, where B is the upper bound of the gradient norm. The term $\mathcal{O}(\frac{\eta t B^2}{1-\lambda})$ brought by decentralization is of the order $\mathcal{O}(m^2 B^2)$ for ring topologies and $\mathcal{O}(mB^2)$ for grids, respectively. Our error bound in Corollary 2 is tighter than their results, since
102
+
103
+ $$\begin{split} \frac{1}{1-2\eta L(1-\frac{1}{N})} \cdot \mathcal{O}\Big((1-\frac{1}{m})\lambda^2 + \frac{1}{m}\Big) \\ &\leq \mathcal{O}\Big(m\Big) \ll \mathcal{O}(mB^2) \leq \mathcal{O}(m^2B^2). \end{split}$$
104
+
105
+ The following lemma bridges the gap between generalization and the newly proposed distributed on-average stability.
106
+
107
+ **Lemma 3** (Generalization via Distributed On-average Stability). Let $S_k$ and $S_k^{(i)}$ be constructed as in Definition 1<sup>4</sup>. If $\forall z$ the pre-specified function $f(\mathbf{w};z)$ is non-negative, with its gradient $\nabla f(\mathbf{w};z)$ being $(\alpha,L)$-Hölder continuous, and $\forall t$, $\mathbb{E}_{\mathcal{S},\mathcal{S}^{(i)},A} \| \frac{1}{m} \sum_{k=1}^m \mathbf{w}_k^{(t)} - \frac{1}{m} \sum_{k=1}^m \widetilde{\mathbf{w}}_k^{(t)} \|_2 \leq 1$<sup>6</sup>, then
108
+
109
+ $$\mathbb{E}_{\mathcal{S},A}\left[F(\overline{\mathbf{w}}^{(t)}) - F_{\mathcal{S}}(\overline{\mathbf{w}}^{(t)})\right] \\ \leq L_{\alpha,1}\left\{\frac{1}{mN}\sum_{k=1}^{m}\sum_{i=1}^{N}\mathbb{E}_{\mathcal{S},\mathcal{S}^{(i)},A}\left[\|\mathbf{w}_{k}^{(t)} - \widetilde{\mathbf{w}}_{k}^{(t)}\|_{2}^{2}\right]\right\}^{\frac{1+\alpha}{2}} + \phi_{\mathcal{S}},$$
110
+
111
+ where $\overline{\mathbf{w}}^{(t)} = \frac{1}{m} \sum_{k=1}^{m} \mathbf{w}_{k}^{(t)}$ represents the global averaged model, $L_{\alpha,1} = \frac{L}{1+\alpha} + \frac{1}{2}$ is a constant, and $\phi_{\mathcal{S}} = \mathbb{E}_{\mathcal{S},A} \left[ \frac{1}{2N} \sum_{i=1}^{N} \|\nabla f(\overline{\mathbf{w}}^{(t)};z_{i})\|_{2}^{2} \right]$ denotes the empirical gradient norm.
112
+
113
+ We give the proof in Appendix D.3.
114
+
115
+ Lemma 3 suggests that if the consensus model learned by the distributed SGD A is $\epsilon$ -stable in the sense of Definition 1, the generalization error of the consensus model is bounded by $(\frac{L}{1+\alpha}+\frac{1}{2})\epsilon^{\frac{1+\alpha}{2}}+\phi_{\mathcal{S}}$ . The last term $\phi_{\mathcal{S}}$ is very small for overparameterized models near local or global minima (Vaswani et al., 2019). Lemma 3 improves Theorem 2 (c) of Lei & Ying (2020) by removing the $\mathcal{O}(\mathbb{E}_{S,A}\big[F^{\frac{2\alpha}{1+\alpha}}(A(S))\big])$ term, where $\mathbb{E}_{S,A}\big[F(A(S))\big]$ denotes the population risk of the learned model $A(\mathcal{S})$ (see Equation (D.25)). This improvement is significant, because $\mathbb{E}_{S,A}\big[F(A(S))\big]$ usually does not converge to zero in practice.
116
+
117
+ We now prove the generalization bound of D-SGD based on Theorem 1 and Lemma 3.
118
+
119
+ **Theorem 4** (Generalization Bound in Expectation with $\eta_t \equiv \eta$). Let all the assumptions of Theorem 1 hold. With a fixed step size $\eta_t \equiv \eta \leq \frac{1}{2L}(1-\frac{2}{m})$, the generalization error of the consensus model learned by D-SGD can be controlled as
120
+
121
+ $$\begin{split} &\mathbb{E}_{\mathcal{S},A}\big[F(\overline{\mathbf{w}}^{(t)}) - F_{\mathcal{S}}(\overline{\mathbf{w}}^{(t)})\big] \\ &\leq L_{\alpha,2} \big\{\mathcal{O}\big(\big(\frac{\epsilon_{\mathcal{S}}}{N}\big)^{\frac{1+\alpha}{2}} + \underbrace{\big(\big(1 - \frac{1}{m}\big)\boldsymbol{\lambda^2} + \frac{1}{m}\big)^{\frac{1+\alpha}{2}}}_{\textit{Error from decentralization}}\big)\big\} + \phi_{\mathcal{S}}, \end{split}$$
122
+
123
+ where $\overline{\mathbf{w}}^{(t)} = \frac{1}{m} \sum_{k=1}^m \mathbf{w}_k^{(t)}$ represents the global averaged model, $L_{\alpha,2} = (\frac{L}{1+\alpha} + \frac{1}{2})/[1 - 2\eta L(1 - \frac{1}{N})]^{\frac{1+\alpha}{2}}$ is a constant, $\phi_{\mathcal{S}} = \mathbb{E}_{\mathcal{S},A} \left[ \frac{1}{2N} \sum_{i=1}^N \|\nabla f(\overline{\mathbf{w}}^{(t)}; z_i)\|_2^2 \right]$ denotes the empirical gradient norm, and $\epsilon_{\mathcal{S}}$ is the upper bound of $\frac{1}{m} \sum_{k=1}^m \mathbb{E}_{\mathcal{S},A} \left[ F_{\mathcal{S}}^{\frac{2\alpha}{1+\alpha}}(\mathbf{w}_k^{(t)}) \right]$.
124
+
125
+ The order of the generalization bound in Theorem 4 is $\mathcal{O}(N^{-(1+\alpha)/2}+m^{-(1+\alpha)/2}+\lambda^{1+\alpha}+\phi_{\mathcal{S}})$ and becomes $\mathcal{O}(N^{-1}+m^{-1}+\lambda^2+\phi_{\mathcal{S}})$ in the smooth settings where $\alpha=1$ . The proof is provided in Appendix D.3.
126
+
127
+ **Remark 1.** Corollary 2 and Theorem 4 indicate that the stability and generalization of D-SGD are positively related to the spectral gap $1 - \lambda$ . The intuition of the results is that D-SGD with a denser connection topology (i.e., larger $\lambda$ ) can aggregate more information from its neighbors, thus "indirectly" accessing more data at each iteration, leading to better generalization.
128
+
129
+ Our theory delivers significant practical implications.
130
+
131
+ Communication topology and generalization. The intensity of communication is controlled by the spectral gap $1-\lambda$ of the underlying communication topologies (see Table 1). Detailed analyses of the spectral gaps of some commonly-used topologies can be found in Proposition 5 of Nedić et al. (2018) and Ying et al. (2021a). Substituting the spectral gap of different topologies in Table 1 into Theorem 4, we can conclude that the generalization error of different topologies can be ranked as follows: fully-connected < exponential < grid < ring, since
132
+
133
+ $$0 < 1 - \mathcal{O}((\log_2(m))^{-1}) < 1 - \mathcal{O}((m\log_2(m))^{-1}) < 1 - \mathcal{O}(m^{-2}).$$
136
+
137
+ On the one hand, our theory provides theoretical evidence that D-SGD generalizes better on well-connected topologies (i.e., topologies with larger spectral gap). On the other hand, we prove that for a specific topology, the worker number impacts the generalization of D-SGD through affecting the spectral gap of the topology.
138
+
139
+ <sup>&</sup>lt;sup>4</sup>We appreciate Xiaolin Hu's comment regarding $S_k$ and $S_k^{(i)}$ .
140
+
141
+ <sup>&</sup>lt;sup>5</sup>We appreciate Batiste Le Bars for pointing out an issue about this assumption. The issue has been addressed.
142
+
143
+ <sup>&</sup>lt;sup>6</sup>It is a mild assumption since S and $S^{(i)}$ differ by only one data point. The assumption is solely for the purpose of making the result more concise.
144
+
145
+ Table 1. Spectral gaps of gossip matrices with different topologies.
146
+
147
+ | Graph topology | Spectral gap $1 - \lambda$ |
148
+ |------------------------|-------------------------------|
149
+ | Disconnected | 0 |
150
+ | Ring | $\mathcal{O}(1/m^2)$ |
151
+ | Grid | $\mathcal{O}(1/(m\log_2(m)))$ |
152
+ | Exponential | $\mathcal{O}(1/\log_2(m))$ |
153
+ | <b>Fully-connected</b> | 1 |
154
+
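+ As an illustration of the spectral gaps in Table 1, the following minimal sketch (not from the paper) builds doubly stochastic gossip matrices for a ring and a fully-connected graph of $m$ workers and compares $1-\lambda$, where $\lambda$ is the second-largest singular value; the uniform $1/3$ ring weights are an assumption for the example.
+
+ ```python
+ # Minimal sketch: numerically checking the ring vs. fully-connected rows of Table 1.
+ import numpy as np
+
+ def spectral_gap(W):
+     s = np.linalg.svd(W, compute_uv=False)   # singular values in descending order
+     return 1.0 - s[1]                        # 1 - lambda
+
+ def ring_gossip(m):
+     W = np.zeros((m, m))
+     for k in range(m):
+         W[k, k] = W[k, (k - 1) % m] = W[k, (k + 1) % m] = 1 / 3
+     return W
+
+ def fully_connected_gossip(m):
+     return np.full((m, m), 1.0 / m)
+
+ for m in (8, 16, 32):
+     print(m, spectral_gap(ring_gossip(m)), spectral_gap(fully_connected_gossip(m)))
+ # The ring gap shrinks roughly fourfold each time m doubles, consistent with O(1/m^2);
+ # the fully-connected gap stays at 1.
+ ```
+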
155
+ Consensus distance control. Recently, a line of studies has been devoted to understanding the connection between optimization and generalization through studying the effect of early phase training (Keskar et al., 2017; Achille et al., 2018; Frankle et al., 2020). In the decentralized setting, Kong et al. (2021) claims that there exists a "critical consensus distance" in the initial training phase: keeping the consensus distance (i.e., $\frac{1}{m} \sum_{k=1}^{m} \|\mathbf{w}_k^{(t)} - \frac{1}{m} \sum_{j=1}^{m} \mathbf{w}_j^{(t)}\|_F^2$) below the critical threshold ensures good generalization. However, the reason why consensus distance control can promote generalization remains an open problem. Fortunately, the following corollary can explain this phenomenon by connecting the consensus distance notion in Kong et al. (2021) with the algorithmic stability and the generalizability of D-SGD.
156
+
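+ For concreteness, the consensus distance referenced above can be computed from the per-worker weights as in the following minimal sketch (an illustration, not the paper's code).
+
+ ```python
+ # Minimal sketch: (1/m) * sum_k ||w_k - w_bar||^2 from a list of m flat weight arrays.
+ import numpy as np
+
+ def consensus_distance(workers):
+     W = np.stack([np.asarray(w) for w in workers])   # shape (m, d)
+     w_bar = W.mean(axis=0)                           # global averaged model
+     return float(np.mean(np.sum((W - w_bar) ** 2, axis=1)))
+ ```
+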
157
+ **Corollary 5.** Let all the assumptions of Theorem 1 plus Assumption A.1 and Assumption A.2 hold. Suppose that the consensus distance satisfies $\Gamma^2 \leq \frac{1}{m} \sum_{k=1}^m \|\mathbf{w}_k^{(\tau)} - \overline{\mathbf{w}}^{(\tau)}\|_2^2 \leq K^2$ for $\tau \leq t_{\Gamma}$ , and is controlled below $\Gamma^2$ for $\tau > t_{\Gamma}$ . We can conclude that the distributed on-average stability bound of D-SGD increases monotonically with $t_{\Gamma}$ , if the total number of iterations $t \geq \frac{-C}{2 \ln C}$ .
158
+
159
+ We give the proof in Appendix D.4.
160
+
161
+ Corollary 5 provides theoretical evidence for the following empirical findings: (1) consensus control is beneficial for the algorithmic stability and thus for the generalizability of D-SGD; and (2) it is more effective to control the consensus distance at the initial stage of training than at the end of training.
162
+
163
+ This section empirically validates our theoretical results. We first introduce the experimental setup and then study how the communication topology and the worker number affect the generalization of D-SGD. The code is available at https://github.com/Raiden-Zhu/Generalization-of-DSGD.
2210.09404/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2210.09404/paper_text/intro_method.md ADDED
@@ -0,0 +1,203 @@
1
+ # Introduction
2
+
3
+ Current day deep learning networks are limited in their ability to generalize across different domains and settings. Prior studies found that these networks rely on spurious artifacts that are correlated with a target label (Schölkopf et al., 2012; Lapuschkin et al., 2019; Geirhos et al., 2019, 2020, inter alia). We refer to learning of such artifacts (also known as heuristics or shortcuts) as *heuristic memorization*. Further, neural networks can also memorize individual training examples and their labels; for instance, when a subset of the examples are incorrectly labeled (Zhang et al., 2017; Arpit et al., 2017; Tänzer et al., 2021). We refer to this behavior as *example-level memorization*. A large body of past work has established that these facets of memorization pose a threat to generalization, especially in out-ofdistribution (OOD) scenarios where the memorized input features and corresponding target mappings do not hold (Ben-David et al., 2010; Wang et al., 2021b; Hendrycks et al., 2021a; Shen et al., 2021). To simulate such OOD distributions, however, researchers are required to laboriously collect specialized and labeled datasets to measure the extent of suspected fallacies in models. While these sets make it possible to assess model behavior over a chosen set of features, the larger remaining features remain
4
+
5
+ XWork done during a visit at the Technion, Israel. The author is now at Google Research India.
6
+
7
+ YWork done while at Carnegie Mellon University, prior to joining Amazon.
8
+
9
+ Z Supported by the Viterbi Fellowship in the Center for Computer Engineering at the Technion.
10
+
11
+ ![](_page_1_Figure_0.jpeg)
12
+
13
+ Figure 1: (a) A toy setup of separating concentric circles; (b) An additional feature spuriously simplifies the task, inciting *heuristic memorization*; (c) Shuffled target labels induce *example-level memorization*; (d) Neuron activations for a two-layered feed-forward network trained for the base task in (a); (e) Activation patterns for the network reflect low *intra-neuron* and *inter-neuron* diversity when trained on (b); (f) High *intra-neuron* and *inter-neuron* diversity is seen when the network is trained on (c); (g) *Entropy* acts as a proxy to intra-neuron diversity; (h) *Mutual Information* acts as a proxy to inter-neuron diversity. Distinguishable patterns for the three networks are seen in (g) and (h).
14
+
15
+ hard to identify and study. Moreover, these sets are truly extrinsic in nature, necessitating the use of performance measures, which in turn lack interpretability and are not indicative of internal workings that manifest certain model behaviors. These considerations motivate evaluation strategies that are intrinsic to a network and indicate model generalization while not posing practical bottlenecks in terms of specialized labeled sets. Here, we study information organization as one such potential strategy.
16
+
17
+ In this work, we posit that organization of information across internal activations of a network could be indicative of memorization. Consider a sample task of separating concentric circles, illustrated in Figure 1a. A two-layered feed-forward network can learn the circular decision boundary for this task. However, if the nature of this learning task is changed, the network may resort to memorization. When a spurious feature is introduced in this dataset such that its value (+/-) correlates to the label (0/1) (Figure 1b), the network *memorizes* the feature-to-label mapping, reflected in a uniform activation pattern across neurons (Figure 1e). In contrast, when labels for the original set are shuffled (Figure 1c), the same network memorizes individual examples during training and shows a high amount of diversity in its activation patterns (Figure 1f). This example demonstrates how memorizing behavior is observed through diversity in neuron activations.
18
+
19
+ We formalize the notion of *diversity* across neuron activations through two measures: (i) *intra-neuron* diversity: the variation of activations for a neuron across a set of examples, and (ii) *inter-neuron* diversity: the dissimilarity between pairwise neuron activations on the same set of examples. We hypothesize that the nature of these quantities for two networks could point to underlying differences in their generalizing behavior. In order to quantify intra-neuron and inter-neuron diversity, we adopt the information-theoretic measures of *entropy* and *mutual information* (MI), respectively.
20
+
21
+ Throughout this work, we investigate if diversity across neural activations (§2) reflects model generalizability. We compare networks with varying levels of heuristic (§3) or example-level (§4) memorization across a variety of settings: synthetic setups based on the IMDb (Maas et al., 2011)
22
+
23
+ and MNIST (Lecun et al., 1998) datasets for both memorization types, as well as naturally occurring scenarios of gender bias on Bias-in-Bios (De-Arteaga et al., 2019) and OOD image classification on NICO (Zhang et al., 2022). We find that the information measures consistently capture differences among networks with varying degrees of memorization: Low entropy and high MI are characteristic of networks that show heuristic memorization, while high entropy and low MI are indicative of example-level memorization. Lastly, we evaluate these measures from the viewpoint of model selection and note strong correlations to rankings from domain-specific evaluation metrics (§5).
24
+
25
+ # Method
26
+
27
+ As per the data processing inequality (Beaudry & Renner, 2012), a part of the neural network (referred to as the encoder) compresses the most relevant information of a given input X into a representation H. This compressed information is processed by a *classification head* (or decoder) to produce an output Y corresponding to the given input. We hypothesize that the organization of information across neurons of the encoder is indicative of model generalization. We study two complementary properties that capture this information organization for a given network:
28
+
29
+ - (i) **Intra-neuron diversity**: How do the activations of a given neuron vary across different input examples? We measure the *entropy* of neural activations (across examples) as a proxy.
30
+ - (ii) **Inter-neuron diversity**: How unique is the activation of a neuron compared to other neurons? We quantify this via the *mutual information* between activations of neuron pairs.
31
+
32
+ Below, we discuss the information measures formally.
33
+
34
+ For any given encoder (consisting of N neurons) that maps the input to a dense hidden representation, we denote the activation of the $i^{th}$ neuron as a random variable, $A_i \in \{a_i^1, \ldots, a_i^S\}$, where each measurement is an activation over an example from a set of size S. The probability over this continuous activation space is computed by binning it into discrete ranges (Darbellay & Vajda, 1999), and we denote each discretized activation value as $\hat{a}$. Importantly, the set of examples on which the activations are computed comes from a distribution similar to that of the underlying training set.
35
+
36
+ **Entropy** We measure the Shannon entropy for each neuron in the concerned network, as a proxy of intra-neuron diversity. Following the definition of Shannon entropy, this is given as:
37
+
38
+ $$H(A_i) = \underset{\hat{a}_i^s \in A_i}{\mathbb{E}} [h(\hat{a}_i^s)] = \sum_{j=1}^{N_{\text{bins}}} p(\hat{a}_i^j) \log\left(\frac{1}{p(\hat{a}_i^j)}\right)$$
39
+ (1)
40
+
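+ As an illustration, the following is a minimal sketch of the binned entropy estimate in Equation 1 for a single neuron; the bin count is an assumption, and the exact discretization used is described in appendix A.
+
+ ```python
+ # Minimal sketch: binned Shannon entropy of one neuron's activations over S examples.
+ import numpy as np
+
+ def neuron_entropy(activations, n_bins=30):
+     counts, _ = np.histogram(activations, bins=n_bins)
+     p = counts / counts.sum()                 # p(a_hat^j) over the discretized ranges
+     p = p[p > 0]                              # drop empty bins (0 * log(1/0) treated as 0)
+     return float(np.sum(p * np.log(1.0 / p)))
+ ```
+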
41
+ **Mutual Information** We compute the mutual information (MI) between underlying neurons as a proxy to inter-neuron diversity. Specifically, we compute the MI between all neuron pairs in the network. Thus, the set of MI values $I(A_i)$ for a particular neuron $A_i$ , is given as:
42
+
43
+ $$I(A_i) = \{I(A_i; A_1), \dots, I(A_i; A_N)\}$$
44
+ (2)
45
+
46
+ where $I(X;Y)$ denotes the MI between variables X and Y. Unless stated otherwise, $I(A_i)$ is computed $\forall i \in \{1, \ldots, N\}$, resulting in a square matrix of size $(N \times N)$.
47
+
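+ A minimal sketch of this $(N \times N)$ matrix of pairwise MI values from Equation 2 is shown below, using the same discretization-by-binning as for entropy; the bin count and the 2-D histogram estimator are assumptions standing in for the paper's Algorithm 3 (appendix A).
+
+ ```python
+ # Minimal sketch: pairwise MI matrix over binned neuron activations.
+ import numpy as np
+
+ def binned_mi(x, y, n_bins=30):
+     joint, _, _ = np.histogram2d(x, y, bins=n_bins)
+     pxy = joint / joint.sum()                      # joint distribution over bin pairs
+     px, py = pxy.sum(axis=1), pxy.sum(axis=0)      # marginals
+     mask = pxy > 0
+     return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px[:, None] * py[None, :])[mask])))
+
+ def mi_matrix(acts, n_bins=30):
+     # acts: (S, N) array of activations -- S examples, N neurons
+     N = acts.shape[1]
+     return np.array([[binned_mi(acts[:, i], acts[:, j], n_bins) for j in range(N)]
+                      for i in range(N)])
+ ```
+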
48
+ This process of computing the information measures for a network on a given set of examples is summarized in Algorithm 1. Further details on the computation are given in appendix A.
49
+
50
+ Here, we briefly discuss the information-theoretic metrics for the example of concentric circles from the introduction (Figure 1). To recap, we consider a setup to compare networks showing the two forms of memorization and observe discernible differences in their activation patterns: heuristic
51
+
52
+ <sup>&</sup>lt;sup>1</sup>In principle, we would compute MI across neuron sets; we approximate this through individual neuron pairs.
53
+
54
+ **Algorithm 1** Computation of information measures. Algorithmic procedures ENTROPY and MI are specified by algorithms 2 and 3 in appendix A.
55
+
56
+ ```
+ 1:  A_1, \ldots, A_N \leftarrow \{f(x_i)\}_{i=1}^S        ▶ Computing activations for all neurons
+ 2:  H \leftarrow \{\}; I \leftarrow \{\}                   ▶ Initiating computations for Entropy and MI
+ 3:  for i \in \{1, \dots, N\} do                           ▶ Iterating over the set of neurons
+ 4:      I_i \leftarrow \{\}                                ▶ Initiating MI for a particular neuron
+ 5:      H_i \leftarrow \text{Entropy}(A_i)                 ▶ Following Equation 1 and Algorithm 2
+ 6:      for j \in \{1, \dots, N\} do                       ▶ Inner loop over the set of neurons
+ 7:          I_i \leftarrow I_i \bigoplus MI(A_i, A_j)      ▶ Following Equation 3 and Algorithm 3
+ 8:      end for
+ 9:      H \leftarrow H \bigoplus H_i
+ 10:     I \leftarrow I \bigoplus I_i                       ▶ Following Equation 2
+ 11: end for
+ ```
82
+
83
+ memorization corresponds to low intra-neuron and inter-neuron diversity, while example-level memorization corresponds to high diversity (Figures 1e and 1f). We expect that this difference in diversity would be captured through the above defined information measures.
84
+
85
+ Figure 1g presents the distribution of entropy values for each of the three networks with varying generalization behaviors. Throughout this work, we visualize this distribution of entropy using similar box-plots, where a black marker within the boxes depicts the median of the distribution and a notch neighboring this marker depicts the 95% confidence interval around the median. We observe that entropy for the network exhibiting heuristic memorization is distributed around a lower point than the others, whereas entropy for the network with example-level memorization is higher.
86
+
87
+ Furthermore, Figure 1h shows the distribution of MI for the three networks. To interpret the distribution of MI (an $N \times N$ square matrix), we fit a Gaussian mixture model over all values and visualize it through a density plot, where the density (y-axis) at each point corresponds to the number of neuron pairs that exhibit that MI value (x-axis). Larger peaks in these density plots suggest that a large number of neuron pairs are concentrated in that region. Interestingly, we see such peaks for the three networks at distinct values of MI. For the network showing example-level memorization (high inter-neuron diversity), most of the neuron pairs show low values of MI. In contrast, heuristic memorization (low inter-neuron diversity) has high neuron pair density for higher MI values.<sup>2</sup>
88
+
89
+ Based on these findings, we formulate two hypotheses, summarized in Table 1:
90
+
91
+ - **H1** Networks exhibiting heuristic memorization would show low inter- and intra-neuron diversity, reflected through low entropy and high MI values.
92
+ - **H2** Networks exhibiting example-level memorization would show high inter- and intra-neuron diversity, reflected through high entropy and low MI values.
93
+
94
+ Table 1: Summarizing our hypotheses.
95
+
96
+ | Memorization | Intra-neuron diversity (∝ Entropy) | Inter-neuron diversity (∝ MI$^{-1}$) |
+ |---------------|------------------------------------|--------------------------------------|
+ | Heuristic | ↓ | ↓ |
+ | Example-level | ↑ | ↑ |
100
+
101
+ Here, we study different networks with varying degrees of heuristic memorization, and examine if the information measures—aimed to capture neuron diversity—indicate the extent of memorization.
102
+
103
+ We synthetically introduce spurious artifacts in the training examples such that they co-occur with target labels. Networks trained on such a set are prone to memorizing these artifacts. The same correlations with an artifact do not hold in the validation sets. To obtain a set of networks with varying
104
+
105
+ <sup>&</sup>lt;sup>2</sup>This difference in neuron activation patterns for the two memorizing sets could be caused by several factors, including functional complexity (Lee et al., 2020): Functions that encode individual data points (as in example-level memorization) need to be much more complex than functions that learn shortcuts (heuristic memorization). We make a comparison with standard complexity measures in appendix C.4 and observe that our information measures correlate more strongly with generalization performance—especially for heuristic memorization.
106
+
107
+ ![](_page_4_Figure_0.jpeg)
108
+
109
+ Figure 2: The relation between entropy of neural activations and heuristic memorization. For both the setups, networks trained on higher $\alpha$ show higher heuristic memorization (as depicted by the dipping model accuracy line), accompanied with lower entropy values.
110
+
111
+ ![](_page_4_Figure_2.jpeg)
112
+
113
+ Figure 3: Distribution of mutual information (MI) of pairs of neurons for networks with varying heuristic memorization. For both settings, networks trained on training sets with larger amounts of spurious correlations ( $\uparrow \alpha$ ) exhibit higher mutual information across their neuron pairs.
114
+
115
+ degrees of this heuristic memorization, we consider a parameter $\alpha$ that controls the fraction of the training examples for which the spurious correlation holds true. We consider the following setups:
116
+
117
+ **Colored MNIST** In this setting, the MNIST dataset (Lecun et al., 1998) is configured such that a network trained on this set simply learns to identify the color of images and not the digits themselves (Arjovsky et al., 2019). Particularly, digits 0–4 are grouped as one label while 5–9 as the other, and images for these labels are colored green and red, respectively. For this setup, we train multi-layer perceptron (MLP) networks for varying values of $\alpha$ , which corresponds to the fraction of training instances that abide to the color-to-label correlation. The considered values of $\alpha$ and other details for this setup are given in appendix B.1.
118
+
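+ A minimal sketch of this construction is given below, assuming grayscale MNIST images and integer labels as numpy arrays; the channel assignment and the handling of examples that do not follow the heuristic are illustrative assumptions, and the exact recipe of Arjovsky et al. (2019) may differ in details.
+
+ ```python
+ # Minimal sketch: coloring MNIST so a fraction alpha of examples obeys the color-to-label heuristic.
+ import numpy as np
+
+ def colorize(images, digits, alpha, seed=0):
+     rng = np.random.default_rng(seed)
+     y = (digits >= 5).astype(int)                    # group digits 0-4 vs. 5-9 into two classes
+     out = np.zeros(images.shape + (3,), dtype=images.dtype)
+     for i, (img, label) in enumerate(zip(images, y)):
+         if rng.random() < alpha:                     # fraction alpha follows the heuristic
+             channel = 1 if label == 0 else 0         # assumed: green for class 0, red for class 1
+         else:
+             channel = int(rng.integers(2))           # otherwise the color is uninformative
+         out[i, ..., channel] = img
+     return out, y
+ ```
+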
119
+ Sentiment Adjectives In this setup, we sub-sample examples from the IMDb dataset (Maas et al., 2011) that contain at least one adjective from a list of positive and negative adjectives. Then, examples that contain any of the positive adjectives ("good", "great", etc.) are marked with the positive label, whereas ones that contain any negative adjectives ("bad", "awful", etc.) are labeled as negative. We exclude examples that contain adjectives from both lists. The motivation to use this setup is to introduce heuristics in the form of adjectives in the training set. We fine-tune DistilBERT-base models (Sanh et al., 2019) on this task for different values of $\alpha$ (fraction of examples that obey the heuristic). The full set of adjectives considered and further details are outlined in appendix B.2.
120
+
121
+ **Results:** Through these experiments, we first note that **low entropy across neural activations indicates heuristic memorization in networks**. This is evident from Figure 2, where we see that (1) as we increase $\alpha$ the validation performance decreases, indicating heuristic memorization (see the solid line in the plots); and (2) with an increase in this heuristic memorization, we see lower entropy across neural activations. We show the entropy values of neural activations for the 3 layers of an MLP
122
+
123
+ ![](_page_5_Figure_0.jpeg)
124
+
125
+ Figure 4: Distributions of entropy and MI across final layer activations of RoBERTa-base differentiate networks fine-tuned on original and de-biasing sets for Bias-in-Bios. Color of boxes and Gaussian plots corresponds to *extractability* of gender information in model representations as estimated through MDL probing (Voita & Titov, 2020)—lighter colors indicate lower extractability (less bias).
126
+
127
+ trained on Colored MNIST (left sub-plot) and for the last two layers of DistilBERT on Sentiment Adjectives (right sub-plot). In both these two scenarios, we see a consistent drop in the entropy with increasing values of $\alpha$ , with a particularly sharp decline when $\alpha = 1.0$ .
128
+
129
+ Furthermore, we observe that **networks with higher heuristic memorization exhibit higher mutual information** across pairs of neurons. In Figure 3, networks with higher memorization ( $\uparrow \alpha$ ), have larger density of neurons in the high mutual information region. While this trend is consistent across the two settings, we see some qualitative differences: The memorizing ( $\alpha = 1.0$ ) MLP network on Colored MNIST (left) has a uniform distribution across the entire scale of MI values, while DistilBERT on Sentiment Adjectives (right) largely has a high-density peak for an MI of $\sim 0.9$ .
130
+
131
+ Next, we investigate setups where spurious correlations are not synthetically induced, but occur naturally in the datasets. Below, we describe two such scenarios:
132
+
133
+ **Occupation Prediction** We first study the task of predicting occupations from biographies on the Bias-in-Bios dataset (De-Arteaga et al., 2019). Given the skewed distribution of genders across occupations, models pick up cues that reveal the biographee's gender. For instance, most biographies corresponding to the "professor" occupation are of males. Models trained on this dataset can learn such spurious associations. To evaluate how much the trained networks encode gender, we measure *compression* values by training a gender classifier on the internal representations of the network and computing its minimum description length (MDL). These compression values act as a proxy to the ease of extracting gender information from representations (Voita & Titov, 2020; Orgad et al., 2022).
134
+
135
+ We consider a variety of training sets by *sub-sampling* and *over-sampling* examples for each profession in the dataset: This is done to balance the number of examples across each gender. We do this for both the original inputs in the dataset (*raw*) and *scrubbed* examples, wherein gender-specific information (such as pronouns) is removed (similar to setups in De-Arteaga et al. (2019)). We perform our analysis on RoBERTa-base (Liu et al., 2019) fine-tuned for these training sets.<sup>4</sup>
136
+
137
+ **Results:** In Figure 4, we observe the distribution of the two information measures for the last layer of networks trained on the different training sets. This variation is shown in conjunction with compression values across the network using the MDL probe. Following our initial hypothesis (Table 1; H1), we expect that networks with higher representation of bias will have lower entropy. Indeed, in Figure 4 (left), the network trained on the original training set (i.e., raw original) shows the lowest entropy. This finding is in line with our hypothesis, since the other networks are trained on either gender-balanced or scrubbed sets. However, we do not observe consistent trends among
138
+
139
+ <sup>&</sup>lt;sup>3</sup>Considerable changes in entropy values are not seen for initial DistilBERT layers, suggesting that spurious correlations are largely captured by later layers. Detailed results covering other layers are given in appendix C.1.
140
+
141
+ <sup>&</sup>lt;sup>4</sup>We use trained checkpoints released by Orgad et al. (2022). More details are given in appendix B.3.
142
+
143
+ <sup>&</sup>lt;sup>5</sup>The difference in compression values across training sets is more prominent in higher layers, yet the correlation between compression and MI remains high throughout the network. We discuss this in appendix C.3.
144
+
145
+ ![](_page_6_Figure_0.jpeg)
146
+
147
+ (Figure panels: ResNet-18 on NICO<sup>++</sup>; legend: Set Type, Unbalanced vs. Balanced; x-axis: Mutual Information.)
158
+
159
+ Figure 5: Entropy and MI for ResNet-18 on the NICO<sup>++</sup> dataset. The two training sets (balanced and unbalanced) result in models that vary in their generalization to contextual features beyond what they were trained on. This distinction is reflected in the information measurements.
160
+
161
+ networks trained on these de-biasing sets. On the other hand, we do see clear patterns in MI that distinguish networks in line with their compression values (Figure 4, right). As we go from lower to higher values of MI (left to right), the density plots get darker, corresponding to higher compression values (higher bias). A prominent distinction is seen between the raw and scrubbed sets, which are separated on two sides of the plot.
162
+
163
+ **Image Classification with Contexts** Next, we consider a scenario from computer vision, where the task is to identify the presented object in a particular context. We use a subset of the $NICO^{++}$ dataset (Zhang et al., 2022), which consists of images of animals in a variety of contexts. For each animal class, there exist two types of contexts: *individual*, those that are specific to only that animal and are not present for all classes (such as a *roaring* bear), and *common*, contexts that exist across all classes (such as images taken in *dark*).
164
+
165
+ For our analysis, we design two training sets—unbalanced and balanced—varying in the distribution of common contexts across examples. Each animal in the unbalanced set occurs in a particular common context that is chosen for that animal. In contrast, the balanced set contains images from all common contexts, for each animal. Thus, a network trained on the unbalanced set is likely to pick the context-to-animal mapping (i.e., a case of heuristic memorization).
166
+
167
+ **Results:** We train ResNet-18 (He et al., 2015) networks for the two sets and evaluate them on the common NICO<sup>++</sup> evaluation set, balanced across all common contexts. We consider the hidden representation from each of the 4 blocks of layers in the network to compute the information measures reported in Figure 5. From the left sub-figure, we observe that the entropy for networks trained on the balanced set is consistently greater than the unbalanced set across all layer blocks. Furthermore, we observe that distribution of MI (right) across pairwise neurons also reflects the difference between the networks, corroborating our hypothesis. Neuron pairs for the network that memorizes the correlation with image contexts (unbalanced) are more densely concentrated at higher MI values.
168
+
169
+ We now examine how the distribution of information measures across networks change when they memorize individual examples. Following our original hypotheses (Table 1; H2), we expect such networks to display high intra-neuron and inter-neuron diversity, and thus high entropy and low MI.
170
+
171
+ We perform the analysis for example-level memorization on the standard datasets of MNIST (Lecun et al., 1998) and IMDb (Maas et al., 2011) on a 3-layered MLP and DistilBERT-base, respectively. In order to study how the diversity of neurons changes with increasing example-level memorization, we induce varying levels of label noise by randomly shuffling a fraction of training examples' target labels (denoted by a parameter $\beta$ ). We then analyze these trained networks on the original validation set.
172
+
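+ A minimal sketch of the label-noise construction is shown below; permuting labels within the chosen subset is an assumption consistent with "shuffling" a fraction $\beta$ of targets.
+
+ ```python
+ # Minimal sketch: shuffle the targets of a fraction beta of training examples.
+ import numpy as np
+
+ def shuffle_labels(labels, beta, seed=0):
+     rng = np.random.default_rng(seed)
+     labels = np.array(labels).copy()
+     idx = rng.choice(len(labels), size=int(beta * len(labels)), replace=False)
+     labels[idx] = rng.permutation(labels[idx])       # reassign targets among the chosen subset
+     return labels
+ ```
+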
173
+ **Results:** First, we note that model performance on the validation set decreases with increased label shuffling, validating an increase in example-level memorization (Figure 6). Interestingly, this dip in validation accuracy is accompanied with a consistent rise in entropy across the neurons. For MLP networks trained on MNIST (left), we see a distinct rise in entropy even with a small amount of label
174
+
175
+ ![](_page_7_Figure_0.jpeg)
176
+
177
+ Figure 6: Entropy across neuron activations increases with greater example-level memorization ( $\uparrow \beta$ ).
178
+
179
+ ![](_page_7_Figure_2.jpeg)
180
+
181
+ Figure 7: Networks that show higher example-level memorization ( $\uparrow \beta$ ) have high density of neuron pairs for lower MI values. Here, MI is computed across the first layer for both the networks.
182
+
183
+ shuffling ( $\beta=0.25$ ), followed by a steady increase (layers 2 and 3) or no change (layer 1) in entropy. A dissimilar trend is seen for DistilBERT fine-tuned on IMDb (right): a distinct rise for high values of $\beta$ and a consistent value for low or no label shuffling. While our hypothesis holds true in both settings, we speculate the difference between them is due to the pre-trained initialization of DistilBERT, which has been shown to act as an implicit regularization during fine-tuning (Tu et al., 2020). That is, here, DistilBERT might be learning task-relevant information despite some amount of label noise (note that this is not evident through validation performance alone).
184
+
185
+ Our hypothesis for the relation between example-level memorization and MI is supported by Figure 7. In both settings, networks trained on higher $\beta$ values consist of neuron pairs that show low values of MI (left side of the plots). In line with the previous observations, we find that MLPs trained on some amount of label noise (any $\beta > 0.00$ ) on MNIST (left sub-plot) have a higher density of neuron pairs concentrated at low values of MI. Meanwhile, for DistilBERT on IMDb (right sub-plot), we observe that neuron pair density gradually shifts towards lower values of MI with increasing $\beta$ .
186
+
187
+ In the previous sections, we have seen that studying information organization through the presented measures allows us to qualitatively distinguish networks with different generalizing behaviors. A natural application of our findings is the problem of model selection: given a list of models, rank them based on their generalizability. To demonstrate the utility of our insights, we compare the correlations between rankings obtained through our information-theoretic measures (which do not require labeled data) and the generalization ability of the model on a labeled held-out set.
188
+
189
+ We consider the same tasks and networks as discussed in the prior sections, and compute the rankings using (i) extrinsic evaluation metrics defined for the task (such as validation accuracy for Colored MNIST and compression for Bias-in-Bios), (ii) the mean of entropy values, and (iii) the mean of
190
+
191
+ <sup>&</sup>lt;sup>6</sup>Although MI values remain non-negative throughout, the x-axis in our density plots might show negative values as an artifact of fitting a Gaussian mixture model.
192
+
193
+ Table 2: We measure the correlation (Kendall's $\tau$ ) between model rankings based on their generalization as estimated through extrinsic metrics on labeled test sets and those obtained via information measures. Note that $\tau$ can range from -1.0 (perfect disagreement) to 1.0 (perfect agreement).
194
+
195
+ | | Sentiment Adjectives (Val. Acc.) | Colored MNIST (Val. Acc.) | Bias-in-Bios (Compression) | Bias-in-Bios (TPR Gap) | Bias-in-Bios (Suff. Gap) | Shuffled IMDb (Val. Acc.) | Shuffled MNIST (Val. Acc.) |
+ |--------------|------|------|------|------|------|------|------|
+ | Mean Entropy | 0.80 | 1.00 | 0.47 | 0.20 | 0.20 | 0.60 | 1.00 |
+ | Mean MI      | 0.80 | 1.00 | 0.60 | 0.07 | 0.33 | 0.80 | 1.00 |
200
+
201
+ MI values computed for the same networks. We then compute the Kendall rank correlation coefficient, $\tau$ , between these rankings (between (i) & (ii), and (i) & (iii)) to evaluate the agreement amongst them.
202
+
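+ A minimal sketch of this comparison is given below; the per-model numbers are hypothetical and serve only to show how the ranking agreement is computed.
+
+ ```python
+ # Minimal sketch: Kendall's tau between an extrinsic metric and an intrinsic measure.
+ from scipy.stats import kendalltau
+
+ validation_accuracy = [0.91, 0.84, 0.71, 0.55]   # hypothetical extrinsic metric per model
+ mean_entropy        = [2.10, 1.80, 1.30, 0.90]   # hypothetical intrinsic measure per model
+ tau, _ = kendalltau(validation_accuracy, mean_entropy)
+ print(tau)                                       # 1.0 here: the two rankings fully agree
+ ```
+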
203
+ We observe high correlation values for all the comparisons (Table 2). Particularly high correlations are observed for setups with synthetically induced spurious correlations (§3.1) and shuffled labels (§4), with rankings on Colored MNIST being perfectly correlated. Correlations on Bias-in-Bios are positive but lower, likely due to the more nuanced setup, where the memorization is less pronounced and extrinsic metrics are weakly correlated even among themselves (appendix D.1; Orgad et al., 2022). These positive correlations are important because—unlike the other metrics across which the correlations are computed—the information measures are purely intrinsic to the model and do not assume access to any OOD data. We perform an additional comparative discussion with standard conventional methods for model selection in appendix D.2.
2210.13014/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2210.13014/paper_text/intro_method.md ADDED
@@ -0,0 +1,160 @@
1
+ # Introduction
2
+
3
+ Modern graph neural networks (GNNs) [@kipf2016semi; @velivckovic2018graph; @wu2019simplifying] have shown remarkable performance in learning representations for structured instances. From the perspective of geometric deep learning [@bronstein2017geometric; @bronstein2021geometric; @monti2017geometric], part of the achievement of GNNs can be attributed to their implementation of the permutation invariance property as *geometric priors* [^2] into the architecture design. Nevertheless, in practice, GNNs highly rely on graph topology, as essential input information, to explore the relational knowledge implicit in interactions of instance pairs throughout the entire message passing process, termed as *geometric knowledge* in this paper. As advances in generalized distillation [@lopez2015unifying; @vapnik2015learning] reveal the possibility of encoding input features into model construction, natural questions arise as to:
4
+
5
+ *Is it possible, and if so, how can we encode graph topology as a special type of 'geometric prior' into a GNN model, such that the model could precisely capture the underlying geometric knowledge even without full graph topology as input?*
6
+
7
+ Specifically, we are interested in the following *geometric knowledge transfer* problem: a GNN model (with node-specific outputs for node-level prediction [@hu2020open]) is exposed to a partial graph, which is a subset of the complete graph. Formally speaking, we use the following notation: $$\begin{equation}
8
+ \mathcal G = \{\mathcal V, \mathcal E\}\mbox{ (partial graph)}, \; \tilde{\mathcal G} = \{\tilde{\mathcal V}, \tilde{\mathcal E}\}\mbox{ (complete graph)}, \;\mbox{where }\mathcal V \subseteq \tilde{\mathcal V}, \mathcal E \subseteq \{\mathcal{V} \times \mathcal{V}\} \cap \tilde{\mathcal E}.
9
+ \end{equation}$$ Our goal is to transfer or encode geometric knowledge extracted from $\tilde{\mathcal G}$ to the target GNN model that is only aware of $\mathcal G$. Studying this problem is also of much practical value. As a non-exhaustive list of applications: improving efficiency without compromising on effectiveness for coarsened graphs [@fahrbach2020faster; @jin2020graph; @zhang2021graph], privacy constrained scenarios in social recommenders or federated learning where the complete graph is unavailable [@socialrec; @wang2021privileged; @zhang2021subgraph], promoting concentration on targeted community to bring up economic benefits [@wu2019personalizing].
10
+
11
+ Achieving this target is non-trivial, in that we first need to find a principled and fundamental way to encapsulate the geometric knowledge extracted by the GNN model, which requires an in-depth investigation into the role of graph topology throughout the progressive process of message passing. Therefore, we take a thermodynamic view borrowed from physics and propose a new methodology built upon recent advances revealing the connection between heat diffusion and architectures of GNNs [@chamberlain2021grand; @wang2021dissecting; @chamberlain2021beltrami]. Specifically, we interpret feature propagation as heat flows on the underlying Riemannian manifold, whose characteristics (that are dependent on graph topology and the GNN model) pave the way for a principled representation of the latent geometric knowledge.
12
+
13
+ **New theoretical perspective for analyzing latent graph geometry.** On top of the connection between heat equation and GNNs, we step further to inspect the implication of heat kernel for GNNs, and propose a novel notion of *Neural Heat Kernel* (NHK) with rigorous proof of its existence. Heat kernel intrinsically defines the unique solution to the heat equation and can be a fundamental characterization for the geometric property of the underlying manifold [@grigoryan2009heat; @grigor1999estimates]. Likewise, NHK uncovers geometric property of the latent graph manifold for GNNs, and governs how information flows between pairs of instances, which lends us a mathematical tool to encapsulate geometric knowledge extracted from GNN model and enables geometric knowledge transfer. This result alone might also be useful in broader contexts for understanding GNNs.
14
+
15
+ **Flexible distillation framework with versatile instantiations.** Based on the above insights, we treat NHK matrices as a representation of the latent geometric knowledge, upon which we build a flexible and principled distillation framework dubbed *Geometric Knowledge Distillation* (GKD), which aims at encoding and transferring geometric knowledge by aligning the latent manifolds behind GNN models, as illustrated in Fig. [1](#fig_motiv){reference-type="ref" reference="fig_motiv"}. We also develop non-parametric and parametric versions of GKD, corresponding to different ways of approximating the NHK computation. Specifically, the former derives explicit NHKs via assumptions on the latent space, and the latter learns the NHK in a data-driven manner.
16
+
17
+ **Applications for geometric knowledge transfer and conventional KD purposes.** We verify the efficacy of GKD in terms of different geometric knowledge types (i.e., edge-aware and node-aware ones), and further show its effectiveness for conventional KD purposes (e.g., model compression, self distillation, online distillation) for broader applicability. We highlight that our methods consistently exceed the teacher model and rival the oracle model that gives the performance upper bound in principle.
18
+
19
+ <figure id="fig_motiv">
+ <embed src="motiv.pdf" style="width:50.0%" />
+ <figcaption>Feature propagation on the underlying manifold $\mathcal M$. (a) Teacher: aware of the complete graph topology, and faithfully explores geometric knowledge about the underlying manifold. (b) Student before GKD: only aware of the partial graph topology, and estimates biased geometric properties. (c) Student after GKD: able to propagate features on the same space as the teacher by alignment of NHKs.</figcaption>
+ </figure>
24
+
25
+ # Method
26
+
27
+ We commence with a brief detour to heat equation on Riemannian manifolds, and its connection with modern GNN architectures. Moreover, we bring forth the notion of *heat kernel* to motivate this work.
28
+
29
+ We are interested in heat equation defined on a smooth $k$-dimensional Riemannian manifold $\mathcal M$. Suppose the manifold is associated with a scalar- or vector-valued function $x(u,t): \mathcal M \times[0, \infty) \rightarrow \mathbb{R}^d$, quantifying a specific type of signals such as *heat* at a point $u\in \mathcal M$ and time $t$. Fourier's law of heat conductivity describes the flow of heat with respect to time and space, via a partial differential equation (PDE) called *heat equation* [@cannon1984one], i.e., $$\begin{equation}
30
+ \label{heq}
31
+ \frac{\partial x(u, t)}{\partial t} = - c\;\Delta x(u, t),
32
+ \end{equation}$$ where $c>0$ is the *thermal conductivity* coefficient, and $\Delta$ is the natural *Laplace--Beltrami operator* associated with $\mathcal M$. Rewriting $\Delta$ as the functional composition of the *divergence operator* $\nabla^*$ and *gradient operator* $\nabla$, i.e., $\Delta = \nabla^{*} \circ \nabla$, we can interpret the heat equation as: the variation of temperature within an infinitesimal time interval at a point is equivalent to the divergence between its own temperature and the average temperature on an infinitesimal sphere around it.
33
+
34
+ A spatial discretisation of a continuous manifold yields a graph $\mathcal G = \{\mathcal V, \mathcal E\}$, whose nodes can be thought of as embedded on the base manifold. In fact, the heat equation along with variants thereof (e.g., Schrödinger equation) have found widespread use in modeling graph dynamics [@chung1997spectral; @keller2010unbounded; @medvedev2014nonlinear]. More importantly, it has been recently revealed to be intimately related with the architectures of modern GNNs [@wang2021dissecting; @chamberlain2021grand; @chamberlain2021beltrami]: suppose $\mathbf{X}(0)=\{x(u,0)\}_{u\in \mathcal V}\in \mathbb R^{n\times d}$ denotes the initial condition for Eqn. [\[heq\]](#heq){reference-type="eqref" reference="heq"} determined by input node features, then solving the heat equation under certain definitions of $\nabla^*$ and $\nabla$ (i.e., definition of $\Delta$) amounts to different architectures of GNNs. For instance:
35
+
36
+ **Example 1.** [@wang2021dissecting] Define the discretised counterpart of $\Delta$ as the graph Laplacian matrix $\mathbf{L}=\widetilde{\mathbf{D}}^{-\frac{1}{2}}(\widetilde{\mathbf{D}}-\widetilde{\mathbf{A}}) \widetilde{\mathbf{D}}^{-\frac{1}{2}}$. Numerically solving Eqn. [\[heq\]](#heq){reference-type="eqref" reference="heq"} using the forward Euler method with step size $\tau=1$ yields the formulation of Simple Graph Convolution (SGC) [@wu2019simplifying], where $\boldsymbol{\Theta}$ denotes the learnable transformation matrix $$\begin{equation}
37
+ \label{sgc}
38
+ \hat{\mathbf{X}}(t)=\left(\tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-\frac{1}{2}}\right)^{t} \mathbf{X}(0),\quad \hat{\mathbf{Y}}=\operatorname{softmax}\left(\hat{\mathbf{X}}(t) \boldsymbol{\Theta}\right).
39
+ \end{equation}$$ **Example 2.** [@chamberlain2021grand] Define the gradient operator $\nabla_{ij}$ as the difference of source and target node features, the divergence operator $\nabla_{i}^*$ as the sum of features of all edges for the node. Numerically solving Eqn. [\[heq\]](#heq){reference-type="eqref" reference="heq"} using the explicit Euler scheme with step size $\tau$ yields the following recursive formulation $$\begin{equation}
40
+ \label{grand}
41
+ \hat{\mathbf{X}}(t+\tau)=\tau\left(\mathbf{G}-I\right)\hat{\mathbf{X}}(t) + \hat{\mathbf{X}}(t)
42
+ \end{equation}$$ where $\mathbf G$ is a diffusivity coefficient matrix in place of $c$.
43
+
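+ As an illustration, the following is a minimal sketch of the SGC-style diffusion in Example 1, i.e. $t$ applications of the symmetrically normalized adjacency (with self-loops) to the node features.
+
+ ```python
+ # Minimal sketch: t forward-Euler heat steps with step size tau = 1 (the SGC propagation).
+ import numpy as np
+
+ def sgc_propagate(A, X, t):
+     A_tilde = A + np.eye(A.shape[0])                 # add self-loops
+     d = A_tilde.sum(axis=1)
+     D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
+     S = D_inv_sqrt @ A_tilde @ D_inv_sqrt            # \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}
+     for _ in range(t):
+         X = S @ X
+     return X
+ ```
+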
44
+ Moreover, stacking a non-linear transformation layer after each step yields the formulation of Graph Convolution Networks (GCN) [@kipf2016semi] for Eqn. [\[sgc\]](#sgc){reference-type="eqref" reference="sgc"}, Graph Attention Networks (GAT) [@velivckovic2018graph] with residual connections for Eqn. [\[grand\]](#grand){reference-type="eqref" reference="grand"}, and even more GNN architectures, by virtue of the flexibility in interpreting the heat equation on graphs.
45
+
46
+ Intriguingly, it turns out that the initial value problem of heat equation on any manifold $\mathcal M$ has a smallest positive fundamental solution depending on the Laplace operator $\Delta$, known as the *heat kernel* [@berline2003heat]. It is denoted as a kernel function $\kappa(x, y, t)$, such that $$\begin{equation}
47
+ \label{eqn_kernel}
48
+ x(u_i,t)=e^{-t \Delta} x(u_i,0)=\int_{\mathcal M} \kappa(u_i, u_j, t) x(u_j,0) \mathrm{d} \mu(u_j),
49
+ \end{equation}$$ where $\mu$ is a non-negative measure associated with $\mathcal M$. In physics, the heat kernel $\kappa(x, y, t)$ can be interpreted as a transition density that describes the asymptotic behavior of a natural Brownian motion on the manifold. Its formulation thus can be treated as *a unique reflection or representation of the geometry of the underlying manifold*. For example, if the manifold is a $k$-dimensional *Euclidean Space* $\mathbb R^k$ or a *Hyperbolic Space* $\mathbb H^k$, the explicit formula of heat kernel is respectively given by, $$\begin{equation}
50
+ \label{eqn_gaus}
51
+ \kappa(u_i, u_j, t)=\frac{1}{(4 \pi t)^{k / 2}} \exp \left(-\frac{\rho^2}{4 t}\right)\mbox{ and } \kappa(u_i, u_j, t)=\frac{(-1)^{m}}{2^{m} \pi^{m}} \frac{1}{(4 \pi t)^{\frac{1}{2}}}\left(\frac{1}{\sinh \rho} \frac{\partial}{\partial \rho}\right)^{m} e^{-m^{2} t-\frac{\rho^{2}}{4 t}},
52
+ \end{equation}$$ where $\rho = d(u_i, u_j)$ denote geodesic distance. Heat kernel has also been adopted for graph-related applications such as community detection [@kloster2014heat], graph clustering [@xiao2010geometric].
53
+
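+ As a quick illustration, a minimal sketch evaluating the Euclidean heat kernel formula above is given below (the hyperbolic case follows the second expression analogously).
+
+ ```python
+ # Minimal sketch: kappa(u_i, u_j, t) = (4*pi*t)^(-k/2) * exp(-rho^2 / (4*t)) in R^k.
+ import numpy as np
+
+ def euclidean_heat_kernel(u_i, u_j, t):
+     u_i, u_j = np.asarray(u_i, dtype=float), np.asarray(u_j, dtype=float)
+     k = u_i.shape[0]
+     rho2 = float(np.sum((u_i - u_j) ** 2))           # squared geodesic (Euclidean) distance
+     return (4 * np.pi * t) ** (-k / 2) * np.exp(-rho2 / (4 * t))
+ ```
+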
54
+ The starting point of this work is the development of *neural heat kernel*, built upon the previously-mentioned connection of GNNs and heat equation. As will be discussed later, this novel notion lends us a thermodynamic perspective to the intrinsic geometric property of the latent graph manifold embodied in GNNs, and hence paves the way for distilling geometric knowledge.
55
+
56
+ Consider the graph signal $\mathbf{X}(t)$ at time $t$ and node features $\mathbf{H}^{(l)}$ at layer $l$ as interchangeable notions. Consequently, feature propagation using one layer of GNN amounts to heat diffusion on the base manifold $\mathcal M$ within a certain time interval $\tau$, leading to the equivalences of $\mathbf{X}(t+\tau)$ and $\mathbf{H}^{(l+1)}$: $$\begin{equation}
57
+ \label{eqn_equi}
58
+ \begin{split}
59
+ \mathbf{H}^{(l+1)} = f_\theta(\mathbf{H}^{(l)}, \mathcal G), \;\mathbf{X}(t+\tau) = e^{-\tau \Delta(f_\theta,\mathcal G)} \mathbf{X}(t),
60
+ \end{split}
61
+ \end{equation}$$ where $f_\theta$ denotes an arbitrary GNN model with parameter $\theta$, and $\Delta(f_\theta,\mathcal G)$ denotes a generalization of Laplace-Beltrami operator defined over the base manifold $\mathcal M$ associated with graph $\mathcal G$ and the arbitrary backbone GNN model $f_\theta$.
62
+
63
+ **Remark.** The equivalence of two equations in Eqn. [\[eqn_equi\]](#eqn_equi){reference-type="ref" reference="eqn_equi"} is based on the recently established connection between heat equation and GNNs [@chamberlain2021grand; @chamberlain2021beltrami; @wang2021dissecting; @eliasof2021pde; @di2022graph], which reveal that the formulation of a GNN layer could be thought of as discretisations (that correspond to the left equation in Eqn. [\[eqn_equi\]](#eqn_equi){reference-type="ref" reference="eqn_equi"}) of the continuous diffusion process (that correspond to the right equation in Eqn. [\[eqn_equi\]](#eqn_equi){reference-type="ref" reference="eqn_equi"}) described by the heat equation. Furthermore, different definitions of Laplace-Beltrami operator $\Delta$ and schemes for solving Eqn. [\[heq\]](#heq){reference-type="ref" reference="heq"} could yield different GNNs (e.g., SGC [@wu2019simplifying], GAT [@velivckovic2018graph], GRAND [@chamberlain2021grand]). While it is unclear whether there exists such a definition of $\Delta$ for every GNN architecture, we write the operator as $\Delta(f_\theta, \mathcal G)$ to associate it with model $f_\theta$, and then use the analogy between GNN and heat equation as an analytical tool, in a similar manner with [@bodnar2022neural; @topping2021understanding; @wang2021dissecting; @thorpe2021grand; @chamberlain2021grand; @chamberlain2021beltrami], for studying the geometry property of GNNs. See more detailed justifications in Appendix [11](#app_justify){reference-type="ref" reference="app_justify"}.
64
+
65
+ In light of this connection, we consider a natural generalization of heat kernel for GNNs, termed as *neural heat kernel (NHK)* to highlight its difference with heat kernel in the thermodynamic context. In particular, a *single-layer* NHK is defined as a positive definite symmetric kernel function denoted as $\kappa^{(l)}_\theta(v_i,v_j)$, where the sub-script $\theta$ implies that it is associated with the architecture and parameters of the backbone GNN, and the super-script $(l)$ implies that it is specific to each layer, analogous to the role of continuous time $t$ in Eqn. [\[eqn_kernel\]](#eqn_kernel){reference-type="eqref" reference="eqn_kernel"}.
66
+
67
+ ::: theorem
68
+ **Theorem 1**. ***(Existence of Single-Layer NHK)** Suppose the two expressions in Eqn. [\[eqn_equi\]](#eqn_equi){reference-type="eqref" reference="eqn_equi"} are equivalent (see Appendix [11](#app_justify){reference-type="ref" reference="app_justify"} for more discussions); then for any graph $\mathcal G$ and GNN model $f_\theta$, there exists a unique single-layer NHK function $\kappa^{(l)}_\theta(\cdot)$ such that for any node $v_i\in \mathcal V$ and $l > 0$, $$\begin{equation}
69
+ \label{eqn_nhkgnn}
70
+ \begin{split}
71
+ \mathbf h_i^{(l)} =\sum_{v_j\in \mathcal V} \kappa^{(l)}_\theta(v_i,v_j) \cdot \mathbf h_j^{(l-1)} \mu(v_j)
72
+ \end{split}
73
+ \end{equation}$$ where $\mathbf h_i^{(l)}\in \mathbb R^d$ denotes the feature of node $v_i$ at the $l$-th layer, and $\mu$ is a measure over vertices that could be specified as the inverse of the node degree, $1/d_i$.*
74
+ :::
75
+
76
+ To push further, we can generalize NHK across multiple layers of GNN, termed as a *cross-layer NHK* $\kappa_\theta(v_i,v_j,l \mapsto l+k)$ (e.g., from $l$-th layer to $(l+k)$-th layer of GNN). Its existence could be induced recursively by the *semi-group identity property* of NHK concerning consecutive GNN layers.
77
+
78
+ ::: {#thm_semi .theorem}
79
+ **Theorem 2**. * **(Semigroup Identity Property of NHK)** The NHK satisfies the semigroup identity property: $\forall v_i, v_j \in \mathcal V$ and $l>0$, there exists a cross-layer NHK across two consecutive layers $$\begin{equation}
80
+ \begin{split}
81
+ \kappa_\theta(v_i, v_j, l\mapsto l+2)=
82
+ \sum_{v_k\in \mathcal V} \kappa_\theta^{(l+1)}(v_i, v_k) \kappa_\theta^{(l+2)}(v_k, v_j) \mu(v_k)
83
+ \end{split}
84
+ \end{equation}$$*
85
+ :::
86
+
87
+ This theorem indicates that stacks of multiple GNN layers also constitute a valid kernel, i.e., $$\begin{equation}
88
+ \begin{split}
89
+ \mathbf h_i^{(l+k)}=\sum_{v_j\in \mathcal V} \kappa_\theta(v_i,v_j,l \mapsto l+k) \cdot \mathbf h_j^{(l)} \mu(v_j).
90
+ \end{split}
91
+ \end{equation}$$ Analogous to the heat kernel as a unique characterization of the underlying space, the NHK characterizes the geometric property of the latent graph manifold for GNNs. Additionally, the NHK depends on the GNN model through the definition of the associated Laplace-Beltrami operator $\Delta(f_\theta, \mathcal G)$, inheriting the expressiveness of neural networks and varying through the course of training. Intuitively, the NHK can be thought of as a model-driven encoding of topological information, encapsulating the geometric knowledge learned by GNNs into a tractable functional form.
92
+
93
+ Consider the problem of distilling geometric knowledge, which involves an intelligent teacher model $f_{\theta^*}$ that is exposed to and pre-trained over the (relatively) *complete graph* $\tilde{\mathcal G} = (\tilde{\mathcal V}, \tilde{\mathcal E})$, and a student model $f_{\theta}$ that is exposed to the partial graph $\mathcal G = (\mathcal V, \mathcal E)$, where $\mathcal V \subseteq \tilde{\mathcal V}$ and $\mathcal E \subseteq \{\mathcal{V} \times \mathcal{V}\} \cap \tilde{\mathcal E}$. Our target is to train a student model (with the help of the teacher model) that operates on $\mathcal G$ yet is as competitive as models operating on $\tilde{\mathcal G}$ during inference. Since $\mathcal G$ is a sub-graph of $\tilde{\mathcal G}$, they should lie in the *same space* (i.e., latent manifold) governed by the underlying mechanism of data generation, and hence we expect the student and teacher models to capture the *same geometric property* of this shared space. This leads to the principle of *Geometric Knowledge Distillation* (GKD): transfer the geometric knowledge of the intelligent teacher to the student such that the student can propagate features as if it were aware of the complete graph topology (see the example in Fig. [1](#fig_motiv){reference-type="ref" reference="fig_motiv"}).
94
+
95
+ To this end, we resort to *NHK matrices* on the teacher (resp. student) model over the complete (resp. partial) graph as instantiations of their geometric knowledge, denoted as $$\begin{equation*}
96
+ \begin{split}
97
+ \mbox{(Teacher)} \quad&\mathbf K_{\theta^*}(\tilde{\mathcal G}, l\mapsto l+k) = \{\kappa_{\theta^*}(v_i,v_j,l \mapsto l+k)\}_{|\tilde{\mathcal V}|\times|\tilde{\mathcal V}|},\\
98
+ \mbox{(Student)} \quad&\mathbf K_\theta({\mathcal G}, l\mapsto l+k) = \{\kappa_{\theta}(v_i,v_j,l \mapsto l+k)\}_{|{\mathcal V}|\times|{\mathcal V}|},\\
99
+ \end{split}
100
+ \end{equation*}$$ written compactly as $\mathbf K^{(l+1)}({\mathcal G})$ when $k=1$. The NHK matrix is a positive semi-definite symmetric matrix and, like $\kappa$, depends on the GNN model $f_\theta$ and graph $\mathcal G$. Denote by $\mathbf K_{\theta^*,\mathcal V}^{(l)}(\tilde{\mathcal G}) \in \mathbb R^{|\mathcal V|\times|\mathcal V|}$ the sub-matrix of $\mathbf K_{\theta^*}^{(l)}(\tilde{\mathcal G})$ with row and column indices in $\mathcal V$. The distillation loss for GKD is $$\begin{equation}
101
+ \label{eqn-dis}
102
+ \begin{split}
103
+ \mathcal L_{dis}(\mathbf K_{\theta^*,\mathcal V}, \mathbf K_{\theta},l\mapsto l+ k)
104
+ = \mathrm{d}(\mathbf K_{\theta^*,\mathcal V}(\tilde{\mathcal G},l\mapsto l+k), \mathbf K_{\theta}({\mathcal G},l\mapsto l+k)),
105
+ \end{split}
106
+ \end{equation}$$ where $\mathrm{d}(\cdot,\cdot)$ is a similarity measure, for which we choose Frobenius distance as implementation, i.e., $$\begin{equation}
107
+ \label{eqn_simmeaure}
108
+ \mathrm{d}(\mathbf K_{\theta^*,\mathcal V}, \mathbf K_{\theta}) = \|(\mathbf K_{\theta^*,\mathcal V}-\mathbf K_{\theta})\odot \mathbf W\|^2_{\mathrm{F}},\quad \mathbf W_{v_i, v_j}=\left\{\begin{array}{rcl}
109
+ 1 & \mbox {if} & (v_i, v_j)\in \mathcal E \\
110
+ \delta & \mbox {if} & (v_i, v_j)\notin \mathcal E.
111
+ \end{array}\right.
112
+ \end{equation}$$ where $\mathbf W \in \mathbb R^{|\mathcal V|\times|\mathcal V|}$ is a weighting matrix to trade off the distillation loss with respect to different node pairs depending on their connectivity. For $k = 1$, the loss can be re-written as $\mathcal L^{(l+1)}_{dis}(\mathbf K_{\theta^*,\mathcal V}, \mathbf K_{\theta})$. Note that one can also specify different $k$ for the teacher and student models in Eqn. [\[eqn-dis\]](#eqn-dis){reference-type="eqref" reference="eqn-dis"} in the case when the teacher model is deeper.
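
For concreteness, a minimal PyTorch-style sketch of the masked Frobenius objective in Eqn. [\[eqn_simmeaure\]](#eqn_simmeaure){reference-type="eqref" reference="eqn_simmeaure"} is given below; the tensor names (`K_teacher_sub`, `K_student`, `adj`) and the default `delta` are illustrative and not part of the original implementation.

```python
import torch

def gkd_distillation_loss(K_teacher_sub, K_student, adj, delta=0.1):
    """Masked Frobenius distance between teacher and student NHK matrices.

    K_teacher_sub : (|V|, |V|) teacher NHK sub-matrix restricted to nodes in V
    K_student     : (|V|, |V|) student NHK matrix on the partial graph
    adj           : (|V|, |V|) binary adjacency of the partial graph (1 if edge)
    delta         : down-weighting factor for non-adjacent node pairs
    """
    W = delta + (1.0 - delta) * (adj > 0).float()   # weighting matrix of Eqn. (eqn_simmeaure)
    diff = (K_teacher_sub - K_student) * W          # element-wise weighting
    return (diff ** 2).sum()                        # squared Frobenius norm
```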
113
+
114
+ Unfortunately, deriving explicit formulas for NHKs is prohibitively challenging due to the introduction of non-linearity. To circumvent this, we propose two types of instantiations for GKD: non-parametric and parametric. The former considers explicit NHKs by making assumptions on the underlying space, and the latter learns the NHK in a data-driven manner.
115
+
116
+ **Deterministic Kernel.** One instantiation of NHK is a *Gauss-Weierstrass kernel* in the form of Eqn. [\[eqn_gaus\]](#eqn_gaus){reference-type="eqref" reference="eqn_gaus"}, by assuming the underlying space is a Euclidean space. Since the distillation loss in Eqn. [\[eqn-dis\]](#eqn-dis){reference-type="eqref" reference="eqn-dis"} is a homogeneous function, we can remove its scaling factor and define NHK as $$\begin{equation}
117
+ \label{eqn-gaussian-kernel}
118
+ \mbox{(Gauss-Weierstrass NHK)} \quad\kappa_\theta(v_i, v_j, l\mapsto l+k)\triangleq \exp \left(-\frac{\|\mathbf h_i^{(l)}-\mathbf h_j^{(l)}\|_2^2}{4T}\right),
119
+ \end{equation}$$ where $T$ denotes the estimation of the accumulated time interval. Alternatively, we can use *Sigmoid kernel* and define non-parametric NHK as: $$\begin{equation}
120
+ \label{eqn-sigmoid-kernel}
121
+ \mbox{(Sigmoid NHK)} \quad\kappa_\theta(v_i, v_j, l\mapsto l+k)\triangleq \mathrm{tanh} \left(a\;\langle\mathbf h_i^{(l)},\mathbf h_j^{(l)}\rangle+b\right),
122
+ \end{equation}$$ where $a, b$ are positive constants depending on $l$ and $k$. It is a natural and intuitive choice of similarity measure and is empirically found to be effective as well, although, to our knowledge, it does not correspond to any named manifold.
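
As a concrete illustration, both non-parametric NHKs can be computed directly from a layer's node features; the following PyTorch sketch assumes row-wise features `H` of shape (N, d), and the hyper-parameters `T`, `a`, `b` are placeholders.

```python
import torch

def gauss_weierstrass_nhk(H, T=1.0):
    # Gauss-Weierstrass NHK (Eqn. gaussian-kernel): exp(-||h_i - h_j||^2 / (4T))
    dist2 = torch.cdist(H, H, p=2) ** 2
    return torch.exp(-dist2 / (4.0 * T))

def sigmoid_nhk(H, a=1.0, b=0.0):
    # Sigmoid NHK (Eqn. sigmoid-kernel): tanh(a <h_i, h_j> + b)
    return torch.tanh(a * (H @ H.t()) + b)
```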
123
+
124
+ **Randomized Kernel.** We can also define *Randomized kernel* based on the following theorem.
125
+
126
+ ::: theorem
127
+ **Theorem 3**. ***(Expansion of NHK)** Let $\left\{\varphi_{k'}\right\}_{k'=0}^{\infty}$ be an orthonormal basis of eigenfunctions of $-\Delta(f_\theta, \mathcal G)$ with eigenvalues $0<\lambda_{0}\leq \lambda_{1} \leq \lambda_{2} \leq \ldots\;$; then the NHK admits the expansion: $$\begin{equation}
128
+ \label{eqn_kerdecomp}
129
+ \kappa_{\theta}(v_i, v_j, l\mapsto l+k)=\sum_{k'=0}^{\infty} e^{-\lambda_{k'} T} \varphi_{k'}(v_i)^\top \varphi_{k'}(v_j).
130
+ \end{equation}$$*
131
+ :::
132
+
133
+ Based on this result, we resort to the approximation of NHK by defining a randomized kernel in a similar form as Eqn. [\[eqn_kerdecomp\]](#eqn_kerdecomp){reference-type="eqref" reference="eqn_kerdecomp"}, leading to the following formulation of randomized NHK: $$\begin{equation}
134
+ \label{eqn_randfour}
135
+ \begin{split}
136
+ \mbox{(Randomized NHK)} \quad\kappa_\theta(v_i, v_j, l\mapsto l+k)\triangleq \frac{1}{m}\sum_{k'=0}^{m} e^{-\lambda_{k'} T} \left[\sigma\left(\boldsymbol{W}_{k'} \mathbf h_i\right)^\top \sigma\left(\boldsymbol{W}_{k'} \mathbf h_j\right)\right],
137
+ \end{split}
138
+ \end{equation}$$ where $\sigma\left(\boldsymbol{W}_{k'} \mathbf{h}_{i}\right)$ is used to approximate $\varphi_{k'}\left(v_i\right)$, $\boldsymbol{W}_{k'} = \left[\boldsymbol{\phi}_{1,k'}, \boldsymbol{\phi}_{2,k'}, \cdots, \boldsymbol{\phi}_{s,k'}\right]^{\top}$ is a transformation matrix, and $\boldsymbol{\phi} \sim \mathcal{N}\left(\mathbf{0}, \boldsymbol{I}_{d}\right)$ is a $d$-dimensional random variable drawn from a Gaussian distribution. In fact, under certain choices of the activation function $\sigma$, Eqn. [\[eqn_randfour\]](#eqn_randfour){reference-type="eqref" reference="eqn_randfour"} can approximate a variety of kernels [@rahimi2007random; @cho2009kernel]. This design essentially enforces the alignment of teacher and student for an arbitrary underlying manifold.
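
The randomized NHK of Eqn. [\[eqn_randfour\]](#eqn_randfour){reference-type="eqref" reference="eqn_randfour"} can be sketched with Gaussian random features as below; the number of terms `m`, the projection width `s`, the time estimate `T`, and the surrogate eigenvalues are illustrative assumptions rather than the paper's exact configuration.

```python
import torch

def randomized_nhk(H, m=32, s=64, T=1.0, activation=torch.relu):
    """Randomized NHK (Eqn. randfour): average of m random-feature kernels.

    H : (N, d) node features at layer l; returns an (N, N) kernel matrix.
    """
    N, d = H.shape
    lam = torch.linspace(0.1, 1.0, m)        # surrogate eigenvalues lambda_{k'}
    K = torch.zeros(N, N)
    for k in range(m):
        W_k = torch.randn(s, d)              # rows phi ~ N(0, I_d)
        feat = activation(H @ W_k.t())       # (N, s), approximates varphi_{k'}(v_i)
        K = K + torch.exp(-lam[k] * T) * (feat @ feat.t())
    return K / m
```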
139
+
140
+ **Training Scheme.** We follow the standard training paradigm in the KD literature [@hinton2015distilling; @gou2021knowledge]: the teacher is pre-trained by a supervised prediction loss involving all labeled nodes in $\tilde{\mathcal V}$. After the teacher is well-trained, we fix $\theta^*$ and train the student model according to $$\begin{equation}
141
+ \label{eqn_trainstudent}
142
+ \theta = \arg\min_{\theta} \mathcal L_{pre}(\hat{\mathbf Y}_{\theta}, {\mathbf Y}) + \frac{\alpha}{L}\sum_{l=1}^{L}\mathcal L^{(l)}_{dis}(\mathbf K_{\theta^*,\mathcal V}, \mathbf K_{\theta}),
143
+ \end{equation}$$ where ${\mathbf Y}$ denotes the ground-truth labels of labeled nodes in ${\mathcal V}$, $\hat{\mathbf Y}_{\theta}$ denotes the predictions of the student model $f_{\theta}$ on ${\mathcal G}$, $\mathcal L_{dis}$ is the distillation loss defined by Eqn. [\[eqn-dis\]](#eqn-dis){reference-type="eqref" reference="eqn-dis"}, and $L$ denotes the total number of layers.
144
+
145
+ Inheriting a similar spirit from auto-encoding variational Bayes [@vae-iclr2014], we introduce a *variational inverse-NHK* that is independently parameterized, denoted as $\kappa_\phi^\dagger$, whose existence is guaranteed by the invertibility of NHK matrices. Together with $\kappa_\theta$, they define a symmetric form characterizing feature propagation: $$\begin{align}
146
+ \mbox{(Forward)}\quad&\mathbf h_i^{(l+k)}=\sum_{v_j\in \mathcal V} \kappa_\theta(v_i,v_j,l \mapsto l+k) \cdot \mathbf h_j^{(l)} \mu(v_j),\label{eqn_dual1}\\
147
+ \mbox{(Backward)}\quad&\mathbf h_i^{(l)}=\sum_{v_j\in \mathcal V} \kappa_\phi^\dagger(v_i,v_j,l+k \mapsto l) \cdot \mathbf h_j^{(l+k)} \mu(v_j). \label{eqn_dual2}
148
+ \end{align}$$ In practice, we follow existing kernel learning approaches [@wilson2016deep] and parameterize the inverse-NHK as $$\begin{equation}
149
+ \label{eqn_PGKD}
150
+ \kappa_\phi^\dagger(v_i, v_j, l+k\mapsto l) = g_\phi(\mathbf h_i^{(l+k)})^\top g_\phi(\mathbf h_j^{(l+k)}),
151
+ \end{equation}$$ where $g_\phi: \mathbb R^d \rightarrow \mathbb R^s$ is the associated learnable non-linear mapping. Given a pre-trained teacher model, distilling geometric knowledge boils down to 1) establishing the equivalence of Eqn. [\[eqn_dual1\]](#eqn_dual1){reference-type="eqref" reference="eqn_dual1"} and Eqn. [\[eqn_dual2\]](#eqn_dual2){reference-type="eqref" reference="eqn_dual2"}, and 2) matching the pseudo-inverse NHK matrices of the teacher and student models (denoted as $\mathbf K^\dagger_{\theta^*,\mathcal V}$ and $\mathbf K^\dagger_{\theta}$, respectively), leading to the training scheme as follows.
152
+
153
+ **Training Scheme.** Based on Eqn. [\[eqn_dual2\]](#eqn_dual2){reference-type="eqref" reference="eqn_dual2"}, we can define a *reconstruction loss* with respect to the teacher model (similar applies to the student model) as $$\begin{equation}
154
+ \begin{split}
155
+ \mathcal L_{rec}(\mathbf H_{t}^{(l+k)}, \mathbf H_{t}^{(l)}) &= \|\mathbf K_{\theta^*}^\dagger \mathbf H_{t}^{(l+k)} - \mathbf H_{t}^{(l)}\|_F^2.
156
+ \end{split}
157
+ \end{equation}$$ Then, minimizing the reconstruction loss with fixed GNN model parameter $\theta$ amounts to optimizing the variational parameter $\phi$, and minimizing prediction and distillation losses given fixed $\phi$ amounts to optimizing the student model parameter $\theta$: $$\begin{align}
158
+ \quad \phi\gets &\arg\min_\phi \quad \mathcal L_{rec}\left(\mathbf H_{t}^{(l+k)}, \mathbf H_{t}^{(l)}\right) + \mathcal L_{rec}\left(\mathbf H^{(l+k)}, \mathbf H^{(l)}\right),\label{eqn_em1}\\
159
+ \theta\gets &\arg\min_\theta\quad \mathcal L_{pre}\left(\hat{\mathbf Y}_\theta, \mathbf Y\right) + \alpha \mathcal L_{dis}\left(\mathbf K_{\theta^*,\mathcal V}^\dagger, \mathbf K_{\theta}^\dagger,l+k\mapsto l\right).\label{eqn_em2}
160
+ \end{align}$$ Applying the two steps iteratively adds up to an EM-like algorithm for training the student model. In practice, we set $l+k$ as the last layer and $l$ as the first layer to use as much information as possible. We justify the parametric GKD approach in Appendix [10](#proof_vi){reference-type="ref" reference="proof_vi"} by showing that it essentially explores the true NHK behind the GNN.
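
To make the parametric instantiation concrete, the sketch below implements the variational inverse-NHK of Eqn. [\[eqn_PGKD\]](#eqn_PGKD){reference-type="eqref" reference="eqn_PGKD"} and the reconstruction loss used in the $\phi$-update; the two-layer mapping for $g_\phi$ and the dimensions are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class InverseNHK(nn.Module):
    """Variational inverse-NHK: kappa†(v_i, v_j) = g_phi(h_i)^T g_phi(h_j) (Eqn. PGKD)."""

    def __init__(self, d=64, s=32):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(d, s), nn.ReLU(), nn.Linear(s, s))

    def matrix(self, H_last):
        Z = self.g(H_last)          # (N, s) projected last-layer features
        return Z @ Z.t()            # (N, N) pseudo-inverse NHK matrix

    def reconstruction_loss(self, H_last, H_first):
        # || K† H^(l+k) - H^(l) ||_F^2, minimized w.r.t. phi with the GNN parameters fixed
        return ((self.matrix(H_last) @ H_last - H_first) ** 2).sum()
```

Alternating this $\phi$-update with the student update of Eqn. [\[eqn_em2\]](#eqn_em2){reference-type="eqref" reference="eqn_em2"} gives the EM-like procedure described above.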
2303.14368/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2303.14368/paper_text/intro_method.md ADDED
@@ -0,0 +1,174 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Free-viewpoint rendering of a scene is an important problem often attempted under constrained settings: on subjects demonstrating simple motion carefully captured with multiple cameras [\[17,](#page-8-0) [19,](#page-8-1) [20\]](#page-8-2). However, photorealistic *free-viewpoint* rendering of moving humans captured from a *monocular* video still remains an unsolved challenging problem, especially with sparse views.
4
+
5
+ Neural radiance fields (NeRF) have emerged as a popular tool to learn radiance fields from images/videos for novel view-point rendering. Previous approaches assume multiple view-points and often fail on non-rigid human motions. Human-specific NeRFs have recently become popular for learning models using input videos [\[27,](#page-9-0) [40\]](#page-9-1). The current state-of-the-art approaches such as HumanNeRF [\[40\]](#page-9-1) have shown impressive progress in this domain. However, there remain several challenges. Firstly, approaches such as HumanNeRF [\[40\]](#page-9-1) utilize a pose prior and use a canonical configuration (*e.g*. T-pose) for optimization, which may be well outside the set of observed poses. The underlying optimization becomes challenging especially as the number of observed views becomes sparse. In contrast, we select a pose from the available set of poses as the canonical pose-configuration, similar to previous pose-free approaches such as D-NeRF [\[29\]](#page-9-2). This enables the best of both worlds; it becomes easier to learn a motion field mapping due to smaller deformations while using a pose prior. In addition, having the canonical view in the training data provides a strong prior for the optimization of the canonical pose itself. Finally, it allows us to optimize the canonical configuration with our novel *pose-independent temporal deformation*. We demonstrate that this architectural change provides significantly better results compared to existing approaches [\[24,](#page-8-3) [40\]](#page-9-1).
6
+
7
+ In addition, approaches such as HumanNeRF [\[40\]](#page-9-1) depend on the estimated pose for the canonical configuration optimization. Errors in the initial pose estimation, for example due to strong motion blur, cause challenges in pose correction. The underlying assumption that the non-rigid motion is pose-dependent often fails in scenarios with complex clothing and accessories, hair styles, and large limb movements. Our proposed *pose-independent temporal deformation* helps to supplement the missing information in its pose-dependent counterpart.
8
+
9
+ To this end, we introduce FlexNeRF, a novel approach for jointly learning a *pose-dependent motion field* and *pose-independent temporal deformation* within the NeRF framework for modeling human motions. Moreover, we introduce a novel cycle consistency loss in our framework, further capitalizing on the fact that our canonical pose corresponds to one of the captured frames. The consistency regularizes the estimated deformation fields by mapping back and forth between each view and the canonical pose. Furthermore, the information content of any frame in a motion sequence has a strong similarity to that of its neighbours. Hence, we propose to utilize this contextual information present in a consecutive set of frames to aid learning by
10
+
11
+ <sup>\*</sup>Part of the work was done while the author was an intern at Amazon.
12
+
13
+ <span id="page-1-0"></span>imposing a temporal consistency loss. We additionally regularize the training by adding a supplementary loss based on the segmentation masks. Our approach allows photorealistic rendering of a moving human even when sparse views are available, by supplementing the pose-dependent motion field with additional information during learning: (i) pose-independent temporal deformation with ample pixel-wise correspondences beyond the (typically 24) pose point-correspondences, and (ii) consistency constraints/losses. In summary, our paper makes the following contributions:
14
+
15
+ - We propose a novel approach to learn pose-independent temporal deformation to complement the pose-dependent motion for modeling humans in video, using one of the views as the canonical view.
16
+ - We propose a novel cyclic-consistency loss to regularize the learned deformations.
17
+ - We propose a temporal-consistency loss to aid learning with contextual information present in neighbouring frames, as well as to maintain consistency across consecutive rendered frames.
18
+ - Our approach outperforms the state-of-the-art approaches, with significant improvement in case of sparse views.
19
+
20
+ # Method
21
+
22
+ Given a sequence of frames of a monocular video with a human manifesting complex motions, our goal is to achieve photo-realistic free-viewpoint rendering and reposing. We choose a frame as the canonical configuration (*e.g*. the midpoint of the motion sequence) and learn it via: a) pose-dependent (rigid and non-rigid) motion fields and b) pose-independent temporal deformations.
23
+
24
+ Given a canonical pose-configuration $p^c = (J^c, \Omega^c)$ and the observed pose $p = (J, \Omega)$, where $\Omega$ represents the local joint rotations and $J$ represents the joint locations in 3D, we define a pose-guided motion field mapping between the observed and canonical spaces. We first compute the transformation $M_k(p^c, p)$, and hence the translation $t_k$ and rotation $R_k$ matrices between the joint coordinates in the observed and canonical spaces, for a given body part $k$. $Y(\omega_i, j_i)$ computes the exponent of the local joint rotation $\omega_i$ at the joint location $j_i$ using Rodrigues' formula [\[35\]](#page-9-15),
25
+
26
+ $$Y(\omega_i, j_i) = \prod_{i \in \tau(k)} \begin{bmatrix} \exp(\omega_i) & j_i \\ 0 & 1 \end{bmatrix}, \tag{1}$$
27
+
28
+ where $\tau(k)$ denotes the ordered set of parents of the $k$-th local joint. Subsequently, we compute the corresponding
29
+
30
+ <span id="page-2-0"></span>![](_page_2_Figure_0.jpeg)
31
+
32
+ Figure 1. Overview of our approach. Pose-independent temporal deformation is used in conjunction with pose-dependent motion fields (rigid and non-rigid). We choose one of the input frames as the canonical view, allowing us to use cyclic-consistency for regularization.
33
+
34
+ translation $t_k$ and rotation $R_k$ matrices,
35
+
36
+ $$M_k(p^c, p) = Y(\omega_i^c, j_i^c) \{Y(\omega_i, j_i)\}^{-1} = \begin{bmatrix} R_k & t_k \\ 0 & 1 \end{bmatrix}.$$
37
+ (2)
38
+
39
+ Given the translation and rotation matrices, we compute the rigid deformation $x_R$ between the observed and canonical spaces by defining
40
+
41
+ $$\mathcal{L}(x) = \sum_{k=1}^{K} w_k^c (R_k x + t_k), \tag{3}$$
42
+
43
+ which represents the likelihood that the position $x$ is a part of the subject. We obtain the set of blend weight volumes in the canonical space $\{w_k^c\}_{k=1}^K$, where $K$ is the total number of 3D joint locations. To this end, starting from a constant random latent vector $z$, we generate the motion weight volume $W^c(x) = \mathrm{CNN}_{\theta_R}(x; z) \in \mathbb{R}^4$ by optimizing the parameters $\theta_R$ of the $\mathrm{CNN}_{\theta_R}$ [\[40\]](#page-9-1). We add a computed approximate Gaussian bone volume as a motion weight volume prior to the output of the last transposed convolution layer before activation. Subsequently, we compute the rigid deformation $x_R$ with the obtained $\mathcal{L}(x)$ and $W^c(x)$,
44
+
45
+ $$x_R = \frac{\sum_{k=1}^K w_k^c (R_k x^o + t_k)\,(R_k x^o + t_k)}{\mathcal{L}(x)}.$$
46
+ (4)
47
+
48
+ The non-rigid deformation between the observed and canonical spaces is then computed as a pose-guided offset $\delta x_{NR}$ to the rigid deformation $x_R$. We feed the positional encoding $\gamma(x_R)$ to the non-rigid motion MLP as,
49
+
50
+ $$\delta x_{NR} = MLP_{\theta_{NR}}(\gamma(x_R); \Omega). \tag{5}$$
51
+
52
+ We follow the approach defined in [\[22\]](#page-8-9) to obtain the positional encoding $\gamma(x)$ of the position $x$. The non-rigid motion MLP consists of six fully-connected layers with the positional encoding $\gamma(x_R)$ and the local joint rotations $\Omega$ (without global rotation) as the inputs, with $\gamma(x_R)$ skip-connected to the fifth layer to generate the offset. Since the initial pose estimate $p$ obtained from off-the-shelf techniques such as SPIN [\[12\]](#page-8-18) or VIBE [\[11\]](#page-8-19) can be erroneous, we perform a pose correction following [\[40\]](#page-9-1).
53
+
54
+ We strategically set the canonical configuration to an observed frame in the training set, allowing us to access the observed ($x^o$) and canonical ($x^c$) positions as a source of information. Furthermore, having a common canonical anchor when learning a dynamic setting ensures that the scene is inter-connected across frames and no longer independent between time instances, which is intuitive and known to provide quality performance [\[29\]](#page-9-2). This grounding aids the model to learn preserving temporal consistency of the dynamic scene. Nevertheless, such an approach which optimizes a canonical time configuration does not work well alone for free-viewpoint rendering where we are required to render a $360^\circ$ camera path with complex motion. Hence, we utilize a combined approach of pose-guided <span id="page-3-0"></span>and pose-independent (time-guided) canonical configuration optimization. Results in Sec. 5 show that the combined approach allows high quality photorealistic rendering with sparse views.
55
+
56
+ We compute the pose-independent temporal deformation between a point position in the observation space $x^o$ to the canonical space $x^c$ with a temporal deformation MLP, similar to D-NeRF [29]. This temporal deformation $\Delta x_T$ is defined by,
57
+
58
+ $$\Delta x_T = MLP_{\theta_{TD}}(\gamma(x^o), \gamma(x^c); (t^o, t^c)), \tag{6}$$
59
+
60
+ where $t^o$ is the observed time stamp defined by $(t^o = \tau(v^o_t))$, and $t^c$ is the canonical time stamp defined by $(t^c = \tau(v^c_t))$. $v_t \in \mathbb{R}^5$ is a learnable vector representation initialised proportional to the frame sequence index of the monocular video.
61
+
62
+ In contrast to D-NeRF [29], we set the temporal vectors to be learnable for several reasons. Even though the progression of frame sequence indices is linear, the progression of temporal information throughout a video is highly non-linear. For instance, there can be rapid motion between two consecutive frames in a video, whereas there can be no motion between another two consecutive frames in the same video. Hence, it is not intuitive to allocate a linear representation to the temporal vectors $\{v_t\}$. Furthermore, albeit being a contrasting approach to ours, DyNeRF [13] presents strong evidence that trainable (latent) codes can better handle complex scene dynamics such as large deformations and topological changes. We heavily regularize the training of these temporal vectors in order to ensure that they are contained within practical limits.
63
+
64
+ The temporal deformation MLP, $MLP_{\theta_{TD}}$ consists of 8 fully connected layers with the positional encoded temporal vectors $\tau(v_t^o)$ and $\tau(v_t^c)$ , and the positional encoded point position vectors $\tau(x^o)$ and $\tau(x^c)$ as inputs. The observed encodings $\tau(v_t^o)$ and $\tau(x^o)$ are skip connected to the fifth layer to generate the deformation $\Delta x_T$ . Finally, we aggregate the pose-guided rigid motion $x_R$ , pose-guided non-rigid motion $\delta x_{NR}$ (as an offset to $x_R$ ), and the pose-independent temporal deformation $\Delta x_T$ to produce the predicted canonical configuration $\hat{x}^c$ ,
65
+
66
+ $$\hat{x}^{c} = \underbrace{(x_{R} + \delta x_{NR})}_{\text{pose-guided motion}} + \underbrace{\Delta x_{T}}_{\text{pose-independent temporal deformation}} \tag{7}$$
67
+
68
+ Having obtained the predicted canonical configuration $\hat{x}^c$, we predict the RGB color $c$ and the density $\sigma$ at a given spatial location. Rather than directly predicting $(c,\sigma)$ from the canonical space similar to the existing approaches [29, 40, 43], we propose to break the prediction process into two steps: a) transformation from the canonical $(\hat{x}^c)$ to the observed $(\hat{x}^o)$ space, and b) $(c,\sigma)$ prediction from $\hat{x}^o$.
69
+
70
+ The proposed approach yields the opportunity to enforce a cyclic consistency constraint (observed $x^o \to \text{canonical}$ $\hat{x}^c \to \text{observed}$ $\hat{x}^o)$ on the output of the canonical to observed transformation MLP, $\hat{x}_o = MLP_{\theta_{CO}}(\gamma(\hat{x}^c))$ . Furthermore, having two separate specialized networks rather than one network to map from the rays in the canonical space to $(c,\sigma)$ in the observation space is more flexible and is empirically more effective as shown in Sec. 5.
71
+
72
+ The $MLP_{\theta_{CO}}$ has a similar architecture to $MLP_{\theta_{TD}}$ , without the temporal vectors as inputs. The subsequent scene representation MLP, $(c,\sigma)=MLP_{\theta_c}(\gamma(\hat{x}^o))$ has a similar architecture to the network proposed in [22]. Prior to feeding $\hat{x}^c$ and $\hat{x}^o$ to the corresponding networks, each vector is positional encoded.
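
A minimal sketch of this two-step decoding and the induced cycle follows, assuming a standard NeRF-style positional encoding and illustrative (much shallower) MLPs; the actual networks follow the architectures described above.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=10):
    # NeRF-style encoding: [x, sin(2^k x), cos(2^k x)] -> 3 + 3 * 2 * num_freqs dims
    feats = [x]
    for k in range(num_freqs):
        feats += [torch.sin((2 ** k) * x), torch.cos((2 ** k) * x)]
    return torch.cat(feats, dim=-1)

enc_dim = 3 + 3 * 2 * 10
mlp_co = nn.Sequential(nn.Linear(enc_dim, 256), nn.ReLU(), nn.Linear(256, 3))  # canonical -> observed
mlp_c = nn.Sequential(nn.Linear(enc_dim, 256), nn.ReLU(), nn.Linear(256, 4))   # observed -> (RGB, sigma)

def decode_with_cycle(x_o, x_c_hat):
    x_o_hat = mlp_co(positional_encoding(x_c_hat))     # step (a): back to the observation space
    out = mlp_c(positional_encoding(x_o_hat))          # step (b): color and density
    cycle_loss = ((x_o_hat - x_o) ** 2).mean()         # cyclic consistency, cf. Eqn. (13)
    return out[..., :3], out[..., 3], cycle_loss
```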
73
+
74
+ We follow the volume rendering approach described in NeRF [22] by defining the expected alpha (density) mask $\mathcal{A}(r)$ and the expected color $C(r)$ for a given ray $r$,
75
+
76
+ $$\mathcal{A}(r) = \sum_{i=1}^{D} \left\{ \prod_{j=1}^{i-1} (1 - \alpha_j) \right\} \alpha_i \tag{8}$$
77
+
78
+ $$C(r) = \sum_{i=1}^{D} \left\{ \prod_{j=1}^{i-1} (1 - \alpha_j) \right\} \alpha_i c(x_i)$$
79
+ (9)
80
+
81
+ $$\alpha_i = \mathcal{L}(x_i)\{1 - \exp(-\sigma(x_i)\Delta z_i)\}, \qquad (10)$$
82
+
83
+ where D is the number of samples, and $\Delta z_i$ is the interval between consecutive samples. We employ the same stratified sampling approach described in [22].
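
The compositing in Eqns. (8)-(10) amounts to the usual alpha accumulation along a ray; the sketch below is a generic per-ray implementation, with `L_x` standing in for the foreground likelihood $\mathcal{L}(x_i)$ (the function and argument names are illustrative).

```python
import torch

def composite_ray(sigma, rgb, L_x, delta_z):
    """Volume rendering for one ray (Eqns. 8-10).

    sigma, L_x, delta_z : (D,) per-sample density, foreground likelihood, and step size
    rgb                 : (D, 3) per-sample color
    Returns (expected color C(r), expected alpha mask A(r)).
    """
    alpha = L_x * (1.0 - torch.exp(-sigma * delta_z))                       # Eqn. (10)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:1]), 1.0 - alpha[:-1]], dim=0), dim=0)
    weights = trans * alpha                                                 # prod_{j<i}(1 - alpha_j) * alpha_i
    return (weights[:, None] * rgb).sum(dim=0), weights.sum()               # Eqns. (9) and (8)
```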
84
+
85
+ To further enhance the photorealism of the rendered images, we use a refinement network $\hat{I}_o = CNN_{\theta_{FT}}(C(r),\mathcal{A}(r))$ to add fine-grained details to the rendered image, similar to latent diffusion approaches [31]. The refinement network $CNN_{\theta_{FT}}$ consists of three transposed convolution layers and outputs the final rendered image $\hat{I}_o$.
86
+
87
+ The segmentation masks for the input frames can be obtained using an off-the-shelf segmentation network [9]. We use them to apply an additional loss to improve the density estimation. Note that rendering $\mathcal{A}(r)$ results in the predicted segmentation $\hat{M}=\mathcal{A}(r)$ , which is compared against the real segmentation mask M. This helps to eliminate the halo effects [28,40] and provide sharper boundaries. Empirically, we observed that thresholding the predicted segmentation mask, $\hat{M}=\mathcal{A}(r)H(\mathcal{A}(r),b)$ works better, where b is a threshold value and
88
+
89
+ $$H(\mathcal{A}(r), b) = \begin{cases} 1 \text{ if } \mathcal{A}(r) > b \\ 0 \text{ otherwise.} \end{cases}$$
90
+ (11)
91
+
92
+ <span id="page-4-1"></span><span id="page-4-0"></span>![](_page_4_Picture_0.jpeg)
93
+
94
+ Figure 2. Qualitative comparison of rendered novel views on the ZJU-MoCap dataset. Notice the higher quality of rendered images from our method on details such as faces, buttons on shirt, etc.
95
+
96
+ However, using a fixed threshold $b$ makes learning difficult at the start of training. To ease learning, we make $b$ a learnable parameter and re-define $\hat{M} = (\mathcal{A}(r) + b)H(\mathcal{A}(r), b)$, so that it is differentiable with respect to $b$. We observe that $b$ goes to 0 as training progresses, as in the ideal case. Compared to previous approaches such as [\[10\]](#page-8-22), this does not require us to depend on estimated depths, which could themselves be erroneous due to complex non-rigid motions.
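
A small sketch of the learnable-threshold mask described above (names are illustrative); the hard indicator carries no gradient, while the additive `b` term keeps $\hat{M}$ differentiable with respect to the threshold.

```python
import torch

def predicted_mask(A, b):
    # A: rendered alpha mask A(r); b: learnable threshold parameter
    hard = (A > b).float()        # H(A(r), b): hard indicator, no gradient flows through it
    return (A + b) * hard         # M_hat = (A(r) + b) * H(A(r), b), differentiable w.r.t. b

b = torch.nn.Parameter(torch.tensor(0.3))   # illustrative initialization; b tends to 0 during training
```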
97
+
98
+ In this section we describe the loss functions used to learn the FlexNeRF model and discuss details with respect to optimization and ray-sampling.
99
+
100
+ NeRFs are typically trained with a combination of losses between the rendered and observed frames. In addition, FlexNeRF also uses a combination of segmentation loss, cyclic consistency loss and temporal consistency loss as defined below.
101
+
102
+ Segmentation Loss: We apply the BCE-Dice loss between the predicted and ground truth binary segmentation masks
103
+
104
+ $$\mathbb{L}_{S} = \frac{1}{N} \sum_{N} \left[ M \log \hat{M} + (1 - M) \log(1 - \hat{M}) \right] + \frac{2|M \cap \hat{M}|}{|M| + |\hat{M}|}, \tag{12}$$
105
+
106
+ where $N$ is the number of pixels in the segmentation mask.
+
+ Cyclic Consistency Loss: We introduce a cyclic consistency constraint on the canonical to observation space transformation, using the Mean Squared Error (MSE) between $\hat{x}^o$ and $x^o$, defined by,
107
+
108
+ $$\mathbb{L}_{CCL} = \frac{1}{L} \sum_{i=1}^{L} (\hat{x}_i^o - x_i^o)^2, \tag{13}$$
109
+
110
+ where L is the number of positional samples.
111
+
112
+ <span id="page-5-2"></span><span id="page-5-1"></span>![](_page_5_Picture_0.jpeg)
113
+
114
+ Figure 3. Qualitative comparison of novel rendered views on SCF dataset (top two rows) and the People Snapshot dataset (bottom row) using sparse views. Our approach significantly improves the results.
115
+
116
+ <span id="page-5-0"></span>
117
+
118
+ | Dataset | Views | Method | LPIPS $\times 10^3 \downarrow$ | PSNR ↑ | SSIM ↑ |
119
+ |--------------------------|---------------------|------------------|---------------------------------------|--------|--------|
120
+ | | Sparse <sup>‡</sup> | HumanNeRF [40] | 39.27 | 27.65 | 0.8816 |
121
+ | PeopleSnapshot [2] | | Ours | 37.11 | 28.09 | 0.9003 |
122
+ | | Full | Neural Body [28] | 57.67* | 24.62 | 0.8490 |
123
+ | | | HumanNeRF [40] | 36.79 | 28.05 | 0.8984 |
124
+ | | | Ours | 35.63 | 28.77 | 0.9043 |
125
+ | ZJU-MoCap [7, 28] | Sparse <sup>‡</sup> | HumanNeRF [40] | 36.02 | 29.82 | 0.9597 |
126
+ | | | Ours | 31.68 | 30.18 | 0.9685 |
127
+ | | Full | Neural Body [28] | 52.28 | 29.07 | 0.9615 |
128
+ | | | HumanNeRF [40] | 31.72 | 30.24 | 0.9679 |
129
+ | | | Ours | 29.01 | 31.73 | 0.9765 |
130
+ | SCF Dataset <sup>†</sup> | Sparse <sup>‡</sup> | Neural Body [28] | 48.62 | 25.07 | 0.9131 |
131
+ | | | HumanNeRF [40] | 39.71 | 26.12 | 0.9366 |
132
+ | | | Ours | 34.26 | 29.55 | 0.9627 |
133
+
134
+ Table 1. Comparison of performance across benchmark datasets. \* refers to adjusted LPIPS from the values reported in [43] to fit the same scale as our experiments. $\dagger$ refers to the Self-Captured Fashion (SCF) dataset. $\ddagger$ indicates the model trained with sparse ( $\sim$ 40) views.
135
+
136
+ <span id="page-6-3"></span><span id="page-6-1"></span>![](_page_6_Figure_0.jpeg)
137
+
138
+ Figure 4. LPIPS metric comparison on ZJU-MoCap between HumanNeRF [40] and our method with decreasing number of views.
139
+
140
+ Temporal Consistency Loss (TCL): We identify that imposing temporal consistency constraints can be valuable at two instances: a) while rendering consecutive training frames $\{\hat{I}^o\}_{t=-k}^k$ and b) while applying temporal deformation from consecutive training frames to the canonical frame $\{\Delta x_T\}_{t=-k}^k$ . To this end, we employ the cycle-back regression consistency loss proposed in [6]. The cycle-back regression attempts to determine the temporal proximity of rendered frames or deformation vectors, and penalize the model if they are not in close temporal proximity. Given a rendered frame or a deformation vector u, and neighbors $\{v_k\}$ , we compute the similarity vector $\beta_k$ ,
141
+
142
+ $$\beta_k = \frac{\exp(-\|u - v_k\|^2)}{\sum_j \exp(-\|u - v_j\|^2)},$$
143
+ (14)
144
+
145
+ where $u, v_k \in \{v_k\}$ and $\beta$ is a discrete distribution of similarities over time. We impose a Gaussian prior on $\beta$ by minimizing the normalized square distance,
146
+
147
+ $$\mu = \sum_{k} k \beta_k, \quad \sigma^2 = \sum_{k} \beta_k (k - \mu)^2, \tag{15}$$
+
+ $$\mathbb{L}_{TCL} = \frac{|i - \mu|^2}{\sigma^2} + \lambda \log(\sigma), \tag{16}$$
153
+
154
+ where $i$ denotes the temporal index of $u$ and $\lambda$ is a regularization parameter. Finally, the rendering loss $\mathbb{L}_{rend}$, the canonical loss $\mathbb{L}_{can}$, and the overall loss $\mathbb{L}$ are defined as
155
+
156
+ $$\mathbb{L}_{rend} = \mathbb{L}_{LPIPS}(\hat{I}^o, I^o) + \mathbb{L}_{MSE}(\hat{I}^o, I^o) + \mathbb{L}_{TCL}(\{\hat{I}^o\}_{t=-k}^k) \tag{17}$$
160
+
161
+ $$\mathbb{L}_{can} = \mathbb{L}_{MSE}(\hat{x}^c, x^c) + \mathbb{L}_{TCL}(\{\Delta x_T\}_{t=-k}^k)$$
162
+ (18)
163
+
164
+ $$\mathbb{L} = \mathbb{L}_{rend} + \mathbb{L}_{can} + \mathbb{L}_{CCL} + \mathbb{L}_{S} \tag{19}$$
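
A sketch of the cycle-back regression used for $\mathbb{L}_{TCL}$ (Eqns. (14)-(16)) applied to one anchor embedding and its temporal neighbours is given below; the tensor shapes and the weight `lam` are illustrative.

```python
import torch

def temporal_consistency_loss(u, neighbours, i, lam=0.1):
    """Cycle-back regression loss (Eqns. 14-16).

    u          : (d,) anchor frame / deformation embedding
    neighbours : (K, d) embeddings of temporally neighbouring frames
    i          : ground-truth temporal index of u within the neighbour window
    """
    dist2 = ((u[None, :] - neighbours) ** 2).sum(dim=-1)      # squared distances to each neighbour
    beta = torch.softmax(-dist2, dim=0)                       # Eqn. (14): similarity distribution
    k = torch.arange(neighbours.shape[0], dtype=beta.dtype)
    mu = (k * beta).sum()                                     # Eqn. (15): mean index
    var = (beta * (k - mu) ** 2).sum()                        # Eqn. (15): variance
    return (i - mu) ** 2 / var + lam * torch.log(var.sqrt())  # Eqn. (16)
```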
165
+
166
+ Delayed Modular Optimization: We follow a delayed-optimization approach similar to [40] to optimize the non-rigid motion, binary segmentation, and the refinement modules of our method. Optimizing these modules from the beginning yields lower performance as they rely on adequate inputs from the rest of the system. Hence, we freeze these modules initially, and unfreeze them gradually during the course of training.
167
+
168
+ <span id="page-6-2"></span>
169
+
170
+ ![](_page_6_Picture_15.jpeg)
171
+
172
+ Figure 5. Comparing the rendering of the canonical view for SCF (left) and ZJU-Mocap (right) datasets. Our approach is able to learn a higher quality canonical view.
173
+
174
+ Ray Sampling: Since LPIPS uses a convolution-based approach to extract features, we use patch-based ray sampling following [32,40] instead of random ray sampling [22] from the whole image.
2304.06244/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1,118 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <mxfile host="app.diagrams.net" modified="2023-08-18T20:08:58.311Z" agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36" version="21.6.3" etag="C2p_r7h69yRq56UJ7mM5" type="device">
2
+ <diagram name="Page-1" id="TeaUDsKyo5m_bkFhG2b1">
3
+ <mxGraphModel dx="955" dy="354" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="0" pageScale="1" pageWidth="850" pageHeight="1100" math="1" shadow="0">
4
+ <root>
5
+ <mxCell id="0" />
6
+ <mxCell id="1" parent="0" />
7
+ <mxCell id="2" value="" style="ellipse;whiteSpace=wrap;html=1;opacity=30;fillColor=#7EA6E0;" vertex="1" parent="1">
8
+ <mxGeometry x="121" y="486" width="131" height="50" as="geometry" />
9
+ </mxCell>
10
+ <mxCell id="3" value="" style="ellipse;whiteSpace=wrap;html=1;fillColor=#CC0000;opacity=30;rotation=-15;" vertex="1" parent="1">
11
+ <mxGeometry x="124.1012257255008" y="432.203752093746" width="103.09" height="30.35" as="geometry" />
12
+ </mxCell>
13
+ <mxCell id="4" value="$$\mathbf{z}^{(0)}$$" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;" vertex="1" parent="1">
14
+ <mxGeometry x="132.0312257255008" y="447.003752093746" width="20" height="20" as="geometry" />
15
+ </mxCell>
16
+ <mxCell id="5" value="$$\mathbf{z}^{(1)}$$" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;" vertex="1" parent="1">
17
+ <mxGeometry x="191.0312257255008" y="433.653752093746" width="20" height="20" as="geometry" />
18
+ </mxCell>
19
+ <mxCell id="6" value="" style="endArrow=none;html=1;rounded=0;exitX=1;exitY=0.5;exitDx=0;exitDy=0;fillColor=#a20025;strokeColor=#6F0000;" edge="1" source="8" target="7" parent="1">
20
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
21
+ <mxPoint x="152.59188589728058" y="459.09309192196616" as="sourcePoint" />
22
+ <mxPoint x="220.03122572550075" y="448.653752093746" as="targetPoint" />
23
+ </mxGeometry>
24
+ </mxCell>
25
+ <mxCell id="7" value="" style="ellipse;whiteSpace=wrap;html=1;fillColor=#FF8000;" vertex="1" parent="1">
26
+ <mxGeometry x="188.5312257255008" y="448.653752093746" width="3" height="3" as="geometry" />
27
+ </mxCell>
28
+ <mxCell id="8" value="" style="ellipse;whiteSpace=wrap;html=1;fillColor=#FF8000;" vertex="1" parent="1">
29
+ <mxGeometry x="143.0312257255008" y="460.003752093746" width="3" height="3" as="geometry" />
30
+ </mxCell>
31
+ <mxCell id="9" value="" style="endArrow=classicThin;html=1;rounded=0;strokeColor=#999999;endFill=1;" edge="1" source="15" parent="1">
32
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
33
+ <mxPoint x="191.0312257255008" y="451.653752093746" as="sourcePoint" />
34
+ <mxPoint x="214.0312257255008" y="523.653752093746" as="targetPoint" />
35
+ </mxGeometry>
36
+ </mxCell>
37
+ <mxCell id="10" value="" style="curved=1;endArrow=none;html=1;rounded=0;endFill=0;fillColor=#a20025;strokeColor=#A217FF;" edge="1" parent="1">
38
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
39
+ <mxPoint x="213.0312257255008" y="523.653752093746" as="sourcePoint" />
40
+ <mxPoint x="148.37122572550078" y="506.653752093746" as="targetPoint" />
41
+ <Array as="points">
42
+ <mxPoint x="203.0312257255008" y="516.653752093746" />
43
+ <mxPoint x="170" y="520" />
44
+ <mxPoint x="158.7812257255008" y="508.653752093746" />
45
+ <mxPoint x="147.7812257255008" y="506.653752093746" />
46
+ </Array>
47
+ </mxGeometry>
48
+ </mxCell>
49
+ <mxCell id="11" value="" style="endArrow=classicThin;html=1;rounded=0;strokeColor=#EA6B66;endFill=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;dashed=1;" edge="1" parent="1">
50
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
51
+ <mxPoint x="144.5312257255008" y="463.653752093746" as="sourcePoint" />
52
+ <mxPoint x="146.0312257255008" y="503.653752093746" as="targetPoint" />
53
+ </mxGeometry>
54
+ </mxCell>
55
+ <mxCell id="12" value="" style="ellipse;whiteSpace=wrap;html=1;fillColor=#67AB9F;" vertex="1" parent="1">
56
+ <mxGeometry x="145.0312257255008" y="504.653752093746" width="3" height="3" as="geometry" />
57
+ </mxCell>
58
+ <mxCell id="13" value="" style="endArrow=classicThin;html=1;rounded=0;strokeColor=#EA6B66;endFill=1;dashed=1;" edge="1" parent="1">
59
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
60
+ <mxPoint x="178.0312257255008" y="454.653752093746" as="sourcePoint" />
61
+ <mxPoint x="200" y="517" as="targetPoint" />
62
+ </mxGeometry>
63
+ </mxCell>
64
+ <mxCell id="14" value="" style="endArrow=classicThin;html=1;rounded=0;strokeColor=#EA6B66;endFill=1;dashed=1;" edge="1" target="15" parent="1">
65
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
66
+ <mxPoint x="191.0312257255008" y="451.653752093746" as="sourcePoint" />
67
+ <mxPoint x="214.0312257255008" y="523.653752093746" as="targetPoint" />
68
+ </mxGeometry>
69
+ </mxCell>
70
+ <mxCell id="15" value="" style="ellipse;whiteSpace=wrap;html=1;fillColor=#67AB9F;" vertex="1" parent="1">
71
+ <mxGeometry x="213.0312257255008" y="522.653752093746" width="3" height="3" as="geometry" />
72
+ </mxCell>
73
+ <mxCell id="16" value="$$\gamma(t)$$" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;" vertex="1" parent="1">
74
+ <mxGeometry x="160.0312257255008" y="434.003752093746" width="20" height="20" as="geometry" />
75
+ </mxCell>
76
+ <mxCell id="17" value="$$\hat \gamma(t) $$" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;fontColor=#6600CC;" vertex="1" parent="1">
77
+ <mxGeometry x="165.64122572550082" y="516.643752093746" width="20" height="20" as="geometry" />
78
+ </mxCell>
79
+ <mxCell id="18" value="" style="endArrow=none;html=1;rounded=0;strokeColor=#000000;dashed=1;dashPattern=1 1;" edge="1" parent="1">
80
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
81
+ <mxPoint x="148.0312257255008" y="506.013752093746" as="sourcePoint" />
82
+ <mxPoint x="213.0312257255008" y="523.653752093746" as="targetPoint" />
83
+ </mxGeometry>
84
+ </mxCell>
85
+ <mxCell id="19" value="" style="endArrow=classicThin;html=1;rounded=0;strokeColor=#EA6B66;endFill=1;dashed=1;" edge="1" parent="1">
86
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
87
+ <mxPoint x="156.0312257255008" y="459.653752093746" as="sourcePoint" />
88
+ <mxPoint x="163.0312257255008" y="511.653752093746" as="targetPoint" />
89
+ </mxGeometry>
90
+ </mxCell>
91
+ <mxCell id="20" value="" style="endArrow=classicThin;html=1;rounded=0;strokeColor=#EA6B66;endFill=1;dashed=1;" edge="1" parent="1">
92
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
93
+ <mxPoint x="166.0312257255008" y="456.653752093746" as="sourcePoint" />
94
+ <mxPoint x="184" y="518" as="targetPoint" />
95
+ </mxGeometry>
96
+ </mxCell>
97
+ <mxCell id="21" value="$$g$$" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;" vertex="1" parent="1">
98
+ <mxGeometry x="196.0312257255008" y="459.653752093746" width="20" height="20" as="geometry" />
99
+ </mxCell>
100
+ <mxCell id="22" value="$$\hat{\mathbf{x}}^{(0)}$$" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;" vertex="1" parent="1">
101
+ <mxGeometry x="128.0312257255008" y="492.653752093746" width="20" height="20" as="geometry" />
102
+ </mxCell>
103
+ <mxCell id="23" value="$$\hat{\mathbf{x}}^{(1)}$$" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;" vertex="1" parent="1">
104
+ <mxGeometry x="219.1012257255008" y="509.653752093746" width="20" height="20" as="geometry" />
105
+ </mxCell>
106
+ <mxCell id="24" value="$$\mathcal{Z}$$" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=6;" vertex="1" parent="1">
107
+ <mxGeometry x="217.0312257255008" y="422.003752093746" width="20" height="20" as="geometry" />
108
+ </mxCell>
109
+ <mxCell id="25" value="$$\hat{\mathcal{X}}$$" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=6;" vertex="1" parent="1">
110
+ <mxGeometry x="227.0312257255008" y="484.653752093746" width="20" height="20" as="geometry" />
111
+ </mxCell>
112
+ <mxCell id="26" value="$$\hat{\mathbf{x}}^{(0.5)}$$" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=10;fontColor=#878787;" vertex="1" parent="1">
113
+ <mxGeometry x="175.1012257255008" y="497.643752093746" width="20" height="20" as="geometry" />
114
+ </mxCell>
115
+ </root>
116
+ </mxGraphModel>
117
+ </diagram>
118
+ </mxfile>
2304.06244/main_diagram/main_diagram.pdf ADDED
Binary file (14.8 kB). View file
 
2304.06244/paper_text/intro_method.md ADDED
@@ -0,0 +1,109 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Deep-learning-based methods for data compression [54] have achieved increasingly strong performance on visual data compression, now exceeding classical codecs in rate-distortion performance [4, 36, 52, 13, 50, 33]. However, their enormous computational complexity compared to classical codecs, especially for decoding, is a road-block towards their wider adoption [35, 38]. In this work, inspired by the parallel between nonlinear transform coding and traditional transform coding [17], we replace deep convolutional decoders with extremely lightweight and shallow (and even linear) decoding transforms, and establish the R-D (rate-distortion) performance of neural image compression when operating at the lower limit of decoding complexity.
4
+
5
+ More concretely, our contributions are as follows:
6
+
7
+ ![](_page_0_Figure_9.jpeg)
8
+
9
+ Figure 1. R-D performance on Kodak vs. decoding computation complexity as measured in KMACs (thousand multiply-accumulate operations) per pixel. The circle radius corresponds to the parameter count of the synthesis transform in each method (see Table 1)
10
+
11
+ - We offer new insight into the image manifold parameterized by learned synthesis transforms in nonlinear transform coding. Our results suggest that the learned manifold is relatively flat and preserves linear combinations in the latent space, in contrast to its highly nonlinear counterpart in generative modeling [11].
12
+ - Inspired by the parallel between neural image compression and traditional transform coding, we study the effect of linear synthesis transform within a hyperprior architecture. We show that, perhaps surprisingly, a JPEG-like synthesis can perform similarly to a deep linear CNN, and we shed light on the role of nonlinearity in the perceptual quality of neural image compression.
13
+ - We give a theoretical analysis of the R-D cost of neural lossy compression in an asymptotic setting, which quantifies the performance implications of varying the complexity of encoding and decoding procedures.
14
+ - We equip our JPEG-like synthesis with powerful encoding methods, and augment it with a single hidden layer. This simple approach yields a new state-of-the-art result in the trade-off between R-D performance and decoding complexity for nonlinear transform coding, in the regime of sub-50K FLOPs per pixel believed to be dominated by classical codecs.
17
+
18
+ Most existing neural lossy compression approaches are based on the paradigm of nonlinear transform coding (NTC) [6]. Traditional transform coding [23] involves designing a pair of analysis (encoding) transform f and synthesis (decoding) transform g such that the encoded representation of the data achieves good R-D (rate-distortion) performance. NTC essentially learns this process through data-driven optimization. Let the input color image be $\mathbf{x} \in \mathbb{R}^{H \times W \times 3}$ . The analysis transform computes a continuous latent representation $\mathbf{z} := f(\mathbf{x})$ , which is then quantized to $\hat{\mathbf{z}} = \lfloor \mathbf{z} \rceil$ and transmitted to the receiver under an entropy model $P(\hat{\mathbf{z}})$ ; the final reconstruction is then computed by the synthesis transform as $\hat{\mathbf{x}} := g(\hat{\mathbf{z}})$ . The hard quantization is typically replaced by uniform noise to enable end-to-end training [3]. We refer to [54] (Section 3.3.3) for the technical details.
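
The NTC round trip described above can be summarized in a few lines of PyTorch; the transforms below are illustrative single-layer stand-ins for f and g, and uniform noise replaces rounding during training as noted.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the analysis (f) and synthesis (g) transforms, downsampling by s = 4.
f = nn.Conv2d(3, 192, kernel_size=5, stride=4, padding=2)
g = nn.ConvTranspose2d(192, 3, kernel_size=5, stride=4, padding=2, output_padding=3)

def ntc_roundtrip(x, training=True):
    z = f(x)
    if training:
        z_hat = z + torch.empty_like(z).uniform_(-0.5, 0.5)   # uniform-noise proxy for rounding
    else:
        z_hat = torch.round(z)                                # hard quantization at test time
    return g(z_hat)                                           # reconstruction x_hat
```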
19
+
20
+ Instead of orthogonal linear transforms in traditional transform coding, the analysis and synthesis transforms in NTC are typically CNNs (convolutional neural networks) [44, 3] or variants with residual connections or attention mechanisms [13, 25]. The (convolutional) latent coefficients $\mathbf{z} \in \mathbb{R}^{h\times w\times C}$ form a 3D tensor with C channels and a spatial extent (h,w) smaller than the input image. We denote the downsampling factor by s, i.e., s = H/h = W/w; this is also the "upsampling" factor of the synthesis transform.
21
+
22
+ To improve the bitrate of NTC, a hyperprior [5, 36] is commonly used to parameterize the entropy model $P(\hat{\mathbf{z}})$ via another set of latent coefficients $\mathbf{h}$ and an associated pair of transforms $(f_h,g_h)$. The hyper analysis $f_h$ computes $\mathbf{h}=f_h(\hat{\mathbf{z}})$ at encoding time, and the hyper synthesis $g_h$ predicts the (conditional) entropy model $P(\hat{\mathbf{z}}|\hat{\mathbf{h}})$ based on the quantized $\hat{\mathbf{h}}=\lfloor\mathbf{h}\rceil$. We adopt the Mean-scale Hyperprior from Minnen et al. [36] as our base architecture, which is widely used as a basis for other NTC methods [28, 13, 37, 25]. In this architecture, the various transforms are parameterized by CNNs, with GDN activation [3] being used in the analysis and synthesis transforms and ReLU activation in the hyper transforms. Importantly, the synthesis transform (g) accounts for over 80% of the overall decoding complexity (see Table 1), and is the focus of this work.
23
+
24
+ Given an image $\mathbf{x}$ to be encoded, instead of computing its discrete representation by rounding the output of the analysis transform, i.e., $\hat{\mathbf{z}} = \lfloor f(\mathbf{x}) \rceil$, Yang et al. [51] cast the encoding problem as that of variational inference, and propose to infer the discrete representation that optimizes the per-data R-D cost. Their proposed method, SGA (Stochastic Gumbel Annealing), essentially solves a discrete optimization problem by constructing a categorical variational distribution $q(\mathbf{z}|\mathbf{x})$ and optimizing w.r.t. its parameters by gradient descent, while annealing it to become deterministic so as to close the quantization gap [51]. In this work, we will adopt their proposed standalone procedure and opt to run SGA at test time, essentially treating it as a powerful black-box encoding procedure for a given NTC architecture.
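
SGA itself anneals a stochastic rounding distribution; as a much-simplified stand-in, the sketch below refines the continuous latents of a single image by gradient descent on a rate-distortion proxy before hard rounding (the `rate_fn` callable, the trade-off weight, and the step counts are placeholders, not the procedure of [51]).

```python
import torch

def refine_latents(x, f, g, rate_fn, lmbda=0.01, steps=100, lr=1e-2):
    """Simplified per-image latent refinement (NOT the full SGA of [51])."""
    z = f(x).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        z_noisy = z + torch.empty_like(z).uniform_(-0.5, 0.5)   # differentiable proxy for rounding
        loss = rate_fn(z_noisy) + lmbda * ((g(z_noisy) - x) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.round(z.detach())
```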
27
+
28
+ # Method
29
+
30
+ We begin with new empirical insight into the qualitative similarity between the synthesis transforms in NTC and traditional transform coding [17] (Sec. 3.1). This motivates us to adopt simpler synthesis transforms, such as JPEG-like block-wise linear transforms, which are computationally much more efficient than deep neural networks (Sec. 3.2). We then analyze the resulting effect on R-D performance and mitigate the performance drop using powerful encoding methods from the neural compression toolbox (Sec. 3.3).
31
+
32
+ Although the transforms in NTC are generally black-box deep CNNs, Duan et al. [17] showed that they in fact bear strong qualitative resemblance to the orthogonal transforms in traditional transform coding. They showed that the learned synthesis transform in various NTC architectures satisfy a certain separability property, i.e., a latent tensor can be decomposed spatially or across channels, then decoded separately, and finally combined in the pixel space to produce a reasonable reconstruction. Moreover, decoding "standard basis" tensors in the latent space produces image patterns resembling the basis functions of orthogonal transforms.<sup>1</sup>
33
+
34
+ Here, we obtain new insights into the behavior of the learned synthesis transform in NTC. We show that the manifold of image reconstructions is approximately flat, in the sense that straight paths in the latent space are mapped to approximately straight paths (i.e., naive linear interpolations) in the pixel space. Additionally, the learned synthesis transform exhibits an approximate "mixup" [55] behavior despite the lack of such explicit regularization during training.
35
+
36
+ Suppose we are given an arbitrary pair of images $(\mathbf{x}^{(0)}, \mathbf{x}^{(1)})$ , and we obtain their latent coefficients $(\mathbf{z}^{(0)}, \mathbf{z}^{(1)})$ using the analysis transform (we ignore the effect of quantization as in Duan et al. [17]). Let $\gamma:[0,1]\to\mathcal{Z}$ be the straight path in the latent space defined by the two latent tensors, i.e., $\gamma(t):=(1-t)\mathbf{z}^{(0)}+t\mathbf{z}^{(1)}$ . Using the synthesis transform g, we can then map the curve in the latent space to one in the space of reconstructed images, defined by
37
+
38
+ <sup>&</sup>lt;sup>1</sup>We note that performing Principal Component Analysis on small image patches also results in similar patterns; see Figure 13 in the Appendix.
39
+
40
+ ![](_page_2_Picture_0.jpeg)
41
+
42
+ Figure 2. Conceptual illustration of the image manifold parameterized by $\hat{\gamma}(t)$ (purple curve), obtained by decoding a straight path $\gamma(t)$ in the latent space. We show it does not significantly deviate from a straight path (dashed line) connecting its two end points.
43
+
44
+ ![](_page_2_Figure_2.jpeg)
45
+
46
+ Figure 3. Visualizing the 1-D manifold of image reconstructions $\{\hat{\gamma}(t)|t\in[0,1]\}$ (top row) and the linear interpolation between its two end points, $\{(1-t)\hat{\mathbf{x}}^{(0)}+t\hat{\mathbf{x}}^{(1)}|t\in[0,1]\}$ (bottom row).
47
+
48
+ $\hat{\gamma}(t) := g(\gamma(t))$ . We denote the two end-points of the curve by $\hat{\mathbf{x}}^{(0)} := g(\mathbf{z}^{(0)}) = \hat{\gamma}(0)$ and $\hat{\mathbf{x}}^{(1)} := g(\mathbf{z}^{(1)}) = \hat{\gamma}(1)$ . Instead of traversing the image manifold parameterized by g, we could also travel between the two end-points in a straight path, which we define by $\hat{\mathbf{x}}^{(t)} := (1-t)\hat{\mathbf{x}}^{(0)} + t\hat{\mathbf{x}}^{(1)}$ and is given by a simple linear interpolation in the pixel space. The setup is illustrated in Figure 2.
49
+
50
+ Fig. 3 visualizes an example of the resulting curve of images $\hat{\gamma}(t)$ (top row), compared to the interpolating straight path $\hat{\mathbf{x}}^{(t)}$ (bottom row), as t goes from 0 to 1. The results appear very similar, suggesting the latent coefficients largely carry local and mostly low-level information about the image signal. As a rough measure of the deviation between the two trajectories, Fig. 4a computes the MSE between $\hat{\gamma}(t)$ and $\hat{\mathbf{x}}^{(t)}$ at corresponding time steps, for pairs of random image crops from COCO [30]. The results (solid lines) indicate that the two curves do not align perfectly. However, since the parameterization of any curve is not unique, we get a better sense of the behavior of the manifold curve $\hat{\gamma}(t)$ by considering its length $L(\hat{\gamma})$ in relation to the length of the interpolating straight path $\|\hat{\mathbf{x}}^{(0)} - \hat{\mathbf{x}}^{(1)}\|$. We compute the two lengths (the curve length can be computed using the Jacobian of g; see Appendix Sec. 7.4.1), and plot them for random image pairs in Fig. 4b. The resulting curve lengths fall very close to the straight path lengths regardless of the absolute length of the curves, indicating that the curves globally follow nearly straight paths. Note that if g was linear (affine), then $\hat{\gamma}(t)$ and $\hat{\mathbf{x}}^{(t)}$ would perfectly overlap.
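
This comparison can be reproduced schematically: decode points along the latent straight path and measure their deviation from the pixel-space interpolation of the two end-point reconstructions. The transforms `f` and `g` below are assumed to come from a trained model.

```python
import torch

def path_deviation(x0, x1, f, g, num_steps=9):
    """MSE between the decoded latent path g(gamma(t)) and the pixel-space straight path."""
    with torch.no_grad():
        z0, z1 = f(x0), f(x1)
        xhat0, xhat1 = g(z0), g(z1)
        mses = []
        for t in torch.linspace(0.0, 1.0, num_steps):
            decoded = g((1 - t) * z0 + t * z1)        # point on the synthesis manifold
            straight = (1 - t) * xhat0 + t * xhat1    # naive interpolation in pixel space
            mses.append(((decoded - straight) ** 2).mean())
        return torch.stack(mses)
```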
51
+
52
+ Additionally, inspired by *mixup* regularization [55], we
53
+
54
+ ![](_page_2_Figure_7.jpeg)
55
+
56
+ Figure 4. The effect of traversing the synthesis manifold, with end points defined by random image pairs. (a): Mean-squared error distance between the decoded curve $\hat{\gamma}(t)$ and straight paths in the image space (reconstructions $\hat{\mathbf{x}}^{(t)}$ and originals $\mathbf{x}^{(t)}$). (b): The length of the curve $\hat{\gamma}$ vs. that of the interpolating straight path $\hat{\mathbf{x}}^{(t)}$. The image pixel values are scaled to [-0.5, 0.5].
57
+
58
+ examine how well the synthesized curve $\hat{\gamma}(t)$ can reconstruct the linear interpolation of the two *ground truth* images, defined by $\mathbf{x}^{(t)} := (1-t)\mathbf{x}^{(0)} + t\mathbf{x}^{(1)}$ . Fig. 4a plots the reconstruction error for the same random image pairs in dashed lines, and shows that the synthesized curve $\hat{\gamma}(t)$ generally offers consistent reconstruction quality along the entire trajectory. Note that if g was linear (affine), then this reconstruction error would vary linearly across t.
59
+
60
+ The above observations form a stark contrast to the typical behavior of the decoder network in generative modeling, where different images tend to be separated by regions of low density under the model, and the decoder function varies rapidly when crossing such boundaries [12], e.g., across a linear interpolation of images in pixel space.
61
+
62
+ We obtained these results with a Mean-scale Hyperprior model [36] trained with $\lambda = 0.01$ , and we observe similar behavior at other bit-rates (with the curves $\hat{\gamma}$ becoming even "straighter" at higher bit-rates) and in various NTC architectures [4, 36, 37] (see Appendix Sec. 7.4 for more examples). Our empirical observations corroborate the earlier findings [17], and raise the question: Given the many similarities, can we replace the deep convolutional synthesis in NTC with a linear (affine) function? Our motivation is mainly computational: a linear synthesis can offer drastic computation savings over deep neural networks. This is not necessarily the case for an arbitrary linear (affine) function from the latent to image space, so we restrict ourselves to efficient convolutional architectures. As we show empirically in Sec. 4.3, a single JPEG-like transform with a large enough kernel size can emulate a more general cascade of transposed convolutions, while being much more computationally efficient. Compared to fixed and orthogonal transforms in traditional transform coding, learning a linear synthesis from data allows us to still benefit from end-to-end optimization. Further, in Sec. 4.2, we show that strategically incorporating a small amount of nonlinearity can significantly improve the R-D performance without much increase in computation complexity.
63
+
64
+ JPEG-like synthesis At its core, JPEG works by dividing an input image into $8\times 8$ blocks and applying block-wise linear transform coding. This can be implemented efficiently in hardware and is a key factor in JPEG's enduring popularity. By analogy to JPEG, we interpret the $h\times w\times C$ latent tensor in NTC as the coefficients of a linear synthesis transform. In the most basic form, the output reconstructions are computed in $s\times s$ blocks, similarly to JPEG. Specifically, the (i,j)th block reconstruction is computed as a linear combination of (learned) "basis images" $\mathbf{K}_c \in \mathbb{R}^{s\times s\times C_{out}}, c=1,...,C$ , weighted by the vector of (quantized) coefficients $\mathbf{z}_{i,j} \in \mathbb{R}^C$ associated with the (i,j)th spatial location:
65
+
66
+ $$\hat{B}_{i,j} = \sum_{c=1}^{C} \mathbf{z}_{i,j,c} \mathbf{K}_c. \tag{1}$$
67
+
68
+ Note that we recover the per-channel discrete cosine transform of JPEG by setting $s=8, C=64, C_{out}=1$ , and $\{\mathbf{K}_c, c=1,...,64\}$ to be the bases of the $8\times 8$ discrete cosine transform. Eq 1 can be implemented efficiently via a transposed convolution on $\mathbf{z}$ , using $\mathbf{K}$ as the kernel weights and s as the stride. In terms of MACs, the computation complexity of the JPEG-like synthesis then equals
69
+
70
+ $$M(\text{JPEG-like}) = C \times h \times w \times s^2 \times C_{out}, \tag{2}$$
72
+
73
+ where $C_{out}=3$ for a color image. Note that for a given latent tensor and "upsampling" rate s, Eq. 2 gives the minimum achievable MACs by any non-degenerate synthesis transform based on (transposed) convolutions. As we see in Sec. 4.2, although the minimal JPEG-like synthesis drastically reduces the decoding complexity, it can introduce severe blocking artifacts since the blocks are reconstructed independently. We therefore allow overlapping basis functions with spatial extent $k \times k$, where $k \ge s$ and $k - s$ is the number of overlapping pixels; we compute each $k \times k$ block as in Eq. 1, then form the reconstructed image by taking the sum of the (overlapping) blocks. This corresponds to simply increasing the kernel size from (s,s) to (k,k) in the corresponding transposed convolution, and increases the $s^2$ factor in Eq. 2 to $k^2$.
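
Since the block synthesis in Eq. 1 is exactly a strided transposed convolution, a minimal PyTorch sketch looks as follows; the channel counts and spatial sizes are hypothetical, a kernel size $k > s$ realizes the overlapping bases just described, and the MAC count follows Eq. 2 with the $s^2$ factor replaced by $k^2$.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: C latent channels, stride s, kernel k >= s for overlapping bases.
C, C_out, s, k = 192, 3, 16, 24
h, w = 32, 48                                    # latent grid for a 512x768 image

# Eq. 1 as one transposed convolution: each coefficient vector z_{i,j} weights
# C learned k x k x C_out basis images, placed on an s-strided grid.
jpeg_like_synthesis = nn.ConvTranspose2d(
    in_channels=C, out_channels=C_out, kernel_size=k, stride=s,
    padding=(k - s) // 2, bias=False)

z = torch.randn(1, C, h, w)                      # stand-in for quantized coefficients
x_hat = jpeg_like_synthesis(z)                   # -> (1, C_out, h*s, w*s)

macs = C * h * w * k**2 * C_out                  # Eq. 2 with s^2 generalized to k^2
print(x_hat.shape, f"{macs / 1e6:.0f} MMACs")
```
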
74
+
75
+ **Two-layer nonlinear synthesis** Despite its computational efficiency, the JPEG-like synthesis can be overly restrictive. Indeed, nonlinear transform coding benefits from the ability of the synthesis transform to adapt to the shape of the
76
+
77
+ data manifold [6]. We therefore introduce a small degree of nonlinearity in the JPEG-like transform. Many possibilities exist, and we found that introducing a single hidden layer with nonlinearity works well. Concretely, we use two layers of transposed convolutions (conv\_1, conv\_2), with strides $(s_1, s_2)$, kernel sizes $(k_1, k_2)$, and output channels $(N, C_{out})$, respectively. At lower bit-rates, we found it more parameter- and compute-efficient to also allow a residual connection from $\mathbf{z}$ to the hidden activation using another transposed convolution conv\_res (see a diagram and more details in Appendix Sec. 7.1). Thus, given a latent tensor $\mathbf{z} \in \mathbb{R}^{h\times w\times C}$, the output is $g(\mathbf{z}) = \text{conv}_2(\xi(\text{conv}_1(\mathbf{z})) + \text{conv}_{\text{res}}(\mathbf{z}))$, where $\xi$ is a nonlinear activation.
78
+
79
+ The MAC count in this architecture is then approximately
80
+
81
+ $$\begin{split} M(\text{2-layer}) &= C \times h \times w \times k_1^2 \times 2N \\ &+ N \times hs_1 \times ws_1 \times k_2^2 \times C_{out}. \end{split} \tag{3}$$
82
+
83
+ To keep this decoding complexity low, we use large convolution kernels $(k_1=13)$ with aggressive upsampling $(s_1=8)$ in the first layer, in the spirit of a JPEG-like synthesis, followed by a lightweight output layer with a smaller upsampling factor $(s_2=2)$ and kernel size $(k_2=5)$ . We use the simplified (inverse) GDN activation [28] for $\xi$ as it gave the best R-D performance with minor computational overhead. We discuss these and other architectural choices in Sec. 4.4.
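
A sketch of this two-layer synthesis in PyTorch is given below. The strides and kernel sizes follow the values quoted in the text; the residual branch's kernel size, the channel width N, and the use of a LeakyReLU in place of the simplified inverse GDN are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class TwoLayerSynthesis(nn.Module):
    """g(z) = conv_2( xi(conv_1(z)) + conv_res(z) ), cf. the description above."""

    def __init__(self, C=192, N=64, C_out=3):
        super().__init__()
        # k1=13, s1=8 (padding/output_padding chosen so the output is exactly 8x larger).
        self.conv_1 = nn.ConvTranspose2d(C, N, 13, stride=8, padding=3, output_padding=1)
        # Residual branch from z to the hidden activation; kernel = stride is an assumption.
        self.conv_res = nn.ConvTranspose2d(C, N, 8, stride=8)
        # k2=5, s2=2 lightweight output layer.
        self.conv_2 = nn.ConvTranspose2d(N, C_out, 5, stride=2, padding=2, output_padding=1)
        self.xi = nn.LeakyReLU()   # stand-in for the simplified inverse GDN

    def forward(self, z):
        hidden = self.xi(self.conv_1(z)) + self.conv_res(z)
        return self.conv_2(hidden)

def mac_count(C=192, N=64, C_out=3, h=32, w=48, k1=13, s1=8, k2=5):
    """Eq. 3: first layer (conv_1 plus conv_res, hence the factor 2N) + output layer."""
    return C * h * w * k1**2 * 2 * N + N * (h * s1) * (w * s1) * k2**2 * C_out

g = TwoLayerSynthesis()
print(g(torch.randn(1, 192, 32, 48)).shape)      # -> (1, 3, 512, 768)
print(f"{mac_count() / 1e9:.2f} GMACs")
```
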
84
+
85
+ Here, we analyze the rate-distortion performance of neural lossy compression in an idealized, asymptotic setting. Our novel decomposition of the R-D objective pinpoints the performance loss caused by restricting to a simpler (e.g., linear) decoding transform, and suggests reducing the inference gap as a simple and theoretically principled remedy.
86
+
87
+ Consider a general neural lossy compressor operating as follows. Let $\mathcal{Z}$ be a latent space, $p(\mathbf{z})$ a prior distribution over $\mathcal{Z}$ known to both the sender and receiver, and $g:\mathcal{Z}\to\hat{\mathcal{X}}$ the synthesis transform belonging to a family of functions $\mathcal{G}$ . Given a data point $\mathbf{x}$ , the sender computes an inference distribution $q(\mathbf{z}|\mathbf{x})$ ; this can be the output of an encoder network, or a more sophisticated procedure such as iterative optimization with SGA [51]. We assume relative entropy coding [16, 45] is applied with minimal overhead, so that the sender can send a sample of $\mathbf{z} \sim q(\mathbf{z}|\mathbf{x})$ with an average bit-rate not much higher than $KL(q(\mathbf{z}|\mathbf{x})||p(\mathbf{z}))$ [19]. Given a neural compression method, which can be identified with the tuple $(q(\mathbf{z}|\mathbf{x}),g,p(\mathbf{z}))$ , its R-D cost on data distributed according to $P_{\mathbf{x}}$ thus has the form of a negative ELBO [19]
88
+
89
+ <sup>&</sup>lt;sup>2</sup>When the latent coefficients are sparse (which often occurs at low bitrates), this computation complexity can be further reduced by using sparse matrix/tensor operations. We leave this to future work.
90
+
91
+ $$\mathcal{L}(q(\mathbf{z}|\mathbf{x}), g, p(\mathbf{z})) := \tag{4}$$
92
+
93
+ $$\lambda \mathbb{E}_{\mathbf{x} \sim P_{\mathbf{x}}, \mathbf{z} \sim q(\mathbf{z}|\mathbf{x})} [\rho(\mathbf{x}, g(\mathbf{z}))] + \mathbb{E}_{\mathbf{x} \sim P_{\mathbf{x}}} [KL(q(\mathbf{z}|\mathbf{x}) || p(\mathbf{z}))]$$
94
+
95
+ where $\lambda \geq 0$ controls the R-D tradeoff, and $\rho : \mathcal{X} \times \hat{\mathcal{X}} \to [0, \infty)$ is the distortion function (commonly the MSE). Note that the encoding distribution $q(\mathbf{z}|\mathbf{x})$ appears in both the rate and distortion terms above. We show that the compression cost admits the following alternative decomposition, where the effects of $p(\mathbf{z})$, $g$, and $q(\mathbf{z}|\mathbf{x})$ can be isolated:
96
+
97
+ $$\mathcal{L}(q(\mathbf{z}|\mathbf{x}), g, p(\mathbf{z})) := \tag{5}$$
98
+
99
+ $$= \underbrace{\mathcal{F}(\mathcal{G})}_{\text{irreducible}} + \underbrace{\left(\mathbb{E}_{\mathbf{x} \sim P_{\mathbf{X}}}[-\log \Gamma_{g, p(\mathbf{z})}(\mathbf{x})] - \mathcal{F}(\mathcal{G})\right)}_{\text{modeling gap}}$$
100
+
101
+ $$+ \underbrace{\mathbb{E}_{\mathbf{x} \sim P_{\mathbf{X}}}[KL(q(\mathbf{z}|\mathbf{x})||p(\mathbf{z}|\mathbf{x}))]}_{\text{inference gap}}. \tag{6}$$
102
+
103
+ The derivation and definition of the various quantities are given in Sec. 7.2, and mirror a similar decomposition in lossless compression [56]; here we give a high-level explanation of the three terms. The first term represents the fundamentally irreducible cost of compression; this depends only on the intrinsic compressibility of the data $P_{\mathbf{X}}$ [53] and the transform family $\mathcal{G}$. The second term represents the excess cost of compression given our particular choice of decoding architecture, i.e., the prior $p(\mathbf{z})$ and transform $g$, compared to the optimum achievable (the first term); we thus call it the *modeling gap*. Note that for each choice of $(g, p(\mathbf{z}))$, the optimal inference distribution has an explicit formula, which allows us to write the R-D cost under optimal inference in the form of a negative log partition function (the $-\log \Gamma_{g, p(\mathbf{z})}$ term). Finally, we consider the effect of suboptimal inference and isolate it in the third term, representing the overhead caused by a suboptimal encoding/inference method $q(\mathbf{z}|\mathbf{x})$ for a given model $(g, p(\mathbf{z}))$; we call it the *inference gap*.
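
While the irreducible term and the modeling gap are not directly computable, the overall cost in Eq. 4 is; the sketch below estimates it by Monte Carlo for a toy model with a diagonal-Gaussian $q(\mathbf{z}|\mathbf{x})$ and a standard-normal prior (both purely illustrative choices, so the KL/rate term has a closed form).

```python
import torch

def rd_cost(x, q_mean, q_logvar, g, lam=0.01):
    """Single-sample Monte Carlo estimate of Eq. 4 for one image x.

    q_mean, q_logvar parameterize an assumed diagonal-Gaussian q(z|x); g is the
    synthesis transform; the prior is a standard normal. Distortion is squared error.
    """
    std = torch.exp(0.5 * q_logvar)
    z = q_mean + std * torch.randn_like(std)             # z ~ q(z|x)
    distortion = torch.sum((x - g(z)) ** 2)
    # KL( N(mu, sigma^2) || N(0, I) ) in nats, summed over all latent dimensions.
    rate = 0.5 * torch.sum(q_mean**2 + std**2 - 1.0 - q_logvar)
    return lam * distortion + rate
```

Minimizing this quantity over the inference side alone (e.g., over `q_mean` and `q_logvar` per image, with `g` and the prior fixed) reduces exactly the inference gap, since the decoder-side terms are untouched.
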
104
+
105
+ Although the above result is derived in an asymptotic setting, it still gives us insight into the performance of neural lossy compression at varying decoder complexity. When we use a simpler synthesis transform architecture, we place restrictions on our transform family $\mathcal{G}$, thus causing the first (irreducible) part of the compression cost to increase. The modeling gap may or may not increase as a result,<sup>3</sup> but we can always lower the overall compression cost by reducing the inference gap, without affecting the decoding computational complexity.
106
+
107
+ In this work, we explore two orthogonal approaches for reducing the inference gap, which can be further decomposed into (1) an *approximation gap* and (2) an *amortization gap* [15]. Correspondingly, for a given decoding architecture, we propose to reduce (1) by using a more powerful analysis
108
+
109
+ transform, e.g., from a recent SOTA method such as ELIC [25], and reduce (2) by performing iterative encoding using SGA [51] at compression time.
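
As a rough illustration of iterative encoding, the sketch below refines the latents of a single image by plain gradient descent on the R-D objective, with the decoder side held fixed; this is a deterministic simplification of SGA (which anneals a stochastic rounding), and `rate_fn` is a placeholder for the negative log-likelihood of the latents under the learned prior.

```python
import torch

def refine_latents(x, z_init, g, rate_fn, lam=0.01, steps=500, lr=1e-2):
    """Reduce the inference gap for one image at encoding time.

    g (synthesis) and rate_fn (rate of z under the learned prior) stay fixed, so the
    decoding complexity is untouched; only the encoding-time computation grows.
    """
    z = z_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = lam * torch.sum((x - g(z)) ** 2) + rate_fn(z)
        loss.backward()
        opt.step()
    return torch.round(z.detach())   # quantize only after refinement (a simplification)
```
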
2305.15074/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2305.15074/paper_text/intro_method.md ADDED
@@ -0,0 +1,31 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ The capabilities of large language models (LLMs) have been improving over the last decade on a plethora of tasks, including reasoning. Most recently, GPT-4 demonstrates significant improvements over GPT-3 on tasks such as code generation, arithmetic, and commonsense reasoning [(Bubeck et al., 2023)](#page-9-0), exhibiting impressive performance on standard reasoning and STEM benchmarks such as GSM-8K [(Cobbe et al., 2021)](#page-9-1), MATH
4
+
5
+ <span id="page-0-0"></span>
6
+
7
10
+
11
+ Figure 1: An example problem from JEEBENCH
12
+
13
+ [(Hendrycks et al., 2021b)](#page-9-2), MMLU [(Hendrycks et al., 2021a)](#page-9-3), and ScienceQA [(Lu et al., 2022)](#page-9-4).
14
+
15
+ Rising capabilities of LLMs call for harder benchmarks. We introduce JEEBENCH, a benchmark consisting of 515 problems that require complex logical and mathematical reasoning on top of deep in-domain knowledge of pre-engineering level Physics, Chemistry and Mathematics. Problems have been curated from the past 8 editions of the Joint Entrance Examination (JEE)-Advanced exam, held annually in India as an entrance test to India's premier engineering institutes: the IITs. The exam is designed to be time-consuming, difficult,
16
+
17
+ <sup>∗</sup> equal contribution, <sup>†</sup> work done while at IIT Delhi
18
+
19
+ and has a low selection rate (approx. 5%).
20
+
21
+ The problems in the dataset require a complex interplay of employing multiple high-level domain specific concepts, grounding them into mathematical equations/constraints, followed by algebraic manipulation and arithmetic operations. Figure [1](#page-0-0) is a problem from the dataset along with an expert's solution. In this problem, the ideal solution involves the retrieval of the appropriate concepts: *the rules of static equilibrium*, grounding the concepts into mathematical equations for the specific problem instance, followed by solving the equations in order to find the final answer. Other instances of domain-specific concepts can be *Balancing of redox reactions* (Chemistry), *Current into a junction equals current out of the junction* (Physics) and *Integration by parts* (Mathematics). More such examples can be found in the Appendix [A.2.](#page-11-0)
22
+
23
+ We conduct a qualitative and quantitative study of contemporary open-source and proprietary LLMs on these problems and also highlight avenues for further research. Our analysis indicates that GPT-4 is unparalleled in performance compared to other models. It demonstrates long-horizon reasoning and the ability to manipulate complex algebraic equations in quite a few problems. We observe that chain-of-thought prompting [(Kojima et al., 2022)](#page-9-5) and self-consistency [(Wang et al., 2023b)](#page-10-0), which are recent proposals to improve LLM performance, are indeed effective on our dataset.
24
+
25
+ We also explore Self-Critique [\(Madaan et al.,](#page-9-6) [2023;](#page-9-6) [Shinn et al.,](#page-9-7) [2023\)](#page-9-7), where an LLM (verifier) is instructed to improve the outputs of the same LLM (generator). We find that this approach is not helpful on JEEBENCH. The verifier is weak in spotting conceptual errors, and like the generator, is itself prone to hallucinations. It would be interesting to explore the class of problems where this approach of self-refinement is (not) helpful.
26
+
27
+ We further conduct a critical analysis of the limits of GPT-4's reasoning abilities, and highlight major areas that require considerable improvement. A detailed error analysis suggests that it frequently struggles in retrieving relevant concepts required to solve problems, and performing algebraic manipulation & arithmetic. Inability to perform even simple algebra highlights an important question: can we build LLMs faithful to mathematical logic?
28
+
29
+ Another important question is how GPT-4's performance compares to that of humans. The JEE Advanced exam comes with the bane of negative marking for incorrectly answered questions. This makes the exam even more challenging: in addition to advanced problem-solving skills, it requires accurate risk assessment and computing a good answering policy based on it. Our experiments demonstrate that when prompted with the marking scheme, GPT-4's performance actually drops. To mitigate this, we employ a simple method: *thresholding over self-consistency*. Self-consistency generates multiple responses for each question, and the relative frequency of each option in this set of responses can be treated as a proxy for its confidence score. The threshold on the confidence score can be tuned using a validation set. We find that GPT-4's score, after augmenting it this way, lies in the top 10-20 percentile of human scores in the 2023 edition of the exam.
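
A minimal sketch of this thresholding idea is given below; the function names, the answer-extraction step, and the reward/penalty values are placeholders rather than the exam's actual marking scheme, and the threshold `tau` would be tuned on a validation split.

```python
from collections import Counter

def answer_with_threshold(sampled_answers, tau):
    """Self-consistency with abstention: answer only if the majority vote is confident enough.

    sampled_answers: answers parsed from k independent samples for one question (None = unparsable).
    Returns the majority answer if its relative frequency is at least tau, else None (skip).
    """
    counts = Counter(a for a in sampled_answers if a is not None)
    if not counts:
        return None
    answer, votes = counts.most_common(1)[0]
    return answer if votes / len(sampled_answers) >= tau else None

def exam_score(decisions, gold, reward=4.0, penalty=-1.0):
    """Score under a negative-marking scheme; reward/penalty values are illustrative only."""
    score = 0.0
    for pred, ans in zip(decisions, gold):
        if pred is None:
            continue                      # unattempted question contributes 0
        score += reward if pred == ans else penalty
    return score
```
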
30
+
31
+ Overall, we hope that this benchmark serves as a strong and reliable test-bed and fosters future research on problem solving with LLMs. Our code and dataset are available at [https://github.com/dair-iitd/jeebench](https://github.com/dair-iitd/jeebench).
2305.16988/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2305.16988/main_diagram/main_diagram.pdf ADDED
Binary file (20.8 kB). View file
 
2305.16988/paper_text/intro_method.md ADDED
@@ -0,0 +1,176 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Causal effects are crucial for decision-making in many disciplines, such as marketing [@Varian.2016], medicine [@Yazdani.2015], and economics [@Angrist.1990]. For example, physicians need to know the treatment effects to personalize medical care, and governments are interested in the causal effects of policies on infection rates during a pandemic. In many such applications, randomized experiments are costly or infeasible, because of which causal effects must be estimated from observational data [@Robins.2000].
4
+
5
+ Estimating causal effects from observational data may lead to bias due to the existence of confounders, i.e., variables that affect both treatment and outcome [@Pearl.2009]. A remedy is to observe all confounders and thus to assume unconfoundedness (e.g., as in [@Curth.2021; @Shalit.2017; @Shi.2019]). However, in many practical applications, the assumption of unconfoundedness is violated. For example, electronic health records do not capture a patient's ethnic background, which is a known confounder in medicine [@Obermeyer.2019b]. In such cases, the causal effect is not identified from observational data, and unbiased estimation is thus impossible [@Hartford.2017].
6
+
7
+ A popular way to perform causal inference in the presence of unobserved confounders is ***causal sensitivity analysis***. Causal sensitivity analysis aims to derive bounds on the causal effect of interest under relaxations of the unconfoundedness assumption. Here, the strength of unobserved confounding is typically controlled by some sensitivity parameter $\Gamma$, which also determines the tightness of the bounds. In practice, one chooses $\Gamma$ through domain knowledge [@Cornfield.1959; @Jin.2023; @Zhao.2019] or data-driven heuristics [@Hsu.2013]. Then, one derives that the causal effect of interest lies in some informative region. For example, a suitable $\Gamma$ may enable -- despite unobserved confounding -- to infer the sign of the treatment effect, which is often sufficient for consequential decision-making [@Kallus.2019].
8
+
9
+ A common model for causal sensitivity analysis is the marginal sensitivity model (MSM) [@Jesson.2021; @Kallus.2019; @Kallus.2018c; @Tan.2006]. A benefit of the MSM is that it does not impose any kind of parametric assumptions on the data-generating process. However, the standard MSM is only applicable in settings where the treatment is a single binary variable. Different extensions have been proposed for continuous treatments [@Bonvini.2022; @Jesson.2022; @Marmarelis.2023] and for time-varying treatments and confounders [@Bonvini.2022]. However, these works are restricted to specific settings, while a unified framework for causal sensitivity analysis is still missing.
10
+
11
+ ::: wrapfigure
12
+ r0.6 ![image](figures/causal_graphs.pdf){width="60%"}
13
+ :::
14
+
15
+ **Contributions:**[^1] In this paper, we propose a *generalized marginal sensitivity model (GMSM)*. Our GMSM provides a unified framework for causal sensitivity analysis under unobserved confounding in various settings with multiple discrete, continuous, and time-varying treatments. Crucially, our GMSM includes existing models, such as those in [@Tan.2006] and [@Jesson.2022], as special cases. As a result, our GMSM enables a unified approach to deriving sharp bounds for a large class of causal effects. To do so, we bound a distribution shift in the unobserved confounders while performing an intervention on the treatments. As a result, we obtain sharp bounds for various causal effects, including (conditional) average treatment effects but also effects for mediation analysis and path analysis (see Fig. [\[fig:causal_graphs\]](#fig:causal_graphs){reference-type="ref" reference="fig:causal_graphs"}) and for distributional effects, which have not been studied under an MSM-type sensitivity analysis previously. We also show that, for binary treatments, our bounds coincide with recent optimality results for (conditional) average treatment effects under the MSM. Finally, we propose a scalable algorithm to estimate our sharp bounds from observational data and perform extensive computational experiments to show the validity of our bounds empirically.
16
+
17
+ # Method
18
+
19
+ We first formally define a general setting for causal sensitivity analysis that includes mediation and path analysis (Sec. [3.1](#sec:scm){reference-type="ref" reference="sec:scm"} and Sec. [3.2](#sec:sensitivity){reference-type="ref" reference="sec:sensitivity"}). We then propose our generalized marginal sensitivity model (GMSM) in Sec. [3.3](#sec:gmsm){reference-type="ref" reference="sec:gmsm"} and compare it with existing sensitivity models from the literature.
20
+
21
+ **Notation:** We write random variables as capital letters $X$ and their realizations in lowercase $x$. Bold letters $\mathbf{X}$ or $\mathbf{x}$ represent (random) vectors. If $\mathbf{X} = (X_1, \dots, X_\ell)$ is a sequence of random variables of length $\ell$, we denote $\Bar{\mathbf{X}}_{k} = (X_1, \dots, X_k)$ for $1 \leq k \leq \ell$. We denote probability distributions over $X$ as $\mathbb{P}^X$ where required. The probability mass function for a discrete $X$ is denoted as $\mathbb{P}(x) = \mathbb{P}(X = x)$. If $X$ is continuous, $\mathbb{P}(x)$ is the probability density function. We denote $\mathbb{P}(\cdot)$ as the corresponding probability mass/density function not evaluated at a specific $x$. Similarly, we write conditional probability mass functions/density functions as $\mathbb{P}(y \mid x) = \mathbb{P}(Y = y \mid X = x)$ and conditional expectations as $\mathbb{E}[Y \mid x] = \mathbb{E}[Y \mid X = x] = \int y \, \mathbb{P}(y \mid x) \mathop{}\!\mathrm{d}y$. We denote $\mathbb{P}(y \mid do(X = x))$ as the probability mass function/density function of $Y$ after performing the do-intervention $do(X = x)$ [@Pearl.2009]. Finally, we define $\mathbf{X}_Y = \mathit{pa}(Y) \cap \mathbf{X}$, where $\mathit{pa}(Y)$ denotes the parents of $Y$ in a given causal graph.
22
+
23
+ We formalize causal sensitivity analysis based on Pearl's structural causal model framework [@Pearl.2009].
24
+
25
+ ::: definition
26
+ **Definition 1**. A *structural causal model (SCM)* $\mathcal{M}$ is a tuple $\left(\mathbf{V}, \mathbf{U}, \mathcal{F}, \mathbb{P}^\mathbf{U} \right)$, where $\mathbf{V} = (V_1, \dots, V_k)$ are observed endogenous variables, $\mathbf{U}$ are unobserved exogenous variables determined outside of the model, $\mathcal{F} = \{f_{V_1}, \dots, f_{V_k}\}$ is a set of functions so that each $f_{V_i}$ maps a set of parents $\mathit{pa}(V_i) \subseteq \mathbf{V} \cup \mathbf{U}$ to $V_i$, and $\mathbb{P}^\mathbf{U}$ is a probability distribution on $\mathbf{U}$.
27
+ :::
28
+
29
+ Every SCM $\mathcal{M}$ induces a unique directed graph[^2] $\mathcal{G}_{\mathcal{M}}$ on $\mathbf{V} \cup \mathbf{U}$ by drawing a directed edge from $V_1$ to $V_2$ if $V_1 \in \mathit{pa}(V_2)$. In this paper, we assume that $\mathcal{G}_{\mathcal{M}}$ is acyclic, i.e., does not contain any directed cycle. Then, $\mathcal{M}$ induces a unique joint probability distribution $\mathbb{P}^{\mathbf{V} \cup \mathbf{U}}$ on $\mathbf{V} \cup \mathbf{U}$. Furthermore, $\mathcal{M}$ induces unique interventional distributions $\mathbb{P}^{\mathbf{V} \cup \mathbf{U}}_{do(\mathbf{A} = \mathbf{a})}$ when performing the do-intervention $do(\mathbf{A} = \mathbf{a})$ for some observed treatments $\mathbf{A} \subseteq \mathbf{V}$ [@Bareinboim.2022; @Pearl.2009].
30
+
31
+ We consider settings where we observe four distinct types of endogenous variables $\mathbf{V} = \{\mathbf{X}, \mathbf{A}, \Bar{\mathbf{M}}_\ell, Y\}$: observed confounders $\mathbf{X} \in \mathcal{X} \subseteq \mathbb{R}^{d_x}$, treatments $\mathbf{A} \in \mathcal{A} \subseteq \mathbb{R}^{d_a}$, discrete mediators $\Bar{\mathbf{M}}_\ell = (M_1, \dots, M_\ell)$ with $M_i \in \mathbb{N}$ for $1 \leq i \leq \ell$, and an outcome $Y \in \mathbb{R}$. In a medical setting, $\mathbf{X}$ might be patient characteristics (gender, age, medical history, etc.), $\mathbf{A}$ a medical treatment, $\Bar{\mathbf{M}}_\ell$ a change in diet, and $Y$ a variable indicating a health outcome. Possible causal graphs are shown in Fig. [\[fig:causal_graphs\]](#fig:causal_graphs){reference-type="ref" reference="fig:causal_graphs"}. We assume w.l.o.g. that $(M_1, \dots, M_\ell)$ are ordered causally, i.e., $\Bar{\mathbf{M}}_{i-1}$ are parents of $M_i$ for each $i \in \{2, \dots, \ell\}$. Given treatment interventions $\Bar{\mathbf{a}}_{\ell + 1} = (\mathbf{a}_1, \dots, \mathbf{a}_{\ell+1})$, we are interested in causal effects of the form $$\begin{equation}
32
+ \label{eq:query}
33
+ Q(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{M}) = \sum_{\Bar{\mathbf{m}}_\ell \in \mathbb{N}^\ell} \mathcal{D}\left(\mathbb{P}^Y(\cdot \mid \mathbf{x}, \Bar{\mathbf{m}}_\ell, do(\mathbf{A} = \mathbf{a}_{\ell+1})\right) \prod_{i = 1}^\ell \mathbb{P}(m_i \mid \mathbf{x}, \Bar{\mathbf{m}}_{i-1} , do(\mathbf{A} = \mathbf{a}_i)),
34
+ \end{equation}$$ where $\mathcal{D}$ is some functional that maps the density $\mathbb{P}^Y(\cdot \mid \mathbf{x}, \mathbf{m}, do(\mathbf{A} = \mathbf{a}_{\ell+1}))$ to a scalar value and we sum over all possible realizations $\Bar{\mathbf{m}}_\ell$ of $\Bar{\mathbf{M}}_\ell$. We are also interested in averaged causal effects $\int_\mathcal{X} Q(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{M}) \, \mathbb{P}(\mathbf{x}) \mathop{}\!\mathrm{d}\mathbf{x}$ or differences (Appendix [10](#app:average){reference-type="ref" reference="app:average"}). The effect $Q(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{M})$ generalizes many common effects across different causal inference settings.
35
+
36
+ ::: example
37
+ **Example 1** (CATE, $\ell = 0$). If $\mathcal{D} = \mathbb{E}[\cdot]$ is the expectation functional, Eq. [\[eq:query\]](#eq:query){reference-type="eqref" reference="eq:query"} reduces to $Q(\mathbf{x}, \mathbf{a}, \mathcal{M}) = \mathbb{E}\left[Y \mid \mathbf{x}, do(\mathbf{A} = \mathbf{a})\right]$. When $\mathbf{A}$ is continuous, this is known as the conditional dose-response function [@Bica.2020c; @Jesson.2022]. For binary treatments $\mathbf{A} \in \{0,1\}$, the query $Q(\mathbf{x}, 1, \mathcal{M}) - Q(\mathbf{x}, 0, \mathcal{M})$ is known as the conditional average treatment effect (CATE) [@Curth.2021; @Yoon.2018], and its averaged version as the average treatment effect (ATE) [@Shi.2019; @vanderLaan.2006].
38
+ :::
39
+
40
+ ::: {#example:mediation .example}
41
+ **Example 2** (Mediation analysis, $\ell = 1$). If $\mathcal{D} = \mathbb{E}[\cdot]$ and $\Bar{\mathbf{M}}_1 = M$ is a single mediator, Eq. [\[eq:query\]](#eq:query){reference-type="eqref" reference="eq:query"} reduces to $Q(\mathbf{x}, \Bar{\mathbf{a}}_1, \mathcal{M}) = \sum_{m} \mathbb{E}\left[Y \mid \mathbf{x}, m, do(\mathbf{A} = \mathbf{a}_2)\right] \, \mathbb{P}(m \mid \mathbf{x}, do(\mathbf{A} = \mathbf{a}_1))$. If ${A} \in \{0,1\}$ is binary and the $M$--$Y$ relationship is unconfounded, we obtain the following causal effects studied in mediation analysis [@Pearl.2014]: $Q(\mathbf{x}, (\mathbf{a}_1=0,\mathbf{a}_2=1), \mathcal{M})-Q(\mathbf{x}, (\mathbf{a}_1=0,\mathbf{a}_2=0), \mathcal{M})$ is the (conditional) natural direct effect (NDE), and $Q(\mathbf{x}, (\mathbf{a}_1=1,\mathbf{a}_2=0), \mathcal{M})-Q(\mathbf{x}, (\mathbf{a}_1=0,\mathbf{a}_2=0), \mathcal{M})$ is the (conditional) natural indirect effect (NIE).
42
+ :::
43
+
44
+ In general, Eq. [\[eq:query\]](#eq:query){reference-type="eqref" reference="eq:query"} includes so-called *path-specific effects* if the relationship between mediators and outcome is unconfounded [@Avin.2005; @Correa.2021]. For example, by setting $\mathbf{a}_1 = 1$ and $\mathbf{a}_k = 0$ for all $2 \leq k \leq \ell+1$, we obtain the indirect effect that is passed through the mediator sequence $(M_1, \dots, M_\ell)$. Path-specific effects are important in various applications, including algorithmic fairness, where the aim is to mitigate effects through paths that are considered unfair [@Nabi.2019; @Nabi.2018]. Furthermore, we can set $\mathcal{D}$ to a quantile instead of using the mean in all the examples above. This results in *distributional* versions of CATE, NDE, NIE, and path-specific effects. For example, if the outcome distribution is skewed or contains outliers, practitioners might prefer the median or other quantiles over the mean due to their robustness properties [@Donoho.1983].
45
+
46
+ If all confounders between treatment and mediators and outcome were observed, we could (under additional assumptions) identify the causal effect $Q(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{M})$ from the observational distribution $\mathbb{P}^\mathbf{V}$ on $\mathbf{V}$ by replacing all $do$-operations with conditional probabilities according to the backdoor criterion [@Pearl.2009]. However, under unobserved confounding, identifiability is not possible because SCMs $\mathcal{M}$ and $\mathcal{M}^\prime$ exist, which induce the same observational distribution $\mathbb{P}^\mathbf{V}$, but which result in different causal effects $Q(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{M}) \neq Q(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{M}^\prime)$ [@Padh.2023; @Xia.2023].
47
+
48
+ In the following, we formalize causal sensitivity analysis as maximizing/minimizing the causal effect $Q(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{M})$ over all SCMs $\mathcal{M}$ that are compatible with a predefined *sensitivity model*. Similar approaches have been used for partial identification with instrumental variables [@Kilbertus.2020; @Padh.2023] and testing identifiability of counterfactuals [@Xia.2023; @Xia.2021], where all SCMs are considered that are compatible with the observational data. However, a sensitivity model must additionally restrict the joint distribution of both observed and unobserved variables in order to allow for informative bounds on the causal effect.
49
+
50
+ ::: {#def:sensitivity .definition}
51
+ **Definition 2** (Sensitivity model). Let $\mathbb{P}^\mathbf{V}$ denote the distribution of the observed variables $\mathbf{V} = (\mathbf{X}, \mathbf{A}, \Bar{\mathbf{M}}_\ell, Y)$. A *sensitivity model* $\mathcal{S}$ is a tuple $(\mathbf{U}, \mathcal{P})$, where $\mathbf{U} = (\mathbf{U}_{W})_{W}$ are unobserved confounders between $\mathbf{A}$ and $W \in \{M_1, \dots, M_\ell, Y\}$, respectively (see Fig. [\[fig:causal_graphs\]](#fig:causal_graphs){reference-type="ref" reference="fig:causal_graphs"}), and $\mathcal{P}$ is a family of joint probability distributions on $\mathbf{V} \cup \mathbf{U}$ such that, for all $\mathbb{P} \in \mathcal{P}$, it holds that $\int \mathbb{P}(\mathbf{v}, \mathbf{u}) \mathop{}\!\mathrm{d}\mathbf{u} = \mathbb{P}^\mathbf{V}(\mathbf{v})$. We denote the set of all SCMs $\mathcal{M}$ *compatible* with $\mathcal{S}$ (i.e., that respect the causal graph, induce a distribution $\mathbb{P} \in \mathcal{P}$, and do not contain additional confounders) as $\mathcal{C}(\mathcal{S})$ (see Appendix [11](#app:compatibility){reference-type="ref" reference="app:compatibility"}).
52
+ :::
53
+
54
+ Our definition of sensitivity models excludes unobserved confounding between mediators and the outcome. This ensures that the causal effect from Eq. [\[eq:query\]](#eq:query){reference-type="eqref" reference="eq:query"} can be interpreted as a path-specific effect [@Avin.2005; @Pearl.2014]. We refer to Sec. [6](#sec:discussion){reference-type="ref" reference="sec:discussion"} for a detailed discussion.
55
+
56
+ Using the definitions above, we can define *causal sensitivity analysis* as the following partial identification problem: we aim to obtain bounds $Q^-(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{S}) \leq Q^+(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{S})$ so that $$\begin{equation}
57
+ \label{eq:bounds}
58
+ Q^+(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{S}) = \sup_{\mathcal{M} \in \mathcal{C}(\mathcal{S})} Q(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{M}) \text{ and }
59
+ Q^-(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{S}) = \inf_{\mathcal{M} \in \mathcal{C}(\mathcal{S})} Q(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{M}).
60
+ \end{equation}$$ $Q^+(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{S})$ is the maximal causal effect that can be achieved by any SCM that is compatible with the sensitivity model $\mathcal{S}$ (and vice versa for $Q^-(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{S})$). Hence, if the sensitivity model is valid, i.e., contains the ground-truth distribution over observed variables and unobserved confounders, we know that the ground-truth causal effect must be contained in the interval $[Q^-(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{S}), Q^+(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{S})]$. Bounds for average causal effects and effect differences follow immediately (see Appendix [10](#app:average){reference-type="ref" reference="app:average"}).
61
+
62
+ We introduce now the GMSM. We begin by providing the general definition and then show that this extends various marginal sensitivity models from existing literature.
63
+
64
+ ::: definition
65
+ **Definition 3** (GMSM). The *generalized marginal sensitivity model (GMSM)* is a sensitivity model $(\mathbf{U}, \mathcal{P})$, where $\mathcal{P}$ contains all $\mathbb{P}$ that satisfy the following sensitivity constraint: For each $W \in \{M_1, \dots, M_\ell, Y\}$, there exist bounds $s_W^-(\mathbf{a}, \mathbf{x}) \leq 1 \leq s_W^+(\mathbf{a}, \mathbf{x})$, so that, for all $\mathbf{u}_W$, $\mathbf{x}$, and $\mathbf{a}$ $$\begin{equation}
66
+ \label{eq:gmsm_def}
67
+ s_W^-(\mathbf{a}, \mathbf{x}) \leq \frac{\mathbb{P}(\mathbf{U}_W = \mathbf{u}_W \mid \mathbf{x}, \mathbf{a})}{\mathbb{P}(\mathbf{U}_W = \mathbf{u}_W \mid \mathbf{x}, do(\mathbf{A} = \mathbf{a}))} \leq s_W^+(\mathbf{a}, \mathbf{x}).
68
+ \end{equation}$$
69
+ :::
70
+
71
+ The GMSM bounds the distribution shift in the unobserved confounders $\mathbf{U}_W$ when performing the intervention $do(\mathbf{A} = \mathbf{a})$ instead of conditioning on $\mathbf{A} = \mathbf{a}$. This is a restriction on the strength of the effect the unobserved confounders $\mathbf{U}_W$ can have on the treatment $\mathbf{A}$. If $\mathbf{U}_W$ has no effect on $\mathbf{A}$, Eq. [\[eq:gmsm_def\]](#eq:gmsm_def){reference-type="eqref" reference="eq:gmsm_def"} holds with $s_W^-(\mathbf{a}, \mathbf{x}) = s_W^+(\mathbf{a}, \mathbf{x}) = 1$. Hence, the further $s_W^-(\mathbf{a}, \mathbf{x})$ and $s_W^+(\mathbf{a}, \mathbf{x})$ deviate from $1$, the larger is the effect from $\mathbf{U}_W$ on $\mathbf{A}$ that the GMSM allows for. We will often use a *weighted* GMSM, which expresses the bounds in terms of a sensitivity parameter.
72
+
73
+ ::: {#def:gmsm_weighted_def .definition}
74
+ **Definition 4** (Weighted GMSM). A *weighted GMSM* is a GMSM where $s_W^-(\mathbf{a}, \mathbf{x})$ and $s_W^+(\mathbf{a}, \mathbf{x})$ can be written as $s_W^-(\mathbf{a}, \mathbf{x}) = \frac{1}{(1 - \Gamma_W) q_W(\mathbf{a}, \mathbf{x}) + \Gamma_W}$ and $s_W^+(\mathbf{a}, \mathbf{x}) = \frac{1}{(1 - \Gamma_W^{-1}) q_W(\mathbf{a}, \mathbf{x}) + \Gamma_W^{-1}}$ for a sensitivity parameter $\Gamma_W \geq 1$ and a weight function $q_W(\mathbf{a}, \mathbf{x}) \in [0,1]$ for all $\mathbf{x}$ and $\mathbf{a}$.
75
+ :::
76
+
77
+ In a weighted GMSM, the sensitivity parameter $\Gamma_W$ captures the overall restriction on the unobserved confounding strength across individuals. If $\Gamma_W = 1$, no unobserved confounding is allowed, and unconfoundedness holds. For $\Gamma_W \to \infty$, the restriction is relaxed completely, and arbitrary confounding strength is allowed. The weight function $q_W(\mathbf{a}, \mathbf{x}) \in [0,1]$ offers several advantages. It allows us to further restrict confounding for individuals with treatments $\mathbf{a}$ and covariates $\mathbf{x}$. As $q_W(\mathbf{a}, \mathbf{x}) \to 1$, confounding is more strongly restricted until unconfoundedness is reached. This is helpful in applications where prior knowledge about the confounding structure is available. As an example, consider a medical setting where we want to estimate the effect of new drug treatments on the risk of developing a certain disease, but we suspect that the data is confounded by the individual's genetic risk for the disease, which is not measured in the observational data. However, we might also know that genetic risk for the disease is not relevant for a specific combination of age and gender.
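
For intuition, the bounds of Definition 4 are easy to tabulate; the sketch below evaluates $s_W^-$ and $s_W^+$ for a few illustrative values of the sensitivity parameter and the weight function (the numbers themselves are arbitrary).

```python
def gmsm_bounds(gamma, q):
    """Weighted-GMSM bounds (Definition 4) for sensitivity gamma >= 1 and weight q in [0, 1]."""
    s_minus = 1.0 / ((1.0 - gamma) * q + gamma)
    s_plus = 1.0 / ((1.0 - 1.0 / gamma) * q + 1.0 / gamma)
    return s_minus, s_plus

print(gmsm_bounds(1.0, 0.3))   # (1.0, 1.0): unconfoundedness, no shift allowed
print(gmsm_bounds(2.0, 0.0))   # (0.5, 2.0): bounds 1/Gamma and Gamma when q = 0
print(gmsm_bounds(2.0, 0.9))   # close to (1, 1): a large weight strongly restricts confounding
```
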
78
+
79
+ **Comparison with the MSM and its extensions:** In the following, we show that the weighted GMSM extends popular sensitivity models from the literature. Proofs are in Appendix [9](#app:msm_gmsm){reference-type="ref" reference="app:msm_gmsm"}. Note that existing sensitivity models do not consider settings with mediators (i.e., $\ell=0$), we thus can write $\Gamma = \Gamma_Y$ for the sensitivity parameter, $q(\mathbf{a}, \mathbf{x}) = q_Y(\mathbf{a}, \mathbf{x})$ for the weight function, and $\mathbf{U} = \mathbf{U}_Y$ for the unobserved confounders. For binary treatments $\mathbf{A} = A \in \{0,1\}$, the marginal sensitivity model (MSM) [@Tan.2006] is defined via $\frac{1}{\Gamma} \leq \frac{\pi(\mathbf{x})}{1 - \pi(\mathbf{x})} \frac{1 - \pi(\mathbf{x}, \mathbf{u})}{\pi(\mathbf{x}, \mathbf{u})} \leq \Gamma$, where $\pi(\mathbf{x}) = \mathbb{P}(A = 1 \mid \mathbf{x})$ denotes the observed propensity score and $\pi(\mathbf{x}, \mathbf{u}) = \mathbb{P}(A = 1 \mid \mathbf{x}, \mathbf{u})$ denotes the full propensity score. For continuous treatments $\mathbf{A}$, the continuous marginal sensitivity model (CMSM) [@Jesson.2022] is defined via $\frac{1}{\Gamma} \leq \frac{\mathbb{P}(\mathbf{a} \mid \mathbf{x}, \mathbf{u})}{\mathbb{P}(\mathbf{a} \mid \mathbf{x})} \leq \Gamma$. For longitudinal settings with time-varying observed confounders $\mathbf{X} = (\mathbf{X}_1, \dots, \mathbf{X}_T)$, unobserved confounders $\mathbf{U} = (\mathbf{U}_1, \dots, \mathbf{U}_T)$, treatments $\mathbf{A} = (\mathbf{A}_1, \dots, \mathbf{A}_T)$, we define the longitudinal marginal sensitivity model (LMSM) via $\frac{1}{\Gamma} \leq \prod_{t=1}^T \frac{\mathbb{P}(\mathbf{a}_t \mid \Bar{\mathbf{x}}_T, \Bar{\mathbf{u}}_t, \Bar{\mathbf{a}}_{t-1})}{\mathbb{P}(\mathbf{a}_t \mid \Bar{\mathbf{x}}_T, \Bar{\mathbf{a}}_{t-1})} \leq \Gamma$.
80
+
81
+ ::: {#lem:msm .lemma}
82
+ **Lemma 1**. *The MSM, CMSM and LMSM are special cases of the weighted GMSM by choosing the weight functions $q(\mathbf{a}, \mathbf{x}) = \mathbb{P}(\mathbf{a} \mid \mathbf{x})$ (MSM) and $q(\mathbf{a}, \mathbf{x}) = 0$ (CMSM, LMSM).*
83
+ :::
84
+
85
+ Lemma [1](#lem:msm){reference-type="ref" reference="lem:msm"} provides a *new interpretation of the MSM*: If the probability $\mathbb{P}(\mathbf{a} \mid \mathbf{x})$ is large, the MSM restricts the confounding strength because most of the randomness in the treatment is already explained by the observed confounders $\mathbf{x}$. We can extend this approach to arbitrary discrete treatments within the weighted GMSM framework. The CMSM and LMSM apply the same confounding restriction for all $\mathbf{a}$ and $\mathbf{x}$.
86
+
87
+ In this section, we derive sharp bounds under our GMSM. We first quantify the maximal shift in conditional distributions (Sec. [4.1](#sec:shift){reference-type="ref" reference="sec:shift"}). This allows us to derive an algorithm to compute explicit bounds (Sec. [4.2](#sec:bounds){reference-type="ref" reference="sec:bounds"}). Finally, we show how these bounds can be estimated from finite data (Sec. [4.3](#sec:estimation){reference-type="ref" reference="sec:estimation"}).
88
+
89
+ In the following, we provide some intuition before stating the main result. Let us consider a simple setting with treatment $\mathbf{A}$, a single unobserved confounder $\mathbf{U}_Y = U \in \mathbb{R}$, outcome $Y$, and GMSM bounds $s_Y^-(\mathbf{a})$ and $s_Y^+(\mathbf{a})$ (see Fig. [\[fig:intuition\]](#fig:intuition){reference-type="ref" reference="fig:intuition"}, left). Any SCM $\mathcal{M}$ that is compatible with $\mathcal{S}$ describes the relationship between $\mathbf{A}$, $U$, and $Y$ via a functional assignment $Y = f_Y(\mathbf{A}, U)$. For a fixed treatment $\mathbf{a}$, we denote $f_Y(\mathbf{a}, \cdot)$ as $f_\mathbf{a}$. We are interested in the *interventional density* $\mathbb{P}\left(y \mid do(\mathbf{A} = \mathbf{a})\right) = \int \mathbb{P}\left(y \mid \mathbf{a}, u \right) \mathbb{P}(u) \mathop{}\!\mathrm{d}u = {f_\mathbf{a}}_\# \mathbb{P}^{U}(y)$, where ${f_\mathbf{a}}_\# \mathbb{P}^{U}$ denotes the push-forward distribution induced by $f_\mathbf{a}$ on $\mathbb{P}^U$. However, we only have access to the *observational density* $\mathbb{P}\left(y \mid \mathbf{a}\right) = \int \mathbb{P}\left(y \mid \mathbf{a}, u \right) \mathbb{P}(u \mid \mathbf{a}) \mathop{}\!\mathrm{d}u = {f_\mathbf{a}}_\# \mathbb{P}^{U \mid \mathbf{a}}(y)$. Hence, for a fixed functional assignment $f_\mathbf{a}$, we can quantify the discrepancy between $\mathbb{P}\left(y \mid do(\mathbf{A} = \mathbf{a})\right)$ and $\mathbb{P}\left(y \mid \mathbf{a}\right)$ via the distribution shift between $\mathbb{P}^U$ and $\mathbb{P}^{U \mid \mathbf{a}}$.
90
+
91
+ Fig. [\[fig:intuition\]](#fig:intuition){reference-type="ref" reference="fig:intuition"} shows a toy example where $\mathbb{P}^{U\mid \mathbf{a}}$ is the uniform distribution on $[0,1]$, and the functional assignment $f_\mathbf{a}$ is the inverse standard normal CDF $\Phi^{-1}$, so that $\mathbb{P}\left(y \mid \mathbf{a}\right)$ is the standard normal probability density. We now want to shift the interventional density $\mathbb{P}\left(y \mid do(\mathbf{A} = \mathbf{a})\right)$ to the right as much as possible, so that $F(y \mid \mathbf{a}) \gg F(y \mid do(\mathbf{A} = \mathbf{a}))$ for the CDFs. To achieve this, the distribution $\mathbb{P}^U$ must put more probability mass on the right-hand side of the unit interval $[0,1]$ as compared to $\mathbb{P}^{U \mid \mathbf{a}}$ (see Fig. [\[fig:intuition\]](#fig:intuition){reference-type="ref" reference="fig:intuition"}, left).
92
+
93
+ :::: wrapfigure
94
+ r0.6
95
+
96
+ ::: center
97
+ ![image](figures/plot_intuition.pdf){width="60%"}
98
+ :::
99
+
100
+ []{#fig:intuition label="fig:intuition"}
101
+ ::::
102
+
103
+ Then, the functional assignment will also push more probability mass to the right of $\mathbb{P}(y \mid do(\mathbf{A} = \mathbf{a}))$ as compared to $\mathbb{P}\left(y \mid \mathbf{a}\right)$ (see Fig. [\[fig:intuition\]](#fig:intuition){reference-type="ref" reference="fig:intuition"}, right). However, the GMSM bounds the distribution shift between $\mathbb{P}^U$ and $\mathbb{P}^{U \mid \mathbf{a}}$ via $1/s_Y^+(\mathbf{a}) \leq \mathbb{P}(u) \leq 1/s_Y^-(\mathbf{a})$ (Eq. [\[eq:gmsm_def\]](#eq:gmsm_def){reference-type="eqref" reference="eq:gmsm_def"} using that $\mathbb{P}(u) = \mathbb{P}(u \mid do(\mathbf{A} = \mathbf{a}))$). Intuitively, the maximal possible right shift under the GMSM should occur by choosing $\mathbb{P}(u)$ so that the bounds are attained, i.e., $\mathbb{P}(u) = \mathbbm{1}( u \leq c^+_{Y}) (1/s_{Y}^+) + \mathbbm{1}(u > c^+_{Y}) (1/s_{Y}^-)$ for some $c^+_{Y}$ that can be obtained via the normalization constraint $\int \mathbb{P}(u) \mathop{}\!\mathrm{d}u = 1$. The corresponding interventional density $\mathbb{P}_+\left(y \mid do(\mathbf{A} = \mathbf{a})\right)$ is defined via the push-forward (see Fig. [\[fig:intuition\]](#fig:intuition){reference-type="ref" reference="fig:intuition"}).
104
+
105
+ It turns out that the density $\mathbb{P}_+\left(y \mid do(\mathbf{A} = \mathbf{a})\right)$ is the *maximally right-shifted* interventional distribution that can be obtained under any SCM that is compatible with the GMSM. Furthermore, the above arguments can be generalized beyond the toy example from Fig. [\[fig:intuition\]](#fig:intuition){reference-type="ref" reference="fig:intuition"}.
106
+
107
+ ::: {#thrm:shift .theorem}
108
+ **Theorem 1**. *Let $\mathcal{S}$ be a GMSM with bounds $s_W^- = s_W^-(\mathbf{a}, \mathbf{x})$ and $s_W^+ = s_W^+(\mathbf{a}, \mathbf{x})$ for $W \in \{M_1, \dots, M_\ell, Y\}$. We define $c^+_W = \frac{(1 - s_W^-) s_W^+}{s_W^+ - s_W^-}$. If $W \in \mathbb{R}$ is continuous, we define the probability density function $$\begin{equation}
109
+ \label{eq:shifted_continuous}
110
+ \mathbb{P}_+(w \mid \mathbf{x}, \mathbf{m}_W, \mathbf{a}) = \left\{ \begin{array}{lll}
111
+ (1/s_{W}^+) \, \mathbb{P}(w \mid \mathbf{x}, \mathbf{m}_W, \mathbf{a}), & \textrm{if } F(w) \leq c^+_{W}, \\
112
+ (1/s_{W}^-) \, \mathbb{P}(w \mid \mathbf{x}, \mathbf{m}_W, \mathbf{a}), & \textrm{if } F(w) > c^+_{W},\\
113
+ \end{array}\right.
114
+ \end{equation}$$ where $F(\cdot)$ is the CDF corresponding to $\mathbb{P}(\cdot \mid \mathbf{x}, \mathbf{m}_W, \mathbf{a})$. If $W \in \mathbb{N}$ is discrete, we define the probability mass function $$\begin{equation}
115
+ \label{eq:shifted_discrete}
116
+ \mathbb{P}_+(w \mid \mathbf{x}, \mathbf{m}_W, \mathbf{a}) = \left\{ \begin{array}{lll}
117
+ (1/s_{W}^+) \, \mathbb{P}(w \mid \mathbf{x}, \mathbf{m}_W, \mathbf{a}) ,& \textrm{if } F(w) < c^+_{W}, \\
118
+ (1/s_{W}^-) \, \mathbb{P}(w \mid \mathbf{x}, \mathbf{m}_W, \mathbf{a}), & \textrm{if } F(w - 1) > c^+_{W} ,\\
119
+ (1/s_{W}^+) \left(c^+_{W} - F(w - 1) \right) + (1/s_{W}^-) \left(F(w) - c^+_{W} \right), \hspace{-0.2cm} & \textrm{else.}\\
120
+ \end{array}\right.
121
+ \end{equation}$$ Let $F_+(\cdot)$ denote the conditional CDF corresponding to $\mathbb{P}_+(\cdot \mid \mathbf{x}, \mathbf{m}_W, \mathbf{a})$ and let $F_-(\cdot)$ be the conditional CDF of $\mathbb{P}_-(\cdot \mid \mathbf{x}, \mathbf{m}_W, \mathbf{a})$, which is defined by swapping signs (in $c_{W}$ and $s_{W}$). For any SCM $\mathcal{M}$, denote the CDF corresponding to $\mathbb{P}_\mathcal{M}(\cdot \mid \mathbf{x}, \mathbf{m}_W, do(\mathbf{A} = \mathbf{a}))$ as $F_\mathcal{M}(\cdot)$. Then, for all $w$, $$\begin{equation}
122
+ \label{eq:thrm_main}
123
+ F_+(w) \leq \inf_{\mathcal{M} \in \mathcal{C}(\mathcal{S})} F_\mathcal{M}(w) \quad \textrm{and} \quad F_-(w) \geq \sup_{\mathcal{M} \in \mathcal{C}(\mathcal{S})} F_\mathcal{M}(w).
124
+ \end{equation}$$ Assume now that $\mathbb{P}(\mathbf{u}_W \mid \mathbf{x}, do(\mathbf{A} = \mathbf{a})) = \mathbb{P}(\mathbf{u}_W \mid \mathbf{x})$. Then, the bounds are sharp (i.e., equality holds in Eq. [\[eq:thrm_main\]](#eq:thrm_main){reference-type="eqref" reference="eq:thrm_main"}) whenever $\mathbf{A}$ is discrete and it holds that $1/s_W^+ \geq \mathbb{P}(\mathbf{a}\mid \mathbf{x})$, or $\mathbf{A}$ is continuous.*
125
+ :::
126
+
127
+ ::: proof
128
+ *Proof.* See Appendix [8](#app:proofs){reference-type="ref" reference="app:proofs"}. ◻
129
+ :::
130
+
131
+ The result in Theorem [1](#thrm:shift){reference-type="ref" reference="thrm:shift"} does not depend on the distribution or dimensionality of unobserved confounders, and it does not depend on any specific SCM. As such, our sharp bounds are applicable to a wide class of causal effects, without restricting assumptions on the confounding structure beyond the sensitivity constraint of the GMSM.
132
+
133
+ If $\mathcal{S}$ is a weighted GMSM with sensitivity parameters $\Gamma_W$ for all $W \in \{M_1, \dots, M_\ell, Y\}$, the quantiles $c^+_W$ and $c^-_W$ are of the particularly simple form $c^+_W = {\Gamma_W}/(1 + \Gamma_W)$ and $c^-_W = 1/(1 + \Gamma_W)$. Furthermore, the discrete sharpness condition simplifies to $(1 - \Gamma_W^{-1}) q_W(\mathbf{a}, \mathbf{x}) + \Gamma_W^{-1} \geq \mathbb{P}(\mathbf{a}\mid \mathbf{x})$. In particular, the bounds are sharp whenever we choose a weighting function that satisfies $q_W(\mathbf{a}, \mathbf{x}) \geq \mathbb{P}(\mathbf{a}\mid \mathbf{x})$. The assumption $\mathbb{P}(\mathbf{u}_W \mid \mathbf{x}, do(\mathbf{A} = \mathbf{a})) = \mathbb{P}(\mathbf{u}_W \mid \mathbf{x})$ excludes the time-varying case (LMSM). Deriving sharp bounds for the LMSM is an interesting direction for future research.
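
Numerically, the maximal right-shift of Theorem 1 is a simple reweighting of the observational density; the sketch below evaluates it on a grid for a toy standard-normal outcome under the choice $q = 0$ (so $s^- = 1/\Gamma$, $s^+ = \Gamma$, and $c^+ = \Gamma/(1+\Gamma)$). The Gaussian and the value of $\Gamma$ are illustrative only.

```python
import numpy as np
from scipy.stats import norm

gamma = 2.0
s_minus, s_plus = 1.0 / gamma, gamma           # bounds for weight q = 0
c_plus = gamma / (1.0 + gamma)                 # = (1 - s^-) s^+ / (s^+ - s^-)

y = np.linspace(-6.0, 6.0, 4001)
dy = y[1] - y[0]
p_obs = norm.pdf(y)                            # toy observational density P(y | x, a)
F_obs = norm.cdf(y)

# Theorem 1, continuous case: scale by 1/s^+ below the c^+ quantile, by 1/s^- above it.
p_plus = np.where(F_obs <= c_plus, p_obs / s_plus, p_obs / s_minus)

print((p_plus * dy).sum())                     # ~1.0: the reweighted function is still a density
print((y * p_obs * dy).sum())                  # observational mean ~0
print((y * p_plus * dy).sum())                 # shifted mean > 0: upper bound on E[Y | x, do(a)]
```
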
134
+
135
+ We now leverage Theorem [1](#thrm:shift){reference-type="ref" reference="thrm:shift"} to obtain explicit solutions for the partial identification problem from Eq. [\[eq:bounds\]](#eq:bounds){reference-type="eqref" reference="eq:bounds"} with *monotone* $\mathcal{D}$ (see Appendix [8](#app:proofs){reference-type="ref" reference="app:proofs"}). This includes expectation and distributional effects.
136
+
137
+ ::: {#cor:bounds .corollary}
138
+ **Corollary 1** (Bounds without mediators). *If $\ell = 0$ and $\mathcal{D}$ is monotone, we obtain sharp bounds $$\begin{equation}
139
+ Q^+(\mathbf{x}, \mathbf{a}, \mathcal{S}) \leq \mathcal{D}\left(\mathbb{P}_+^Y(\cdot \mid \mathbf{x}, \mathbf{a})\right) \quad \textrm{and} \quad Q^-(\mathbf{x}, \mathbf{a}, \mathcal{S}) \geq \mathcal{D}\left(\mathbb{P}_-^Y(\cdot \mid \mathbf{x}, \mathbf{a})\right),
140
+ \end{equation}$$ and sharpness holds under the same conditions as in Theorem [1](#thrm:shift){reference-type="ref" reference="thrm:shift"}.*
141
+ :::
142
+
143
+ ::: proof
144
+ *Proof.* See Appendix [8](#app:proofs){reference-type="ref" reference="app:proofs"}. ◻
145
+ :::
146
+
147
+ ::::: wrapfigure
148
+ L0.55
149
+
150
+ :::: minipage
151
+ ::: algorithm
152
+ $c^+_W \gets \frac{(1 - s_W^-) s_W^+}{s_W^+ - s_W^-}$ for $W \in \{M_1, \dots, M_\ell, Y\}$ $Q_{\ell+1}^+(\Bar{\mathbf{m}}_\ell) \gets \mathcal{D}\left(\mathbb{P}_+^Y(\cdot \mid \mathbf{x}, \Bar{\mathbf{m}}_\ell, \mathbf{a}_{\ell+1})\right)$ for $\Bar{\mathbf{m}}_\ell \in supp(\Bar{\mathbf{M}}_\ell)$
153
+
154
+ $Q^+(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{S}) \gets Q_1^+$
155
+ :::
156
+ ::::
157
+ :::::
158
+
159
+ If $\mathcal{D}$ is the expectation functional, $\mathbf{A}$ is binary, and $Y$ is continuous, this coincides with the optimality result from @Dorn.2022. Hence, Corollary [1](#cor:bounds){reference-type="ref" reference="cor:bounds"} generalizes the result from @Dorn.2022 to distributional effects and arbitrary treatments (e.g., categorical, continuous, or time-varying). In their paper, @Dorn.2022 proved the sharpness of the bounds by using the Neyman-Pearson Lemma. In contrast, we take the more principled approach outlined in Sec. [4.1](#sec:shift){reference-type="ref" reference="sec:shift"}, which is applicable to more general settings.
160
+
161
+ In the following, we consider settings with mediators, i.e., we aim to derive bounds for the causal effect in Eq. [\[eq:query\]](#eq:query){reference-type="eqref" reference="eq:query"}. The idea is to first obtain outcome bounds $\mathcal{D}\left(\mathbb{P}_+^Y(\cdot \mid \mathbf{x}, {\Bar{\mathbf{m}}_\ell}, \mathbf{a}_{\ell+1})\right)$ conditioned on all possible values ${\Bar{\mathbf{m}}_\ell} \in supp(\Bar{\mathbf{M}}_\ell)$ in the support of the mediators $\Bar{\mathbf{M}}_\ell$. Without unobserved confounding between treatments and mediators, we can obtain the upper bound $Q^+(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{S})$ from Eq. [\[eq:query\]](#eq:query){reference-type="eqref" reference="eq:query"} by replacing $\mathcal{D}\left(\mathbb{P}^Y(\cdot \mid \mathbf{x}, \Bar{\mathbf{m}}_\ell, do(\mathbf{A} = \mathbf{a}_{\ell+1})\right)$ with $\mathcal{D}\left(\mathbb{P}_+^Y(\cdot \mid \mathbf{x}, \Bar{\mathbf{m}}_\ell, \mathbf{a}_{\ell+1})\right)$. With unobserved confounding, we need to additionally take the distribution shift in the mediators into account. To maximize the causal effect, the shifted mediator distribution should put more probability mass on values $\Bar{\mathbf{m}}_\ell$ for which $\mathcal{D}\left(\mathbb{P}_+^Y(\cdot \mid \mathbf{x}, {\Bar{\mathbf{m}}_\ell}, \mathbf{a}_{\ell+1})\right)$ is large. Hence, we can apply the maximal right-shift from Theorem [1](#thrm:shift){reference-type="ref" reference="thrm:shift"} to the mediator distributions in Eq. [\[eq:query\]](#eq:query){reference-type="eqref" reference="eq:query"}, but where the values $\Bar{\mathbf{m}}_\ell$ are permuted to order $\mathcal{D}\left(\mathbb{P}_+^Y(\cdot \mid \mathbf{x}, \Bar{\mathbf{m}}_\ell, \mathbf{a}_{\ell+1})\right)$ in ascending order. We provide the details on our iterative procedure to compute the upper bound $Q^+(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{S})$ in Algorithm [\[alg:bounds\]](#alg:bounds){reference-type="ref" reference="alg:bounds"}. The lower bound $Q^-(\mathbf{x}, \Bar{\mathbf{a}}_{\ell + 1}, \mathcal{S})$ can be computed analogously by swapping signs in $c_W$ and $s_W$.
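
For a single discrete mediator ($\ell = 1$), the procedure just described fits in a few lines; the sketch below assumes the per-value outcome upper bounds and the observed mediator pmf have already been estimated, and the toy numbers at the end are arbitrary.

```python
import numpy as np

def shifted_pmf_upper(p, s_minus, s_plus):
    """Discrete maximal right-shift of Theorem 1, applied along the given ordering of values."""
    p = np.asarray(p, dtype=float)
    if np.isclose(s_plus, s_minus):
        return p
    c = (1.0 - s_minus) * s_plus / (s_plus - s_minus)
    F, F_prev = np.cumsum(p), np.cumsum(p) - p
    out = np.where(F < c, p / s_plus, p / s_minus)
    straddle = (F_prev <= c) & (F >= c)          # the bin containing the quantile c
    out[straddle] = (c - F_prev[straddle]) / s_plus + (F[straddle] - c) / s_minus
    return out

def upper_bound_single_mediator(outcome_upper, mediator_pmf, s_minus, s_plus):
    """Q^+ for l = 1: permute mediator values so the outcome bounds are ascending, shift, recombine."""
    outcome_upper = np.asarray(outcome_upper, dtype=float)
    mediator_pmf = np.asarray(mediator_pmf, dtype=float)
    order = np.argsort(outcome_upper)
    shifted = shifted_pmf_upper(mediator_pmf[order], s_minus, s_plus)
    return float(shifted @ outcome_upper[order])

# Toy example with three mediator values and hypothetical outcome upper bounds.
print(upper_bound_single_mediator([1.0, 3.0, 2.0], [0.5, 0.2, 0.3], s_minus=0.5, s_plus=2.0))  # 2.15
```
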
162
+
163
+ ::: {#cor::bounds_mediators .corollary}
164
+ **Corollary 2**. *Under the Assumptions of Theorem [1](#thrm:shift){reference-type="ref" reference="thrm:shift"}, Algorithm [\[alg:bounds\]](#alg:bounds){reference-type="ref" reference="alg:bounds"} returns the sharp bounds from Eq. [\[eq:bounds\]](#eq:bounds){reference-type="eqref" reference="eq:bounds"}.*
165
+ :::
166
+
167
+ ::: proof
168
+ *Proof.* See Appendix [8](#app:proofs){reference-type="ref" reference="app:proofs"}. ◻
169
+ :::
170
+
171
+ In practice, we only have access to an empirical distribution $\mathbb{P}_n^\mathbf{V}$ of sample size $n$ instead of the full observational distribution $\mathbb{P}^\mathbf{V}$. We thus obtain estimates $\hat{\mathbb{P}}(w \mid \mathbf{x}, \mathbf{m}_W, \mathbf{a})$ of the conditional density or probability mass functions $\mathbb{P}(w \mid \mathbf{x}, \mathbf{m}_W, \mathbf{a})$ for all $W \in \{M_1, \dots, M_\ell, Y\}$. If $W$ is discrete, this reduces to a standard (multi-class) classification problem, and the estimated class probabilities can be plugged into Algorithm [\[alg:bounds\]](#alg:bounds){reference-type="ref" reference="alg:bounds"}. If $W=Y$ is continuous, we can use arbitrary conditional density estimators to obtain $\hat{\mathbb{P}}(y \mid \mathbf{x}, \Bar{\mathbf{m}}_\ell, \mathbf{a})$. We propose an importance sampling approach to estimate $\mathcal{D}\left(\mathbb{P}_+^Y(\cdot \mid \mathbf{x}, \Bar{\mathbf{m}}_\ell, \mathbf{a})\right)$ assuming that we can sample from our estimated density $\hat{\mathbb{P}}^Y(\cdot \mid \mathbf{x}, \Bar{\mathbf{m}}_\ell, \mathbf{a})$ (see Appendix [12](#app:est){reference-type="ref" reference="app:est"} for details and derivations). If $\mathcal{D}$ is the expectation functional, our estimator is $$\begin{equation}
172
+ \label{eq:estimation}
173
+ \resizebox{.8\hsize}{!}{$\widehat{\mathcal{D}\left(\mathbb{P}_+^Y(\cdot \mid \mathbf{x}, \Bar{\mathbf{m}}_\ell, \mathbf{a})\right)} = \frac{1}{\hat{s}_Y^+ k}\sum_{i=1}^{\lfloor k c^+_Y \rfloor} y_i + \frac{1}{\hat{s}_Y^-k}\sum_{i=\lfloor k c^+_Y \rfloor + 1}^{k} y_i, \quad \text{where} \quad (y_i)_{i=1}^k \sim \hat{\mathbb{P}}^Y(\cdot \mid \mathbf{x}, \Bar{\mathbf{m}}_\ell, \mathbf{a})$}.
174
+ \end{equation}$$ We also provide importance sampling estimators for distributional effects in Appendix [12](#app:est){reference-type="ref" reference="app:est"}. We can also obtain empirical confidence intervals by using the same bootstrap procedure as described in [@Jesson.2022].
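+
+ As a minimal illustration of Eq. [\[eq:estimation\]](#eq:estimation){reference-type="eqref" reference="eq:estimation"}, the NumPy sketch below computes the two-regime weighted mean over $k$ Monte Carlo samples. It assumes, as the indexing suggests, that the samples are taken in ascending order, treats $\hat{s}_Y^{\pm}$ and $c_Y^+$ as given, and uses a Gaussian as a stand-in for the estimated conditional density $\hat{\mathbb{P}}^Y$; it is an illustration, not the authors' code.
+
+ ```python
+ import numpy as np
+
+ def estimate_upper_mean(samples, c_plus, s_plus, s_minus):
+     """Hedged sketch of the estimator in Eq. (eq:estimation)."""
+     y = np.sort(np.asarray(samples))       # assumed: ascending order statistics
+     k = len(y)
+     cut = int(np.floor(k * c_plus))        # index separating the two weight regimes
+     lower = y[:cut].sum() / (s_plus * k)   # samples below the cutoff, weight 1/(s_plus * k)
+     upper = y[cut:].sum() / (s_minus * k)  # samples above the cutoff, weight 1/(s_minus * k)
+     return lower + upper
+
+ # Illustrative usage: stand-in samples from a Gaussian conditional density estimator.
+ rng = np.random.default_rng(0)
+ y_samples = rng.normal(loc=1.0, scale=0.5, size=10_000)
+ print(estimate_upper_mean(y_samples, c_plus=0.6, s_plus=1.5, s_minus=2 / 3))
+ ```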
175
+
176
+ **Implementation:** We use feed-forward neural networks with a softmax activation function to estimate discrete probability mass functions. For densities, we use conditional normalizing flows [@Winkler.2019] (neural spline flows [@Durkan.2019]), which are universal density approximators and allow for sampling. We perform training using the Adam optimizer [@Kingma.2015]. We also perform extensive hyperparameter tuning in our experiments. Implementation and hyperparameter tuning details are in Appendix [13](#app:hyper){reference-type="ref" reference="app:hyper"}.
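+
+ For the discrete case, a minimal PyTorch sketch of such a probability-mass estimator is given below; the layer sizes, optimizer settings, and synthetic data are illustrative placeholders rather than the tuned configuration from Appendix [13](#app:hyper){reference-type="ref" reference="app:hyper"}.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class PMFEstimator(nn.Module):
+     """Feed-forward network whose softmax head estimates P(w | x, m_W, a)."""
+     def __init__(self, input_dim, num_classes, hidden_dim=64):
+         super().__init__()
+         self.net = nn.Sequential(
+             nn.Linear(input_dim, hidden_dim), nn.ReLU(),
+             nn.Linear(hidden_dim, num_classes),
+         )
+
+     def forward(self, context):
+         # Softmax over classes yields the estimated probability mass function.
+         return torch.softmax(self.net(context), dim=-1)
+
+ # Illustrative Adam training step on synthetic (context, label) pairs.
+ model = PMFEstimator(input_dim=5, num_classes=3)
+ opt = torch.optim.Adam(model.parameters(), lr=1e-3)
+ context, labels = torch.randn(32, 5), torch.randint(0, 3, (32,))
+ loss = nn.functional.cross_entropy(model.net(context), labels)  # cross-entropy expects logits
+ loss.backward()
+ opt.step()
+ ```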
2307.14367/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2307.14367/paper_text/intro_method.md ADDED
@@ -0,0 +1,42 @@
 
1
+ # Introduction
2
+
3
+ Understanding protein function is a central problem in the biological sciences, as proteins are the fundamental elements of almost all biological functions. Accurate prediction of protein function is essential for understanding biological systems as well as for various applications, such as drug discovery, enabling researchers to identify and target specific proteins that play critical roles in disease pathways [@HA2021394]. Traditionally, protein function prediction has been approached through classification methods that assign predefined labels to proteins based on their characteristics [@10.1093/bioinformatics/btz595]. However, this approach often oversimplifies the complexity of protein functionality, limiting the depth of our understanding. To overcome this limitation, we propose a novel view of protein function prediction that reformulates the task using free-text protein descriptions instead of relying on predefined labels. The rapid progress of transformer-based models has brought a massive revolution to the field of Natural Language Processing (NLP). These models have demonstrated impressive language generation capabilities, allowing them to perform a wide range of NLP tasks with remarkable performance, including text completion, translation, sentiment analysis and question answering [@transformers; @radford2019language-gpt2; @brown2020language]. On the other hand, Graph Neural Networks (GNNs) have emerged as a powerful tool for modeling graph-structured data, capturing the intricate relationships between different elements in a graph [@kipf2017semi; @reiser2022graph]. However, the integration of GNNs and transformers faces various challenges, such as effectively handling the heterogeneity of data representations, and the field is therefore still in its early stages. Despite this, the potential benefits of leveraging both GNNs and transformers for graph-to-text applications, such as predicting the functional properties of proteins, are substantial. To that end, we develop a novel multimodal framework, **Prot2Text**, that can generate detailed and accurate descriptions of protein functions in free text. We effectively integrate GNNs and Large Language Models (LLMs) to encompass both structural and sequential information from the protein's 3D structure and amino acid sequence, respectively. The encoder-decoder architecture forms the backbone of our model, with the encoder component employing a Relational Graph Convolutional Network (RGCN) [@schlichtkrull2018modeling] to process the protein graphs and the ESM protein language model [@lin2023evolutionary-esm2] to encode the protein sequences. The decoder component utilizes a pretrained GPT-2 model to generate detailed protein descriptions. To train our multimodal model, we compile a dataset of proteins extracted from SwissProt, a comprehensive collection of protein annotations obtained from the UniProt database [@uniprot2015uniprot]. This dataset encompasses a vast number of proteins, each annotated with its corresponding function or description. In addition to the textual information, we obtain the 3D structure representation of the proteins from AlphaFold [@varadi2022alphafold]. We further release this curated dataset to the public, allowing other researchers to use it for benchmarking and further advancements in the field. Code, data and models are publicly available[^1]. Our main contributions can be summarized as follows:
4
+
5
+ - We introduce the **Prot2Text** framework, a novel multimodal approach for generating proteins' functions in free text. Our model combines both GNNs and ESM to encode the protein in a fused representation while a pretrained GPT-2 decodes the protein's text description.
6
+
7
+ - We propose various baselines for protein text generation and demonstrate that the integration of both graph and sequence protein information leads to better generation capabilities.
8
+
9
+ - We further release a comprehensive multimodal protein dataset, which includes $256,690$ protein structures, sequences, and textual function descriptions. Researchers can leverage this dataset to benchmark and compare their models, thereby driving advancements in the field and enabling a more robust and standardized evaluation of protein function prediction methods in free-text format.
10
+
11
+ # Method
12
+
13
+ In this section, we present our proposed multimodal framework, **Prot2Text**, for generating protein function descriptions in free text.
14
+
15
+ <figure id="fig:arch" data-latex-placement="t">
16
+ <embed src="figures/Prot2Text.drawio.pdf" />
17
+ <figcaption><strong>Architecture of the proposed Prot2Text framework for predicting protein function descriptions in free text.</strong> The model leverages a multimodal approach that integrates protein sequence, structure, and textual annotations. The Encoder-Decoder framework forms the backbone of the model, with the encoder component utilizing a relational graph convolution network (RGCN) to process the protein graphs and an ESM model to process the protein sequence. A cross-attention mechanism facilitates the exchange of relevant information between the graph-encoded and the sequence-encoded vectors, creating a fused representation synthesizing the structural and textual aspects. The decoder component employs a pre-trained GPT-2 model to generate detailed and accurate protein descriptions from the fused protein representation. By combining the power of Graph Neural Networks and Large Language Models, Prot2Text enables a holistic representation of protein function, facilitating the generation of comprehensive protein descriptions.</figcaption>
18
+ </figure>
19
+
20
+ Upon obtaining the proteins' 3D structures using AlphaFold, we represent each protein as a heterogeneous graph $G = (V, E, R)$, where $V = [N] := \{1, ..., N\}$ is the set of vertices representing the amino acids of the protein, $E \subseteq V \times V$ is the set of edges representing various interactions between the nodes, and $R$ is the set of edge types. Each node $u$ is associated with a feature vector $\vx_u \in \R^d$ encompassing relevant information such as local structural features and physicochemical properties of the associated amino acid. This enables the graph to retain fine-grained information critical to the protein's structure and function.
21
+
22
+ To model the diverse interactions and relationships between amino acids, we introduce different types of edges connecting the nodes. Accordingly, each edge $i=(v,u)$ is associated with an edge type $\ve_i \in R$. Sequential edges are employed to connect adjacent nodes in the protein sequence, effectively representing the sequential order of amino acids and capturing the linear arrangement of the protein's primary structure. This sequential information is crucial for understanding the folding patterns and functional motifs within the protein. Additionally, we utilize spatial edges to establish connections between nodes that are in close spatial proximity within the 3D structure of the protein. These edges play a pivotal role in encoding the protein's tertiary structure and folding patterns, enabling us to capture the intricate spatial arrangements of amino acids within the protein's core. We further extend the graph construction to include hydrogen bond interactions as an additional edge type. Hydrogen bonds are fundamental non-covalent interactions that are of paramount importance in stabilizing protein structures and enabling specific molecular recognition events. Through the integration of the different edge types, our comprehensive protein graph provides a more holistic and detailed depiction of the protein's structure while capturing both short- and long-range interactions.
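+
+ As a rough illustration of this construction, the sketch below assembles typed edge lists for a toy protein from its residue coordinates; the 8 Å contact threshold and the precomputed hydrogen-bond pairs are assumptions for illustration and not necessarily the exact criteria used in the paper.
+
+ ```python
+ import numpy as np
+
+ def build_protein_graph(coords, node_features, hbond_pairs, contact_thresh=8.0):
+     """Toy heterogeneous graph: nodes are residues, every edge carries a relation id."""
+     n = coords.shape[0]
+     edges, edge_types = [], []  # relation ids: 0=sequential, 1=spatial, 2=hydrogen bond
+
+     # Sequential edges connect consecutive residues in the primary sequence.
+     for i in range(n - 1):
+         edges += [(i, i + 1), (i + 1, i)]
+         edge_types += [0, 0]
+
+     # Spatial edges connect residues whose coordinates are close in the 3D structure.
+     dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
+     for i in range(n):
+         for j in range(n):
+             if i != j and abs(i - j) > 1 and dists[i, j] < contact_thresh:
+                 edges.append((i, j))
+                 edge_types.append(1)
+
+     # Hydrogen-bond edges come from a precomputed list of donor/acceptor residue pairs.
+     for i, j in hbond_pairs:
+         edges += [(i, j), (j, i)]
+         edge_types += [2, 2]
+
+     return node_features, np.array(edges).T, np.array(edge_types)
+
+ # Toy usage: four residues with random coordinates and 16-dimensional features.
+ coords = np.random.rand(4, 3) * 10
+ x, edge_index, edge_type = build_protein_graph(coords, np.random.rand(4, 16), hbond_pairs=[(0, 3)])
+ ```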
23
+
24
+ To encode the protein graph $G$ into a vector $\vh_G \in \R^{d_{out}}$, we employ a Relational Graph Convolutional Network (RGCN) [@schlichtkrull2018modeling], which accounts for the various edge types present in the graph during message passing. We denote the type-$r$ neighborhood of a vertex $u$ by $\mathcal{N}_r(u) = \{v : (v, u) \in E_r\}$, where $E_r$ is the set of edges of type $r$. In layer $k$ of the GNN, we update the node representations as follows: $$\begin{equation}
25
+ \label{eq:rgcn}
26
+ \vx^{k}_i = \sigma\left(\mW_{\textrm{root}}^k \cdot
27
+ \vx_i^{k-1} + \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{N}_r(i)}
28
+ \frac{1}{|\mathcal{N}_r(i)|} \mW_r^k \cdot \vx_j^{k-1}\right),
29
+ \end{equation}$$ where $\mW_{\textrm{root}}^k$ represents the learnable weight matrix for the root transformation in layer $k$, $\mW_r^k$ denotes the learnable weight matrix of layer $k$ for relation $r$ and $\sigma(\cdot)$ is an element-wise activation function such as ReLU. This formulation allows nodes to update their representations by incorporating information from neighboring nodes based on the specific edge types, capturing the structural and relational dependencies within the protein graph. To obtain the graph representation from the node representations of the last layer $K$ of the GNN, we apply a mean-pooling layer as follows: $$\begin{equation}
30
+ \vh_G = \frac{1}{N} \sum_{i=1}^N \vx_i^K
31
+ \end{equation}$$ The resulting vector $\vh_G$ serves as an informative encoded representation of the protein graph, capturing the essential structural and relational characteristics. This representation plays a crucial role in the subsequent text generation process, where it will be utilized to generate detailed and accurate protein functions.
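+
+ A minimal PyTorch sketch of the update in Eq. [\[eq:rgcn\]](#eq:rgcn){reference-type="eqref" reference="eq:rgcn"} together with the mean-pooling readout is shown below; the dimensions and number of relations are placeholders, and in practice one would typically rely on an existing RGCN layer (e.g., from PyTorch Geometric) rather than this hand-rolled version.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class RGCNLayer(nn.Module):
+     """One relational graph convolution layer following Eq. (eq:rgcn)."""
+     def __init__(self, in_dim, out_dim, num_relations):
+         super().__init__()
+         self.w_root = nn.Linear(in_dim, out_dim, bias=False)
+         self.w_rel = nn.ModuleList(nn.Linear(in_dim, out_dim, bias=False)
+                                    for _ in range(num_relations))
+
+     def forward(self, x, edge_index, edge_type):
+         out = self.w_root(x)  # root transformation W_root x_i
+         for r, w_r in enumerate(self.w_rel):
+             src, dst = edge_index[:, edge_type == r]
+             if src.numel() == 0:
+                 continue
+             msg = w_r(x[src])                                    # messages from r-typed neighbors
+             agg = torch.zeros_like(out).index_add_(0, dst, msg)  # sum over N_r(i)
+             deg = torch.zeros(x.size(0)).index_add_(0, dst, torch.ones_like(dst, dtype=torch.float))
+             out = out + agg / deg.clamp(min=1).unsqueeze(-1)     # mean over the r-typed neighborhood
+         return torch.relu(out)                                   # element-wise activation
+
+ def readout(x):
+     return x.mean(dim=0)  # mean pooling over nodes gives the graph vector h_G
+
+ # Toy forward pass: 4 nodes, 16-dimensional features, 3 relation types.
+ x = torch.randn(4, 16)
+ edge_index = torch.tensor([[0, 1, 2, 0], [1, 2, 3, 3]])
+ edge_type = torch.tensor([0, 0, 1, 2])
+ h_G = readout(RGCNLayer(16, 32, num_relations=3)(x, edge_index, edge_type))
+ ```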
32
+
33
+ To encode the protein sequence $P_S$, we use ESM2-35M [@lin2023evolutionary-esm2] as our base model. ESM2 is a protein language model that uses a transformer-based architecture and an attention mechanism to learn the interaction patterns between pairs of amino acids in the input sequence. This allows the ESM model to capture evolutionary information about proteins and their properties from the amino acid sequence. To achieve uniform representation dimensions across all modalities, a projection layer is applied after the last hidden layer of the ESM model; it transforms the individual amino acid representations from the ESM embedding dimension into the graph embedding dimension $d_{out}$. As a result, a matrix $\mH^0_{S} \in \R^{N \times d_{out}}$ is formed, containing the amino acid representations: $$\begin{equation}
34
+ \mH^0_{S} = ESM(P_S)\mW_{p}
35
+ \end{equation}$$ where $\mW_{p}$ is a trainable matrix.
36
+
37
+ To obtain the final protein encoding, we utilize a fusion block that combines the representation of each amino acid inside the matrix $\mH^0_S$ with the graph representation vector $\vh_G$. The fusion process adds a projection of the graph representation to each amino acid representation and then applies an output projection layer. This fusion block enables the integration of information from both the sequence and the graph representations in a straightforward manner, allowing each amino acid to be contextually enriched with information from the graph representation. Additionally, a normalization layer is applied after each fusion block to maintain stable training and further enhance the learning process. Specifically, for each amino acid representation in $\mH^k_S$ and the graph representation $\vh_G$, the fusion block computes the combined representation $\mH^{k+1}_S$ as follows: $$\begin{equation}
38
+ \mH^{k+1}_{S} = \left(\mH^k_{S}+ \mathbf{1}_n\vh_{G}\mW^k_{V}\right)\mW^k_{O},
39
+ \end{equation}$$ where $\mW^{k}_{V}$ and $\mW^{k}_{O}$ are trainable matrices in fusion block $k$ and $\mathbf{1}_{n}$ is a vector of ones of size $n$ (the length of the amino acid sequence).\
40
+ By using this fusion block multiple times in the architecture (four times in this case), the model can capture complex interactions and dependencies between the sequence and graph representations, leading to an effective and contextually enriched encoding of the protein data. The fusion block can be seen as a special case of the transformer cross-attention block in which the input from the encoder consists of only one token.
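+
+ The sketch below illustrates the sequence projection $\mH^0_S = ESM(P_S)\mW_p$ and one fusion block as defined above; the ESM output is replaced by a random placeholder tensor and all dimensions are illustrative.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class FusionBlock(nn.Module):
+     """One fusion step: H_{k+1} = (H_k + 1_n h_G W_V) W_O, followed by LayerNorm."""
+     def __init__(self, d_out):
+         super().__init__()
+         self.w_v = nn.Linear(d_out, d_out, bias=False)
+         self.w_o = nn.Linear(d_out, d_out, bias=False)
+         self.norm = nn.LayerNorm(d_out)
+
+     def forward(self, h_seq, h_graph):
+         # Broadcast the projected graph vector to every amino-acid position and add it.
+         fused = self.w_o(h_seq + self.w_v(h_graph).unsqueeze(0))
+         return self.norm(fused)
+
+ d_esm, d_out, n_res = 480, 32, 120         # illustrative sizes
+ esm_states = torch.randn(n_res, d_esm)     # placeholder for per-residue ESM embeddings
+ w_p = nn.Linear(d_esm, d_out, bias=False)  # projection into the graph embedding dimension
+ h_seq = w_p(esm_states)                    # H^0_S
+ h_graph = torch.randn(d_out)               # h_G from the graph encoder (placeholder)
+
+ blocks = nn.ModuleList(FusionBlock(d_out) for _ in range(4))  # four fusion blocks
+ for blk in blocks:
+     h_seq = blk(h_seq, h_graph)
+ ```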
41
+
42
+ We employed the transformer decoder architecture for generating protein descriptions. We initialized the main components of the decoder, namely the text embedding matrix, self-attention, and language modeling head, with the pretrained weights of GPT-2. By doing so, we leveraged the GPT-2 model's capacity to grasp the underlying textual semantics. We forward the protein representation obtained from the protein encoder as input to the multi-head cross-attention module within the transformer decoder. This interaction enabled the model to effectively incorporate context from the protein representation, contributing to the generation of coherent and meaningful protein descriptions. We adopted the identical vocabulary and tokenizer from the GPT-2 model, with the introduction of two unique special tokens. These additional tokens serve as essential markers, enabling the model to discern the precise boundaries of each protein description within the input text. In the training phase, we employed Causal Language Modeling (CLM) as the training objective to optimize our model. Causal Language Modeling involves training the model to predict the next token in a sequence given the preceding tokens. This unidirectional prediction process ensures that the model generates text in a causal manner, without access to future tokens. The maximum length of each description is 256 tokens.
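+
+ A hedged sketch of how such a decoder can be wired up with the Hugging Face `transformers` library is shown below: enabling `add_cross_attention` adds (newly initialized) cross-attention layers to GPT-2, and the fused protein representation is passed as `encoder_hidden_states`. The special-token names, tensor sizes, and placeholder protein states are illustrative and not the exact choices made for Prot2Text.
+
+ ```python
+ import torch
+ from transformers import GPT2LMHeadModel, GPT2TokenizerFast
+
+ tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
+ # Two illustrative special tokens marking the boundaries of each description.
+ tokenizer.add_special_tokens({"bos_token": "<|desc_start|>", "eos_token": "<|desc_end|>"})
+
+ model = GPT2LMHeadModel.from_pretrained("gpt2", add_cross_attention=True)
+ model.resize_token_embeddings(len(tokenizer))
+
+ # Fused protein representation from the encoder (placeholder tensor here).
+ protein_states = torch.randn(1, 120, model.config.n_embd)
+
+ text = "<|desc_start|>An illustrative protein description.<|desc_end|>"
+ enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
+ out = model(input_ids=enc["input_ids"],
+             attention_mask=enc["attention_mask"],
+             encoder_hidden_states=protein_states,
+             labels=enc["input_ids"])  # causal language modeling loss (next-token prediction)
+ out.loss.backward()
+ ```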
2310.05057/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff