Eric03 committed on
Commit
fed8bfc
·
verified ·
1 Parent(s): 9ac5595

Add files using upload-large-folder tool

2003.11562/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="www.draw.io" modified="2020-02-21T13:10:22.686Z" agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.106 Safari/537.36" etag="wajUStUpNMZcPi-WcRft" version="12.7.3" type="device"><diagram id="KxYzZVuo8jexJQMiVpaW" name="Page-1">7Vvfd6I6EP5reKwHEqL4WK3dvefePdtzvfuj+4YQJadIPDGuun/9TSQoGNrKbgpopQ8lA0zITL75Jhmx4HC++cD8RfSJhji2gB1uLHhnAeBAhMQ/Kdmmkp7npoIZI6G66SAYk19YCW0lXZEQLws3ckpjThZFYUCTBAe8IPMZo+vibVMaF3td+DOsCcaBH+vSbyTkUSr1kH2Qf8RkFmU9O7a6Mvezm5VgGfkhXedEcGTBIaOUp2fzzRDH0niZXdLn7p+5un8xhhN+ygM3P75O7r8Bl0SPX36NHuMvn4P5DUi1/PTjlRrwYPTvf7vXXz7hUJzEfjJbSSMJWerd3WD4NrMQo6skxLIT24KDdUQ4Hi/8QF5dizkhZBGfx6LliNMpTbhyMkCyTeJ4SGPKdrrgFMk/IV9yRp9w7kp3dygNOXl6CLkaCGYcb561kLO3u5iwmM4xZ1txi3oAZL5TcxV4qr0+eN7N7olyXu8pma8m22yv+uAPcaJcUsE9ruYeCw2G/4wtdPeCH5wT/HBk9+kUBEGZ3cPupIu6huwLi/aF/TL76uaFb2VepJv32Ko4CW9lGBGtIPaXSxIUDZk+gEMtirxqlNygUcmgMxnDsc/Jz6L6MkuoHh4oER0fbI6esXmmYklXLMDqqXz4eE0ROlLEfTbDXFO0c8x+2L/vK0/zVSS8coYg8FoGgv6fg0AMnW2/Swbo2LCbCR7zV+82iiDS1jbfesCMiLFgpoRtxxS0TWGq3+/0+rkDNQqxLOfKTQVh8jPEGGwb0TjOFWRVQWaKuKDbLpB19amABp9ux383kNQdJdP397kkW6Xpjm0Ik23jPad3xWRFTLqmiA+2i/iATnyh6Gt2htTnto36wJX6KsPMFPW57aK+zPO5qcBp6G/PEWZtYzMArzCrCDNkis3clrFZaYY5Hj2c564hah2jXRPHylAzxWhIMJrXKLqgns/8lSxWXJsCYsbzotOLMElogo8wpUR+TGaJnDnCedLFA4kfEvjxrbowJ2EouynFa7Eek1/Jlaz47N1hBqgOOHYyOgmo4K2ACi+7fLKvkTYVCF0DqX3bI5cg906Rzvcb+JV3e3VVbr2xy9UT8MsoojQPBQPpd+uh0DcHBV1V3VDQueEyah3NQ+EdVNWhOVYoUVU3FErWi0Zx4AW4HAcTD8m9rjepLzSPAwPLxNbjwBwllKiqGwf6T0wupQrQPBgM/Mqk7WBwzZFCiaqawZDZ5AL36hsHA3oH62bXHDOUqKobDPq6+ZJ21JsHxDtYPSNz7FCiqm5A6KvnLV1ZYMgjLGwwlG8mBtPpvICOal8LOKAtCwunp6NlL6tlX7t7woLtNbjogUccZSZ/pl6AN4Sn9SukWo/Kp/L8ULuSjULpKn3IAW5tRS9gp2h4dT43xpW2Ux6Pq6+hirUxx+vXGha6epa/Cwt2JF5AuCGSXxFFwn54Kd96wejEn5CYcD2tbF/17CU0GAgzzilhpl9nmOnpS4DPK34tdVq7z/yOEPtmpU7RPHwjmAL18KUlHP0P</diagram></mxfile>
2003.11562/main_diagram/main_diagram.pdf ADDED
Binary file (23.3 kB). View file
 
2003.11562/paper_text/intro_method.md ADDED
@@ -0,0 +1,40 @@
1
+ # Method
2
+
3
+ The goal of a language model is to assign meaningful probabilities to a sequence of words. Given a set of tokens $\mathbf{X}=(x_1,\ldots,x_T)$, where $T$ is the length of the sequence, our task is to estimate the joint probability $P(\mathbf{X})$, which is $$\begin{equation}
4
+ \label{cond}
5
+ P(\mathbf{X})=\prod_{i=1}^{T} p\left(x_{i} | x_{1}, \ldots, x_{i-1}\right) ,
6
+ \end{equation}$$ where $(x_{1}, \ldots, x_{i-1})$ is the context. An intrinsic evaluation metric for language models is perplexity (PPL), defined as the inverse probability of the set of tokens taken to the $T^{th}$ root, where $T$ is the number of tokens $$\begin{equation}
7
+ \label{ppl}
8
+ PPL(\mathbf{X})= P(\mathbf{X})^{-1/T}.
9
+ \end{equation}$$ In our two approaches we use the transformer-based architectures BERT and Transformer-XL, as mentioned before. Calculating the auto-regressive $P(\mathbf{X})$ for Transformer-XL is straightforward, as the model is unidirectional, but the probability does not factorize the same way for a bi-directional model like BERT.
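The perplexity above reduces to an exponentiated average of per-token negative log-probabilities. A minimal sketch (the `perplexity` helper and the uniform toy model are our own illustration, not the paper's code):

```python
import math

def perplexity(token_logprobs):
    """PPL(X) = P(X)^(-1/T) = exp(-(1/T) * sum_i log p(x_i | x_1..x_{i-1})),
    computed from the per-token natural-log conditional probabilities."""
    T = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / T)

# A uniform model over a 10-token vocabulary assigns log(1/10) to every
# token, so its perplexity equals the vocabulary size.
print(round(perplexity([math.log(0.1)] * 5), 6))  # → 10.0
```

Working in log space avoids underflow from multiplying many small conditional probabilities.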
10
+
11
+ BERT's bi-directional context poses a problem for calculating an auto-regressive joint probability. A simple fix would be to mask all the tokens $\mathbf{x}_{>i}$ and calculate the conditional factors as for a unidirectional model. By doing so, though, we lose the advantage of the bi-directional context that BERT enables. We instead propose an approximation of the joint probability as,
12
+
13
+ $$\begin{equation}
14
+ \label{approx}
15
+ P(\mathbf{X}) \approx \prod_{i=1}^{T} p\left(x_{i} | x_{1}, \ldots, x_{i-1}, x_{i+1}, \ldots, x_{T}\right).
16
+ \end{equation}$$ This type of approximation has been previously explored with bi-directional RNN LMs [@inproceedings], but not for deep transformer models. We therefore define a pseudo-perplexity score from the above approximated joint probability.
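The approximated joint probability can be scored by masking one position at a time and letting a bidirectional model fill it in. A hedged sketch with a stand-in scoring function (`masked_logprob` is a placeholder for a real BERT forward pass, not the paper's implementation):

```python
import math

def pseudo_log_likelihood(tokens, masked_logprob):
    """Sum of log p(x_i | x_1..x_{i-1}, x_{i+1}..x_T): mask one position
    at a time and score the held-out token with the bidirectional model."""
    total = 0.0
    for i in range(len(tokens)):
        context = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
        total += masked_logprob(context, i, tokens[i])
    return total

def pseudo_perplexity(tokens, masked_logprob):
    """Pseudo-PPL: exponentiated negative average pseudo-log-likelihood."""
    return math.exp(-pseudo_log_likelihood(tokens, masked_logprob) / len(tokens))

# Toy "model" that assigns probability 0.25 to any token regardless of
# context; a real implementation would run BERT on `context` instead.
toy = lambda context, position, token: math.log(0.25)
print(round(pseudo_perplexity(["two", "slipp", "+er", "+s"], toy), 6))  # → 4.0
```

Note that this requires one forward pass per token, so scoring a sequence is $T$ times more expensive than with a unidirectional model.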
17
+
18
+ The original BERT has two training objectives: 'masked language modelling', in which input tokens are masked randomly and then predicted using the left and right context, and 'next sentence prediction', which jointly trains text-pair representations. For training the masked language model, the original BERT used Byte Pair Encoding (BPE) [@10.5555/177910.177914] for subword tokenization [@DBLP:journals/corr/SennrichHB15]. For example, the rare word \"unaffable\" is split into more frequent subwords such as \[\"un\", \"aff\", \"able\"\]. To remain consistent with experiments performed with LSTMs, we use Morfessor for subword tokenization of Finnish. In addition, we apply boundary markers as in (Table [1](#Tab:markings){reference-type="ref" reference="Tab:markings"}) and train two separate models using this distinction. We train with the left-marked scheme, as the original BERT was trained with it, and with the left+right-marked scheme, as it was the previous SOTA for Finnish. For the Transformer-XL experiments, we train only with the left+right-marked scheme.
19
+
20
+ ::: {#Tab:markings}
21
+ subword marking Example
22
+ ------------------------- --------------------
23
+ left+right-marked (+m+) two slipp+ +er+ +s
24
+ left-marked (+m) two slipp +er +s
25
+
26
+ : Two methods of marking subword units such that the original sentence 'two slippers' is reconstructed
27
+ :::
28
+
29
+ []{#Tab:markings label="Tab:markings"}
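Assuming the marking conventions shown in the table, the original sentence can be recovered from either scheme. A small illustrative helper (our own sketch, not part of the released tooling):

```python
def detokenize(subwords):
    """Join boundary-marked subword units back into words: a '+' at the end
    of a unit glues it to the next unit, a '+' at the start glues it to the
    previous one, covering both the +m and +m+ schemes from the table."""
    out = ""
    glue_next = False
    for unit in subwords:
        glue = glue_next or unit.startswith("+")
        glue_next = unit.endswith("+")
        piece = unit.strip("+")
        out += piece if (glue or not out) else " " + piece
    return out

# Both marking schemes recover the original sentence 'two slippers'.
print(detokenize(["two", "slipp+", "+er+", "+s"]))  # left+right-marked → two slippers
print(detokenize(["two", "slipp", "+er", "+s"]))    # left-marked → two slippers
```

The left-marked scheme needs fewer distinct unit types (a unit never carries a trailing marker), which is consistent with its smaller vocabulary reported below.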
30
+
31
+ The Next Sentence Prediction (NSP) objective is a binary classification task that predicts whether two segments follow each other in the original text. This pre-training task was proposed to further improve performance on downstream tasks, like Natural Language Inference (NLI), but in practice removing the NSP loss matches or slightly improves downstream task performance [@DBLP:journals/corr/abs-1907-11692]. In this paper, we have omitted the NSP task from the BERT pre-training procedure and changed the input from a SEGMENT-PAIR input to a SINGLE-SEGMENT input, as seen in (Fig [1](#fig:BERT_label){reference-type="ref" reference="fig:BERT_label"}).
32
+
33
+ <figure id="fig:BERT_label" data-latex-placement="t">
34
+ <img src="bert.png" />
35
+ <figcaption>BERT-Original sentence ’how are you doing today’</figcaption>
36
+ </figure>
37
+
38
+ Transformer-XL introduced the notion of recurrence in self-attention by caching the hidden-state sequence to compute the hidden states of a new segment. It also introduced a novel relative positional embedding scheme; combined, the two address the issue of fixed context lengths. As mentioned, Transformer-XL is a unidirectional deep transformer architecture, so its perplexity can be calculated as in (Eq [\[ppl\]](#ppl){reference-type="ref" reference="ppl"}). The only change is in the input format, where we use subword units rather than whole-word units, as Finnish is morphologically richer than English.
39
+
40
+ The Finnish text data used for the language modeling task is provided by [@ftc-korp_en]. The dataset consists mainly of newspapers and books, with around 144 million word tokens and 4.2 million unique tokens. We use Morfessor 2.0 [@smit2014morfessor] with the basic unsupervised Morfessor Baseline algorithm [@10.1145/1187415.1187418] and a corpus weight parameter ($\alpha$) of 0.001. We have a vocabulary of 34K subword tokens for the left+right-marked (+m+) scheme and 19K subword tokens for the left-marked (+m) scheme. We also pre-process the data to remove punctuation marks so that we can use the same data with an ASR system. The input is one sentence per line, and we shuffle the sentences at each epoch. The data is randomly divided into a training dataset and a validation dataset. The test dataset consists of 2850 Finnish news articles obtained from the Finnish national broadcaster YLE.
2203.12560/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2203.12560/paper_text/intro_method.md ADDED
@@ -0,0 +1,29 @@
1
+ # Introduction
2
+
3
+ Society is rapidly becoming more aware of the human footprint on the world's climate. Overwhelming evidence shows that climate change has both short-term and long-term effects on almost every aspect of our lives [\[27\]](#page-9-0). Using simulations and global climate metrics, it is nowadays possible to observe changes at a global scale, like rising sea levels or changes in the Gulf Stream. In contrast, precise predictions of local changes are much harder to obtain. Common examples include land use by agriculture, deforestation, flooding, wildfires, growth of urban areas, and transportation infrastructure. It is of critical importance to monitor such local changes, since these are the factors that ultimately exacerbate the global climate crisis.
4
+
5
+ Satellite images are a powerful tool in this context to track local changes to the environment in specific regions. Observing change at a local scale requires two conditions: a high frequency of satellite observations and a pixel-precise understanding of the observed surface. Existing datasets often fail to provide these conditions. Whenever pixel-wise annotations are provided, only static images can be used [\[43\]](#page-9-1) or the revisit frequency is limited to once a year [\[14,](#page-8-0) [36\]](#page-9-2). Datasets with coarser annotations have either an irregular [\[11\]](#page-8-1) or a monthly revisit frequency [\[38\]](#page-9-3). As an example of land changes, in 2020, 46 km<sup>2</sup> of the rainforest in Brazil were destroyed every day [\[29\]](#page-9-4). This suggests that if we analyze satellite images of that area once per month, we potentially miss deforestation equivalent to the area of the city of Los Angeles, California. As Brazil alone has
6
+
7
+ <sup>\*</sup> Authors share first authorship. † Authors share senior authorship. ‡ Corresponding author: xiaoxiang.zhu@dlr.de.
8
+
9
+ <span id="page-1-0"></span>millions of square kilometers of forest, automatic methods are required to detect these and other kinds of land changes. Current pixel-precise automatic methods are predominantly based on deep learning and thus require annotated data to learn.
10
+
11
+ In this work, we present *DynamicEarthNet*, a time-series satellite imagery dataset with daily revisits of 75 local regions across the globe. The dataset comprises consistent, occlusion-free daily observations with multi-spectral imagery over the span of two years (2018-2019). We further provide annotated monthly semantic segmentation labels. The main focus is to segment and detect changes in the development of general land use and land cover (LULC). Specifically, we focus on the following LULC classes: impervious surfaces, water, soil, agriculture, wetlands, snow & ice, and forest & other vegetation.
12
+
13
+ In comparison to semantic segmentation on standard computer vision benchmarks, satellite imagery is subject to various additional challenges. Most prominently, labeled areas in satellite images typically have very intricate shapes that are significantly more complex than everyday objects. We show that well-performing methods [\[10,](#page-8-2) [32\]](#page-9-5) on standard vision benchmarks do not necessarily transfer well to this domain. Furthermore, common segmentation metrics are not optimal for quantifying the performance on the task of semantic change segmentation. We alleviate this issue by proposing a new evaluation protocol that captures the essence of semantic change segmentation. *DynamicEarth-Net* and the proposed evaluation protocol encourage the development of more specialized algorithms that can handle the particular challenges of daily time-series satellite imagery. In summary, our contributions are as follows:
14
+
15
+ - We present a large-scale dataset of multi-spectral satellite imagery with daily observations of 75 separate areas of interest around the globe.
16
+ - We provide dense, monthly annotations of 7 land use and land cover (LULC) semantic classes.
17
+ - We propose a novel evaluation protocol that models two central properties of semantic change segmentation: binary change and semantic segmentation.
18
+ - We evaluate multiple baseline approaches on our data for the task of detecting semantic change. We show how the time-series nature of our data can be leveraged for optimal performance.
19
+
20
+ # Method
21
+
22
+ Let $\mathbf{x} \in \mathbb{R}^{T \times H \times W \times 4}$ be an input time-series of satellite images consisting of $T$ frames with a spatial size of $H \times W$ and 4 input channels (RGB + near-infrared). For each such time-series, we further provide semantic annotations $\mathbf{y} \in \mathcal{C}^{T \times H \times W}$ that assign each pixel in $\mathbf{x}$ to one of the 7 LULC classes $\mathcal{C} := \{0, \dots, 6\}$ defined in Sec. 3.2. Given two consecutive frames at times $t-1$ and $t$, we can define the binary change $\mathbf{b} \in \{0,1\}^{(T-1)\times H\times W}$ as a binary labeling of all pixels for which the ground-truth semantic label changes:
25
+
26
+ $$\mathbf{b}_{t,i,j} := \begin{cases} 1, & \text{if } \mathbf{y}_{t,i,j} \neq \mathbf{y}_{t-1,i,j}, \\ 0, & \text{else,} \end{cases} \qquad (1)$$
28
+
29
+ When evaluating semantic change segmentation, both the binary change map $\hat{\mathbf{b}}$ and the semantic map $\hat{\mathbf{y}}$ need to be predicted. This requires methods to answer both which pixels change and which class these pixels change to.
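Under these definitions, the ground-truth binary change map follows directly from consecutive semantic maps. A minimal pure-Python sketch over nested `T × H × W` lists (a real implementation would more likely use array operations):

```python
def binary_change(y):
    """Binary change maps b in {0,1}^((T-1) x H x W) from a semantic label
    time-series y of shape T x H x W: entry t marks the pixels whose
    ground-truth class differs between two consecutive frames."""
    T, H, W = len(y), len(y[0]), len(y[0][0])
    return [
        [[int(y[t + 1][i][j] != y[t][i][j]) for j in range(W)]
         for i in range(H)]
        for t in range(T - 1)
    ]

# Two frames of a 2x2 region: one pixel flips from class 2 to class 3.
y = [[[2, 2], [0, 1]],
     [[2, 3], [0, 1]]]
print(binary_change(y))  # → [[[0, 1], [0, 0]]]
```

With NumPy this collapses to a one-liner, `(y[1:] != y[:-1]).astype(int)`, which also makes the `(T-1) × H × W` output shape explicit.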
2212.00921/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2212.00921/paper_text/intro_method.md ADDED
@@ -0,0 +1,57 @@
1
+ # Introduction
2
+
3
+ Neural models trained using the empirical risk minimization principle (ERM) are highly accurate on average; yet they consistently fail on rare or atypical examples that are unlike the training data. Such models may end up relying on spurious correlations (between labels and task-independent features), which may reduce empirical loss on the training data but do not hold outside the training distribution (Koh et al., 2021; Hashimoto et al., 2018). Figure 1 shows examples of such correlations in the MultiNLI and CelebA datasets. Building models that gracefully handle degradation under distributional shifts is important for robust optimization, domain generalization, and fairness (Lahoti et al., 2020; Madry et al., 2017). When the correlations are known and training data can be partitioned into dominant and rare groups, group distributionally robust optimization (G-DRO, Sagawa et al., 2019) can efficiently minimize the worst (highest) expected loss over groups and improve performance on the rare group. A key limitation of G-DRO is the need for a pre-defined partitioning of training data based on a known spurious correlation; but such correlations may be unknown, protected or expensive to obtain. In this paper, we present AGRO—Adversarial Group discovery for Distributional Robust Optimization—an end-to-end unsupervised optimization technique that *jointly* learns to find error-prone training groups and minimize expected loss on them.
4
+
5
+ Prior work on group discovery limits the space of discoverable groups for tractability. For example, Wu et al. (2022) use prior knowledge about the task to find simple correlations—e.g. presence of negation in the text is correlated with the *contradiction* label (Figure 1). However, such task-specific approaches do not generalize to tasks with different and/or unknown (types of) spurious correlations.
6
+
7
+ <sup>∗</sup>Work done during an internship at the Allen Institute for Artificial Intelligence. Correspondence to: Bhargavi Paranjape <bparan@cs.washington.edu>
8
+
9
+ ![](_page_1_Figure_1.jpeg)
10
+
11
+ Figure 1: Groups discovered by AGRO on CelebA image classification dataset and MultiNLI sentence pair classification dataset.
12
+
13
+ Approaches using generalizable features are semi-supervised (Sohoni et al., 2020; Liu et al., 2021) in that they assume access to group information on a held-out dataset. However, obtaining supervision for group assignments is costly and can lead to cascading pipeline errors. In contrast, AGRO is completely unsupervised and end-to-end while making no assumptions about the nature of the task and availability of additional supervision.
14
+
15
+ To address these challenges, AGRO constructs a new parameterized *grouper* model that produces a soft distribution over groups for every example in the training data and is jointly trained with the task model. We introduce two key contributions to train this model. First, the grouper model makes no task-specific assumptions about its inputs. Instead, it relies on computationally extracted features from the ERM model, including: (a) predictions and mistakes of the ERM model on training and validation instances, and (b) pretrained dataset-agnostic representations and representations fine-tuned on the task data. Second, AGRO jointly optimizes the task model and the grouper model. We formulate a zero-sum game between the grouper model, which assigns instances to groups, and the robust model, which seeks to minimize the worst expected loss over the set of inferred groups. Specifically, while G-DRO optimizes the robust model to minimize the worst group-loss<sup>1</sup>, the grouper model adversarially seeks a probabilistic group assignment such that the worst group-loss is maximized.
16
+
17
+ On four datasets in the WILDS benchmark (Koh et al., 2021) (MultiNLI, CivilComments, CelebA, and Waterbirds), AGRO simultaneously improves performance on *multiple* worst-groups<sup>2</sup> corresponding to previously characterized spurious correlations, compared to ERM and to G-DRO with known group assignments. AGRO also improves worst-group performance over prior approaches that find spurious correlations and groups by 8% on average, establishing a new SOTA for such methods on two of the WILDS datasets. On natural language inference, sentiment analysis, paraphrase detection, and common-object classification (COCO), AGRO improves robustness to uncharacterized distributional shifts compared to prior approaches, as demonstrated by gains on out-of-distribution datasets for these tasks. Ablations on different parts of the framework underscore the need for a generalizable feature space and end-to-end optimization. We develop a novel annotation task for humans to analyze the discovered AGRO groups: distinguishing group members from random examples and perturbing them to potentially change model predictions. We find that humans can identify existing and previously unknown features in AGRO groups that lead to systematic model errors and are potentially spurious, such as the correlation between antonyms and contradiction in MultiNLI, or the correlation of hats, sunglasses, and short hair with non-blondes in CelebA. Our code and models are public<sup>3</sup>.
18
+
19
+ # Method
20
+
21
+ **Problem Setup** We consider the typical image/text classification problem of predicting labels $y \in \mathcal{Y}$ from input $x \in \mathcal{X}$. Training data $\mathbb{D}$ is drawn from the joint distribution $P(\mathcal{X}, \mathcal{Y})$. In state-of-the-art classification models, inputs are typically encoded using multi-layered pretrained transformers (Vaswani et al., 2017; He et al., 2021; Bao et al., 2021). We use $g(x)$ and $h(x)$ to represent the input encoding before and after fine-tuning these encoders on $\mathbb{D}$.
22
+
23
+ **ERM principle** Given a model family $\Theta$ and a loss function $l:\Theta\times\mathcal{X}\times\mathcal{Y}\to\mathbb{R}_+$, empirical risk minimization aims to find a model $\theta\in\Theta$ that minimizes the empirical loss over data drawn i.i.d. from the empirical distribution $P$: $\hat{\theta}_{ERM}:=\mathrm{argmin}_{\theta\in\Theta}\mathbb{E}_{(x,y)\sim P}[l(x,y;\theta)]$. While ERM models achieve high accuracy on i.i.d. evaluation data on average, they often underperform when the test distribution shifts significantly from the training distribution (Madry et al., 2017). ERM also underperforms on a biased sample of the test set where a spurious correlation is absent (Koh et al., 2021).
24
+
25
+ **G-DRO for Robust optimization** Group-DRO (Sagawa et al., 2019) is a category of distributionally robust optimization (DRO, Duchi et al., 2016), where training distribution P is assumed to be a mixture of m groups, and each training point (x, y) comes from one group $g \in \mathcal{G}$ . G-DRO minimizes the empirical worst-group risk $\hat{\mathcal{R}}(\theta)$ i.e. worst (highest) empirical loss over m groups:
26
+
27
+ $$\hat{\theta}_{DRO} := \arg\min_{\theta \in \Theta} \{ \hat{\mathcal{R}}(\theta) := \max_{g \in \mathcal{G}} \mathbb{E}_{(x,y) \sim p(x,y|g)}[l(x,y;\theta)] \},$$
28
+
29
+ where $p(x,y|g)\ \forall g$ is the empirical distribution of training data over groups. With sufficient regularization over $\theta$, G-DRO enables training models that are robust to test-time worst-group distributional shifts. In practice, prior work adopts the efficient and stable online greedy algorithm (Oren et al., 2019) to update $\theta$. Specifically, the worst-group risk is $\hat{\mathcal{R}}(\theta) := \max_{q \in \mathcal{Q}} \mathbb{E}_{g \sim q,(x,y) \sim p(x,y|g)}[l(x,y;\theta)]$. The uncertainty set $\mathcal{Q}$ contains the categorical group distributions that are $\alpha$-covered by the empirical distribution over groups $p_{train}$, i.e. $p_{train}(g) := \sum_{i \in \mathbb{D}} \mathbb{1}(g_i = g)/|\mathbb{D}|$ and $\mathcal{Q} = \{q : q(g) \leq p_{train}(g)/\alpha \ \forall g\}$. Effectively, this algorithm up-weights by $\frac{1}{\alpha}$ the losses of samples that belong to the $\alpha$-fraction of groups with the worst (highest) losses, where $\alpha \in (0,1)$. Details of this algorithm are presented in Appendix A.2. G-DRO is limited to scenarios where both spurious correlations and group memberships are known, which is often not the case.
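A simplified sketch of the greedy up-weighting step described above (the floor weight `w` and this exact parameterization are our own simplification; the actual algorithm of Oren et al. (2019) additionally tracks historical loss averages):

```python
def greedy_group_weights(group_losses, p_train, alpha, w=0.0):
    """One weighting step of the online greedy scheme: the groups covering
    the top alpha-fraction of probability mass, taken in order of
    decreasing loss, get weight 1/alpha; the rest get a floor weight w."""
    order = sorted(range(len(group_losses)), key=lambda g: -group_losses[g])
    q, mass = [w] * len(group_losses), 0.0
    for g in order:
        if mass >= alpha:
            break
        q[g] = 1.0 / alpha
        mass += p_train[g]
    return q

# Four equally sized groups, alpha = 0.5: the two worst groups are up-weighted.
q = greedy_group_weights([0.9, 0.1, 0.7, 0.2], [0.25] * 4, alpha=0.5)
print(q)  # → [2.0, 0.0, 2.0, 0.0]
```

The returned weights multiply the per-group losses before the gradient step, so the update direction is dominated by the currently worst-performing groups.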
32
+
33
+ Algorithm 1: Online greedy algorithm for AGRO
34
+
35
+ ```
Data: α; m: number of groups; minimum w and maximum W weights on group loss;
      EMA: exponential moving average
for r = 0, ..., R do                          # alternating game rounds
  Initialize historical average group losses L̂(0); historical group probabilities p̂train(0)
  for t = 1, ..., T1 do                       # primary round: update task model θ
    for each group g in 1..m:
      L̂(t)(g)      ← EMA( Σ_i P(t)_{i,g} · l(x_i, y_i; θ(t-1)), L̂(t-1)(g) )
      p̂train(t)(g) ← EMA( Σ_i P(t)_{i,g} / |B|, p̂train(t-1) )
    Sort p̂train(t) in order of decreasing L̂(t); top α-fraction of groups in A:
      q(t)(g) = 1/α if g ∈ A else w
    Update model parameters θ:
      θ(t) = θ(t-1) − η ∇_θ ( Σ_g q(t)(g) Σ_i P(t)_{i,g} · l(x_i, y_i; θ(t-1)) )
  end
  for t = 1, ..., T2 do                       # adversary round: update grouper φ
    for each group g in 1..m:
      P(t-1)_{i,g} = p(g | f_i; φ(t-1))
      L(t)(g)      ← Σ_i P(t-1)_{i,g} · l(x_i, y_i; θ(t))
      ptrain(t)(g) ← Σ_i P(t-1)_{i,g} / |B|
    Sort ptrain(t) in order of decreasing L(t); top α-fraction of groups in A:
      q(t)(g) = α if g ∈ A else W
    Update grouper parameters φ:
      φ(t) = φ(t-1) + η ∇_φ ( Σ_g q(t)(g) Σ_i P(t-1)_{i,g} · l(x_i, y_i; θ(t)) )
  end
end
+ ```
38
+
39
+ **Error slice discovery** DOMINO (Eyuboglu et al., 2022) shows that systematic mistakes made by models due to reliance on spurious correlations can be exposed by clustering the model's representations $X$, predictions $\hat{Y}$, and reference labels $Y$. DOMINO learns an error-aware mixture model over $\{X,Y,\hat{Y}\}$ via expectation maximization, finding m error clusters over the evaluation set. Such a clustering can potentially be used for group assignment, since the examples in a cluster are coherent (i.e. united by a human-understandable concept and model prediction) and suggestive of a specific feature that the model exploits for its predictions. However, overparameterized neural models often perfectly fit the training data, resulting in zero errors (Zhang et al., 2021), i.e. $Y = \hat{Y}$.
40
+
41
+ AGRO combines insights from group distributionally robust optimization and error slice discovery, introducing a novel end-to-end framework to accomplish both objectives. We formalize the problem of group discovery for robustness, where g is an un-observed latent variable to be inferred during the training process. We replace discrete group memberships $\mathbbm{1}(g_i=g)$ ; $g\in\mathcal{G}$ with a soft-group assignment, i.e. a probability distribution over groups $P_{i,g}:=p(g|f_i;\phi)$ ; $g\in\mathcal{G}$ . $P_{i,g}$ refers to the probability that the i-th example belongs to the g-th group and is realized by a neural network q with parameters $\phi$ . This enables modeling co-occurring spurious features and overlapping groups. The input to the grouper model q is a high-dimensional feature vector $f_i\in\mathcal{F}$ that can potentially encode the presence of spurious correlations in example i (described in section 3.3).
42
+
43
+ AGRO jointly trains the parameters of the robust task model, $\theta$, and the grouper model, $\phi$, by setting up a zero-sum game between the robust model and the grouper model. The optimization occurs over R alternating game rounds: the **primary round** updates the task model $\theta$ and the **adversary round** updates the grouper parameters $\phi$. Parameters that are not optimized in a given round are frozen. In the primary round, the optimizer seeks to minimize the worst-group loss (exactly the G-DRO objective), while in the adversary round, it seeks to find a group assignment that maximizes the worst-group loss. In doing so, the adversary finds a group assignment that is maximally informative for the primary, since it produces a loss landscape over groups that is highly uneven. This forces the primary model, in the next round, to aggressively optimize the worst empirical loss to even out the landscape. With $p_{train}(g) := \sum_{i \in \mathbb{D}} P_{i,g}/|\mathbb{D}|$ and the uncertainty set $\mathcal{Q}$ defined as before, the AGRO optimization objective is:
44
+
45
+ $$\hat{\theta}, \hat{\phi}_{AGRO} := \mathrm{argmin}_{\theta \in \Theta} \{ \mathrm{argmax}_{\phi \in \Phi} \{ \hat{\mathcal{R}}(\theta) := \max_{q \in \mathcal{Q}(\phi)} \mathbb{E}_{g \sim q, (x,y) \sim p(x,y|q)} [l(x,y;\theta)] \} \}, \qquad (1)$$
46
+
47
+ **Primary Round** In round r, the primary classifier finds the best parameters $\theta$ that minimize the worst-group loss based on the current dynamic group assignments provided by $\phi$ in round r-1. Updates to $\theta$ are similar to the online greedy updates used by Oren et al. (2019) i.e. up-weight the loss of $\alpha$ fraction of groups with the highest loss, then minimize this weighted loss.
48
+
49
+ **Adversary Round** In round r+1, the adversarial grouper updates $\phi$ and learns a soft assignment of groups that *maximizes* the loss of the worst group (highest-loss group). In practice, we adopt the converse of the greedy updates made in the primary round, i.e. down-weight the loss of the $\alpha$ fraction of groups with the highest loss, and then maximize this weighted loss.
50
+
51
+ For stable optimization, we iterate over $T_1$ minibatches of training data to update $\theta$ in the $r^{\text{th}}$ round and over $T_2$ minibatches to update $\phi$ in the $(r+1)^{\text{th}}$ round. Algorithm 1 presents the pseudo-code for AGRO. Implementation details, along with hyper-parameter values for $T_1, T_2, \alpha, m$ and R, are described in Appendix A.2. In the first primary round, we start with a random group assignment (i.e. random initialization for $\phi$), which amounts to training an ERM model. $\theta$ is initialized with a pretrained transformer-based encoder and an MLP classifier. In the first adversary round, we adopt a different initialization for $\phi$, which is explained in Section 4. The grouper model $\phi$ takes as input a feature vector $f_i \in \mathcal{F}$ for each training example $x_i$. Next, we describe our choice of $\mathcal{F}$.
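The zero-sum structure comes from both players acting on the same weighted group objective, the primary minimizing it over $\theta$ and the adversary maximizing it over $\phi$. A toy sketch of that shared objective (the helper name and the tiny numbers are illustrative only):

```python
def weighted_group_objective(per_example_losses, P, q):
    """Sum_g q[g] * sum_i P[i][g] * l_i, where P[i][g] is the grouper's soft
    assignment of example i to group g and q[g] the per-group weight."""
    m = len(q)
    group_loss = [sum(P[i][g] * per_example_losses[i] for i in range(len(P)))
                  for g in range(m)]
    return sum(q[g] * group_loss[g] for g in range(m))

# Two examples, two groups, with only group 0 up-weighted (q = [2, 0]).
# Concentrating the high-loss example's soft mass in group 0 raises the
# objective, which is what the adversarial grouper is rewarded for.
losses = [1.0, 0.1]
P_spread = [[0.5, 0.5], [0.5, 0.5]]
P_sharp = [[1.0, 0.0], [0.0, 1.0]]
q = [2.0, 0.0]
print(round(weighted_group_objective(losses, P_spread, q), 6))  # → 1.1
print(round(weighted_group_objective(losses, P_sharp, q), 6))   # → 2.0
```

The sharp assignment produces the uneven loss landscape described above, giving the primary model a harder worst group to minimize in the next round.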
52
+
53
+ Most prior work uses some knowledge about the task (Wu et al., 2022; Gururangan et al., 2018), the data collection process (Poliak et al., 2018), or metadata (Koh et al., 2021) for group discovery. Instead, we develop an unsupervised, end-to-end group discovery method that does not rely on any task-specific knowledge. We do so using general-purpose features $f_i \in \mathcal{F}$ that can potentially indicate the presence or absence of spurious correlations.
54
+
55
+ **ERM features and errors** We use features from weaker ERM models (i.e., under-trained, smaller-capacity ones), which have also been used for automatic group discovery in prior work (Creager et al., 2021; Sohoni et al., 2020). DOMINO (Eyuboglu et al., 2022) additionally clusters model representations *and* errors to generate coherent error slices on held-out data. However, overparameterized models often perfectly fit the training data. To estimate model errors on training data, we apply K-fold cross-validation to get model predictions on held-out training instances in every fold. Specifically, for a training example $x_i$ assigned to the k-th fold's held-out set, we compute the following features: the model's fine-tuned representations $h(x_i)$, the reference label $y_i$, and the vector of prediction probabilities over labels $\{p(\hat{y}_i|x_i;\theta_k)\ \forall \hat{y}_i \in \mathcal{Y}\}$, where $\theta_k$ is the fold-specific ERM classifier.
56
+
57
+ **Pretrained features** Recent work has shown that large transformer-based models trained on web-scale data learn general features for images that improve performance on out-of-distribution data (Wortsman et al., 2022a;b). Pretrained models are less likely to encode dataset-specific features in their representations. Following this observation, we also include the pretrained representations $g(x_i)$ of the input $x_i$. In sum, the group discovery features $f_i$ for a training example $x_i$ are: the representations $g(x_i)$ and $h(x_i)$, the label $y_i$, and the probabilities $\{p(\hat{y}_i|x_i;\theta_k)\ \forall \hat{y}_i\in\mathcal{Y}\}$.
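A hedged sketch of assembling the grouper input $f_i$ as a flat vector (the concrete layout, the one-hot label encoding, and the helper name are our assumptions; the paper only specifies which components are included):

```python
def group_features(g_repr, h_repr, label, probs):
    """Concatenate the pretrained representation g(x_i), the fine-tuned
    representation h(x_i), a one-hot reference label, and the out-of-fold
    prediction probabilities p(y_hat | x_i; theta_k) into one vector f_i."""
    onehot = [1.0 if c == label else 0.0 for c in range(len(probs))]
    return list(g_repr) + list(h_repr) + onehot + list(probs)

# Toy 2-dim representations and a 2-class task; label 1 with a confident
# wrong prediction (class 0) yields a feature vector that flags the error.
f = group_features([0.1, 0.2], [0.3, 0.4], label=1, probs=[0.8, 0.2])
print(f)  # → [0.1, 0.2, 0.3, 0.4, 0.0, 1.0, 0.8, 0.2]
```

Because the label and prediction components disagree in such a vector, a grouper network can learn to cluster systematically mispredicted examples together.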
2303.09914/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2303.09914/paper_text/intro_method.md ADDED
@@ -0,0 +1,79 @@
1
+ # Introduction
2
+
3
+ Face recognition (FR) has been widely used in identity authentication because of its convenience. However, face recognition-based authentication systems are threatened by face spoofing attacks [@ramachandra2017presentation; @kong2022digital; @yu2022deep]. To protect FR systems from spoofing attacks, Face Anti-Spoofing (FAS) techniques are deployed to detect spoofing faces and reject malicious attempts. Although recent FAS methods based on deep learning achieve excellent accuracy in intra-domain testing, the performance of existing methods heavily relies on the diversity of the training data and degrades severely if there are domain shifts between the training and testing domains [@li2018unsupervised]. The cross-domain problem has hence become the most challenging issue in state-of-the-art FAS research.
4
+
5
+ <figure id="fig:my_label" data-latex-placement="t">
6
+ <embed src="DCL_FAS-Configuration.pdf" style="width:47.0%" />
7
+ <figcaption>The rehearsal-free DCL consists of a sequence of learning sessions. The FAS model is initially trained on a large-scale base domain and then continually adapts to new domains in the following continual sessions. For each continual session, only a small amount of new-domain data is available for training, and previous data is NOT available. After the DCL, the model is tested on all previous domains and on extra unseen domains.</figcaption>
8
+ </figure>
9
+
10
+ To tackle the cross-domain problem, domain generalization [@MADDG-CVPR-2019; @SSDG-CVPR-2020] and adaptation [@li2018unsupervised; @huang2022adaptive] techniques for FAS have been extensively studied in recent years. Domain generalization-based methods aim to develop a generalized FAS model with training data from multiple source domains. Despite improving generalization to some extent, their performance in unseen domains is still far from satisfactory. Besides, domain adaptation-based methods utilize target domain data for model adaptation. Although the target domain performance can be significantly improved, the benefit comes at the cost of expensive target data collection. Moreover, it is even impractical to collect sufficient data at a single point in time, since domain shifts are caused by constantly changing factors such as illumination and attack types.
11
+
12
+ In real-world scenarios, the deployed FAS systems constantly encounter new data from various domains. The new data will be collected and become available for model training gradually. Completely retraining a model from scratch with old and new data has both efficiency and privacy issues. Although fine-tuning the base model with only the new data is more efficient, past knowledge will be overwritten after fine-tuning, and the performance on previous data decreases dramatically, *i.e.*, catastrophic forgetting [@perez2020learning]. To adapt models efficiently, continual learning methods for FAS have been proposed in recent works [@perez2020learning; @iccv2021_dcl_fas]. To alleviate catastrophic forgetting, both methods [@perez2020learning; @iccv2021_dcl_fas] utilize replay buffers to store previous data for rehearsal while fine-tuning with new data. However, the use of replay buffers incurs extra storage burdens. Even worse, previous data is not always available for storage and transfer, since face data contains identity information.
13
+
14
+ In this work, we tackle the FAS problem under the rehearsal-free Domain Continual Learning (DCL) setting. Unlike existing work [@iccv2021_dcl_fas], where the FAS model is expected to learn sequentially from data of novel attack types, the aim of our work is to enable the FAS model to evolve continually with data from constantly varying domains. Due to efficiency and privacy issues, previous data is not allowed to be stored or accessed for rehearsal, and only a few (low-shot) new samples are available for continual learning, which differs from previous works [@perez2020learning; @iccv2021_dcl_fas]. [We first evaluate the baseline method under the DCL setting and make the following observations from experiments: catastrophic forgetting usually occurs when the newly arriving data has large domain gaps from previous data, and a model with better unseen-domain generalization usually forgets less previous-domain knowledge. Motivated by the above observations, we propose to address the DCL-FAS problem from the perspective of generalization.]{style="color: black"}
15
+
16
+ [During continual sessions, the small amount of data could lead to overfitting, bringing poor generalization performance and catastrophic forgetting. To update models continually and efficiently, we introduce the Efficient Parameter Transfer Learning (EPTL) paradigm for DCL-FAS and utilize Adapters [@houlsby2019parameter; @huang2022adaptive] for the Vision Transformer (ViT) [@dosovitskiy2020image]. By using adapters [@houlsby2019parameter], ViT models can be efficiently adapted even with low-shot training data. However, we find that vanilla adapters consisting of linear layers cannot satisfy the need for extracting fine-grained features for the FAS task. Hence, we replace the vanilla Linear Adapter with our proposed Dynamic Central Difference Convolutional Adapter (DCDCA), which empowers ViT with an image-specific inductive bias from convolution and extracts fine-grained features with adaptive central difference information [@CDCN-CVPR-2020]. Unlike [@CDCN-CVPR-2020], where the ratio of central difference information is fixed for all layers, the ratio in our designed DCDCA is self-adaptive to new data domains, which is more suitable in the DCL setting. Besides, to further improve generalization performance, we optimize the DCDCA with contrastive regularization, and reduce forgetting during optimization with our proposed Proxy Prototype Contrastive Regularization (PPCR). Without access to previous data, our PPCR utilizes previous knowledge through the class centroids of previous tasks, which are approximated by the weights of the fully-connected layers.]{style="color: black"}
17
+
18
+ Our contributions include: ***1)*** We formulate and tackle the FAS problem in a more practical scenario: low-shot and rehearsal-free Domain Continual Learning (DCL). In each continual learning session, only a few new samples are available for training and no previous data is accessible; ***2)*** We design the Dynamic Central Difference Convolutional Adapter (DCDCA) to efficiently adapt ViT-based models in continual domains and capture intrinsic live/spoof cues; ***3)*** We propose the Proxy Prototype Contrastive Regularization (PPCR) to further improve the generalization and alleviate the forgetting of FAS models during rehearsal-free DCL; ***4)*** We design two practical protocols to evaluate both the anti-forgetting and generalization capacities of FAS models under DCL settings, with up to 15 public datasets covering both 2D and 3D attacks. We find that the proposed DCDCA and PPCR can significantly improve generalization while forgetting less than baselines on these two DCL protocols.
19
+
20
+ ![The architecture of the proposed ViT-DCDCA. The Dynamic Central Difference Convolutional Adapter (DCDCA) is able to extract the fine-grained central difference information for intrinsic live/spoof representation. Only the DCDCA and the MLP head are updated during training. 'MHSA' and 'MLP' denote multi-head self-attention and multi-layer perceptron, respectively.](DCDCA.pdf){#fig:DCDCA width="88%"}
21
+
22
+ # Method
23
+
24
+ Given the observation from experiments that a model that generalizes well can usually have less catastrophic forgetting (see Sec. [\[sec:Analysis1\]](#sec:Analysis1){reference-type="ref" reference="sec:Analysis1"} and  [4.3](#sec:Analysis2){reference-type="ref" reference="sec:Analysis2"}), to achieve the goals of DCL-FAS: generalize more and forget less, we propose Dynamic Central Difference Convolutional Adapter (DCDCA), which adapts ViT with dynamic central difference information during the continual learning.
25
+
26
+ **Fine-tuning ViT with adapter.** ViT [@dosovitskiy2020image] consists of a stack of transformer blocks, and each block comprises a Multi-Head Self Attention (MHSA) layer and Multilayer Perceptron (MLP) layers to extract features. Ignoring the skip connections and normalization layers, the inference procedure can be expressed as $$\begin{equation}
27
+ \vspace{-0.4em}
28
+ out = \text{MLP}_\mathcal{W}(\text{MHSA}_\mathcal{W}(x)),
29
+ \end{equation}$$ where $x$ is the input token, $\mathcal{W}$ represents the parameters of the transformer, and $out$ is the output token. Although ViT has a strong feature representation capability, a large number of parameters must be updated when fine-tuning the ViT for a downstream task, which usually requires a large amount of data and training time. Recent studies of parameter-efficient transfer learning (PETL) on transformers [@huang2022adaptive; @vpt; @convpass] show that inserting adapter layers is an efficient way to fine-tune ViT. Such a PETL paradigm is named the ViT-Adapter. A vanilla ViT-Adapter usually has extra linear layers $\mathcal{A}$ inserted into the transformer layers. As such, the inference of a ViT-Adapter is expressed as $$\begin{equation}
30
+ \vspace{-0.4em}
31
+ out = \mathcal{A}(\text{MLP}_\mathcal{W}(\mathcal{A}(\text{MHSA}_\mathcal{W}(x)))).
32
+ \end{equation}$$ When using a ViT model for a new downstream task, the parameters of the pretrained ViT backbone ($\mathcal{W}$) are fixed, and only the parameters of the inserted adapter layers ($\mathcal{A}$) are updated. As $\mathcal{A}$ accounts for only a small fraction of the parameters of the entire ViT, PETL requires only a small amount of training data and is well suited to DCL-FAS, where only limited new-domain data is available in the continual sessions.
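As a concrete picture of the adapter computation: a vanilla adapter $\mathcal{A}$ is typically a residual bottleneck (down-project, nonlinearity, up-project). The sketch below is a minimal numpy illustration under assumed ViT-Base dimensions, not the paper's implementation:

```python
import numpy as np

class LinearAdapter:
    """Residual bottleneck adapter: x + relu(x @ W_down) @ W_up.
    Only these two small matrices are trained; the ViT backbone stays frozen."""
    def __init__(self, dim=768, bottleneck=64, seed=0):
        rng = np.random.default_rng(seed)
        self.W_down = rng.normal(0.0, 0.02, (dim, bottleneck))
        self.W_up = np.zeros((bottleneck, dim))   # zero init => identity at start

    def __call__(self, x):                        # x: (num_tokens, dim)
        h = np.maximum(x @ self.W_down, 0.0)      # down-projection + ReLU
        return x + h @ self.W_up                  # residual connection

tokens = np.ones((197, 768))                      # a ViT-Base-sized token sequence
adapter = LinearAdapter()
out = adapter(tokens)
# ~98K trainable parameters per adapter vs ~86M in a frozen ViT-Base backbone
```

With `W_up` zero-initialized, the adapter starts as the identity mapping, so inserting it does not perturb the pretrained network before training.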
33
+
34
+ **Fine-tuning with DCDCA.** Inspired by central difference convolution (CDC) [@CDCN-CVPR-2020; @NASFAS-TPAMI-2020] that extracts more robust feature representation for FAS by integrating local descriptors with convolution operation, we propose the Dynamic Central Difference Convolution Adapter (DCDCA) to introduce the locality inductive bias for ViT and extract fine-grained information with CDC [@CDCN-CVPR-2020]. As illustrated in Fig. [2](#fig:DCDCA){reference-type="ref" reference="fig:DCDCA"}, the DCDCA is embedded in the ViT backbone as a residual bottleneck connection [@convpass; @ResNet]. During the continual learning, only the DCDCA and the classification MLP head are updated, while the other pretrained layers are fixed.
35
+
36
+ Specifically, 2D convolutional layers inside the DCDCA are utilized to provide the locality inductive bias. To fit the convolution operation, the 1D flattened image token from the ViT backbone is reshaped back to a 2D structure for processing. Then, the reshaped 2D token is forwarded to a stack of convolutional layers for feature extraction. To extract features for subtle live/spoof discrimination, we use CDC to extract fine-grained contextual information from neighboring visual tokens. The output $y(p_0)$ is defined as $$\begin{equation}
37
+ \label{eq-cdc}\small
38
+ \vspace{-0.4em}
39
+ \begin{aligned}
40
+ y(p_0) =& \theta \underbrace{\sum_{p_n\in \mathcal{R}}\omega(p_n)\cdot (x(p_0+p_n) - x(p_0))}_{\text{central difference convolution}} + \\
41
+ & \underbrace{(1 - \theta) \sum_{p_n\in \mathcal{R}} \omega(p_n)\cdot x(p_0+p_n)}_{\text{vanilla convolution}},
42
+ \end{aligned}
43
+ \end{equation}$$ where $\omega$ is the convolutional kernel, $p_0$ is the center token of a 2D token map, and $\mathcal{R}$ denotes the neighbor tokens around the token $p_0$. $\theta$ is the ratio of central difference information, which is empirically set as $0.7$ for all layers in [@CDCN-CVPR-2020]. However, using a uniform, fixed $\theta$ in all CDC layers is sub-optimal for DCL-FAS from two perspectives. First, the proportion of central difference information in features should be layer-specific, because the semantic information and granularity of features differ among hierarchical layers. Second, in the continual learning scenario, the data domains change dynamically, and thus the contribution of central difference cues should be dynamically adapted as well. Therefore, we parameterize the $\theta$ of DCDCA as learnable variables that are self-adaptive to different layers and continual learning sessions.
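Eq. [eq-cdc] can be rearranged into vanilla convolution minus a $\theta$-weighted central term, $y(p_0)=\sum_{p_n}\omega(p_n)x(p_0+p_n)-\theta\, x(p_0)\sum_{p_n}\omega(p_n)$, which is how CDC is commonly implemented. A small single-channel numpy sketch (our own illustration, not the paper's code):

```python
import numpy as np

def cdc2d(x, w, theta):
    """Central difference convolution (Eq. eq-cdc): 3x3 kernel, valid padding.
    theta=0 recovers vanilla convolution; theta=1 is pure central difference."""
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            patch = x[i:i + 3, j:j + 3]
            vanilla = (w * patch).sum()                        # sum w(p_n) x(p_0+p_n)
            central = (w * (patch - x[i + 1, j + 1])).sum()    # sum w(p_n)(x(p_0+p_n)-x(p_0))
            out[i, j] = theta * central + (1.0 - theta) * vanilla
    return out

rng = np.random.default_rng(0)
x, w = rng.normal(size=(8, 8)), rng.normal(size=(3, 3))
y = cdc2d(x, w, theta=0.7)
# decomposed form: vanilla convolution minus theta * x(p_0) * sum(w)
y_decomposed = cdc2d(x, w, theta=0.0) - 0.7 * x[1:-1, 1:-1] * w.sum()
assert np.allclose(y, y_decomposed)
```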
44
+
45
+ To increase the generalization capability, we propose to treat Eq. [\[eq-cdc\]](#eq-cdc){reference-type="ref" reference="eq-cdc"} as a type of feature transformation [@FWT], which transforms the vanilla convolution feature with a scaling factor $\Theta$ sampled from a learnable Gaussian distribution $\text{N}(\mu, \sigma^2)$. Since sampling would block gradient backpropagation, we utilize the reparameterization trick of the Gaussian distribution, namely $$\begin{equation}
46
+ \vspace{-0.4em}
47
+ \begin{aligned}
48
+ \Theta\sim \text{N}(\mu, \sigma^2) \iff \Theta=\mu+\sigma\cdot\epsilon, \epsilon\sim \text{N}(0, 1)
49
+ \end{aligned}.
50
+ \end{equation}$$ During training, $\mu$ and $\sigma$ are updated to sample $\Theta$, and we use $\theta=\text{Sigmoid}(\Theta)$ in Eq. [\[eq-cdc\]](#eq-cdc){reference-type="ref" reference="eq-cdc"}, where $\text{Sigmoid}$ constrains the output to $[0,1]$. During testing, the randomness is removed and $\Theta=\mu$. We also compare our domain-aware dynamic $\theta$ estimation with other learnable $\theta$ strategies in the *Appendix*.
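A minimal sketch of the reparameterized $\theta$ sampling (the values of $\mu$ and $\sigma$ here are illustrative; in the model they would be learnable per-layer parameters):

```python
import numpy as np

def sample_theta(mu, sigma, training, rng=None):
    """theta = Sigmoid(Theta), with Theta = mu + sigma * eps and eps ~ N(0, 1).
    Gradients can flow through mu and sigma; at test time Theta = mu."""
    Theta = mu + sigma * rng.standard_normal() if training else mu
    return 1.0 / (1.0 + np.exp(-Theta))           # constrain theta to (0, 1)

rng = np.random.default_rng(0)
train_thetas = [sample_theta(0.8, 0.1, True, rng) for _ in range(1000)]
test_theta = sample_theta(0.8, 0.1, False)        # deterministic at test time
```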
51
+
52
+ ![Illustration of Proxy Prototype Contrastive Regularization (PPCR). The top shows that, when learning on a new domain without prototypes, the new features might shift away from the previous prototypes, and previous knowledge is forgotten. The bottom shows that our PPCR regularizes the new features to cluster near the previous prototypes, so that less previous knowledge is forgotten.](PPCR.pdf){#fig:ppcr width="90%"}
53
+
54
+ To learn more generalized models, the supervised contrastive loss [@SupCon] is adapted for network optimization. Considering that the distributions of real face samples are more similar than those of spoofing ones [@SSDG-CVPR-2020], all real face samples are regarded as one cluster, while spoofing face samples are divided into 2D attacks and 3D mask attacks. Therefore, the loss for optimization is expressed as $$\begin{equation}
55
+ \label{eq-supcon}\small
56
+ \vspace{-0.2em}
57
+ \begin{aligned}
58
+ \mathcal{L}_{Con} &=\mathcal{L}_{Con}^{\mathcal{C}^1}+ \mathcal{L}_{Con}^{\mathcal{C}^{2}} + \mathcal{L}_{Con}^{\mathcal{C}^{3}}, \\
59
+ \mathcal{L}_{Con}^{\mathcal{C}^1} &= \sum_{i\in\mathcal{C}^1} \frac{-1}{|\mathcal{C}^1|}\sum_{j\in \mathcal{C}^1, j\ne i}\log \frac{\text{exp}(z_i\cdot z_j)}{\sum_{a\in\mathcal{C}^{2}\cup\mathcal{C}^{3}} \text{exp}(z_i\cdot z_a)}, \\
60
+ \mathcal{L}_{Con}^{\mathcal{C}^{2}} &= \sum_{i\in\mathcal{C}^{2}} \frac{-1}{|\mathcal{C}^{2}|}\sum_{j\in \mathcal{C}^{2}, j\ne i}\log \frac{\text{exp}(z_i \cdot z_j)}{\sum_{a\in\mathcal{C}^1\cup\mathcal{C}^{3}} \text{exp}(z_i \cdot z_a)}, \\
61
+ \mathcal{L}_{Con}^{\mathcal{C}^{3}} &= \sum_{i\in\mathcal{C}^{3}} \frac{-1}{|\mathcal{C}^{3}|}\sum_{j\in \mathcal{C}^{3}, j\ne i}\log \frac{\text{exp}(z_i \cdot z_j)}{\sum_{a\in\mathcal{C}^1\cup\mathcal{C}^{2}} \text{exp}(z_i \cdot z_a)}
62
+ \end{aligned}
63
+ \end{equation}$$ where $\mathcal{C}^1$, $\mathcal{C}^{2}$, and $\mathcal{C}^{3}$ denote the sets of sample indices of real face, 2D attack, and 3D attack examples respectively, $|\mathcal{C}^k|$ denotes the number of samples in $\mathcal{C}^k$, and $z_i$ denotes the feature of sample $i$ from the last transformer layer.
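A direct numpy transcription of one cluster term $\mathcal{L}_{Con}^{\mathcal{C}^k}$, with the denominator taken over the other two clusters (the toy unit-normalized features and the temperature-free dot-product similarity are our simplifications):

```python
import numpy as np

def cluster_con_loss(Z, idx_pos, idx_neg):
    """One cluster term of Eq. eq-supcon: pull same-cluster features together,
    push them away from features of the other two clusters."""
    loss = 0.0
    for i in idx_pos:
        for j in idx_pos:
            if j == i:
                continue
            denom = np.exp(Z[idx_neg] @ Z[i]).sum()
            loss += -np.log(np.exp(Z[i] @ Z[j]) / denom) / len(idx_pos)
    return loss

rng = np.random.default_rng(0)
Z = rng.normal(size=(12, 8))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)            # unit-normalized features
real, att2d, att3d = list(range(4)), list(range(4, 8)), list(range(8, 12))
L_con = (cluster_con_loss(Z, real, att2d + att3d)
         + cluster_con_loss(Z, att2d, real + att3d)
         + cluster_con_loss(Z, att3d, real + att2d))
```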
64
+
65
+ After the features of the same class are aligned and clustered, the clusters' centroids are set as prototypes. When continually learning with a new data domain, the FAS model will forget the previous knowledge if the features of new-domain data are far away from the old prototypes, as illustrated in the upper part of Fig. [3](#fig:ppcr){reference-type="ref" reference="fig:ppcr"}. Recent research on source-free model transfer [@liang2020we] shows that model weights can provide knowledge of the source training data, and that the linear classifier $\mathbb{f}$ of a model is equivalent to the prototype in supervised contrastive learning [@eccv2022-sourcefree-liu]. Therefore, we propose the Proxy Prototype Contrastive Regularization (PPCR) to reduce forgetting during continual learning without accessing previous data. We set the proxy prototypes $\mathbb{f}$ as the anchors in contrastive training and regularize clustering with the previous prototypes, so that less previous knowledge is forgotten, as illustrated in the bottom part of Fig. [3](#fig:ppcr){reference-type="ref" reference="fig:ppcr"}. We define the linear classifier weight as $\mathbb{f}=\{f^1, f^2, f^3\}$, where $f^1$, $f^2$, and $f^3$ are the weights and the proxy prototypes of the classes of real face, 2D attack, and 3D attack, respectively. Then, we define the final loss for optimization as $$\begin{equation}
66
+ \label{eq-overall}
67
+ \mathcal{L} = \mathcal{L}_{CE} + \lambda \mathcal{L}_{Con},
68
+ \end{equation}$$ where $\mathcal{L}_{CE}$ is the cross-entropy loss, and $\lambda$ is a constant scaling factor to balance two terms. Finally, the overall algorithm for the DCL-FAS with our proposed PPCR is described in Algorithm [\[algo\]](#algo){reference-type="ref" reference="algo"}.
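The proxy-prototype idea can be illustrated in a few lines: the rows of the (frozen) previous linear classifier $\mathbb{f}$ act as class anchors for the new-session features, so no stored previous data is needed. Everything below is a toy sketch of our own, not the paper's code:

```python
import numpy as np

def ppcr_loss(Z, labels, prototypes):
    """Contrastive regularization with proxy prototypes as anchors: each
    new-domain feature is pulled toward its class prototype f^c and pushed
    away from the other prototypes -- no stored previous data is needed."""
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = Z @ P.T                                   # similarity to f^1, f^2, f^3
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(Z)), labels].mean()

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(3, 8))                   # rows of the frozen classifier
labels = np.array([0, 0, 1, 2])
Z = prototypes[labels] + 0.1 * rng.normal(size=(4, 8)) # features near their class
Z /= np.linalg.norm(Z, axis=1, keepdims=True)
reg = ppcr_loss(Z, labels, prototypes)
# the overall objective would combine this with cross-entropy: L = L_CE + lambda * L_Con
```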
69
+
72
+
73
+ ::: algorithm
74
+ An ImageNet pretrained ViT backbone $\mathcal{W}=\{\mathcal{W}_b, \mathbb{f}\}$;\
75
+ Insert DCDCA modules $\mathcal{A}$ to the backbone;\
76
+ Train the network on the base dataset, and only $\mathcal{A}$ and $\mathbb{f}$ are updated;\
77
+
78
+ Output: the optimized $\mathcal{A}$ and $\mathbb{f}$
79
+ :::
2402.02429/paper_text/intro_method.md ADDED
@@ -0,0 +1,231 @@
1
+ # Introduction
2
+
3
+ The ability to swiftly learn and generalize to new tasks is a hallmark of human intelligence. In pursuit of this high-level artificial intelligence (AI), the paradigm of meta-reinforcement learning (RL) proposes to train AI agents in a trial-and-error manner by interacting with multiple external environments. In order to quickly adapt to the unknown, the agents need to integrate prior knowledge with minimal experience (namely the context) collected from the new tasks or environments, without over-fitting to the new data. This meta-RL mechanism has been adopted in many applications such as games, robotics, and drug discovery.
4
+
5
+ However, for data collection, classical meta-RL usually requires enormous online explorations of the environments, which is impractical in many safety-critical scenarios such as healthcare, autonomous driving, and robotic manipulation. As a remedy, offline RL enables agents to learn from logged experience only, thereby circumventing risky or costly online interactions.
6
+
7
+ Recently, offline meta-RL (OMRL) has emerged as a novel paradigm to significantly extend the applicability of RL by "killing two birds with one stone": it builds powerful agents that can quickly learn and adapt via meta-learning, while leveraging the offline RL mechanism to ensure a secure and efficient optimization procedure. In the context of classical supervised or self-supervised learning, which is de facto offline, OMRL is reminiscent of the multi-task learning, meta-training, and fine-tuning of pre-trained large models. We envision it as a cornerstone of future RL foundation models.
8
+
9
+ Along the line of OMRL research, context-based offline meta-reinforcement learning (COMRL) is a popular paradigm that seeks an optimal meta-policy conditioned on the context of Markov Decision Processes (MDPs).
10
+
11
+ Intuitively, the crux of COMRL lies in learning effective task representations, hence enabling the agent to react optimally and adaptively in various contexts. To this end, one of the earliest COMRL algorithms, FOCAL, proposes to capture the structure of task representations by distance metric learning. From a geometric perspective, it essentially performs clustering by repelling latent embeddings of different tasks while pulling together those from the same task, therefore ensuring consistent and distinguishable task representations.
14
+
15
+ Despite its effectiveness, FOCAL is reported to be vulnerable to context shifts, i.e., when testing on out-of-distribution (OOD) data (\cref{fig:dataset_visual}). Such problems are particularly challenging for OMRL, since any context shift incurred at test time cannot be rectified in the fully offline setting, which may result in severely degraded generalization performance. To alleviate the problem, follow-up work such as CORRO reformulates the task representation learning of COMRL as maximizing the mutual information $I(\bm{Z}; \bm{M})$ between the task variable $\bm{M}$ and its latent $\bm{Z}$. It then approximates $I(\bm{Z}; \bm{M})$ by an InfoNCE contrastive loss, where the positive and negative pairs are conditioned on the same state-action tuples $(\bm{s}, \bm{a})$.
20
+
21
+ Inspired by CORRO, a recently proposed method, CSRO, introduces an additional mutual information term between $\bm{Z}$ and $(\bm{s}, \bm{a})$. By explicitly minimizing it along with the FOCAL objective, CSRO is demonstrated to achieve state-of-the-art (SOTA) generalization performance on various MuJoCo benchmarks.
24
+
25
+ **Contributions.** In this paper, following the recent development and storyline of COMRL, we present a Unified Information Theoretic Framework of Context-Based Offline Meta-Reinforcement Learning (UNICORN) that encompasses pre-existing methods. We first prove that the objectives of FOCAL, CORRO, and CSRO operate as an upper bound of $I(\bm{Z}; \bm{M})$, a lower bound of it, and their linear interpolation, respectively, which provides a nontrivial theoretical unification of these methods.
26
+
27
+ Second, by the aforementioned insight and an analysis of the COMRL causal structures, we shed light on how CORRO and CSRO improve context-shift robustness compared to their predecessors by trading off causal and spurious correlations between $\bm{Z}$ and input data $\bm{X}$.
28
+
29
+ Lastly, by examining eight related meta-RL methods (\cref{table:algos}) concerning their objectives and implementations, we highlight the potential design choices of novel algorithms offered by our framework. As examples, we investigate two instantiated algorithms, one supervised and the other self-supervised, and demonstrate experimentally that they achieve competitive in-distribution and exceptional OOD generalization performance on a wide range of RL domains, OOD settings, data qualities and model architectures. Our framework provides a principled roadmap to novel COMRL algorithms by seeking better approximations/regularizations of $I(\bm{Z}; \bm{M})$, as well as new implementations to further combat context shift.
30
+
31
+ # Method
32
+
33
+ [ht]
34
+ \vskip 0.2in
35
+ \vspace{-\baselineskip}
36
+
37
+ \centerline{\includegraphics[width=\textwidth, trim={.33cm 0cm .33cm 0cm},clip]{Figures/dataset_visual.pdf}}
38
+ \vspace{-\baselineskip}
39
+ \caption{Context shift of COMRL in Ant-Dir. Left: Given a task $M^i$ specified by a goal direction (dashed line), the RL agent is trained on data generated by a variety of behavior policies trained on the same task $M^i$ (red). At test time, however, the context might be collected by behavior policies trained on different tasks $\{M^j\}$ (blue), causing a context shift of OOD behavior policies (\cref{sec:ood_experiments}). Middle: Against OOD context, UNICORN (red) is more robust than baselines such as FOCAL (green) in terms of navigating the Ant robot towards the right direction. Right: Besides behavior policy, the task distribution (e.g., goal positions in Ant) can induce significant context shift (\cref{sec:task_ood}), which is also a challenging scenario for COMRL models to generalize.}
40
+
41
+ \vskip -0.2in
42
+ \vspace{-\baselineskip}
43
+
44
+ We start with a formal definition of task representation learning in COMRL:
45
+
46
+ [Task Representation Learning]
47
+ Given an input context variable $\bm{X}\in\mathcal{X}$ and its associated task/MDP random variable $\bm{M}\in\mathcal{M}$, task representation learning in COMRL aims to find a sufficient statistic $\bm{Z}$ of $\bm{X}$ with respect to $\bm{M}$.
48
+
49
+ In pure statistical terms, Definition implies that an ideal representation $\bm{Z}$ is a mapping of $\bm{X}$ that captures the mutual information $I(\bm{X}; \bm{M})$. We therefore define the following dependency structures in terms of directed graphical models:
50
+
51
+ [Causal Decomposition]
52
+ The dependency graphs of COMRL are given by \cref{fig:DAG}, where $\bm{X}_b$ and $\bm{X}_t$ are the behavior-related $(\bm{s}, \bm{a})$-component and task-related $(\bm{s}', r)$-component of the context $\bm{X}$, with $\bm{X} = (\bm{X}_t, \bm{X}_b)$.
53
+
54
+ [15]{r}{0.5\linewidth}
55
+ \centering
56
+ \vspace{-\baselineskip}
57
+ \includegraphics[width=.5\columnwidth, trim={3cm 17.5cm 3cm 1.8cm},clip]{Figures/DGM2.drawio.pdf}
58
+ \vspace{-\baselineskip}
59
+ \caption{Graphical Models of COMRL.}
60
+
61
+ For the first graph, $\bm{M}\rightarrow\bm{X}\rightarrow\bm{Z}$ forms a Markov chain, which satisfies $I(\bm{Z}; \bm{M}|\bm{X})=0$. To interpret the second graph, given an MDP $M\sim\bm{M}$, the state-action component of $\bm{X}$ is primarily captured by the behavior policy $\pi_\beta$: $\bm{s}\sim\mu_{\pi_\beta}(\bm{s}), \bm{a}\sim\pi_\beta$. The only exception is when tasks differ in initial state distribution $\rho_0$ or transition dynamics $T$, in which case the state variable $\bm{S}$ also depends on $\bm{M}$. We therefore define it as the behavior-related component, which should be weakly causally related (dashed lines) to $\bm{M}$ and $\bm{Z}$. Moreover, when $\bm{X}_b$ is given, $\bm{X}_t$ is fully characterized by the transition function $T: (\bm{s}, \bm{a})\rightarrow \bm{s}'$ and reward function $R: (\bm{s}, \bm{a})\rightarrow r$ of $M$, which should be strongly causally related (solid lines) to $\bm{M}$ and $\bm{Z}$ and therefore be defined as the task-related component.
62
+
63
+ Mathematically, we find that rewriting $\bm{X} = (\bm{X}_t, \bm{X}_b)$ induces a causal decomposition of the mutual information $I(\bm{Z}; \bm{X})$ by the chain rule:
64
+
65
+ \vspace*{-\baselineskip}
66
+
67
+ I(\bm{Z}; \bm{X}) = I(\bm{Z}; \bm{X}_t|\bm{X}_b) + I(\bm{Z}; \bm{X}_b).
68
+
69
+ We thereby name $I(\bm{Z}; \bm{X}_t|\bm{X}_b)$ and $I(\bm{Z}; \bm{X}_b)$ the primary and lesser causality in our problem respectively. With the setup above, we present the central theorem of this paper:
70
+
71
+ [Central Theorem]
72
+ Let $\equiv$ denote equality up to a constant, then
73
+
74
+ \underbrace{I(\bm{Z}; \bm{X}_t|\bm{X}_b)}_{\textup{primary causality}} \quad \le \quad I(\bm{Z}; \bm{M}) \quad \le
75
+ \quad I(\bm{Z}; \bm{X}_t|\bm{X}_b) + I(\bm{Z}; \bm{X}_b)=
76
+ \underbrace{I(\bm{Z}; \bm{X})}_{\textup{primary + lesser causality}}
77
+
78
+ holds up to a constant, where
79
+
80
+ - $\mathcal{L}_{\textup{FOCAL}} \equiv -I(\bm{Z}; \bm{X})$.
81
+ - $\mathcal{L}_{\textup{CORRO}} \equiv -I(\bm{Z}; \bm{X}_t|\bm{X}_b)$.
82
+ - $\mathcal{L}_{\textup{CSRO}} \ge -\left((1-\lambda)I(\bm{Z}; \bm{X}) + \lambda I(\bm{Z}; \bm{X}_t|\bm{X}_b)\right)$.
83
+
84
+ \vspace*{-\baselineskip}
85
+
86
+ See Appendix .
87
+ \vspace{-8pt}
88
+
89
+ \cref{thm:central} reveals several key observations. Firstly, the FOCAL and CORRO objectives operate as an upper and a lower bound of $I(\bm{Z}; \bm{M})$ respectively. Since one would like to maximize $I(\bm{Z}; \bm{M})$
90
+ according to \cref{def:task_representation_learning}, CORRO, which maximizes the lower bound $I(\bm{Z}; \bm{X}_t|\bm{X}_b)$, can effectively optimize $I(\bm{Z}; \bm{M})$ with theoretical assurance. However, FOCAL, which maximizes the upper bound $I(\bm{Z}; \bm{X}_t, \bm{X}_b)$, provides no guarantee for $I(\bm{Z}; \bm{M})$. By Eq. , maximizing the FOCAL objective may instead significantly elevate the lesser causality $I(\bm{Z}; \bm{X}_b)$, which is undesirable since it contains the spurious correlation between the task representation $\bm{Z}$ and the behavior policy $\pi_\beta$.
91
+
92
+ This explains why FOCAL is less robust to context shift compared to CORRO.
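The chain-rule decomposition behind these bounds, $I(\bm{Z}; \bm{X}) = I(\bm{Z}; \bm{X}_t|\bm{X}_b) + I(\bm{Z}; \bm{X}_b)$, can be checked numerically on a small discrete joint distribution (a self-contained sanity check of ours, not part of the paper):

```python
import numpy as np

def H(p):
    """Shannon entropy of a probability array (natural log)."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# random joint p(z, x_t, x_b) over small discrete alphabets
rng = np.random.default_rng(0)
p = rng.random((3, 4, 2))
p /= p.sum()

p_z, p_xb = p.sum(axis=(1, 2)), p.sum(axis=(0, 1))
p_xt_xb, p_z_xb = p.sum(axis=0), p.sum(axis=1)

I_z_x = H(p_z) + H(p_xt_xb.ravel()) - H(p.ravel())        # I(Z; X), X = (X_t, X_b)
I_z_xb = H(p_z) + H(p_xb) - H(p_z_xb.ravel())             # lesser causality I(Z; X_b)
I_z_xt_given_xb = (H(p_z_xb.ravel()) + H(p_xt_xb.ravel())  # primary causality
                   - H(p_xb) - H(p.ravel()))               # I(Z; X_t | X_b)

lhs, rhs = I_z_x, I_z_xt_given_xb + I_z_xb                 # chain rule: lhs == rhs
```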
93
+
94
+ Secondly, CSRO, as the latest COMRL algorithm among the three, inherently optimizes a linear combination of the FOCAL and CORRO objectives. In the $0\le\lambda\le1$ regime, the CSRO objective becomes a convex interpolation of the upper bound $I(\bm{Z}; \bm{X})$ and the lower bound $I(\bm{Z}; \bm{X}_t|\bm{X}_b)$ of $I(\bm{Z}; \bm{M})$, which, in essence, enforces a trade-off between the causal ($\bm{Z}$ with $T$, $\rho_0$) and spurious ($\bm{Z}$ with $\pi_{\beta}$) correlation contained in $I(\bm{Z}; \bm{X}_b)$. This accounts for the improved performance of CSRO compared to FOCAL and CORRO.
95
+
96
+ By providing a unified view of pre-existing COMRL algorithms, \cref{thm:central} opens up avenues for novel algorithmic implementations by seeking alternative approximations of the true objective $I(\bm{Z}; \bm{M})$. To demonstrate the impact of our proposed UNICORN framework, we discuss two instantiations as follows:
97
+
98
+ **Supervised UNICORN.** $I(\bm{Z}; \bm{M})$ can be re-expressed as
99
+
100
+ \vspace*{-\baselineskip}
101
+
102
+ I(\bm{Z}; \bm{M}) &= H(\bm{M}) - H(\bm{M}|\bm{Z}) \equiv - H(\bm{M}|\bm{Z})\nonumber \\
103
+ &= \mathbb{E}_{\bm{z}}\mathbb{E}_{M\sim p(M|\bm{z})}\left[\log p(M|\bm{z})\right] = -\mathbb{E}_{\bm{z}}\left[H(\bm{M}|\bm{Z}=\bm{z})\right].
104
+
105
+ \vspace*{-\baselineskip}
106
+
107
+ where $H(\cdot)$ is entropy. Since in practice, each $\bm{z}^i$ of sample $\bm{x}^i$ is collected within a specific task $M^i$, minimizing the parameterized entropy $H_{\bm{\theta}}(\bm{M}|\bm{Z}=\bm{z}^i)$ is equivalent to finding an optimal function $p_{\bm{\theta}}(M|\bm{z})$ which correctly assigns the ground-truth label $M^i$ to $\bm{z}^i$, i.e., optimizing $p_{\bm{\theta}}(M|\bm{z})$ towards a delta function $\delta(M-M^i)$ for continuous $\bm{M}$ or an indicator function $\mathbbm{1}(M=M^i)$ for discrete $\bm{M}$. This implies that
108
+
109
+ \vspace*{-\baselineskip}
110
+
111
+ \underset{\theta}{\arg\min} \,H_{\bm{\theta}}(\bm{M}|\bm{Z}=\bm{z}^i) = \underset{\theta}{\arg\max} \log p_{\bm{\theta}}(M^i|\bm{z}^i).
112
+
113
+ \vspace*{-\baselineskip}
114
+
115
+ Suppose a total of $n_M$ training tasks $\{M^i\}_{i=1}^{n_M}$ are drawn from the task distribution $p(M)$ with the task label $M$ given for meta-training. Under this supervised scenario, by substituting Eq. into , we have
116
+
117
+ \vspace*{-\baselineskip}
118
+
119
+ \underset{\theta}{\arg\max}\,I(\bm{Z}; \bm{M}) &= \underset{\theta}{\arg\max}\,\mathbb{E}_{\bm{z}}\mathbb{E}_{M}\left[\delta(M-M^i)\log p_{\bm{\theta}}(M^i|\bm{z})\right]\nonumber\\
120
+ &\simeq \underset{\theta}{\arg\max}\,\mathbb{E}_{\bm{z}}\left[\sum_{i=1}^{n_M}\mathbbm{1}(M^i=M)\log p_{\bm{\theta}}(M^i|\bm{z})\right],
121
+
122
+ \vspace*{-\baselineskip}
123
+
124
+ which is precisely the negative cross-entropy loss $H(\bm{M}, P(\bm{M}|\bm{X}))$ for $n_M$-way classification with feature $\bm{z}$ and classifier $p_{\bm{\theta}}$. We therefore define the objective of supervised UNICORN as
125
+
126
+ \vspace*{-\baselineskip}
127
+
128
+ \mathcal{L}_{\textup{UNICORN-SUP}} &\coloneqq H(\bm{M}, P(\bm{M}|\bm{X}))\nonumber \\
129
+ &= -\mathbb{E}_{\bm{x}, \bm{z}\sim q_{\bm{\phi}}(\bm{z}|\bm{x})}\left[\sum_{i=1}^{n_M}\mathbbm{1}(M^i=M)\log p_{\bm{\theta}}(M^i|\bm{z})\right].
130
+
131
+ \vspace*{-\baselineskip}
132
+
133
+ Note that $\mathcal{L}_{\textup{UNICORN-SUP}}$ is convex and operates as a finite-sample approximation of $-I(\bm{Z}; \bm{M})$, for which we derive the following bound:
+
+ [Concentration bound for supervised UNICORN]
+ Denote by $\hat{I}(\bm{Z}; \bm{M})$ the empirical estimate of $I(\bm{Z}; \bm{M})$ from $n_M$ tasks and by $\bar{I}(\bm{Z}; \bm{M})$ its expectation; then with probability at least $1-\delta$,
+
+ \vspace*{-\baselineskip}
+
+ \left|\hat{I}(\bm{Z}; \bm{M})-\bar{I}(\bm{Z}; \bm{M})\right| \le \sqrt{\frac{\textup{Var}(H(\bm{Z}|\bm{M}))}{n_M\delta}}.
+
+ \vspace*{-\baselineskip}
+
+ See Appendix .
+ \vspace{-8pt}
+
+ [17]{r}{0.5\linewidth}
+ \vspace*{-\baselineskip}
+ \vspace*{-\baselineskip}
+ \centering
+ \includegraphics[width=.5\textwidth, trim={0.5cm 10.5cm 8cm 0cm},clip]{Figures/UNICORN.drawio-v2.pdf}
+ \vspace{-\baselineskip}
+ \vspace{-5pt}
+ \caption{Meta-learning procedure of UNICORN-SS. The supervised variant UNICORN-SUP simply replaces the decoder with a classifier $p_{\bm{\theta}}(M|\bm{z})$ and optimizes a cross-entropy loss instead of $\mathcal{L}_{\textup{recon}}$ and $\mathcal{L}_{\textup{FOCAL}}$.}
+
+ Theorem bounds the finite-sample estimation error of the empirical risk $\hat{I}(\bm{Z}; \bm{M})$ with $n_M$ task instances drawn from the real task distribution $p(\bm{M})$. The supervised UNICORN has the merit of directly estimating and optimizing the real objective $I(\bm{Z}; \bm{M})$, which requires explicit knowledge of the task label $M^i$ and, according to Theorem , a substantial number of task instances. For a better trade-off between computation and performance, we choose to sample 20 training tasks for all RL environments in our experiments.
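As a minimal PyTorch sketch of the supervised objective (the names `encoder`, `classifier`, and `task_ids` are illustrative, and the architectures of $q_{\bm{\phi}}$ and $p_{\bm{\theta}}$ are left unspecified), $\mathcal{L}_{\textup{UNICORN-SUP}}$ reduces to an $n_M$-way cross-entropy over task labels:

```python
import torch
import torch.nn.functional as F

def unicorn_sup_loss(encoder, classifier, x, task_ids):
    """Finite-sample surrogate for -I(Z; M): n_M-way cross-entropy on
    task labels, with z produced by the context encoder q_phi(z|x)."""
    z = encoder(x)            # task representation z ~ q_phi(z|x)
    logits = classifier(z)    # unnormalized log p_theta(M|z)
    return F.cross_entropy(logits, task_ids)
```

Any differentiable encoder/classifier pair can be plugged in; the loss is the standard classification cross-entropy averaged over the context batch.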
+
+ Self-supervised UNICORN\, In practice, offline RL datasets may often be collected with limited knowledge of the task specifications or labels. In this scenario, previous works employ self-supervised learning to obtain an effective representation $\bm{Z}$, such as the contrastive-based FOCAL/CORRO, which optimize $I(\bm{Z}; \bm{X})$/$I(\bm{Z}; \bm{X}_t|\bm{X}_b)$ respectively; or generative approaches like VariBAD /BOReL , which reconstruct the trajectories $\bm{X}$ by variational inference, equivalent to maximizing $I(\bm{X}_t;\bm{Z},\bm{X}_b)$. By Theorem , these methods optimize a relatively loose upper/lower bound of $I(\bm{Z}; \bm{M})$, which can be improved by a convex combination of the two bounds:
+
+ \vspace*{-\baselineskip}
+
+ I(\bm{Z}; \bm{M}) \approx \alpha I(\bm{Z}; \bm{X}) + (1-\alpha) I(\bm{Z}; \bm{X}_t|\bm{X}_b),
+
+ \vspace*{-\baselineskip}
+
+ where $0\le\alpha\le 1$ is a hyperparameter. Implementing each term in Eq. admits ample design choices, such as the contrastive losses in \cref{eqn:FOCAL,eqn:CORRO,eqn:club} or autoregressive generation via Decision Transformer or RNN . For demonstration, in this paper we employ a contrastive objective $\mathcal{L}_{\textup{FOCAL}}$ as in \cref{eqn:FOCAL} for estimating $I(\bm{Z}; \bm{X})$, while approximating $I(\bm{Z}; \bm{X}_t|\bm{X}_b)$ by reconstruction. By the chain rule:
+
+ \vspace{-\baselineskip}
+
+ I(\bm{Z}; \bm{X}_t|\bm{X}_b) &= I(\bm{X}_t; \bm{Z}, \bm{X}_b) - I(\bm{X}_t; \bm{X}_b)\nonumber\\
+ &\equiv I(\bm{X}_t; \bm{Z}, \bm{X}_b),
+
+ \vspace{-\baselineskip}
+
+ since $I(\bm{X}_t; \bm{X}_b)$ is a constant when $\bm{X}_t$ and $\bm{X}_b$ are drawn from a fixed distribution, as in offline RL. Moreover, by the definition of mutual information:
+
+ \vspace{-\baselineskip}
+
+ I(\bm{X_t};\bm{Z},\bm{X_b}) &= \mathbb{E}_{\bm{x_t}, \bm{x_b}, \bm{z}}\left[\log \frac{p(\bm{x_t}|\bm{z}, \bm{x_b})}{p(\bm{x_t})}\right]\nonumber\\
+ &\equiv \mathbb{E}_{\bm{x_t}, \bm{x_b}, \bm{z}}\left[\log p(\bm{x_t}|\bm{z}, \bm{x_b})\right]\nonumber\\
+ &\ge \mathbb{E}_{\bm{x_t,x_b}, \bm{z}\sim q_{\bm{\phi}}(\bm{z}|\bm{x_t}, \bm{x_b})}\left[\log p_{\bm{\theta}}(\bm{x_t}|\bm{z}, \bm{x_b})\right],
+
+ \vspace{-\baselineskip}
+
+ which induces a generative objective $\mathcal{L}_{\textup{recon}}\coloneqq -I(\bm{X}_t; \bm{Z}, \bm{X}_b)$ that reconstructs $\bm{X}_t$ with a decoder network $p_{\bm{\theta}}(\cdot|\bm{z},\bm{x_b})$ conditioned on $\bm{Z}$ and $\bm{X_b}$. As a result, the proposed self-supervised UNICORN objective can be rescaled as Eq. :
+
+ \vspace*{-\baselineskip}
+
+ \mathcal{L}_{\textup{UNICORN-SS}} \coloneqq \mathcal{L}_{\textup{recon}} + \frac{\alpha}{1-\alpha} \mathcal{L}_{\textup{FOCAL}}.
+
+ \vspace*{-\baselineskip}
+
+ The influence of the hyper-parameter $\frac{\alpha}{1-\alpha}$ is shown in Appendix .
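A minimal sketch of how the two terms combine, assuming transition contexts $(s, a, r, s')$, a Gaussian decoder (so reconstruction reduces to MSE) and a contrastive loss computed elsewhere; all names and shapes are illustrative, not from the paper:

```python
import torch
import torch.nn.functional as F

def unicorn_ss_loss(z, s, a, r, s_next, decoder, focal_loss, alpha=0.5):
    """L_recon + alpha/(1-alpha) * L_FOCAL.
    The decoder plays the role of p_theta(x_t | z, x_b), predicting
    (s', r) from (z, s, a); MSE assumes a Gaussian decoder."""
    pred = decoder(torch.cat([z, s, a], dim=-1))
    target = torch.cat([s_next, r], dim=-1)
    recon = F.mse_loss(pred, target)          # estimates -I(X_t; Z, X_b)
    return recon + (alpha / (1.0 - alpha)) * focal_loss
```

Setting `alpha` closer to 1 weights the contrastive term more heavily, mirroring the convex combination of the two bounds above.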
+
+ We illustrate our learning procedure in \cref{fig:UNICORN} with pseudo-code in \cref{alg:Framwork1,alg:Framwork2}. A holistic comparison of our proposed algorithms with related contextual meta-RL methods is shown in \cref{table:algos}. The extra KL divergence in methods like VariBAD and PEARL can be interpreted as the result of a variational approximation to an information bottleneck that constrains the mutual information between $\bm{Z}$ and $\bm{X}$, which we found unnecessary in our offline setting (see ablation in \cref{tab:label-KL}).
+ Behavior regularized actor critic is employed to tackle the bootstrapping error in the downstream offline RL implementation.
+
+ [tb!]
+ \centering
+
+ \caption{Comparison between UNICORN instantiations and related existing contextual meta-RL methods. For clarity, ``Representation Learning Objective'' only lists the loss functions of $\bm{Z}$ that are independent of the downstream RL tasks. Note that $I(\bm{Z}; \bm{X}_t|\bm{X}_b)\equiv I(\bm{X}_t;\bm{Z},\bm{X}_b)$ holds only for offline RL.}
+ \vskip 0.04in
+
+ \setlength{\tabcolsep}{4pt}
+ {max width=\textwidth}
+ {l|c|c|c|c}
+ \toprule
+ Method & Setting & Representation Learning Objective & Implementation & Context $\bm{X}$ \\ \midrule\midrule
+
+ UNICORN-SUP & Offline & $I(\bm{Z}; \bm{M}) $ & Predictive & Transition \\
+
+ UNICORN-SS & Offline & $\alpha I(\bm{Z}; \bm{X}) + (1-\alpha)I(\bm{X}_t; \bm{Z}, \bm{X}_b) $ & Contrastive+Generative & Transition \\
+
+ FOCAL & Offline & $I(\bm{Z}; \bm{X})$ & Contrastive & Transition\\
+ CORRO & Offline & $I(\bm{Z}; \bm{X}_t|\bm{X}_b)$ & Contrastive & Transition\\
+ CSRO & Offline & $(1-\lambda)I(\bm{Z}; \bm{X}) + \lambda I(\bm{Z}; \bm{X}_t|\bm{X}_b)$ & Contrastive & Transition\\
+ GENTLE & Offline & $I(\bm{X}_t; \bm{Z}, \bm{X}_b)$ & Generative & Transition\\
+ BOReL & Offline & $I(\bm{X}_t; \bm{Z}, \bm{X}_b) - D_{\textup{KL}}(q_{\phi}(\bm{Z}|\bm{X})||p_{\theta}(\bm{Z}))$ & Generative & Trajectory\\
+ \midrule
+ VariBAD & Online & $I(\bm{X}_t; \bm{Z}, \bm{X}_b) - D_{\textup{KL}}(q_{\phi}(\bm{Z}|\bm{X})||p_{\theta}(\bm{Z}))$ & Generative & Trajectory\\
+ PEARL & Online & $- D_{\textup{KL}}(q_{\phi}(\bm{Z}|\bm{X})||p_{\theta}(\bm{Z}))$ & N/A & Transition\\
+ \midrule
+ ContraBAR & Offline\&Online & $I(\bm{Z}; \bm{X}_t|\bm{A})$ & Contrastive & Trajectory\\
+ \bottomrule
+
+ \vspace*{-\baselineskip}
+
+ \vspace*{-\baselineskip}
+
+ \vspace*{-\baselineskip}
2405.08969/main_diagram/main_diagram.pdf ADDED
Binary file (87.5 kB). View file
 
2406.11820/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2406.11820/paper_text/intro_method.md ADDED
@@ -0,0 +1,126 @@
+ # Introduction
+
+ Image-text matching is a fundamental computer vision problem that aims to measure the semantic correspondence between an image and a text. Such correspondence can be used for image retrieval given a text description, or text retrieval given an image query, both of which are important in various computer vision applications (*e.g.*, weakly supervised problems [18, 19]). The problem is inherently challenging due to the ambiguous nature of the image and text modalities [6, 46]. For example, an image can depict a complicated situation that a multitude of different captions can describe, whereas a single caption is too abstract and can semantically apply to multiple images. Various approaches have been proposed and can be categorized into two main directions: (1) the unimodal dual encoder and (2) the cross-attention approach.
+
+ <span id="page-0-0"></span>![](_page_0_Figure_7.jpeg)
+
+ Figure 1. **Illustration of CORA.** CORA has a dual-encoder architecture, consisting of one encoder that embeds the input image and one encoder that embeds the text caption scene graph into a joint embedding space. (Best viewed in color and zoomed in.)
+
+ In the dual-encoder framework, two modality-independent encoders embed the image and text caption separately into a joint embedding space. In this space, a similarity function such as a dot product can measure the image-text similarity. This strategy is also referred to as the global alignment approach, as the goal is to holistically represent an image (or text) as a single embedding. Due to their simplicity and low computational cost (*e.g.*, retrieving an image given a text query can be done via a vector-matrix multiplication with the cached embeddings), such methods are more widely adopted for real-world retrieval databases.
+
+ The second approach, the cross-attention network, constitutes the majority of recent work. Instead of embedding each modality separately, cross-modality attention is adopted to locally align fine-grained visual cues of an image (image regions) with textual cues of a caption (word tokens), from which the overall correspondence score is aggregated. While this approach is more powerful than the dual encoder, it presents a substantial computational challenge: upon receiving a text (or image) query, every image vs. text query pair must be processed through the cross-attention model to determine their similarity scores. This requirement renders the method impractical for retrieval systems managing large databases due to its extensive computational demands. <span id="page-1-0"></span>This work focuses on the dual-encoder approach and shows that our dual-encoder proposal even outperforms the SOTA cross-attention networks.
+
+ Existing approaches use a text sequence model (*e.g.*, GRU [\[7\]](#page-8-3), LSTM [\[15\]](#page-8-4)) to encode the text caption. A text usually contains an extensive range of semantic information, such as object categories, attributes of objects, and relations between objects. Attributes describe the appearance of objects [\[22,](#page-8-5) [36,](#page-9-1) [38,](#page-9-2) [39,](#page-9-3) [44\]](#page-9-4), while relations describe how objects interact with one another [\[56\]](#page-9-5). Forcing a text sequence model to learn to parse a caption into different levels of semantics is challenging, especially in the low data regime. For example, by design, a sequence model that simply processes a caption from left to right (GRU, LSTM) may find it challenging to determine which attributes belong to an object and which objects participate in a relation. Numerous works have shown that Transformer-based text sequence models (BERT [\[8\]](#page-8-6)) can produce good structural parsing of a sentence [\[14\]](#page-8-7); however, these models must be trained on large amounts of data. Nevertheless, it has been shown in [\[3\]](#page-8-8) that even the CLIP text encoder [\[42\]](#page-9-6) in Stable Diffusion [\[40,](#page-9-7) [43\]](#page-9-8) still exhibits incorrect object-attribute binding (*i.e.*, pairing an attribute with the wrong object in the sentence) despite having been trained on large datasets. Therefore, it becomes desirable to have a text embedding model that can capture the semantic relations between concepts accurately.
+
+ In this work, instead of a sequence model, we propose representing a caption as a scene graph of object and attribute nodes connected by relation edges. An example of a scene graph is illustrated in Fig. [1,](#page-0-0) where we show that semantic structures such as object-attribute and object-object pairings are already organized. To this end, we propose our Composition model for Object Relations and Attributes, CORA, a dual-encoder model for image-text matching. On the image side, we re-use GPO [\[4\]](#page-8-9), a SOTA pooling operator for image-text matching, to embed the image as a vector. On the text side, we propose to use a graph attention network [\[2,](#page-8-10) [48\]](#page-9-9) with strong relational inductive bias to produce a holistic scene graph embedding for the caption. Scene graph-based approaches have been previously explored in [\[25,](#page-8-11) [28,](#page-8-12) [30,](#page-8-13) [51\]](#page-9-10) for image-text matching, but they all employ expensive cross-attention. In addition to the margin-based triplet ranking loss [\[10\]](#page-8-14) adopted by prior work, we propose a contrastive loss to guide CORA in making alignment at both the holistic image-caption level and the local image-object entity level. The proposed loss helps make training more stable, results in better downstream retrieval accuracy, and additionally equips CORA with image-object entity retrieval capability.
+
+ Our model is evaluated on two image-text retrieval benchmarks, Flickr30K and MS-COCO, where it outperforms SOTA dual-encoder and expensive cross-attention methods. Our paper makes the following contributions:
+
+ - We propose CORA, a dual encoder for image-text matching that uses a graph attention network instead of a sequence model to produce a scene graph embedding for a caption.
+ - We propose a contrastive loss that trains the model to make global alignment (image-caption) and local alignment (image-object entity), resulting in more stable training, better retrieval accuracy, and image-object retrieval capability.
+ - Our model CORA achieves SOTA retrieval performance on Flickr30K and MS-COCO, two prominent benchmarks for image-text retrieval.
+
+ # Method
+
+ This section describes our Composition model for Object Relations and Attributes. We first describe the overall framework in Sec. 3.1, then present in Sec. 3.2 how we perform visual embedding on the input image, and how we parse the text caption into a scene graph and extract text features for each node in the graph. In Sec. 3.3, we describe how we embed this scene graph into the joint embedding space with the image using the graph attention network. Finally, training objectives are detailed in Sec. 3.4.
+
+ We begin by describing the overall framework of CORA, which is illustrated in Fig. 2. The model consists of two encoders: a visual encoder $f^{\mathcal{V}}$ that takes in an input image $\mathbf{x}$ and produces the image embedding vector $v = f^{\mathcal{V}}(\mathbf{x}) \in \mathbb{R}^D$ ; and a text encoder $f^{\mathcal{T}}$ that takes in the text caption $\mathbf{y}$ and produces its embedding $t = f^{\mathcal{T}}(\mathbf{y}) \in \mathbb{R}^D$ in the joint D-dimensional embedding space. Instead of embedding the text caption directly, we first parse it into a scene graph using a parser $\phi^{\text{SG}}$ , then apply a graph attention network $f^{\mathcal{G}}$ to embed this scene graph. Our text embedding formulation therefore can be rewritten as $t = f^{\mathcal{G}}(\phi^{\text{SG}}(\mathbf{y}))$ .
+
+ The similarity score between the image and the text caption is defined as the cosine similarity between their embeddings v and t:
+
+ $$sim(\mathbf{x}, \mathbf{y}) = \frac{v^{\mathrm{T}}t}{\|v\|\|t\|}.$$
+ (1)
+
+ The dual-encoder is efficient for image-text retrieval. In the context of image retrieval, all image embeddings can be computed and cached in advance. When a text query arrives, it only needs to be embedded with $f^{\mathcal{G}}(\phi^{\text{SG}}(.))$ , then a simple vector-matrix multiplication is sufficient to retrieve all nearest neighbor images of the query.
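The retrieval path above can be sketched as follows (NumPy, with illustrative names; cached embeddings are assumed to be L2-normalized offline so the matrix-vector product gives cosine similarities as in Eq. 1):

```python
import numpy as np

def retrieve_images(text_emb, cached_image_embs, k=5):
    """Rank cached image embeddings against one text query embedding.
    cached_image_embs: (n_images, D), rows L2-normalized; text_emb: (D,)."""
    scores = cached_image_embs @ text_emb   # one vector-matrix multiplication
    return np.argsort(-scores)[:k]          # indices of the top-k images
```

Only the query is embedded at search time; the image side is a cached matrix, which is what makes the dual encoder practical for large databases.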
+
+ **Visual feature extractor**. Given an input image x, we follow convention from prior work and use the pre-trained bottom-up detection model BUTD [1]. With this model, the top-36 most confident salient regions in x are detected, along with their visual features $\{x_k \in \mathbb{R}^{2048}\}_{k=1}^{N_{\mathcal{V}}}$, $N_{\mathcal{V}} = 36$. The detection model used here is a Faster R-CNN with a ResNet-101 backbone [13], pre-trained on Visual Genome [22]. We also transform the region features with an FC layer so that they have the same dimension as the joint embedding space: $x_k \in \mathbb{R}^D$. Furthermore, we apply multi-head self-attention to contextualize the region features against one another. Then, in order to aggregate this set into a holistic representation for the input image $v = f^{\mathcal{V}}(\mathbf{x}) \in \mathbb{R}^D$, we implement $f^{\mathcal{V}}$ using GPO [4], a SOTA pooling operator for image-text matching. Essentially, GPO learns to generate the best pooling coefficient for every visual region, which is better than naively applying mean pooling over the visual feature set.
+
+ **Scene graph parser**. Formally, we implement a textual scene graph parser that constructs a graph G = (V, E) given a text caption y, where $V = O \cup A$ denotes the set of object nodes O and attribute nodes A, and $E = E_{OA} \cup E_{OO}$ represents the set of object-attribute edges $E_{OA}$ and object-object relation edges $E_{OO}$ . An example of a scene graph is illustrated in Fig. 1. We implement a scene graph parser based on [45, 53], using the syntactical dependency parser from the spaCy library [16]. We develop rules to extract object nouns (e.g., construction worker), adjective and verb attributes (e.g., salmon-colored, sitting), verb relations (e.g., person-jump over-fence, dog-wear-costume), and preposition relations (e.g., flag-above-building). Existing scene graph parsers [45, 53] are built upon inferior language toolkits and thus often misdetect concepts (e.g., those consisting of multiple word tokens are not detected). The implementation of our parser is made publicly available.
+
+ <span id="page-3-3"></span><span id="page-3-1"></span>![](_page_3_Figure_0.jpeg)
+
+ Figure 2. Overview of CORA. a) CORA consists of (1) an image encoder that detects and extracts the salient regions' features from the input image, contextualizes them through a multi-head self-attention, then aggregates them into a single image embedding through the GPO [\[4\]](#page-8-9) pooling operator, (2) a text encoder that first parses the input text into a scene graph where all semantic information is readily organized, then two graph attention networks Object-Attribute GAT and Object-Object GAT are used to encode this graph into the same joint space with the image. The red arrow denotes the edge of the active role, while the yellow arrow is for the passive role in the relation (refer to Sec. [3.3.2\)](#page-4-1). b) The semantic concept encoder that uses GRU or BERT to encode each semantic concept in the graph corresponding to the object, attribute nodes and relation edges.
+
+ **Semantic concept encoder**. We denote the set of object nodes $O = \{o_i\}$ , attribute nodes $A = \{a_i\}$ , and object-object relation edges $E_{OO} = \{r_{ij}\}$ . These concepts are still in text format and need to be encoded into vector representations. As these concepts often consist of multiple word tokens (e.g., pair of shoes, jump over), we use a text sequence model as a phrase encoder to encode all semantic concepts. To demonstrate the generalizability of our method across different language features, we implement this semantic concept encoder using Bi-GRU [\[7\]](#page-8-3) and BERT [\[8\]](#page-8-6). For Bi-GRU, given an L-word semantic concept, we use the GloVe [\[37\]](#page-9-19) word embedding of each word to obtain a sequence of L 300-dimensional vectors. Next, we employ a Bi-GRU and take the final hidden states as the representation for the concept $c \in \mathbb{R}^{300}$. For BERT, we use the average of the output hidden states of all tokens at the last layer to represent the concept $c \in \mathbb{R}^{768}$. For both types of features, we then use an FC layer to transform the concept embedding to the same dimension D as the joint embedding space. These concept embeddings are used to initialize the node features for {oi} and {ai} and the edge features for {rij} in the scene graph.
+
+ After obtaining the graph structure from the parser and the initialized features for all nodes and edges in the graph, we continue to elaborate on our scene graph embedding method as follows. The core idea of our method is that the scene semantics should be composed at two levels in a bottom-up manner, where we use a separate graph attention network (GAT) [\[2,](#page-8-10) [48\]](#page-9-9) for each level. At the bottom level, a GAT models the relations between an object and its associated attributes. At the top level, another GAT is used to model the relations between solely the objects, compose them together and produce the final scene embedding.
+
+ **GAT Preliminaries**. GAT is among the most popular graph neural network methods, with SOTA results in graph representation learning. We follow the implementation of GATv2 [\[2\]](#page-8-10), which is an improved version of the original GAT [\[48\]](#page-9-9). We provide a brief description of GATv2 here. Consider a directed graph $G = (V, E)$ with nodes $V = \{1, ..., N\}$ and edges $E \subseteq V \times V$, where $(j, i) \in E$ denotes an edge from node $j$ to node $i$. Each node $i$ has an initial representation $h_i \in \mathbb{R}^d$. In a message passing step, to update the features of node $i$, we first compute the importance value of a neighbor node $j$ w.r.t. $i$ as follows:
+
+ $$e(h_i, h_j) = a^{\mathrm{T}} \mathrm{LeakyReLU}(W \cdot [h_i || h_j]),$$
+ (2)
+
+ where $\|$ denotes vector concatenation, $W \in \mathbb{R}^{d \times 2d}$, and $a \in \mathbb{R}^{d \times 1}$. Applying softmax then yields the normalized attention coefficients of all neighbors $j \in \mathcal{N}_i$: $\alpha_{i,j} = \mathrm{softmax}(e(h_i, h_j))$. Then, the new representation $h_i'$ for node $i$ is aggregated by
+
+ $$h_i' = \text{ReLU}(\sum_{j \in \mathcal{N}_i} \alpha_{i,j} W h_j).$$
+ (3)
+
+ Formally, the output of one GAT layer on a graph G is
+
+ $$\{h_i'\} = GAT(\{h_i\}, G). \tag{4}$$
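A minimal single-head sketch of such a layer over a dense adjacency matrix (the separate output projection in the aggregation step is an implementation choice of this sketch, not prescribed above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATv2Layer(nn.Module):
    """Single-head GATv2-style layer in the spirit of Eqs. (2)-(3)."""

    def __init__(self, d):
        super().__init__()
        self.W = nn.Linear(2 * d, d, bias=False)   # W in R^{d x 2d}
        self.a = nn.Linear(d, 1, bias=False)       # a in R^{d x 1}
        self.out = nn.Linear(d, d, bias=False)     # projection for aggregation

    def forward(self, h, adj):
        # h: (N, d) node features; adj[i, j] = 1 if there is an edge j -> i.
        N = h.size(0)
        hi = h.unsqueeze(1).expand(N, N, -1)       # hi[i, j] = h_i
        hj = h.unsqueeze(0).expand(N, N, -1)       # hj[i, j] = h_j
        e = self.a(F.leaky_relu(self.W(torch.cat([hi, hj], dim=-1)))).squeeze(-1)
        e = e.masked_fill(adj == 0, float("-inf")) # restrict to neighbors N_i
        alpha = torch.nan_to_num(torch.softmax(e, dim=-1))
        return F.relu(alpha @ self.out(h))         # aggregation as in Eq. (3)
```

Production implementations use sparse edge lists and multiple heads; the dense form above is only meant to make the attention computation explicit.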
+
+ At the bottom level, we care about how the semantic representation of an object is modified by its connected attributes in the graph. These attributes are modifiers that alter the visual appearance of the object. Because an attribute of <span id="page-4-2"></span>one object should in no way alter the appearance of another object, in this step we apply GAT only on the subgraph $G_{\mathrm{OA}} = (V, E_{\mathrm{OA}})$ consisting only of edges between the object and attribute nodes.
+
+ We denote $\{h_i\}_{i=1}^{|V|}, h_i \in \mathbb{R}^D$ as the initial representations for all nodes in the graph. These representations are initialized from the aforementioned semantic concept embedding step. We train a graph attention network, which we name $\mathrm{GAT}_{\mathrm{Obj-Att}}$, to perform message passing in graph $G_{\mathrm{OA}}$. The updated representation of all nodes is therefore
+
+ $$\{h_i'\} = \text{GAT}_{\text{Obj-Att}}(\{h_i\}, G_{\text{OA}}). \tag{5}$$
+
+ At the output, we are only interested in the updated representations of the set of object nodes. Since these objects have been composed with their corresponding attributes, we name them **entities** and denote them as $\{e_i\}_{i=1}^{|O|}$ , which will be used in one of our proposed losses.
+
+ At the top level, after acquiring the entity embeddings $\{e_i\}_{i=1}^{|O|}$ for all object nodes, we continue to apply another GAT, which we name $\text{GAT}_{\text{Obj-Obj}}$, on the subgraph $G_{\text{OO}} = (O, E_{\text{OO}})$ consisting of only object nodes and the edges between them. Because these object nodes are connected with object-object relation edges $\{r_{ij}\}$ , our first step before applying GAT is to contextualize the entity embeddings with their corresponding edges.
+
+ **Edge features**. Consider a directed relation edge $r_{ij}$ . In this relation, node i plays the subject (active) role while node j plays the object (passive) role. For example, in the relation man-hold-cup, man is the subject while cup is the object. To obtain the edge features for this relation, we concatenate its semantic encoding $r_{ij}$ with the embedding of the entity that plays the passive role $e_j$ as follows: $r'_{ij} = [r_{ij}||e_j]$ . While existing work [34] often concatenates $r_{ij}$ with both the subject and object entity, in our work we find that it is empirically better to characterize a relation with only the passive object entity. This is intuitively reasonable since the meaning of a relation such as hold-cup, use-computer does not depend on what kind of subject is involved.
+
+ **Edge-contextualized entity features**. Consider an object node i; we define $\operatorname{Active}(i) = \{j | r_{ij} \in E_{\text{OO}}\}$ consisting of all nodes that node i has a subject (active) relation with. Vice versa, we define $\operatorname{Passive}(i) = \{j | r_{ji} \in E_{\text{OO}}\}$ , the set of all nodes that node i has an object (passive) relation with. We contextualize the embedding of entity i with its edges as
+
+ $$e_i' = e_i + \frac{\sum_{j \in Active(i)} W_A r_{ij}'}{|Active(i)|} + \frac{\sum_{j \in Passive(i)} W_P r_{ji}'}{|Passive(i)|}, \tag{6}$$
+
+ where $W_A$ and $W_P$ are two learnable matrices mapping edge features to the same dimension as the entity embeddings.
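A sketch of this contextualization step (the dict-of-edges storage and function names are illustrative; `W_A`/`W_P` are linear maps as above):

```python
import torch

def contextualize(entities, rel_feats, W_A, W_P):
    """Sketch of Eq. (6). entities: (N, D); rel_feats: dict mapping a
    directed edge (i, j) to its feature r'_ij. Entities with no active
    (or passive) relations are left unchanged for that term."""
    out = entities.clone()
    for i in range(entities.size(0)):
        act = [W_A(r) for (s, t), r in rel_feats.items() if s == i]
        pas = [W_P(r) for (s, t), r in rel_feats.items() if t == i]
        if act:                                   # mean over Active(i)
            out[i] = out[i] + torch.stack(act).mean(dim=0)
        if pas:                                   # mean over Passive(i)
            out[i] = out[i] + torch.stack(pas).mean(dim=0)
    return out
```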
+
+ **Scene graph embedding**. With $\{e_i'\}_{i=1}^{|O|}$ as the initial representations for all object nodes, we train a $\text{GAT}_{\text{Obj-Obj}}$ on graph $G_{OO}$. The updated representation for all nodes is
+
+ $$\{\hat{e}_i\} = \text{GAT}_{\text{Obj-Obj}}(\{e_i'\}, G_{\text{OO}}). \tag{7}$$
+
+ In order to pool the whole graph into one single embedding vector, we also use GPO [4], similar to our visual feature extraction step. We take the output representation pooled from GPO as the scene embedding t to represent the original input text caption in the joint embedding space.
+
+ Let $B = \{(v_i, t_i, \{e_{ik}\}_{k=1}^{|O_i|})\}_{i=1}^N$ be the training batch of the output image embedding $v_i$ of the i-th image, the output text embedding $t_i$ of the i-th text caption from $\mathrm{GAT}_{\mathrm{Obj-Obj}}$ , and the set of output entity embeddings $\{e_{ik}\}_{k=1}^{|O_i|}$ of the i-th text caption from $\mathrm{GAT}_{\mathrm{Obj-Att}}$ . Recall that these entities $\{e_{ik}\}$ are embeddings of the object nodes in the scene graph of $t_i$ . We train our model CORA with the following losses. For brevity, we denote $s(v,t) = v^{\mathrm{T}}t/(\|v\|\|t\|)$ as the cosine similarity between v and t.
+
+ **Triplet loss with hardest negatives**. Following prior work in image-text retrieval [4, 10], we adopt the hinge-based triplet loss with hardest negative mining,
+
+ $$\mathcal{L}_{\text{HARD}} = \sum_{i} \max_{j} [\alpha + s(v_i, t_j) - s(v_i, t_i)]_{+}$$
+ (8)
+
+ $$+\max_{j} [\alpha + s(v_j, t_i) - s(v_i, t_i)]_{+}.$$
+ (9)
+
+ Essentially, for every matching image-caption pair $(v_i, t_i)$ in the training batch, this loss looks for the negative caption $t_j$ that is closest to $v_i$ , and the negative image $v_j$ that is closest to $t_i$ in the embedding space. $t_j$ and $v_j$ are the hardest negatives in the training batch and provide a strong discriminative learning signal to the model.
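The in-batch mining above can be sketched as follows (assuming L2-normalized embedding matrices `v` and `t`; names are illustrative):

```python
import torch

def hardest_triplet_loss(v, t, margin=0.2):
    """Hinge triplet loss with in-batch hardest negatives, in the spirit
    of Eqs. (8)-(9). v, t: (N, D) L2-normalized image/text embeddings."""
    s = v @ t.T                                   # cosine similarity matrix
    pos = s.diag()                                # s(v_i, t_i)
    mask = torch.eye(len(v), dtype=torch.bool)
    s_neg = s.masked_fill(mask, float("-inf"))    # exclude the positive pair
    loss_i2t = (margin + s_neg.max(dim=1).values - pos).clamp(min=0)
    loss_t2i = (margin + s_neg.max(dim=0).values - pos).clamp(min=0)
    return (loss_i2t + loss_t2i).sum()
```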
+
+ **Contrastive loss**. As observed by previous work [4], the hardest triplet loss above results in unstable learning during early training epochs. We find that applying a contrastive loss that encourages the model to align the output representations of all matching images, texts, and object entities results in more stable training and better final results. Because the entity embeddings $\{e_{ik}\}_{k=1}^{|O_i|}$ are also involved here, our model CORA is additionally trained to perform image retrieval given an object entity (e.g., searching images for straw hat). The loss is formulated as follows:
+
+ $$\mathcal{L}_{\text{CON}} = -\sum_{i} \sum_{u} \log \frac{\exp(s(v_i, u))}{\sum_{u' \in \mathcal{N}_i} \exp(s(v_i, u'))}$$
+ (10)
+
+ $$-\sum_{i}\sum_{u}\log\frac{\exp\left(s(v_{i},u)\right)}{\sum_{v'\in\mathcal{N}_{u}}\exp\left(s(v',u)\right)},\quad(11)$$
+
+ <span id="page-5-3"></span>where $u \in \{t_i\} \cup \{e_{ik}\}_{k=1}^{|O_i|}$ is the semantic embedding of either the text or an object entity corresponding to image i, $\mathcal{N}_i$ is the negative set of semantic concepts that do not correspond to image i, and similarly $\mathcal{N}_u$ is the negative set of images that do not contain semantic concept u.
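A standard InfoNCE-style sketch in the spirit of the two directions above; unlike Eqs. (10)-(11), the denominators here also include the positive pair, and the `owner` indexing (which image each concept belongs to) is an illustrative assumption:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(v, u, owner):
    """Two-direction InfoNCE sketch. v: (N, D) normalized image embeddings;
    u: (M, D) normalized text/entity embeddings; owner[m] is the index of
    the image that concept u[m] corresponds to."""
    sim = u @ v.T                                  # (M, N) cosine similarities
    m = torch.arange(len(u))
    loss_img = F.cross_entropy(sim, owner)         # softmax over images
    loss_cpt = F.cross_entropy(sim.T[owner], m)    # softmax over concepts
    return loss_img + loss_cpt
```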
+
+ **Specificity loss**. The contrastive loss above aligns the embeddings of image, text, and entity in the joint space. In addition, we would like to impose some structure on this space such that the similarity between an image $v_i$ and its text $t_i$ is larger than between $v_i$ and all entities $\{e_{ik}\}$ . The reason is that a caption always depicts more semantic information than an entity alone, hence $t_i$ should be more specific w.r.t. $v_i$ and exhibit a larger similarity score. The loss takes the form of a hinge-based triplet loss
+
+ $$\mathcal{L}_{\text{SPEC}} = \sum_{i} \sum_{k} [\alpha + s(v_i, e_{ik}) - s(v_i, t_i)]_{+}. \tag{12}$$
+
+ Our overall loss is therefore a weighted sum of all losses:
+
+ $$\mathcal{L} = \mathcal{L}_{\text{HARD}} + \lambda_{\text{CON}} \mathcal{L}_{\text{CON}} + \lambda_{\text{SPEC}} \mathcal{L}_{\text{SPEC}}. \tag{13}$$
2502.09977/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2502.09977/main_diagram/main_diagram.pdf ADDED
Binary file (58.2 kB). View file
 
2502.09977/paper_text/intro_method.md ADDED
@@ -0,0 +1,47 @@
1
+ # Method
2
+
3
+ While large language models (LLMs) excel across various domains, the dynamic nature of information poses significant challenges to their ability to acquire new knowledge effectively. Current studies reveal several limitations of LLMs, including high computational costs when processing long texts, a tendency to produce factual errors and hallucinations, difficulty adapting to specialized domains, and a propensity for generating overly generic responses [@DBLP:journals/corr/abs-2312-10997]. To address these limitations, researchers have explored retrieval-augmented generation (RAG) [@DBLP:conf/icml/GuuLTPC20; @DBLP:conf/nips/LewisPPPKGKLYR020]. RAG enables LLMs to efficiently utilize external knowledge by retrieving the most relevant fragments from uploaded documents, knowledge bases, or websites. However, recent advancements in LLMs, such as GPT-4o [@openai2024a], Llama 3.2 [@meta2024a], Claude 3.5 [@anthropic], and Qwen 2.5 [@yang2024qwen2], now support input lengths of up to 128k tokens, offering an alternative by directly feeding the full context of relevant information into the model. This raises questions about the continued necessity of RAG, which was initially crucial for handling long texts, since these models can now potentially access and process the necessary information directly. Therefore, it is essential to systematically compare the strengths and weaknesses of RAG and long-context (LC) LLMs.
4
+
5
+ ***Our extensive experiments on LaRA demonstrate that the choice between RAG and LC is not trivial***, as it varies significantly depending on factors such as model size, query type, type of tasks, context length, context type, and number of retrieved chunks. If we can pinpoint the scenarios in which RAG outperforms LC, we can better design workflows to route each query through RAG or LC, thereby optimizing for both cost and performance, leading to more efficient and effective LLM applications. Our key findings are as follows:

::: compactitem
**Model Strength**: RAG provides more significant improvements for weaker models. Our analysis indicates a correlation between model strength and RAG's effectiveness: the weaker the model, the greater the improvement from RAG. For instance, with a 128k context length, RAG outperformed LC by 6.48% and 38.12% in accuracy on Llama-3.2-3B-Instruct and Mistral-Nemo-12B, respectively. For models with strong long-text capabilities, such as GPT-4o and Claude-3.5-sonnet, LC generally outperforms RAG, demonstrating the effectiveness of these models in directly processing extensive contexts.

**Context Length**: RAG's advantages become more pronounced as context length increases. With a 32k context length, LC achieved an average accuracy 2.4% higher than RAG across all models. However, with a 128k context length, this trend reversed, with RAG outperforming LC by 3.68%.

**Task Performance**: RAG demonstrates similar performance to LC in single-location tasks and offers a significant advantage in identifying hallucinations. In contrast, LC excels in reasoning tasks and comparison tasks.
:::

Despite the many benchmarks that have been used to compare RAG with feeding long contexts directly to language models, there are still no clear guidelines or conclusions on when and where RAG is a better choice than long context. @DBLP:conf/iclr/0008PWM0LSBSC24 and @DBLP:conf/acl/BaiLZL0HDLZHDTL24 draw opposing conclusions on whether RAG or LC performs better on traditional QA datasets. Recently, @DBLP:conf/emnlp/Li00MB24 argued that LC consistently outperforms RAG in almost all settings, while @DBLP:journals/corr/abs-2409-01666 subsequently claimed that RAG can defeat LC on the same benchmark. In this section, we conduct a detailed analysis of existing benchmarks and find that the key issues stem from significant flaws in their evaluation pipelines. For simplicity, our analysis is mainly limited to question answering tasks based on long contexts.

As LLM base models continue to evolve, the definition of "long context" has also shifted, expanding from the early limit of 4k tokens to the now commonly supported 128k context length. Early work utilizes datasets such as Qasper (QASP) [@DBLP:conf/naacl/DasigiLBCSG21], NarrativeQA (NQA) [@DBLP:journals/tacl/KociskySBDHMG18], and QuALITY (QLTY) [@DBLP:conf/naacl/PangPJNPCPMT0B22] to compare RAG and LC. For instance, @DBLP:conf/iclr/0008PWM0LSBSC24 conduct experiments on these datasets and find that RAG can strengthen large models, such as Llama2-70B and GPT-43B. Similarly, @DBLP:journals/corr/abs-2406-13121 combine these datasets to create a new benchmark for further evaluations. However, such datasets no longer align with the current definition of long context. For example, QASP and QLTY have average context lengths of only 4912 and 6592 tokens, respectively, far below the context length capabilities of modern LLMs. Moreover, RAG typically uses chunk sizes of 300--600 tokens, and with 5--20 retrieved chunks, the total context length in RAG becomes comparable to that of full-context input, reducing the distinction between the two approaches.
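A quick back-of-the-envelope calculation with the figures above makes the overlap explicit: under a generous but typical RAG configuration, the retrieved context alone already exceeds the average full context of these short datasets.

```python
# Typical RAG configurations cited above: 300-600-token chunks, 5-20 retrieved chunks.
min_budget = 300 * 5    # smallest retrieved context: 1,500 tokens
max_budget = 600 * 20   # largest retrieved context: 12,000 tokens

# Average context lengths of QASP and QLTY from the text above.
qasp_avg, qlty_avg = 4912, 6592

# When the retrieved budget exceeds the average full context,
# "RAG vs. LC" degenerates into a comparison of near-identical inputs.
print(max_budget >= qasp_avg and max_budget >= qlty_avg)  # True
```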

As LLMs incorporate more and more datasets into their training procedures, the problem of data leakage becomes more serious. At the same time, it is challenging to verify whether these early datasets were part of the training data of LLMs, potentially causing the models to memorize the answers. For example, although NarrativeQA has an average context length of 84,770 tokens, Gemini 1.5 Pro achieves 100% accuracy on this dataset [@DBLP:journals/corr/abs-2406-13121], indicating that either the dataset itself or its contexts were likely included in the model's training process.

[]{#tab:vote label="tab:vote"}

Moreover, to prevent overlap with data seen during LLM training, key entity replacement is employed as a countermeasure in $\infty$-bench. However, upon closer inspection, we find that some replacements are unsuccessful. For example, some entities mentioned in the questions do not exist in the provided context and vice versa[^1].

Many previous evaluations use automated metrics such as F1-score and exact match (EM) [@DBLP:conf/aaai/0011LH024; @DBLP:conf/acl/ZhangFC24], which are not reliable for NLG [@DBLP:conf/emnlp/NovikovaDCR17]. For example, if the ground truth answer is "Allyson Kalia" and the model's response is "Allyson Kalia is convicted of the murder of Kiran's younger brother, Rosetta.", the prediction is clearly correct, yet it only achieves an F1-score of 0.29. This is also why the scores on the En.QA task in $\infty$-bench [@DBLP:conf/acl/ZhangCHXCH0TW0024] tend to be very low. We use an LLM to re-evaluate, and the results are shown in Table [\[tab:vote\]](#tab:vote){reference-type="ref" reference="tab:vote"}. The accuracy becomes much higher, indicating that these datasets are not as difficult as they appear.
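The 0.29 figure can be reproduced with a standard token-level F1, sketched below (lowercasing, stripping punctuation, and whitespace splitting; unlike the SQuAD normalizer, articles are not removed here, which is what the quoted number assumes):

```python
import string
from collections import Counter

def tokens(s):
    """Lowercase, strip punctuation, and split on whitespace."""
    s = s.lower().translate(str.maketrans("", "", string.punctuation))
    return s.split()

def f1_score(prediction, ground_truth):
    """Token-level F1: harmonic mean of multiset precision and recall."""
    pred, gold = Counter(tokens(prediction)), Counter(tokens(ground_truth))
    overlap = sum((pred & gold).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(gold.values())
    return 2 * precision * recall / (precision + recall)

score = f1_score(
    "Allyson Kalia is convicted of the murder of Kiran's younger brother, Rosetta.",
    "Allyson Kalia",
)
print(round(score, 2))  # 0.29
```

Recall is perfect (both gold tokens appear in the prediction), but precision is only 2/12, so the correct answer is heavily penalized merely for being verbose.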

In this section, we introduce the construction of LaRA and how it addresses the issues present in previous benchmarks, as mentioned in Section 3. The statistics of LaRA are provided in Appendix [6](#apx:stat){reference-type="ref" reference="apx:stat"}.

In our context selection process, we adhere to the following principles: (1) Timeliness: We select **recent** high-quality long contexts to prevent data leakage issues, ensuring that they are less likely to have been included in the LLM's training data. (2) Appropriate Length: Considering that mainstream commercial and open-weight models typically support context lengths of 32k and 128k, we choose contexts that are as close to these window sizes as possible without exceeding them. (3) Naturalness: The chosen contexts are naturally occurring long documents, rather than artificially constructed or assembled from unrelated short texts, to ensure the benchmark reflects the complexity and diversity of real-world use. (4) Authoritativeness: All contexts are considered reliable and credible sources of information due to the expertise, reputation, and qualifications of the authors or institutions behind them.

To ensure a diverse range of contexts, we select novels[^2], financial statements[^3], and academic papers[^4] as the contexts. For novels, we choose the txt format of novelettes and novels to serve as the 32k and 128k contexts, respectively. Financial statements include the latest quarterly reports (32k) and annual reports (128k) from publicly listed companies in the United States for the year 2024. To create contexts of appropriate length for academic papers, we combine several papers published on arXiv in 2024 that are related through citations.

To mitigate the risk of data leakage from novels, which are likely present in LLMs' training data, we perform entity replacement. Previous work has employed similar strategies [@DBLP:conf/acl/ZhangCHXCH0TW0024; @DBLP:journals/tkde/LiSHL22], but we find that many entity replacements were incorrect or inconsistent, leading to inaccurate evaluations. To address this, we use GPT-4o to accurately identify and replace character entities and to formulate questions targeting the replaced entities, ensuring consistency between the novel text and the questions. Details are provided in Appendix [7](#apx:ner){reference-type="ref" reference="apx:ner"}.

To comprehensively evaluate the capabilities of LC LLMs and RAG, LaRA includes four major task categories: location, reasoning, comparison, and hallucination detection. These categories are designed to probe distinct aspects of LLM performance, motivated by the need to assess both the strengths and weaknesses of RAG and LC in handling complex, real-world information needs. Below, we introduce each task in detail and elaborate on the motivation behind it. Examples of each task are provided in Appendix [\[apx:case\]](#apx:case){reference-type="ref" reference="apx:case"}.

The location task, the most fundamental task in LaRA, evaluates an LLM's ability to locate specific information within a long context. In this task, the answer resides in a single sentence or paragraph within a long context, and no additional reasoning or computation is required to formulate a correct response, such as identifying a character's name or a specific value mentioned in the text. It is worth noting that the location task differs from the "Needle in a Haystack" problem [@needle2023], which focuses on verbatim matching. In contrast, the location task allows for paraphrasing, as long as the underlying meaning is preserved. This task is crucial for assessing an LLM's basic comprehension and information retrieval capabilities within a long context.

The reasoning task in LaRA involves questions that require logical deduction, inference, or calculation based on the information provided in the long context. Instead of directly extracting answers from the text, these tasks demand a deeper understanding and processing of the information to derive the correct answer, such as inferring the relationship between two characters or calculating relevant data in financial statements. These tasks evaluate the ability of LC and RAG to handle complex questions, particularly in scenarios where the long context contains a significant amount of noise irrelevant to the question. The specific questions vary significantly depending on the type of context involved. Instead of explicitly defining sub-task types, we adopt different seed questions tailored to specific text types. These seed questions are used to generate similar QA pairs through in-context learning. For example, in financial statements, which contain a significant amount of statistical data, we focus on computational questions, and for novels, the questions involve reasoning about the plot or character traits.

The comparison task in LaRA evaluates the ability of RAG and LC to synthesize information from multiple parts of a long context, comparing their content or numerical values to arrive at the final answer. Crucially, the comparison task also involves manually designing different seed questions tailored to various text types. This approach ensures that the generated questions are not only relevant but also reflect the nuances and complexities of the specific context. For instance, in academic papers, the questions may focus on comparing different explanations of the same phenomenon, while in novels, they may compare changes in a character's traits or appearance over time. This task is essential for assessing an LLM's ability to extract information from different parts of the context.

Hallucination, a common issue in LLMs, occurs when the model generates inaccurate or irrelevant information [@DBLP:journals/corr/abs-2311-05232]. The hallucination detection task aims to test the model's ability to decline answering questions that are not mentioned in the given context. Although the questions appear to be answerable using the context, the required information is not actually mentioned in the text. Consequently, such questions have a uniform answer: *"XXX is not mentioned in the provided context."* The ability to refuse to answer is crucial in practical applications of RAG and LC, particularly in domains where accuracy and reliability are paramount, as users cannot always guarantee that their questions have answers within the provided context. For example, a user might pose a seemingly relevant question about a paper, and if the model hallucinates and generates an incorrect response, it could be highly detrimental.
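Because every question in this category shares the same ground-truth behavior, scoring reduces to checking whether the model declined to answer. A minimal sketch is shown below; the specific refusal phrases are illustrative assumptions, not the exact matcher used in our evaluation.

```python
# Illustrative refusal markers; a real evaluator (or an LLM judge)
# would cover more paraphrases of "not in the context".
REFUSAL_MARKERS = (
    "not mentioned in the provided context",
    "cannot be found in the context",
    "does not appear in the context",
)

def is_refusal(response: str) -> bool:
    """A response counts as correct for this task iff the model declines to answer."""
    r = response.lower()
    return any(marker in r for marker in REFUSAL_MARKERS)
```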

The annotation process for different tasks follows a similar framework, starting with the manual creation of seed questions and answers. We then utilize GPT-4o to generate new QA pairs through in-context learning. A subset of newly generated QAs is sampled for manual validation to ensure correctness and practicality. If the pass rate does not meet a predefined threshold, the seed QAs and prompts are refined, followed by re-generation and re-validation. We provide the annotation prompt in Appendix [10](#apx:annotation){reference-type="ref" reference="apx:annotation"}.
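The generate-validate-refine loop above can be sketched as follows. The helpers `generate_qa_pairs`, `human_pass_rate`, and `refine_seeds` are hypothetical placeholders standing in for the GPT-4o in-context generation call, the manual spot-check, and the prompt/seed revision step; the threshold and sample size are illustrative.

```python
import random

def annotate(seed_qas, generate_qa_pairs, human_pass_rate, refine_seeds,
             threshold=0.9, sample_size=20, max_rounds=5):
    """Generate QA pairs from seeds, spot-check a random sample, and refine
    the seeds until the manual pass rate meets the threshold."""
    for _ in range(max_rounds):
        qas = generate_qa_pairs(seed_qas)                  # in-context generation
        sample = random.sample(qas, min(sample_size, len(qas)))
        if human_pass_rate(sample) >= threshold:           # manual validation
            return qas                                     # accept this batch
        seed_qas = refine_seeds(seed_qas)                  # refine seeds/prompts, retry
    raise RuntimeError("pass rate never reached the threshold")
```

The key design choice is that validation is applied to a sample rather than every generated pair, keeping the human effort bounded while still gating the quality of each batch.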

Annotating long texts presents a unique challenge due to the inherent difficulty of long context processing. One effective approach to improve generation quality is to convert annotations for long texts into annotations for shorter texts. To achieve this, we employ various strategies tailored to different context types and tasks. Specifically, for location and reasoning tasks, we split the long context into multiple segments, each approximately 10k tokens in length, and input them individually into GPT-4o to generate QAs. This approach serves multiple purposes: First, it reduces the cognitive load on the annotator (GPT-4o here) and improves the focus and accuracy of the generated QA pairs. Second, it ensures that the answers are evenly distributed across the entire context, as we observe that providing the full context to the LLM often results in answers being concentrated at the beginning and end of the context. Third, it allows us to examine the relationship between answer accuracy and answer location, enabling us to investigate whether the LLM suffers from the "lost in the middle" issue, where performance declines for information located in the middle sections of long documents [@DBLP:journals/tacl/LiuLHPBPL24]. For the comparison task, we split the context into smaller segments and then sample two segments to generate comparison questions similar to the seed questions. Meanwhile, our segmentation strategies are tailored to the specific context type to preserve the inherent structure and coherence of the documents. For research papers, we separate concatenated papers to maintain the integrity of each individual paper. For novels and financial statements, we directly split the text into multiple segments based on token count.
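The segment-then-annotate strategy for location and reasoning tasks can be sketched as below. This is a simplified sketch: token counts are approximated by whitespace words for illustration (our pipeline counts real tokens), and `make_qa` is a hypothetical placeholder for the per-segment GPT-4o call.

```python
def split_into_segments(text, segment_tokens=10_000):
    """Split a long context into roughly fixed-size segments
    (token counts approximated by whitespace words here)."""
    words = text.split()
    return [" ".join(words[i:i + segment_tokens])
            for i in range(0, len(words), segment_tokens)]

def annotate_by_segment(text, make_qa, segment_tokens=10_000):
    """Generate QA pairs per segment so answers are spread evenly across the
    context, recording each answer's segment index so accuracy can later be
    analyzed by answer position ("lost in the middle")."""
    segments = split_into_segments(text, segment_tokens)
    return [(idx, qa) for idx, seg in enumerate(segments) for qa in make_qa(seg)]
```

Keeping the segment index alongside each QA pair is what later enables the position-vs-accuracy analysis without any re-processing of the contexts.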