# Introduction
Accuracy evaluation is essential both to guide system development and to estimate its quality, which is important for researchers, developers, and users. This is often conducted using benchmarking datasets, containing a data sample, *possibly* representative of the target data distribution, provided with Gold Standard (GS) labels (typically produced by a human annotation process). The evaluation is done by comparing the system output with the expected labels using suitable metrics.
This approach unfortunately falls short for generation tasks, for which the system output may span a large, possibly infinite, set of correct items. For example, in the case of Question Answering (QA) systems, the set of correct answers for the question *Where is Rome located?* is large. As it is impossible, not least for cost reasons, to annotate all possible system outputs, the standard approach is to manually re-evaluate each new output of the system. This dramatically limits experimentation velocity, while significantly increasing development costs.
Another viable solution in specific domains consists in automatically computing an evaluation score between the system and the reference answers that correlates with human judgment. The BLEU score, for example, is a popular measure in Machine Translation (MT) [@papineni-etal-2002-bleu]. This, however, can only be applied to specific tasks, and even in those cases it typically shows limitations [@DBLP:journals/corr/abs-1803-08409]. As a consequence, there is active research on learning methods to automatically evaluate MT systems [@ma-etal-2019-results], while human evaluation has become a requirement in MT benchmarking [@barrault-etal-2019-findings].
QA would definitely benefit from a similar approach, but automatic evaluation is technically more complex for several reasons. First, segment-overlap metrics such as BLEU, METEOR, or ROUGE do not work, since the correctness of an answer only loosely depends on the match between the reference and candidate answers. For example, of two text candidates, one can be correct and the other incorrect even if they differ by only one word (or even one character): for the question, *Who was the 43$^{rd}$ president of the USA?*, a correct answer is *George W. Bush*, while the very similar answer, *George H. W. Bush*, is wrong.
Second, the matching between the answer candidates and the reference must be carried out at the semantic level, and it is radically affected by the question semantics. For example, *match*$(t, r | q_1)$ can be true while *match*$(t, r | q_2)$ is false, where $t$ and $r$ are a candidate/reference answer pair, and $q_1$ and $q_2$ are two different questions. This especially happens for so-called non-factoid questions, e.g., those asking for a description, opinion, manner, etc., which are typically answered by fairly long explanatory text. For example, Table [\[nonfactq\]](#nonfactq){reference-type="ref" reference="nonfactq"} shows an example of a non-factoid question and three different valid answers, which share similarity with respect to the question. However, if the question were *What may cause anxiety?*, Answer 1 and Answer 3 would intuitively look less related to Answer 2.
In this paper, we study the design of models for measuring the Accuracy of QA systems. In particular, we design several pre-trained Transformer models [@devlin2018bert; @DBLP:journals/corr/abs-1907-11692] that encode the triple of question $q$, candidate $t$, and reference $r$ in different ways.
Most importantly, we built (i) two datasets for training and testing the point-wise estimation of QA system output, i.e., the evaluation of whether an answer is correct or not, given a GS answer; and (ii) two datasets constituted by the outputs of several QA systems, for which AVA is supposed to estimate the Accuracy.
The results show a high Accuracy for the point-wise models, up to 75%. Regarding the overall Accuracy estimation, AVA can almost always replicate the ranking of systems, in terms of Accuracy, produced by humans. Finally, the RMSE with respect to human evaluation depends on the dataset, ranging from 2% to 10%, with an acceptable standard deviation lower than 3-4%.
The structure of the paper is as follows: we begin with the description of the problem in Sec. [3](#sec:problem){reference-type="ref" reference="sec:problem"}. This is followed by the details of the data construction and model design, which are key aspects for system development, in Sections [4](#sec:model){reference-type="ref" reference="sec:model"} and [5](#sec:data){reference-type="ref" reference="sec:data"}. We study the performance of our models in three different evaluation scenarios in Sec. [6](#sec:experiments){reference-type="ref" reference="sec:experiments"}.
# Method
We target the automatic evaluation of QA systems, for which system Accuracy (the percentage of correct answers) is the most important measure. We also consider more complex measures such as MAP and MRR in the context of Answer Sentence Reranking/Selection.
The task of reranking answer sentence candidates provided by a retrieval engine can be modeled with a classifier scoring the candidates. Let $q$ be a question and $T_q=\{t_1, \dots, t_n\}$ a set of answer sentence candidates for $q$; we define $\mathcal{R}$ as a ranking function, which orders the candidates in $T_q$ according to a score, $p\left(q, t_i\right)$, indicating the probability of $t_i$ being a correct answer for $q$. Popular methods modeling $\mathcal{R}$ include Compare-Aggregate [@DBLP:journals/corr/abs-1905-12897], inter-weighted alignment networks [@shen-etal-2017-inter], and BERT [@garg2019tanda].
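As an illustration, the ranking function $\mathcal{R}$ can be sketched in a few lines of Python; the token-overlap scorer below is only a toy stand-in for $p(q, t_i)$, not one of the cited models:

```python
def rerank(question, candidates, score):
    """Order answer candidates by descending score p(q, t_i)."""
    return sorted(candidates, key=lambda t: score(question, t), reverse=True)

def toy_overlap_score(question, candidate):
    """Toy scorer: fraction of question tokens appearing in the candidate."""
    q_toks = set(question.lower().split())
    c_toks = set(candidate.lower().split())
    return len(q_toks & c_toks) / max(len(q_toks), 1)
```

Any trained model exposing the same `score(q, t)` interface, such as the Transformer-based rerankers cited above, can be plugged in unchanged.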
The evaluation of system Accuracy can be approached in two ways: (i) evaluation of the single answer provided by the target system, which we call point-wise evaluation; and (ii) the aggregated evaluation of a set of questions, which we call system-wise evaluation.
We define the former as a function: $\mathcal{A}\left(q, r, t_i\right) \rightarrow \{0, 1\}$, where $r$ is a reference answer (GS answer) and the output is simply a correct/incorrect label. Table [\[evaluator-input\]](#evaluator-input){reference-type="ref" reference="evaluator-input"} shows an example question associated with a reference, a system answer, and a short answer[^1].
A configuration of $\mathcal{A}$ is applied to compute the final Accuracy of a system using an aggregator function. In other words, to estimate the overall system Accuracy, we simply treat the point-wise AVA predictions as if they were the GS. For example, in the case of the Accuracy measure, we simply average the AVA predictions, i.e., $\frac{1}{|Q|}\sum_{q\in Q} \mathcal{A}(q,r,t_i[,s])$, where $s$ is a short answer (e.g., used in machine reading). It is an optional input, which we only use for a baseline, described in Section [4.1](#sec:linear){reference-type="ref" reference="sec:linear"}.
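The aggregation step is a plain average of the point-wise predictions; a minimal sketch, where `point_wise` stands in for a trained configuration of $\mathcal{A}$ returning 0/1:

```python
def system_accuracy(triples, point_wise):
    """Estimate overall system Accuracy as the mean of point-wise predictions.

    `triples` is a list of (q, r, t) tuples; `point_wise` maps each triple
    to {0, 1}, playing the role of a trained evaluator A(q, r, t).
    """
    preds = [point_wise(q, r, t) for (q, r, t) in triples]
    return sum(preds) / len(preds)
```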
The main intuition in building an automatic evaluator for QA is that the model should (i) capture the same information a standard QA system uses, while (ii) exploiting the semantic similarity between the system answer and the reference, biased by the information asked by the question. We build two types of models: (i) a linear classifier, which is more interpretable and can help us verify our design hypothesis; and (ii) Transformer-based methods, which have been successfully used in several language understanding tasks.
Given an input example, $\left(q, r, s, t\right)$, our classifier uses the following similarity features: $x_1$=*sim-token*$\left(s,r\right)$, $x_2$=*sim-text*$\left(r,t\right)$, $x_3$=*sim-text*$\left(r,q\right)$, and $x_4$=*sim-text*$\left(q,t\right)$, where *sim-token* between $s$ and $r$ is a binary feature testing if $r$ is included in $s$, and $\emph{sim-text}$ is a Dice-like token-overlap similarity: $$\emph{sim-text}\left(s_i, s_j\right)= 2\frac{|\text{\emph{tok}}\left(s_i\right)\cap\text{\emph{tok}}\left(s_j\right)|}{|\text{\emph{tok}}\left(s_i\right)| + |\text{\emph{tok}}\left(s_j\right)|},$$ where $\emph{tok}\left(s\right)$ is a function that splits $s$ into tokens.
Let ${\bf x} = f\left(q,r,s,t\right)=\left(x_1, x_2, x_3, x_4\right)$ be a similarity feature vector describing our evaluation tuple. We train ${\bf w}$ on a dataset $D=\{d_i:\left({\bf x}_i,l_i\right)\}$ using SVM, where $l_i$ is a binary label indicating whether $t$ answers $q$ or not. We compute the point-wise evaluation of $t$ as the test ${\bf x}\!\cdot\!{\bf w} > \alpha$, where $\alpha$ is a threshold trading off Precision for Recall in standard classification approaches.
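The full linear baseline can be sketched as follows. The whitespace tokenizer, the set-based intersection, and the example weights are illustrative assumptions; in practice ${\bf w}$ is learned with an SVM as described above:

```python
def tok(s):
    """Split a string into lowercased tokens (illustrative tokenizer)."""
    return s.lower().split()

def sim_text(si, sj):
    """Token-overlap similarity: 2|tok(si) ∩ tok(sj)| / (|tok(si)| + |tok(sj)|)."""
    inter = set(tok(si)) & set(tok(sj))
    return 2 * len(inter) / (len(tok(si)) + len(tok(sj)))

def sim_token(s, r):
    """Binary feature testing whether r is included in s, as in the text."""
    return 1.0 if r.lower() in s.lower() else 0.0

def features(q, r, s, t):
    """Build x = (x1, x2, x3, x4) for the evaluation tuple (q, r, s, t)."""
    return [sim_token(s, r), sim_text(r, t), sim_text(r, q), sim_text(q, t)]

def point_wise_eval(x, w, alpha):
    """Classify the candidate as correct iff x . w > alpha."""
    return sum(xi * wi for xi, wi in zip(x, w)) > alpha
```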
Transformer-based architectures have proved to be powerful language models, which can capture complex similarity patterns. Thus, they are suitable methods to improve our basic approach described in the previous section. Following the linear classifier modeling, we propose three different ways to exploit the relations among the members of the tuple $\left(q,r,s,t\right)$.
Let $\mathcal{B}$ be a pre-trained language model, e.g., the recently proposed BERT [@devlin2018bert], RoBERTa [@DBLP:journals/corr/abs-1907-11692], XLNet [@DBLP:journals/corr/abs-1906-08237], or ALBERT [@anonymous2020albert]. We use the language model to compute the embedding representation of the tuple members: $\mathcal{B}\left(a, a'\right) \rightarrow {\bf x} \in \mathbb{R}^d$, where $\left(a, a'\right)$ is a sentence pair, ${\bf x}$ is the output representation of the pair, and $d$ is the dimension of the output representation. The classification layer is a standard feedforward network: $\mathcal{A}\left({\bf x}\right) = {\bf W}^{\intercal}{\bf x} + b$, where ${\bf W}$ and $b$ are parameters we learn by fine-tuning the model on a dataset $D$.
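The interface $\mathcal{B}(a, a') \rightarrow {\bf x}$ followed by $\mathcal{A}({\bf x}) = {\bf W}^{\intercal}{\bf x} + b$ can be sketched as follows; the bag-of-words encoder is a toy stand-in for a pre-trained Transformer (which would return, e.g., the final `[CLS]` vector), and `vocab` is an assumption of this sketch:

```python
def encode_pair(a, a_prime, vocab):
    """Toy stand-in for B(a, a'): a bag-of-words count vector over a fixed vocab.

    A real implementation would feed the sentence pair to a pre-trained
    Transformer; this toy only illustrates the interface B(a, a') -> x in R^d.
    """
    toks = (a + " " + a_prime).lower().split()
    return [float(toks.count(v)) for v in vocab]

def classify(x, W, b):
    """Classification layer A(x) = W^T x + b (single output unit)."""
    return sum(wi * xi for wi, xi in zip(W, x)) + b
```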
We describe different designs for $\mathcal{A}$ as follows.
We build a language model representation for pairs of members of the tuple $x=\left(q,r,t\right)$ by simply inputting them to Transformer models $\mathcal{B}$ in the standard sentence-pair fashion. We consider four different configurations of $\mathcal{A}_0$: one for each of the pairs $\left(q,r\right)$, $\left(q,t\right)$, and $\left(r,t\right)$, and one for the triplet $\left(q, r, t\right)$, modeled as the concatenation of the previous three. The representation for each pair is produced by a different and independent BERT instance, $\mathcal{B}_p$. More formally, we have the three models $\mathcal{A}_0\left( \mathcal{B}_p(p)\right)$, $\forall p \in \mathcal{D}_0$, where $\mathcal{D}_0 = \{(q,r), (q,t), (r,t)\}$. Additionally, we design a model over $(q, r, t)$ with $\mathcal{A}_0\left( \cup_{p \in \mathcal{D}_0} \hspace{.3em}\mathcal{B}_p(p) \right)$, where $\cup$ denotes concatenation of the representations. We do not use the short answer, $s$, as its contribution is minimal when using powerful Transformer-based models.
The models of the previous section are limited to pair representations. We improve this by designing $\mathcal{B}$ models that can capture pattern dependencies across $q$, $r$, and $t$. To achieve this, we concatenate pairs of the three pieces of text above, indicating string concatenation with the $\circ$ operator. Specifically, we consider $\mathcal{D}_1 = \{(q,r \circ t), (r,q \circ t), (t,q \circ r)\}$ and propose the following $\mathcal{A}_1$ models. As before, we have the individual models, $\mathcal{A}_1\left( \mathcal{B}_p(p)\right)$, $\forall p \in \mathcal{D}_1$, as well as the combined model, $\mathcal{A}_1\left( \cup_{p \in \mathcal{D}_1} \hspace{.3em}\mathcal{B}_p(p) \right)$, where, again, we use different instances of $\mathcal{B}$ and fine-tune them together.
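The input construction for the two families of models reduces to simple pair building; a sketch, where realizing the concatenation $\circ$ as a space-joined string is an assumption (a real implementation would use the tokenizer's separator conventions):

```python
def make_pairs_d0(q, r, t):
    """Sentence pairs fed to the D0 models: (q, r), (q, t), (r, t)."""
    return [(q, r), (q, t), (r, t)]

def make_pairs_d1(q, r, t, sep=" "):
    """Pairs fed to the D1 models, whose second element concatenates two texts:
    (q, r∘t), (r, q∘t), (t, q∘r)."""
    return [(q, r + sep + t), (r, q + sep + t), (t, q + sep + r)]
```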
Our previous designs instantiate a different $\mathcal{B}$ for each pair, learning the feature representation of the target pair and the relations between its members during the fine-tuning process. This individual optimization prevents the capture of patterns across the representations of different pairs, as there is no strong connection between the $\mathcal{B}$ instances. Indeed, the combination of feature representations *only* happens in the last classification layer.
We propose *peer-attention* to encourage feature transfer between different $\mathcal{B}$ instances. The idea, similar to the encoder-decoder setting in Transformer-based models [@NIPS2017_7181], is to introduce an additional decoding step for each pair. Figure [1](#fig:peerattention){reference-type="ref" reference="fig:peerattention"} depicts our proposed setting for learning the representations of two different pairs: $a_0=\left(a, a'\right)$ and $g_0=\left(g, g'\right)$. The standard approach learns representations for these two in one pass, via $\mathcal{B}_{a_0}$ and $\mathcal{B}_{g_0}$. In the *peer-attention* setting, the representation output after processing one pair, captured in ${H}_{[CLS]}$, is input to a second pass of fine-tuning for the other pair. Thus, the representation of one pair can attend over the representation of the other pair during the decoding stage. This allows the feature representations from each $\mathcal{B}$ instance to be shared during both training and prediction.
<figure id="fig:peerattention" data-latex-placement="t">
<img src="peer-attention-ava.png" style="width:80.0%" />
<figcaption>Peer attention on <span class="math inline">(<em>a</em>, <em>a</em><sup>′</sup>)</span> and <span class="math inline">(<em>g</em>, <em>g</em><sup>′</sup>)</span>.</figcaption>
</figure>
We describe the datasets we created to develop AVA. First, we build two large-scale datasets for the standard QA task, namely **AS2-NQ** and **AS2-GPD**, derived from the Google Natural Questions dataset and from our internal dataset, respectively; their construction is described in Section [5.1](#sec:qadata){reference-type="ref" reference="sec:qadata"}. Second, we describe our approach to generating labelled data for AVA from these QA datasets in Section [5.2](#sec:avadata){reference-type="ref" reference="sec:avadata"}. Finally, we build an additional dataset constituted by a set of systems and their output on target test sets, which can be used to evaluate the ability of AVA to estimate end-to-end system performance (system-wise evaluation); it is described in Section [5.3](#sec:evaldata){reference-type="ref" reference="sec:evaldata"}.
Google Natural Questions (NQ) is a large-scale dataset for the machine reading task [@47761]. Each question is associated with a Wikipedia page and at least one long paragraph (`long_answer`) that contains the answer to the question. The `long_answer` may contain an additional annotation of a `short_answer`, a succinct extractive answer from the long paragraph. A `long_answer` usually consists of multiple sentences, thus NQ is not directly applicable to our setting.
We create AS2-NQ from NQ by leveraging both the `long_answer` and `short_answer` annotations. In particular, for a given question, the (correct) answers are the sentences in the long-answer paragraphs that contain *annotated* `short_answer`s. The other sentences from the Wikipedia page are considered incorrect. The negative examples can be of the following types: (i) sentences that are in the `long_answer` but do not contain *annotated* short answers (it is possible that these sentences still contain the `short_answer` string); (ii) sentences that are not part of the `long_answer` but contain a `short_answer` as a subphrase (such occurrences are generally accidental); and (iii) all the other sentences in the document.
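The three negative types above can be sketched as a small labeling function, under the assumption that membership in the `long_answer` and the annotated short-answer strings are available for each non-positive sentence:

```python
def negative_type(sentence, in_long_answer, short_answers):
    """Classify a non-positive sentence into one of the three negative types:
    1 - inside the long_answer but without an annotated short answer,
    2 - outside the long_answer but accidentally containing a short answer,
    3 - any other sentence in the document.
    """
    contains_short = any(sa in sentence for sa in short_answers)
    if in_long_answer:
        return 1
    if contains_short:
        return 2
    return 3
```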
The generation of negative examples impacts the robustness of the trained model when selecting the correct answer out of the incorrect ones. AS2-NQ has four labels that describe the possible confusion levels of a sentence candidate. We apply the same processing to both the training and development sets of NQ. This dataset enables an effective transfer step [@garg2019tanda]. Table [\[table:nq\]](#table:nq){reference-type="ref" reference="table:nq"} shows the statistics of the dataset.
A search engine using a large index can retrieve more relevant documents than those available in Wikipedia. Thus, we retrieved highly likely relevant candidates as follows: we (i) retrieved the top 500 relevant documents; (ii) automatically extracted the top 100 sentences, ranked by a BERT model over all sentences of the documents; and (iii) had all of the top 100 sentences manually annotated as correct or incorrect answers. This process does not guarantee that we capture all correct answers, but the probability of missing them is much lower than for other datasets. In addition, this dataset is richer than AS2-NQ, as it consists of answers from multiple sources. Furthermore, the average number of answers per question is also higher than in AS2-NQ. Table [\[table:gpd\]](#table:gpd){reference-type="ref" reference="table:gpd"} shows the statistics of the dataset.
The AS2 datasets from the previous section consist of a set of questions $Q$, where each $q \in Q$ has a candidate set $T_q=\{t_1, \dots, t_n\}$ comprising both correct answers $C_q$ and incorrect answers $\overline{C_q}$, i.e., $T_q = C_q \cup \overline{C_q}$. We construct the dataset for point-wise automatic evaluation (described in Section [4](#sec:model){reference-type="ref" reference="sec:model"}) as follows: to obtain both positive and negative examples for AVA, we first filter the QA dataset, keeping only questions that have at least two correct answers.
Formally, let $\left\langle q, r, t, l \right\rangle$ be an input for AVA, where $l$ is the binary correctness label. We build positive examples as: $$\text{AVA-Positives} = \left\langle q; \left( r, t \right) \in C_q \times C_q \text{ and } r \ne t \right\rangle$$ and negative examples as: $$\text{AVA-Negatives} = \left\langle q; \left( r, t \right) \in C_q \times \overline{C_q} \right\rangle$$ We create AVA-NQ and AVA-GPD from the QA datasets AS2-NQ and AS2-GPD, respectively. The statistics are presented on the right side of Tables [\[table:nq\]](#table:nq){reference-type="ref" reference="table:nq"} and [\[table:gpd\]](#table:gpd){reference-type="ref" reference="table:gpd"}.
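The construction of AVA positives and negatives from $C_q$ and $\overline{C_q}$ can be sketched as:

```python
from itertools import product

def build_ava_examples(q, correct, incorrect):
    """Build labelled <q, r, t, l> tuples for one question.

    Positives pair two distinct correct answers (reference vs. candidate);
    negatives pair a correct reference with an incorrect candidate.
    Questions with fewer than two correct answers are filtered out.
    """
    if len(correct) < 2:
        return []
    positives = [(q, r, t, 1) for r, t in product(correct, correct) if r != t]
    negatives = [(q, r, t, 0) for r, t in product(correct, incorrect)]
    return positives + negatives
```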
To test AVA at the level of overall system Accuracy, we need a sample of systems and their output on different test sets. We create a dataset that has candidate answers collected from eight systems for a set of 1,340 questions. The questions were sampled from an anonymized set of user utterances; we only considered information-inquiry questions. The systems differ from each other in multiple ways, including: (i) *modeling*: Compare-Aggregate (CNN-based) and different Transformer-based architectures with different hyper-parameter settings; (ii) *training*: the systems are trained on different resources; and (iii) *candidates*: the pools of candidates from which the answers are selected are different.