{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:27:46.891192Z"
},
"title": "What Set of Documents to Present to an Analyst?",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BBN Technologies Cambridge MA",
"location": {
"settlement": "Raytheon",
"country": "USA"
}
},
"email": "rich.schwartz@raytheon.com"
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BBN Technologies Cambridge MA",
"location": {
"settlement": "Raytheon",
"country": "USA"
}
},
"email": "john.makhoul@raytheon.com"
},
{
"first": "Lee",
"middle": [],
"last": "Tarlin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BBN Technologies Cambridge MA",
"location": {
"settlement": "Raytheon",
"country": "USA"
}
},
"email": "lee.tarlin@raytheon.com"
},
{
"first": "Damianos",
"middle": [],
"last": "Karakos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BBN Technologies Cambridge MA",
"location": {
"settlement": "Raytheon",
"country": "USA"
}
},
"email": "damianos.karakos@raytheon.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe the human triage scenario envisioned in the Cross-Lingual Information Retrieval (CLIR) problem of the IARPA MATERIAL Program. The overall goal is to maximize the quality of the set of documents that is given to a bilingual analyst, as measured by the AQWV score. The initial set of source documents that are retrieved by the CLIR system is summarized in English and presented to human judges who attempt to remove the irrelevant documents (false alarms); the resulting documents are then presented to the analyst. First, we describe the AQWV performance measure and show that, in our experience, if the acceptance threshold of the CLIR component has been optimized to maximize AQWV, the loss in AQWV due to false alarms is relatively constant across many conditions, which also limits the possible gain that can be achieved by any post filter (such as human judgments) that removes false alarms. Second, we analyze the likely benefits for the triage operation as a function of the initial CLIR AQWV score and the ability of the human judges to remove false alarms without removing relevant documents. Third, we demonstrate that we can increase the benefit for human judgments by combining the human judgment scores with the original document scores returned by the automatic CLIR system.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe the human triage scenario envisioned in the Cross-Lingual Information Retrieval (CLIR) problem of the IARPA MATERIAL Program. The overall goal is to maximize the quality of the set of documents that is given to a bilingual analyst, as measured by the AQWV score. The initial set of source documents that are retrieved by the CLIR system is summarized in English and presented to human judges who attempt to remove the irrelevant documents (false alarms); the resulting documents are then presented to the analyst. First, we describe the AQWV performance measure and show that, in our experience, if the acceptance threshold of the CLIR component has been optimized to maximize AQWV, the loss in AQWV due to false alarms is relatively constant across many conditions, which also limits the possible gain that can be achieved by any post filter (such as human judgments) that removes false alarms. Second, we analyze the likely benefits for the triage operation as a function of the initial CLIR AQWV score and the ability of the human judges to remove false alarms without removing relevant documents. Third, we demonstrate that we can increase the benefit for human judgments by combining the human judgment scores with the original document scores returned by the automatic CLIR system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The goal of the IARPA MATERIAL 1 Program is to search a corpus of foreign language documents and to return those documents that are relevant to an English language query in order to give those documents to a bilingual analyst. The program envisions a two-stage procedure. The first stage uses an automatic CLIR system that takes a structured English query and retrieves foreign documents that are likely to be relevant to that query. However, there is usually a shortage of qualified bilingual analysts. So we would like to do anything we can to reduce the number of false alarms in the returned lists. The solution in the MATERIAL program is a second stage, which is a triage operation in which the system produces a short English summary for each of the returned documents, that provides the evidence for the document being relevant to the query. These summaries are shown to an English-speaking triage analyst whose job is to discard documents that they believe might be irrelevant. In fact, rather than making a binary decision, the analyst is asked to provide a judgment score from 1 to 5 reflecting how likely they think it is that the document is relevant. In the next section, we will describe the AQWV measure and explain why this measure might be appropriate for this particular task. We compare it with the Mean Average Precision (MAP) measure that is most commonly used for measuring IR performance (Manning et al., 2008) . In section 3, we look at the maximum possible benefit that could be achieved by perfect triage judgments -judgments that discard all of the irrelevant documents without discarding any relevant documents. We show, empirically, that when the acceptance threshold for a system is optimized to maximize AQWV, the loss due to false alarms is relatively constant and fairly small (approximately 10%), across a wide range of conditions. And we also show that this is not true for the MAP measure. 
Of course, the Triage analysts (for the MATERIAL program, see https://www.iarpa.gov/index.php/research-programs/material) cannot do this job perfectly, so we look at the theoretical performance that can be achieved, given that the average triage analyst has some probability of correctly rejecting an irrelevant document (TR) and another probability of falsely rejecting a relevant document (FR). We will show that the triage analyst has a very difficult task, especially if the initial performance of the automatic CLIR system is very good. In Section 4, we examine the results of actual experiments and measure the improvement that we get by setting a threshold on the judgment scores produced by the triage analysts. In Section 5, we consider better ways to use the triage analysts' judgments. In particular, we show that it is advantageous to combine the triage judgment score for a document with the original CLIR score before comparing with any threshold. This makes it more likely that the triage judgments can improve the quality of the documents provided to the final bilingual analyst.",
"cite_spans": [
{
"start": 1411,
"end": 1433,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In some applications (such as web searches), the search engine returns a ranked list of documents and the user may look at as many documents as they need until they find the information they want. So it is particularly important that the most relevant documents are near the beginning of the list. In contrast, in the application here, we assume that the user is not just looking for a \"good enough relevant document\". Instead, they would like to find all relevant documents. But at the same time, they cannot afford to look at too many irrelevant documents. So instead of returning a ranked list of documents, the system will return a truncated list of documents and the analyst will read all of them. To reflect this different need, the performance measure used is the Average Query Weighted Value (AQWV). For each query, we measure the recall and the false alarm performance. The recall = (1 -pMiss) is the fraction of all of the relevant documents that were included in the returned list. The false alarm rate, pFA, is the fraction of the non-relevant documents in the corpus that show up in the returned list. Note that, while pMiss might be in the range from 20% to 80%, pFA is likely to be a small number, since the number of documents in the corpus is large. The performance for a single query, or QWV is simply a weighted combination of these two measures:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The AQWV Measure",
"sec_num": "2."
},
{
"text": "QW V q = 1 \u2212 pM iss q \u2212 \u03b2 \u00d7 pF A q (1) QW V q = Recall q \u2212 \u03b2 \u00d7 pF A q (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The AQWV Measure",
"sec_num": "2."
},
{
"text": "where \u03b2 is a weight that reflects the relative cost of giving false alarms to the analyst and is usually >> 1 because pFA is usually much smaller than pMiss. In most of our experiments, \u03b2 = 40. The overall score for a set of queries, AQWV, is simply the average of the QWV for all of the queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The AQWV Measure",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "AQW V = Avg q [QW V q ]",
"eq_num": "(3)"
}
],
"section": "The AQWV Measure",
"sec_num": "2."
},
{
"text": "However, it is possible that some of the queries might actually have no relevant documents in the corpus being searched, so we cannot compute Recall for those queries. At the same time, any irrelevant documents returned (false alarms) in response to those queries are still costly. So we change the computation such that we only compute the average Recall on those queries that have relevant documents, while the average pFA is computed over all queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The AQWV Measure",
"sec_num": "2."
},
{
"text": "AQW V = Avg q\u2212rel [Recall q ]\u2212\u03b2\u00d7Avg all\u2212q [pF A q ] (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The AQWV Measure",
"sec_num": "2."
},
{
"text": "The measure that is more commonly used in Information Retrieval (IR) research is the Mean Average Precision (MAP). We assume, here, that the ranked list of documents produced by a system using AQWV and MAP are the same. However, the system does not have the option of changing the number of documents returned for each query. It is a constant number, for example 100. Of course, the goal is to return as many of the relevant documents as possible within that list, but also to rank them such that the relevant documents are as close to the beginning of the list as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The AQWV Measure",
"sec_num": "2."
},
{
"text": "For each query, we compute the precision at the rank of each relevant document. Any document that is not in the retrieved list is given a precision of zero. Then, we average the precision values over the relevant documents. (Hence the name \"Average Precision\".) So the main difference is that with AQWV, we have the opportunity to vary the length of the list in order to reduce the number of irrelevant documents retrieved for any given query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The AQWV Measure",
"sec_num": "2."
},
{
"text": "We measured the cost of the false alarms (\u03b2 \u00d7 pF A) over several languages with very different performance. We also measured the benefit for different values of \u03b2. One might think that when the cost for false alarms (\u03b2) is higher, the possible benefit for triage judgments is larger. In fact, this is not the case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Possible Benefit for Triage Judgments",
"sec_num": "3."
},
{
"text": "If the triage judges were perfect, the AQWV after the triage would be equal to the Recall for that system. Figure 1 shows the AQWV as a function of the Recall for six MA-TERIAL languages with a wide range of AQWV and Recall. It is worth noting that the value of \u03b2 was not the same for all of these languages. \u03b2 was 20 for Swahili and Tagalog, and 40 for the other four languages. But still, we see that the loss for false alarms is roughly the same (actually slightly more for Swahili and Tagalog, even though the cost for each false alarm was smaller). The upper diagonal line shows AQWV = Recall. The lower diagonal line shows AQW V = Recall-10. As can be seen, most of the languages fall very close to the lower line, with losses due to false alarms of 8% to 13% absolute. The loss due to false alarms represents the maximum possible benefit for removing false alarms. We have made similar measurements with different values of \u03b2 and the results are always the same. When \u03b2 increases and the system is tuned to choose the optimal threshold, it automatically produces",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 115,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Possible Benefit for Triage Judgments",
"sec_num": "3."
},
{
"text": "\u2022 Combining AMT and CLIR scores optimally should decrease FR (and perhaps increase TR), resulting in higher values of AQWV for E2E fewer false alarms and in doing so, it also decreases recall. Empirically, we find that the resulting loss for false alarms is always about the same. In the Babel program for keyword spotting we used the ATWV measure (Karakos et al., 2013; Alum\u00e4e et al., 2017) , which is analogous to the AQWV measure. We found this same result for 26 different languages. So it seems to be an empirical property of the measure. It might seem surprising that maximum possible cost of the false alarms is both relatively constant and also fairly small. This is not typically true with other measures, like MAP. The reason is that, with MAP, the system does not have the opportunity (or any incentive) to reduce the number of false alarms by reducing the number of documents retrieved. If it did reduce the returned documents, the only possible effect would be to replace the precision for some of the retrieved documents with a precision of zero, which is always worse. Let us consider the case of a representative ranked list. Typically, the ranked list has more relevant documents near the head of the list and the relevant documents are more sparse as we go down the list. Let us consider a query with 10 relevant documents and assume that the relevant documents occur at every power of 2. So the relevant documents are at rank 1, 2, 4, 8, ...512. Only 7 of these 10 documents would appear within the first 100 returned documents. When we compute the average precision at each of these ranks, we get a list of 10 precisions: 1, 1, 3/4, 4/8, 5/16, 6/32, 7/64, 0, 0, 0. The average of these numbers is .3859375 or 38.6%. Let's say we had a person who could review all of the 100 retrieved documents and correctly remove all of the irrelevant documents. 
In this case, the precision for the 7 documents within the list would be 1, so the average precision would be 0.7, or 70%, which is a very large improvement. But the cost for this improvement would be very large, because it would require that the person review 93 irrelevant documents. The AQWV measure is an attempt to include the cost of that review. But why is it that, when we optimize the threshold or the number of retrieved documents, the cost of the remaining false alarms is always around 10%? There is certainly no proof that this must be the case, because it depends on the distribution of the relevant documents. But let us consider a distribution of relevant documents similar to the one described above. That is, we assume that at any given rank R, the number of relevant documents within that rank is log_2(R) + 1. So at rank 8, we would have 4 relevant documents, just as in the example above.",
"cite_spans": [
{
"start": 348,
"end": 370,
"text": "(Karakos et al., 2013;",
"ref_id": "BIBREF1"
},
{
"start": 371,
"end": 391,
"text": "Alum\u00e4e et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Change from CLIR",
"sec_num": null
},
{
"text": "In Table 1 below, we show the AQWV as a function of the number of documents retained (in the left column) and the value of Beta. The second column shows the expected recall for each number of retrieved documents, which is just the number of retrieved documents divided by 10. We assume there are 10,000 documents in the entire corpus. For each number of retrieved documents and value of Beta, we give the value of AQWV. The optimal AQWV (in this quantized table) and any value within 0.004 of this best value is shown in bold. For Beta=10, the cost of false alarms is very low. So the best result shown is if we retrieve 120 to 140 documents. We see that the recall is between 79% and 81% and the AQWV is 68% -about 11% to 13% worse. When Beta increases, the best AQWV is achieved with fewer retrieved documents, because the cost of false alarms is not worth the sparse relevant documents with larger lists. As can be seen, in each case, the difference between the optimal AQWV and the recall at that same list size is between 0.1 and 0.13, or 10% to 13%. We suspect that this will be the case for most functions where the relevant documents become more sparse as we go further down the list. Of course, for any single query, this may not be the case, but when we average over many queries it will always tend to be true. From our empirical results with different languages and Table 1 : AQWV scores as a function of list size and Beta value for a corpus of 10,000 documents. The optimal value of AQWV in each column is in bold. The difference between this value and the recall in the second column is usually between 0.1 and 0.13.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": null
},
{
"start": 1378,
"end": 1385,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Change from CLIR",
"sec_num": null
},
{
"text": "conditions, we believe that the maximum we can benefit from removing irrelevant documents is approximately 10% absolute. But of course, real triage judgments will not achieve this benefit because there will be some false rejection of relevant documents and false acceptance of irrelevant documents. Below, we derive the benefit that can be achieved for a system as a function of the initial AQWV. First, we define the cost of false alarms, cFA. We denote CLIR as a shorthand for the AQWV that results from the CLIR system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change from CLIR",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "cF A = \u03b2 \u00d7 pF A (5) CLIR = Recall \u2212 cF A (6) Recall = CLIR + cF A",
"eq_num": "(7)"
}
],
"section": "Change from CLIR",
"sec_num": null
},
{
"text": "Now after rejecting some documents through Triage judgments, we can define the percentage of true rejections, T R, and the percentage of false rejections, F R. Define Triage as the AQWV that results after removing those documents. So by correctly removing false alarms, Triage will go up by T R \u00d7 cF A. On the other hand, but removing relevant documents, Triage will go down by F R \u00d7 Recall. So the resulting Triage score will be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change from CLIR",
"sec_num": null
},
{
"text": "T riage = CLIR + T R \u00d7 cF A \u2212 F R \u00d7 Recall (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change from CLIR",
"sec_num": null
},
{
"text": "And substituting Recall from the preceding equation, the change in AQWV from the Triage process will be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change from CLIR",
"sec_num": null
},
{
"text": "Change = T riage \u2212 CLIR = T R \u00d7 cF A \u2212 F R \u00d7 (CLIR + cF A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change from CLIR",
"sec_num": null
},
{
"text": "We can plot Change as a function of the original CLIR score for Triage systems with different F R/T R behavior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change from CLIR",
"sec_num": null
},
{
"text": "In the Figure 2 , we assume that cFA = 10%, because this is the typical behavior. For example, a good Triage system (good summaries and good judges) might result in only 10% F R, together with 50% T R. That is, the triage analyst removes half of the false alarms, at a cost of losing only 10% of the relevant documents returned by the CLIR. As can be seen in the figure, as the initial AQWV increases, the change in AQWV decreases and is usually negative rather than positive. There is only a small predicted gain of about 1% absolute for the lowest initial AQWV (on Somali). For the other languages, there are substantial losses rather than the gain hoped for. A different summarization system and set of triage judges might have a different operating point, where they are able to correctly reject 80% of the irrelevant documents, but at a cost of falsely rejecting 20% of the relevant documents. While one might predict that this system might have similar overall performance, the line plotted for this triage system shows that the losses are much larger. This shows that, for this performance measure, the most important feature of the triage performance is that the FR rate must be extremely low. Finally, a third line shows what would happen if the triage analysts (together with their summaries) were able to remove 50% of the irrelevant documents, but only falsely discard 5% of the relevant documents. In this case, there is a modest gain for all of the languages. The conclusion is that it is very difficult for a triage analyst to make a significant improvement in AQWV.",
"cite_spans": [],
"ref_spans": [
{
"start": 7,
"end": 15,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Change from CLIR",
"sec_num": null
},
{
"text": "Next we look at different ways to use the judgments that result from the triage operation. The first thing we look at is the effect of the threshold on the judgment score. We performed a set of experiments using a Lithuanian corpus of text and audio documents within the MATERIAL program. The CLIR system was run on the Analysis set using the Q1 set of 300 queries. Summaries were generated and were judged using Amazon Mechanical Turk (AMT). Each judgment was on a scale from 1 to 5, with 1 being clearly irrelevant and 5 being clearly relevant. Table 2 shows the AQWV values for each of the five thresholds, for both Text and Audio. For each threshold, we show the result using the judgments. The result with the highest AQWV for each condition is shown in bold. A threshold of 1 means all documents will be accepted, and therefore gives the AQWV obtained by the CLIR system.",
"cite_spans": [],
"ref_spans": [
{
"start": 547,
"end": 554,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tuning the Decision Threshold",
"sec_num": "4."
},
{
"text": "For both text and audio, we see that there is a modest gain for text and a larger gain for audio data. Using thresholds greater than 2 gives worse results than the original CLIR (threshold 1). For reference, we also show in the last row of Table 2 (labeled 'Oracle') the AQWV that we would get if the AMT judges made perfect judgments, i.e., if they judged all relevant documents as relevant and all nonrelevant documents as nonrelevant. Note that these Oracle AQWV values are 9-11 points higher than the original CLIR values. So, this is the maximum possible gain achievable from perfect summaries and judges. By finding the threshold that maximizes AQWV in Table 2 , we have narrowed that gap a little. Of course, a different system might have a different optimal threshold. So the optimal threshold for a system must be determined empirically.",
"cite_spans": [],
"ref_spans": [
{
"start": 240,
"end": 247,
"text": "Table 2",
"ref_id": null
},
{
"start": 659,
"end": 666,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tuning the Decision Threshold",
"sec_num": "4."
},
{
"text": "We shall see below that the gap can be narrowed further by including the CLIR score in our optimization. As can be seen in Table 2 , even with the optimal threshold, the gain in AQWV for using the judgments is a small fraction of the upper bound. So the question is whether there is any other way to use the scores to get better results.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tuning the Decision Threshold",
"sec_num": "4."
},
{
"text": "In the previous section, we discussed the improvement in AQWV that we might get if we replace the relevance score for each document, produced by the CLIR system with the judgment score produced by the Triage analyst and used an acceptance threshold. But the CLIR relevance score also contains very useful information. We maintain that, in order to optimize E2E performance, we should make use of both CLIR and Triage scores in making the final decision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimizing End-to-End (E2E) Performance",
"sec_num": "5."
},
{
"text": "Our proposal is to combine the CLIR relevance and Triage judgment scores (analogous to what we normally do in system combination). A simple weighted linear combination",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimizing End-to-End (E2E) Performance",
"sec_num": "5."
},
{
"text": "Interpolation weight w Text Audio 0.0 (only AMT score) 64.3 55.0 0.3 65.6 57.3 0.7 65.3 57.9 1.0 (only CLIR score) 64.3 53.9 Oracle 73.1 64.6 Table 3 : Results for combining AMT score with CLIR score (scaled linearly to 1 to 5) as a function of the interpolation weight w. Best results are shown in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Optimizing End-to-End (E2E) Performance",
"sec_num": "5."
},
{
"text": "of the two scores for each document is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimizing End-to-End (E2E) Performance",
"sec_num": "5."
},
{
"text": "Combined score = w\u00d7CLIR score +(1\u2212w)\u00d7T riage score (9) where 0 \u2264 w \u2264 1. We then find the value of w that maximizes AQWV for a particular system and condition (text or audio). Before combining the scores, we first scale all the CLIR scores (for text and audio separately) linearly to occupy the same range as the Triage scores (1-5). In this way, this simple combination mechanism above might be applied to CLIR systems with different types of scores. (One could obviously use a more complex nonlinear combination or learn the optimal combination from a small amount of labeled data. But we wanted to make the point by keeping this really simple.) In Table 3 , we show the results of an E2E experiment using the results of the same CLIR/Triage experiment for Lithuanian reported above. We sweep weight w from 0 (only Triage score) to 1 (only CLIR score). For each value of w, we find the threshold on the combined score that gives the highest value of AQWV. The first row in the table (weight 0) are the same values shown in Table 2 for threshold 2, and the row with weight 1.0 are the AQWV values using CLIR scores only. As can be seen from this table, it is possible to improve on overall results by combining Triage and CLIR scores. The improvement for text is 1.3 points and 2.9 points for audio over the best AQWV values from using the optimal thresholds for AMT scores. By comparing the bold numbers in Table 3 with the Oracle numbers in Table 2 , we see that the gap has narrowed to about 7 points. In fairness, we should point out that the weight and the threshold were optimized on the same data on which we measure performance. In a proper procedure, we should estimate these 2 parameters on a held out tuning set. However, since we have 300 queries and 1000 returned documents, we do not believe the results would change much. As we can see in Table 2 , the performance does not even change very much between weights of 0.3 and 0.7. 
So we do not believe these results are unrealistic.",
"cite_spans": [],
"ref_spans": [
{
"start": 650,
"end": 657,
"text": "Table 3",
"ref_id": null
},
{
"start": 1024,
"end": 1031,
"text": "Table 2",
"ref_id": null
},
{
"start": 1408,
"end": 1415,
"text": "Table 3",
"ref_id": null
},
{
"start": 1443,
"end": 1450,
"text": "Table 2",
"ref_id": null
},
{
"start": 1854,
"end": 1861,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Optimizing End-to-End (E2E) Performance",
"sec_num": "5."
},
{
"text": "The simple experiments performed here show that, even though it is very difficult to improve on the CLIR result alone, it is possible to get some improvements if we use the scores in an appropriate way. Undoubtedly, there are better ways of combining the judgment and CLIR scores. These methods were just the simplest reasonable methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "One reason that the maximum benefit for discarding documents is that we use the same value of \u03b2 for optimizing the initial CLIR threshold and for scoring the final result after the Triage operation. If we had used a lower value of \u03b2 for the first stage, thereby returning more documents from the CLIR, there would be more relevant documents and there would be a chance for a higher final AQWV score. Of course, this would come at the cost of having to judge more documents in the Triage stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "We have examined the AQWV measure and the effect it has in a CLIR system with a human Triage component. We have shown that the nature of the measure in our system when optimized system results in a relatively small loss due to false alarms. This in turn, makes it difficult to obtain further gains by using human judgments to remove those false alarms. We showed that if human judgments are used, the scores of the judgments are most powerful if they are combined with all other scores in order to derive the most benefit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "The system used for all of these measurements was the FLAIR System within the MATERIAL program (Zhang et al., 2020) . So we thank all of the people involved in building that system. This work was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Defense US Air Force Research Laboratory contract number FA8650-17-C-9118.",
"cite_spans": [
{
"start": 95,
"end": 115,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "8."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The 2016 BBN georgian telephone speech keyword spotting system",
"authors": [
{
"first": "T",
"middle": [],
"last": "Alum\u00e4e",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Karakos",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Hsiao",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tsakalidis",
"suffix": ""
},
{
"first": "R",
"middle": [
"M"
],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "5755--5759",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alum\u00e4e, T., Karakos, D., Hartmann, W., Hsiao, R., Zhang, L., Nguyen, L., Tsakalidis, S., and Schwartz, R. M. (2017). The 2016 BBN georgian telephone speech key- word spotting system. In 2017 IEEE International Con- ference on Acoustics, Speech and Signal Processing, ICASSP 2017, New Orleans, LA, USA, March 5-9, 2017, pages 5755-5759. IEEE.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Score normalization and system combination for improved keyword spotting",
"authors": [
{
"first": "D",
"middle": [],
"last": "Karakos",
"suffix": ""
},
{
"first": "R",
"middle": [
"M"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tsakalidis",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ranjan",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Hsiao",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Saikumar",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Bulyko",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Makhoul",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Gr\u00e9zl",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sz\u00f6ke",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Vesel\u00fd",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lamel",
"suffix": ""
},
{
"first": "V",
"middle": [
"B"
],
"last": "Le",
"suffix": ""
}
],
"year": 2013,
"venue": "IEEE Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "210--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karakos, D., Schwartz, R. M., Tsakalidis, S., Zhang, L., Ranjan, S., Ng, T., Hsiao, R., Saikumar, G., Bulyko, I., Nguyen, L., Makhoul, J., Gr\u00e9zl, F., Hannemann, M., Karafi\u00e1t, M., Sz\u00f6ke, I., Vesel\u00fd, K., Lamel, L., and Le, V. B. (2013). Score normalization and system combina- tion for improved keyword spotting. In 2013 IEEE Work- shop on Automatic Speech Recognition and Understand- ing, Olomouc, Czech Republic, December 8-12, 2013, pages 210-215. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Introduction to Information Retrieval",
"authors": [
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manning, C. D., Raghavan, P., and Sch\u00fctze, H. (2008). In- troduction to Information Retrieval. Cambridge Univer- sity Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The 2019 bbn cross-lingual information retrieval system",
"authors": [
{
"first": "L",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Karakos",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Tarlin",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Akodes",
"suffix": ""
},
{
"first": "S",
"middle": [
"K"
],
"last": "Gouda",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bathool",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of LREC Workshop on Cross-Language Search and Summarization of Text and Speech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, L., Karakos, D., Hartmann, W., Srivastava, M., Tar- lin, L., Akodes, D., Gouda, S. K., Bathool, N., Zhao, L., Jiang, Z., Schwartz, R., and Makhoul, J. (2020). The 2019 bbn cross-lingual information retrieval system. In Proceedings of LREC Workshop on Cross-Language Search and Summarization of Text and Speech, Mar- seille, France, 2020.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "AQWV(CLIR) = Recall -Beta x pFA Recall \u2022 Loss due to pFA is typically between 8%-13% BBN Technologies Corp. All Rights Reserved. 1"
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "The AQWV vs. Recall values for 6 MATERIAL languages. The upper diagonal line represents AQWV = Recall. The lower diagonal line represents AQWV = Recall -10. Most languages fall near the lower line."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "A plot of the expected change in AQWV that would accompany a Triage operation with the specified FR/TR (False Rejection / True Rejection) behavior, as a function of the initial AQWV produced by the CLIR system. For reference, we show the initial CLIR for 6 languages."
}
}
}
}