{
"paper_id": "N04-1040",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:45:10.477705Z"
},
"title": "Multiple Similarity Measures and Source-Pair Information in Story Link Detection",
"authors": [
{
"first": "Francine",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {},
"email": "fchen@parc.com"
},
{
"first": "Ayman",
"middle": [],
"last": "Farahat",
"suffix": "",
"affiliation": {},
"email": "farahat@parc.com"
},
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": "",
"affiliation": {},
"email": "thorsten@brants.net"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "State-of-the-art story link detection systems, that is, systems that determine whether two stories are about the same event or linked, are usually based on the cosine-similarity measured between two stories. This paper presents a method for improving the performance of a link detection system by using a variety of similarity measures and using source-pair specific statistical information. The utility of a number of different similarity measures, including cosine, Hellinger, Tanimoto, and clarity, both alone and in combination, was investigated. We also compared several machine learning techniques for combining the different types of information. The techniques investigated were SVMs, voting, and decision trees, each of which makes use of similarity and statistical information differently. Our experimental results indicate that the combination of similarity measures and source-pair specific statistical information using an SVM provides the largest improvement in estimating whether two stories are linked; the resulting system was the best-performing link detection system at TDT-2002.",
"pdf_parse": {
"paper_id": "N04-1040",
"_pdf_hash": "",
"abstract": [
{
"text": "State-of-the-art story link detection systems, that is, systems that determine whether two stories are about the same event or linked, are usually based on the cosine-similarity measured between two stories. This paper presents a method for improving the performance of a link detection system by using a variety of similarity measures and using source-pair specific statistical information. The utility of a number of different similarity measures, including cosine, Hellinger, Tanimoto, and clarity, both alone and in combination, was investigated. We also compared several machine learning techniques for combining the different types of information. The techniques investigated were SVMs, voting, and decision trees, each of which makes use of similarity and statistical information differently. Our experimental results indicate that the combination of similarity measures and source-pair specific statistical information using an SVM provides the largest improvement in estimating whether two stories are linked; the resulting system was the best-performing link detection system at TDT-2002.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Story link detection, as defined in the Topic Detection and Tracking (TDT) competition sponsored by the DARPA TIDES program, is the task of determining whether two stories, such as news articles and/or radio broadcasts, are about the same event, or linked. In TDT an event is defined as \"something that happens at some specific time and place\" (TDT, 2002) . For example, a story about a tornado in Kansas in May and another story about a tornado in Nebraska in June should not be classified as linked because they are about different events, although they both fall under the same general \"topic\" of natural disasters. But a story about damage due to a tornado in Kansas and a story about the clean-up and repairs due to the same tornado in Kansas are considered linked events.",
"cite_spans": [
{
"start": 344,
"end": 355,
"text": "(TDT, 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the TDT link detection task, a link detection system is given a sequence of time-ordered sets of stories, where each set is from one news source. The system can \"look ahead\" N source files from the current source file being processed when deciding whether the current pair is linked. Because the TDT link detection task is focused on streams of news stories, one of the primary differences between link detection and the more traditional IR categorization task is that new events occur relatively frequently and comparisons of interest are focused on events that are not known in advance. One consequence of this is that the best-performing systems usually adapt to new input. Link detection is thought of as the basis for other event-based topic analysis tasks, such as topic tracking, topic detection, and first-story detection (TDT, 2002) .",
"cite_spans": [
{
"start": 833,
"end": 844,
"text": "(TDT, 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The DARPA TDT story link detection task requires identifying pairs of linked stories. The stories are originally in English, Mandarin, and Arabic. The sources include broadcast news and newswire. For the required story link detection task, the research groups tested their systems on a processed version of the data in which the story boundaries have been manually identified, the Arabic and Mandarin stories have been automatically translated to English, and the broadcast news stories have been converted to text by an automatic speech recognition (ASR) system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "A number of research groups have developed story link detection systems. The best current technology for link detection relies on the use of cosine similarity between document terms vectors with TF-IDF term weighting. In a TF-IDF model, the frequency of a term in a document (TF) is weighted by the inverse document frequency (IDF), the inverse of the number of documents containing a term. UMass (Allan et al., 2000) has examined a number of similarity measures in the link detection task, including weighted sum, language modeling and Kullback-Leibler divergence, and found that the cosine similarity produced the best results. More recently, in Lavrenko et al. (2002) , UMass found that the clarity similarity measure performed best for the link detection task. In this paper, we also examine a number of similarity measures, both separately, as in Allan et al. (2000) , and in combination. In the machine learning field, classifier combination has been shown to provide accuracy gains (e.g., Belkin et al.(1995) ; Kittler et al. (1998) ; Brill and Wu (1998) ; Dietterich (2000) ). Motivated by the performance improvement observed in these studies, we explored the combination of similarity measures for improving Story Link Detection.",
"cite_spans": [
{
"start": 397,
"end": 417,
"text": "(Allan et al., 2000)",
"ref_id": "BIBREF0"
},
{
"start": 648,
"end": 670,
"text": "Lavrenko et al. (2002)",
"ref_id": "BIBREF17"
},
{
"start": 852,
"end": 871,
"text": "Allan et al. (2000)",
"ref_id": "BIBREF0"
},
{
"start": 996,
"end": 1015,
"text": "Belkin et al.(1995)",
"ref_id": "BIBREF2"
},
{
"start": 1018,
"end": 1039,
"text": "Kittler et al. (1998)",
"ref_id": "BIBREF16"
},
{
"start": 1042,
"end": 1061,
"text": "Brill and Wu (1998)",
"ref_id": "BIBREF5"
},
{
"start": 1064,
"end": 1081,
"text": "Dietterich (2000)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "CMU hypothesized that the similarity between a pair of stories is influenced by the source of each story. For example, sources in a language that is translated to English will consistently use the same terminology, resulting in greater similarity between linked documents with the same native language. In contrast, sources from radio broadcasts may be transcribed much less consistently than text sources due to recognition errors, so that the expected similarity of a radio broadcast and a text source is less than that of two text sources. They found that similarity thresholds that were dependent on the type of the story-pair sources (e.g., English/non-English language and broadcast news/newswire) improved story-link detection results by 15% (Carbonell et al., 2001) . We also investigate how to make use of differences in similarity that are dependent on the types of sources composing a story pair. We refer to the statistics characterizing story pairs with the same source types as source-pair specific information. In contrast to the source-specific thresholds used by CMU, we normalize the similarity measures based on the sourcepair specific information, simultaneously with combining different similarity measures.",
"cite_spans": [
{
"start": 749,
"end": 773,
"text": "(Carbonell et al., 2001)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "Other researchers have successfully used machine learning algorithms such as support vector machines (SVM) (Cristianini and Shawe-Taylor, 2000; Joachims, 1998) and boosted decision stumps (Schapire and Singer, 2000) for text categorization. SVM-based systems, such as that described in (Joachims, 1998) , are typically among the best performers for the categorization task. However, attempts to directly apply SVMs to TDT tasks such as tracking and link detection have not been successful; this has been attributed in part to the lack of enough data for training the SVM. In these systems, the input was the set of term vectors characterizing each document, similar to the input used for the categorization task. In this paper, we present a method for using SVMs to improve link detection performance by combining heterogeneous input features, composed of multiple similarity metrics and statistical characterization of the story sources. We additionally examine the utility of the statistical information by comparing against decision trees, where the statistical characterization is not utilized. We also examine the utility of the similarity values by comparing against voting, where the classification based on each similarity measure is combined.",
"cite_spans": [
{
"start": 107,
"end": 143,
"text": "(Cristianini and Shawe-Taylor, 2000;",
"ref_id": "BIBREF9"
},
{
"start": 144,
"end": 159,
"text": "Joachims, 1998)",
"ref_id": "BIBREF15"
},
{
"start": 188,
"end": 215,
"text": "(Schapire and Singer, 2000)",
"ref_id": "BIBREF18"
},
{
"start": 286,
"end": 302,
"text": "(Joachims, 1998)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "To determine whether two documents are linked, state-of-the-art link detection systems perform three primary processing steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3"
},
{
"text": "1. preprocessing to create a normalized set of terms for representing each document as a vector of term counts, or term vector; 2. adapting model parameters (i.e., IDF) as new story sets are introduced and computing the similarity of the term vectors; 3. determining whether a pair of stories is linked based on the similarity score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3"
},
{
"text": "In this paper, we describe our investigations in improving the basic story link detection systems by using source specific information and combining a number of similarity measures. As in the basic story link detection system, a similarity score between two stories is computed. In contrast to the basic story link detection system, a variety of similarity measures is computed and the prediction models use source-pair-specific statistics (i.e., median, average, and variance of the story pair similarity scores). We do this in a post-processing step using machine learning classifiers (i.e., SVMs, decision trees, or voting) to produce a decision with an associated confidence score as to whether a pair of stories are linked. Source-pair-specific statistics and multiple similarity measures are used as input features to the machine learning based techniques in post-processing the similarity scores. In the next sections, we describe the components and processing performed by our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3"
},
{
"text": "For preprocessing, we tokenize the data, remove stopwords, replace spelled-out numbers by digits, replace the tokens by their stems using the Inxight LinguistX morphological analyzer, and then generate a term-frequency vector to represent each story. For text where the original source is Mandarin, some of the terms are untranslated. In our experiments, we retain these terms because many are content words. Both the training data and test data are preprocessed in the same way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1"
},
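The preprocessing pipeline described above (tokenize, remove stopwords, map spelled-out numbers to digits, stem, build a term-frequency vector) can be sketched as follows. The paper uses the Inxight LinguistX morphological analyzer and a 577-term stoplist; the toy stemmer, stoplist, and number map here are illustrative stand-ins, not the authors' resources.

```python
from collections import Counter

STOPLIST = {"the", "a", "of", "in", "is", "and"}          # stand-in for the 577-term stoplist
NUMBER_WORDS = {"thirty": "30", "one": "1", "two": "2"}   # spelled-out numbers -> digits

def toy_stem(token):
    # Trivial suffix stripper standing in for a real morphological analyzer.
    for suffix in ("ing", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def term_vector(story):
    # Tokenize, drop stopwords, normalize numbers, stem, and count terms.
    tokens = story.lower().split()
    tokens = [NUMBER_WORDS.get(t, t) for t in tokens if t not in STOPLIST]
    return Counter(toy_stem(t) for t in tokens)

vec = term_vector("thirty tornadoes damaging the towns")
```

Both training and test stories would pass through the same function, matching the paper's note that the two sets are preprocessed identically.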
{
"text": "Our base stoplist is composed of 577 terms. We extend the stoplist with terms that are represented differently by ASR systems and text documents. For example, in the broadcast news documents in the TDT collection \"30\" is spelled out as \"thirty\" and \"CNN\" is represented as three separate tokens \"C\", \"N\", and \"N\". To handle these differences, an \"ASR stoplist\" was automatically created. Chen et al. (2003) found that the use of an enhanced stoplist, formed from the union of a base stoplist and ASR stoplist, was very effective in improving performance and empirically better than normalizing ASR abbreviations.",
"cite_spans": [
{
"start": 388,
"end": 406,
"text": "Chen et al. (2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stop Words",
"sec_num": "3.1.1"
},
{
"text": "The training data is used to compute the initial document frequency over the corpus for each term. The document frequency of term $w$,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-specific Incremental TF-IDF Model",
"sec_num": "3.1.2"
},
{
"text": "$df_t(w)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-specific Incremental TF-IDF Model",
"sec_num": "3.1.2"
},
{
"text": "is defined to be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-specific Incremental TF-IDF Model",
"sec_num": "3.1.2"
},
{
"text": "$df_t(w) = |\\{ d \\in D_t : w \\in d \\}|$, where $D_t$ is the set of documents seen up to time $t$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-specific Incremental TF-IDF Model",
"sec_num": "3.1.2"
},
{
"text": "Separate document term counts, $d(w)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-specific Incremental TF-IDF Model",
"sec_num": "3.1.2"
},
{
"text": ", and document counts, $N$, are computed for each type of source. Our similarity calculations of documents are based on an incremental TF-IDF model. Term vectors are created for each story, and the vectors are weighted by the inverse document frequency, IDF, i.e., $\\log(N/df(w))$. When a new set of documents, $C_{t+1}$, is added to the model, the document term counts are updated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-specific Incremental TF-IDF Model",
"sec_num": "3.1.2"
},
{
"text": "$df_{t+1}(w) = df_t(w) + d_{C_{t+1}}(w)$, where $d_{C_{t+1}}(w)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-specific Incremental TF-IDF Model",
"sec_num": "3.1.2"
},
{
"text": "denotes the document count for term $w$ in the newly added set of documents",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-specific Incremental TF-IDF Model",
"sec_num": "3.1.2"
},
{
"text": "$C_{t+1}$. The initial document counts $df_0(w)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-specific Incremental TF-IDF Model",
"sec_num": "3.1.2"
},
{
"text": "were generated from a training set. In a static TF-IDF model, new words (i.e., words that did not occur in the training set) are ignored in further computations. An incremental TF-IDF model uses the new vocabulary in similarity calculations, which is an advantage for the TDT task because new events often contain new vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-specific Incremental TF-IDF Model",
"sec_num": "3.1.2"
},
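The incremental document-frequency update $df_{t+1}(w) = df_t(w) + d_{C_{t+1}}(w)$ can be sketched directly; the function and variable names here are ours, not the paper's code.

```python
from collections import Counter

def update_df(df, new_docs):
    """Incrementally update document frequencies.

    df: Counter mapping term -> number of documents containing it.
    new_docs: the newly added set C_{t+1}, as a list of token sets.
    """
    for doc in new_docs:
        for term in set(doc):   # count each term once per document
            df[term] += 1
    return df

df = Counter({"tornado": 3, "kansas": 1})
update_df(df, [{"tornado", "nebraska"}, {"nebraska", "repairs"}])
```

Note that new vocabulary ("nebraska", "repairs") enters the model automatically, which is the advantage over a static TF-IDF model noted above.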
{
"text": "Since very low frequency terms tend to be uninformative, we set a threshold on the document frequency; terms whose document frequency falls below it are excluded from the similarity computations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-specific Incremental TF-IDF Model",
"sec_num": "3.1.2"
},
{
"text": "The document frequencies,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Weighting",
"sec_num": "3.1.3"
},
{
"text": "$df(w)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Weighting",
"sec_num": "3.1.3"
},
{
"text": ", the number of documents containing term $w$, and document term frequencies,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Weighting",
"sec_num": "3.1.3"
},
{
"text": "$f(d,w)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Weighting",
"sec_num": "3.1.3"
},
{
"text": ", are used to calculate TF-IDF based weights",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Weighting",
"sec_num": "3.1.3"
},
{
"text": "$w(d,w)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Weighting",
"sec_num": "3.1.3"
},
{
"text": "for the terms in a document (or story) $d$:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Weighting",
"sec_num": "3.1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w(d,w) = \\frac{1}{Z(d)} \\, f(d,w) \\log \\frac{N}{df(w)}",
"eq_num": "(1)"
}
],
"section": "Term Weighting",
"sec_num": "3.1.3"
},
{
"text": "where $N$ is the total number of documents and $Z(d)$ is a normalization value. For the Hellinger, Tanimoto, and clarity measures, it is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Weighting",
"sec_num": "3.1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Z(d) = \\sum_w f(d,w) \\log \\frac{N}{df(w)}",
"eq_num": "(2)"
}
],
"section": "Term Weighting",
"sec_num": "3.1.3"
},
{
"text": "For cosine distance it is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Weighting",
"sec_num": "3.1.3"
},
{
"text": "$Z(d) = \\sqrt{ \\sum_w \\left( f(d,w) \\log \\frac{N}{df(w)} \\right)^2 }$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Weighting",
"sec_num": "3.1.3"
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Weighting",
"sec_num": "3.1.3"
},
{
"text": "In addition to the cosine similarity measure used in baseline systems, we compute story pair similarity over a set of measures, motivated by the accuracy gains obtained by others when combining classifiers (see Section 2). A vector composed of the similarity values is created and is given to a trained classifier, which emits a score. The score can be used as a measure of confidence that the story pairs are linked. The similarity measures that we examined are cosine, Hellinger, Tanimoto, and clarity. Each of the measures captures a different aspect of the similarity of the terms in a document. Classifier combination has been observed to perform best when the classifiers produce independent judgments. The cosine distance between the word distribution for documents \u00a1 X and \u00a1 is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "$sim(d_1, d_2) = \\sum_w w(d_1, w) \\cdot w(d_2, w)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "This measure has been found to perform well and was used by all the TDT 2002 link detection systems (unpublished presentations at the TDT2002 workshop). In contrast to the Euclidean distance based cosine measure, the Hellinger measure is a probabilistic measure. The Hellinger measure between the word distributions for documents \u00a1 X and \u00a1 is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "$sim(d_1, d_2) = \\sum_w \\sqrt{ w(d_1, w) \\cdot w(d_2, w) }$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "where $w$ ranges over the terms that occur in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "$d_1$ or $d_2$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
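Assuming term weights that are already normalized by the measure-appropriate $Z(d)$, the cosine and Hellinger similarities above reduce to short sums over shared terms. A minimal sketch, with documents represented as term-to-weight dicts (our representation, not the paper's):

```python
import math

def cosine(w1, w2):
    # sim(d1, d2) = sum over shared terms of w(d1, t) * w(d2, t),
    # assuming each weight vector was normalized to unit Euclidean length.
    return sum(w1[t] * w2[t] for t in w1.keys() & w2.keys())

def hellinger(w1, w2):
    # sim(d1, d2) = sum over shared terms of sqrt(w(d1, t) * w(d2, t)),
    # assuming each weight vector was normalized to sum to 1.
    return sum(math.sqrt(w1[t] * w2[t]) for t in w1.keys() & w2.keys())
```

With identical, properly normalized inputs both measures return 1, which is a convenient sanity check.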
{
"text": "In Brants et al. (2002) , the Hellinger measure was used in a text segmentation application and was found to be superior to the cosine similarity.",
"cite_spans": [
{
"start": 3,
"end": 23,
"text": "Brants et al. (2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "The Tanimoto measure (Duda and Hart, 1973) is the ratio of the number of terms shared by two documents to the number of terms appearing in either document. We modified it to use frequency counts instead of a binary indicator of whether a term is present, and computed it as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "$sim(d_1, d_2) = \\frac{ \\sum_w w(d_1, w) \\, w(d_2, w) }{ \\sum_w w(d_1, w)^2 + \\sum_w w(d_2, w)^2 - \\sum_w w(d_1, w) \\, w(d_2, w) }$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
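The frequency-weighted Tanimoto measure above translates directly into code; as before, the dict-of-weights representation is an assumption of this sketch.

```python
def tanimoto(w1, w2):
    # Weighted Tanimoto: shared mass over union mass,
    # dot / (||w1||^2 + ||w2||^2 - dot).
    dot = sum(w1[t] * w2[t] for t in w1.keys() & w2.keys())
    n1 = sum(v * v for v in w1.values())
    n2 = sum(v * v for v in w2.values())
    return dot / (n1 + n2 - dot)
```

Identical documents score 1, fully disjoint documents score 0, matching the binary-set intuition of |A ∩ B| / |A ∪ B|.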
{
"text": "The clarity measure was introduced by Croft et al. (2001) and shown to improve link detection performance by Lavrenko et al. (2002) . It gets its name from the distance to general English, which is called Clarity. We used a symmetric version that is computed as:",
"cite_spans": [
{
"start": 106,
"end": 128,
"text": "Lavrenko et al. (2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "$sim(d_1, d_2) = -KL(p(w|d_1) \\| p(w|d_2)) + KL(p(w|d_1) \\| p(w|GE)) - KL(p(w|d_2) \\| p(w|d_1)) + KL(p(w|d_2) \\| p(w|GE))$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "where $GE$ is the probability distribution of words for \"general English\" as derived from the training corpus, and KL is the Kullback-Leibler divergence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "$KL(p \\| q) = \\sum_w p(w) \\log \\frac{p(w)}{q(w)}$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "In computing the clarity measure, the term frequencies were smoothed with the General English model using a weight of 0.01. This enables the KL divergence to be defined when",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "$p(w|d_1)$ or $p(w|d_2)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "is 0. The idea behind the clarity measure is to give credit to similar pairs of documents with term distributions that are very different from general English, and to discount similar pairs of documents with term distributions that are close to general English, which can be interpreted as being nontopical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
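A sketch of the symmetric clarity computation: KL divergences between document word distributions and a "general English" model, with each document distribution smoothed toward general English using the 0.01 weight mentioned above so the divergences stay defined. The distribution representation and function names are ours.

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence KL(p || q) over p's support.
    return sum(p[w] * math.log(p[w] / q[w]) for w in p if p[w] > 0)

def smooth(p, ge, lam=0.01):
    # Mix a document distribution with the general-English model.
    return {w: (1 - lam) * p.get(w, 0.0) + lam * ge[w] for w in ge}

def clarity_sim(p1, p2, ge):
    # Symmetric clarity: penalize divergence between the documents,
    # reward divergence of each document from general English.
    s1, s2 = smooth(p1, ge), smooth(p2, ge)
    return (-kl(s1, s2) + kl(s1, ge)) + (-kl(s2, s1) + kl(s2, ge))
```

Two identical documents far from general English score highly; two documents that both look like general English score near zero, capturing the "non-topical" discounting described above.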
{
"text": "We also defined the \"source-pair normalized cosine\" distance as the cosine distance normalized by dividing by the running median of the similarity values corresponding to the source-pair:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "$sim_{norm}(d_1, d_2) = \\frac{ \\sum_w w(d_1, w) \\, w(d_2, w) }{ runmed(sim_{s_1, s_2}) }$, where $runmed(sim_{s_1, s_2})$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "is the running median of the similarity values of all processed story pairs where the source of $d_1$ is $s_1$ and the source of $d_2$ is $s_2$. This is a finer-grained use of source-pair information than what was used by CMU, which used decision thresholds conditioned on whether or not the sources were cross-language or cross-ASR/newswire (Carbonell et al., 2001) .",
"cite_spans": [
{
"start": 295,
"end": 319,
"text": "(Carbonell et al., 2001)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
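The source-pair normalized cosine divides each score by the running median of previously seen scores for the same source pair. A simple sketch follows; the paper does not specify how the running median is maintained, so this version just recomputes it over the stored history.

```python
import statistics

class SourcePairNormalizer:
    def __init__(self):
        # Maps a (source_1, source_2) pair to the scores seen so far.
        self.history = {}

    def normalize(self, source_pair, score):
        # Divide by the running median for this source pair,
        # then record the raw score for future medians.
        seen = self.history.setdefault(source_pair, [])
        norm = score / statistics.median(seen) if seen else score
        seen.append(score)
        return norm
```

The first score for a source pair passes through unchanged, since there is no history yet to take a median over.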
{
"text": "In a base system employing a single similarity measure, the system computes the similarity measure for each story pair, which is given to the evaluation program (see Section 4.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.2"
},
{
"text": "We examined a number of methods for improving link detection: (1) comparing the 5 similarity measures alone; (2) combining subsets of similarity scores using a support vector machine (SVM); (3) combining source-pair statistics with the corresponding similarity score using an SVM, for each of the 5 similarity measures; (4) combining subsets of similarity scores with source-pair information using an SVM; and (5) comparing SVMs, decision trees, and majority voting as alternative methods for combining scores. In contrast to earlier attempts that applied the machine learning categorization paradigm of using the term vectors as input features (Joachims, 1998) to the link detection task, we believed that the use of document term vectors is too fine-grained for the SVMs to develop good generalization with a limited amount of labeled training data. Furthermore, the use of terms as input to a learner, as was done in the categorization task (see Section 2), would require frequent retraining of a link detection system since new stories often discuss new topics and introduce new terms. For our work, we used more general characteristics of a document pair, the similarity between a pair of documents, as input to the machine learning systems. Thus, in contrast to the term-based systems, the machine learning techniques are used in a post-processing step after the similarity scores are computed. Additionally, to normalize differences in expected similarity among pairs of source types, source-pair statistics are used as features in deciding whether two stories are linked and in estimating the confidence of the decision.",
"cite_spans": [
{
"start": 627,
"end": 643,
"text": "(Joachims, 1998)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Story Link Detection Performance",
"sec_num": "3.3"
},
{
"text": "In the next sections, we describe our methods for combining the similarity scores using machine learning techniques, and for combining the similarity scores with source-pair specific information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Story Link Detection Performance",
"sec_num": "3.3"
},
{
"text": "We used an SVM to combine sets of similarity measures for predicting whether two stories are linked because theoretically it has good generalization properties (Cristianini and Shawe-Taylor, 2000) , it has been shown to be a competitive classifier for a variety of tasks (e.g., (Cristianini and Shawe-Taylor, 2000; Gestal et al., 2000)), and it makes full use of the similarity scores and statistical characterizations. We also empirically show in Section 4.3.2 that it provides better performance than decision trees and voting for this task. The SVM is first trained on a set of labeled data where the input features are the sets of similarity measures and the class labels are the manually assigned decisions as to whether a pair of documents are linked. The trained model is then used to automatically decide whether a new pair of stories are linked. For the support vector machine, we used SVM-light (Joachims, 1999) . A polynomial kernel was used in all the reported SVM experiments. In addition to making a decision as to whether two stories are linked, we use the value of the decision function produced by SVM-light as a measure of confidence. Training SVM-light on a 20,000 story-pair training corpus usually requires less than five minutes on a 1.8 GHz Linux machine, although the time is quite variable depending on the corpus characteristics. However, once the system is trained, testing new story pair similarities requires less than 1 min for over 20,000 story pairs.",
"cite_spans": [
{
"start": 160,
"end": 196,
"text": "(Cristianini and Shawe-Taylor, 2000)",
"ref_id": "BIBREF9"
},
{
"start": 278,
"end": 314,
"text": "(Cristianini and Shawe-Taylor, 2000;",
"ref_id": "BIBREF9"
},
{
"start": 315,
"end": 335,
"text": "Gestal et al., 2000)",
"ref_id": null
},
{
"start": 905,
"end": 921,
"text": "(Joachims, 1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Similarity Scores with SVMs",
"sec_num": "3.3.1"
},
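As a stand-in for SVM-light (which the paper uses with a polynomial kernel), a minimal linear SVM trained by sub-gradient descent on the hinge loss illustrates the idea: per-pair similarity scores become a feature vector, and the decision value doubles as a confidence score. The data, feature set, and hyperparameters below are illustrative, not the paper's.

```python
def train_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Linear SVM via hinge-loss sub-gradient descent (Pegasos-style sketch)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:                       # violates the margin: push toward yi
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:                                # satisfied: only regularize
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def decision(w, b, x):
    # Signed distance-like decision value, used as a confidence score.
    return sum(wj * xj for wj, xj in zip(w, x)) + b

# Toy training data: [cosine, hellinger] scores; linked pairs (label 1) score higher.
X = [[0.9, 0.8], [0.85, 0.9], [0.2, 0.1], [0.15, 0.3]]
y = [1, 1, -1, -1]
w, b = train_svm(X, y)
```

In the paper's setup the feature vector would also include the source-pair statistics described in the next section, and the kernel would be polynomial rather than linear.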
{
"text": "Source-pair-specific information that statistically characterizes each of the similarity measures is used in a postprocessing step. In particular, we compute statistics from the training data similarity scores for different combinations of source modalities and languages. The modality pairs that we considered are: asr:asr, asr:text, and text:text, where asr represents \"automatic speech recognition\". The combinations of languages that we used are: English:English, English:Arabic, English:Mandarin, Arabic:Arabic, Arabic:Mandarin, Mandarin:Mandarin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-Pair Specific Information",
"sec_num": "3.3.2"
},
{
"text": "The rows of Table 1 represent possible combinations of source language for the story pairs; the columns represent different combinations of source modality. The alphabetic characters in the cells represent the pair similarity statistics of mean, median, and variance for that condition obtained from the training corpus. For conditions where training data was not available, we used the statistics of a coarser grouping. For example, if there is no data for the cell with languages Mandarin:Arabic and modality pair asr:asr, we would use statistics from the language pair non-English:non-English and modality pair asr:asr.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Source-Pair Specific Information",
"sec_num": "3.3.2"
},
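The backoff described above can be sketched as follows (all table values and group names are hypothetical; the paper's actual statistics come from the TDT training corpora):

```python
# Per-(language pair, modality pair) statistics: (mean, median, variance)
# of training-set similarity scores.  Values here are invented.
stats = {
    (("English", "English"), ("text", "text")): (0.31, 0.29, 0.012),
    (("nonEng", "nonEng"), ("asr", "asr")): (0.22, 0.20, 0.019),
}

def coarsen(lang):
    """Collapse all non-English languages into one coarser group."""
    return lang if lang == "English" else "nonEng"

def lookup(lang_pair, mod_pair):
    """Return (mean, median, variance), backing off to the coarser
    language grouping when the exact cell has no training data."""
    key = (tuple(sorted(lang_pair)), tuple(sorted(mod_pair)))
    if key in stats:
        return stats[key]
    coarse = tuple(sorted(coarsen(l) for l in lang_pair))
    return stats[(coarse, tuple(sorted(mod_pair)))]

# Mandarin:Arabic asr:asr has no cell of its own, so the
# non-English:non-English asr:asr statistics are used instead.
print(lookup(("Mandarin", "Arabic"), ("asr", "asr")))
```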
{
"text": "Prior to use in link detection, an SVM is trained on a set of features computed for each story pair. These include the similarity measures described in Section 3.2 and corresponding source-pair specific statistics (average, median and variance) for the similarity measures. The motivation for using the statistical values is to inform the SVM about the type of source pairs that are being considered. Rather than using categorical labels, the sourcepair statistics provide a natural ordering to the source-pair types and can be used for normalization. When a new pair of stories is post-processed, the computed similarity measures and the corresponding source-pair statistics are used as input to the trained SVM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-Pair Specific Information",
"sec_num": "3.3.2"
},
{
"text": "In addition to SVMs, we investigated the utility of decision trees (Breiman et al., 1984) and majority voting (Kittler et al., 1998) as techniques to combine similarity measures and statistical information in a post-processing step. The simplest method that we examined for combining similarity scores is to create a separate classifier for each similarity measure and then classify based a combination of the votes of the different classifiers (Kittler et al., 1998) . This method does not utilize statistical information. The single measure classifiers use an empirically determined threshold based on training data.",
"cite_spans": [
{
"start": 67,
"end": 89,
"text": "(Breiman et al., 1984)",
"ref_id": "BIBREF4"
},
{
"start": 110,
"end": 132,
"text": "(Kittler et al., 1998)",
"ref_id": "BIBREF16"
},
{
"start": 445,
"end": 467,
"text": "(Kittler et al., 1998)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other Methods for Combining Similarity Scores",
"sec_num": "3.3.3"
},
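A minimal sketch of this voting scheme, with hypothetical thresholds (the paper's per-measure thresholds were determined empirically from training data):

```python
# One thresholded classifier per similarity measure; the final link/no-link
# decision is the majority vote.  Threshold values are invented.
thresholds = {"cos": 0.30, "hellinger": 0.25, "tanimoto": 0.20}

def vote_linked(sims):
    """sims: dict mapping measure name -> similarity score.  Returns True
    when a majority of the per-measure classifiers call the pair linked."""
    votes = sum(1 for m, t in thresholds.items() if sims[m] >= t)
    return votes > len(thresholds) / 2

# Two of three measures exceed their thresholds, so the pair is linked.
print(vote_linked({"cos": 0.4, "hellinger": 0.1, "tanimoto": 0.35}))
```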
{
"text": "Decision trees and SVMs are classifiers that use the similarity scores directly. Decision trees such as C4.5 easily handle categorical data. In our experiments, we noted that although source-pair specific statistics were used as an input feature to the decision tree, the decision trees treated the source-pair based statistical information as categorical features. For the decision trees we used the WEKA implementation of C4.5 (Witten and Frank, 1999) .",
"cite_spans": [
{
"start": 429,
"end": 453,
"text": "(Witten and Frank, 1999)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other Methods for Combining Similarity Scores",
"sec_num": "3.3.3"
},
{
"text": "We conducted a set of experiments to compare the utility of combining similarity measures and the use of normalization statistics. We also compared the utility of different statistical learners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For our studies, we used corpora developed by the LDC for the TDT tasks (Cieri et al, 2003) . The TDT3 corpus contains 40,000 news articles and broadcast news stories with 120 labeled events in English, Mandarin and Arabic from October through December 1998. For our comparative evaluations, we initialized the document term counts and document counts using the TDT2 data (from TDT 1998). Our \"post-processing\" system was trained on the TDT3pub partition of the TDT2001 story pairs, and tested on the TDT3unp partition of the TDT2001 test story pairs. The source-pair statistics were computed from the linked story pairs in the TDT2002 dry run test set. There are 20,966, 27,541, and 20,191 labeled story pairs in TDT3pub, TDT2002 dry run, and TDT3unp, respectively. The preprocessing and similarity computations do not require training, although adaptation is performed by incrementally updating the document counts, document frequencies, and the source-specific similarities, and using the updated values in the computations. The training data is used to compute similarity data for training the post-processing systems. ",
"cite_spans": [
{
"start": 72,
"end": 91,
"text": "(Cieri et al, 2003)",
"ref_id": "BIBREF8"
},
{
"start": 664,
"end": 690,
"text": "20,966, 27,541, and 20,191",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "4.1"
},
{
"text": "The goal of link detection is to minimize the cost, or penalty, due to errors by the system. The TDT tasks are evaluated by computing a \"detection cost\":",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "C_Det = C_Miss * P_Miss * P_target + C_FA * P_FA * P_non-target, where C_Miss is the cost of a miss, P_Miss is the estimated probability of a miss, P_target is the prior probability that a pair of stories are linked, C_FA is the cost of a false alarm, P_FA is the estimated probability of a false alarm, and P_non-target is the prior probability that a pair of stories are not linked. A miss occurs when a linked story pair is not identified as linked by the system. A false alarm occurs when a pair of stories that are not linked are identified as linked by the system. A target is a pair of linked stories; conversely, a non-target is a pair of stories that are not linked. For the link detection task these parameters are set as follows: C_Miss is 1.0, P_target is 0.02, and C_FA is 0.1. The cost for each topic is equally weighted (i.e., the cost is topic-weighted, rather than story-weighted) and normalized so that for a given system, \"(C_Det)_norm can be no less than one without extracting information from the source data\" (TDT, 2002):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "(C_Det)_norm(t) = C_Det(t) / min(C_Miss * P_target, C_FA * P_non-target) and C_Det = (1/|T|) * sum_t (C_Det)_norm(t), where the sum is over topics t. A detection curve (DET curve) is computed by sweeping a threshold over the range of scores, and the minimum cost over the DET curve is identified as the minimum detection cost or min DET. The topic-weighted DET cost, or score, is dependent on both a good minimum cost over the DET curve, and a good method for selecting an operating point, which is usually implemented by selecting a threshold. A system with a very low min DET score can have a much larger topic-weighted DET score. Therefore, we focus on the minimum DET score for our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
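As a worked sketch of the cost computation in this section (hypothetical scores and labels; not the paper's evaluation code), the normalized detection cost and a minimum-cost threshold sweep can be written as:

```python
# TDT link detection parameters from the task definition.
C_MISS, C_FA, P_TARGET = 1.0, 0.1, 0.02

def norm_cost(p_miss, p_fa):
    """Normalized detection cost; a trivial system scores 1.0."""
    c = C_MISS * p_miss * P_TARGET + C_FA * p_fa * (1 - P_TARGET)
    return c / min(C_MISS * P_TARGET, C_FA * (1 - P_TARGET))

def min_det(scores, labels):
    """Sweep a decision threshold over all observed scores and return the
    minimum normalized cost (the min DET score, story-weighted here)."""
    targets = sum(labels)
    nontargets = len(labels) - targets
    best = float("inf")
    for th in sorted(set(scores)):
        misses = sum(1 for s, l in zip(scores, labels) if l and s < th)
        fas = sum(1 for s, l in zip(scores, labels) if not l and s >= th)
        best = min(best, norm_cost(misses / targets, fas / nontargets))
    return best

# Perfectly separated scores admit a threshold with no misses or false alarms.
print(min_det([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
```

Note that declaring every pair linked gives p_miss = 0 and p_fa = 1, i.e. a normalized cost of 4.9, while declaring none linked gives exactly 1.0, matching the "no less than one" normalization.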
{
"text": "We present results comparing the utility of different similarity measures and use of source-pair statistics, as well as different learners for combining the similarity measures and source-pair statistics. Our system was the best performing Link Detection system at TDT2002. We cannot compare our results with the other TDT2002 Link Detection systems because participants in TDT agree not to publish results from another site. We did not participate in TDT2001, but can compare our best system on the TDT2001 test data (TDT3unp) against the results of other TDT2001 systems (we extracted the Primary Link Detection results from slides from the TDT 2001 workshop, available (as of Mar 11, 2004) at: http://www.itl.nist.gov/iaui/894.01/tests/tdt/tdt2001/Pape rPres/Nist-pres/NIST-presentation-v6 files/v3 document .htm. These results are shown in Table 2 . The minimum cost for our system, , is less (better) than that of the TDT2001 systems. For this comparison, we set the bias parameter to 0.2, which reflects the expected number of linked stories computed from the training data. For the following comparative results, we set the bias parameter to reflect the probability of a linked story specified in the task definition (TDT2002), which resulted in a somewhat higher cost.",
"cite_spans": [],
"ref_spans": [
{
"start": 844,
"end": 851,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In this section, the effect of combining similarity metrics using an SVM and the effect of using source-pair information is examined. The results in Table 3 are divided into four sections. The upper sections show performance as measured by min DET for single similarity met-rics and the lower sections show performance for combined similarity metrics using an SVM. The columns labeled \"source-pair info used?\" indicate whether sourcepair specific statistics were used as input features to the SVM. The cosine, normalized cosine, Hellinger, Tanimoto, and clarity similarity measures are represented as \"cos\", \"normcos\", \"Hel\", \"Tan\", and \"cla\", respectively. The baseline model for comparison is the normalized cosine similarity without source-pair information (bolded), which is very similar to the most successful story link detection models (Carbonell et al., 2001; . To assess whether the observed differences were significant, we compared models at the .005 significance level using a paired two-sided t-test where the data was randomly partitioned into 10 mutually exclusive sets.",
"cite_spans": [
{
"start": 843,
"end": 867,
"text": "(Carbonell et al., 2001;",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 149,
"end": 156,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Comparison of Similarity Measures and Utility of Source-Pair Information",
"sec_num": "4.3.1"
},
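The paired two-sided t-test over 10 mutually exclusive partitions described above can be sketched as follows (the per-partition cost values are hypothetical, not the paper's; 3.690 is the two-sided critical t value for alpha = .005 with 9 degrees of freedom):

```python
import math

# Hypothetical per-partition min DET costs for two systems, one value per
# mutually exclusive data split (invented for illustration).
baseline = [0.275, 0.271, 0.280, 0.269, 0.274, 0.278, 0.272, 0.276, 0.270, 0.277]
combined = [0.245, 0.240, 0.248, 0.239, 0.242, 0.246, 0.243, 0.244, 0.238, 0.247]

def paired_t(a, b):
    """Paired t statistic over matched samples a and b."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

t = paired_t(baseline, combined)
# Two-sided critical value for alpha = .005 with n-1 = 9 degrees of freedom.
print(abs(t) > 3.690)
```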
{
"text": "In the upper sections, note that the clarity measure with source-pair specific statistics exhibits the best performance of the five measures, and is competitive with the combination measures in the lower right of the table. Note that the best-performing combination (italicized) did not include clarity, which may be due in part to redundancy with the other measures (Kittler et al., 1998) . Compared to the normalized cosine performance of 0.2732, the improved performance of the cosine and normalized cosine measures when source-pair specific information is used (0.2532 and 0.2533, respectively; p .005 for both comparisons) indicates that simple threshold normalization by the running mean is not optimal.",
"cite_spans": [
{
"start": 367,
"end": 389,
"text": "(Kittler et al., 1998)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Similarity Measures and Utility of Source-Pair Information",
"sec_num": "4.3.1"
},
{
"text": "Comparison of the upper and lower sections of the table indicates that combination of similarity measures generally yields somewhat improved performance over single similarity link detection systems; the difference between the best upper model vs the best lower model, i.e., \"cla\"' vs \"cos, normcos, Hel, Tan, cla\" without source-pair info, and \"cla\" vs \"cos, normcos, Hel, Tan\" with source-pair info, was significant at p .005 for both comparisons. And comparison of the left and right sections of the table indicates that the use of source-pair specific statistics noticeably improves performance (all models significant at p .005). The lower right section of Table 3 shows a generally modest improvement over the best single metric (i.e., clarity) when using a combination of features with source-pair information.",
"cite_spans": [],
"ref_spans": [
{
"start": 662,
"end": 669,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Comparison of Similarity Measures and Utility of Source-Pair Information",
"sec_num": "4.3.1"
},
{
"text": "These results indicate that although combination of similarity measures improves performance, the use of source-pair specific statistics are a larger factor in improving performance. The SVM effectively uses the the source-pair information to normalize and combine the scores. Once the scores have been normalized, for some measures there is little additional information to be gained from adding additional features, although the combination of at least two measures removes the necessity of selecting the \"best\" measure. For reference to the IR metrics of precision and recall, we present the results for a Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 609,
"end": 616,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Comparison of Similarity Measures and Utility of Source-Pair Information",
"sec_num": "4.3.1"
},
{
"text": "We also investigated the use of other methods for combining similarity measures and using source-pair specific information. Table 5 compares the performance of voting, a C4.5 decision tree, and an SVM. Three sets of similarity measures were compared: 1) cosine, normalized cosine, and Hellinger, 2) cosine, normalized cosine, Hellinger, and Tanimoto (the best performing system in Table 3 ) and 3) the full set of similarity measures. All the SVM systems show significant improved performance at p .005 over the baseline normalized cosine model, which had a cost of 0.2732 (Table 3) ; only one of the decision trees was significantly better at p .05. The poorer performance of voting compared to the baseline may be due in part to dependencies among the different measures. None of the decision tree systems were significantly better than voting at p .05; in comparison, the performance of all SVMs were significantly better than the corresponding voting and tree models at p .005. The voting systems did not use any source-pair information. The decision trees used source-pair information categorically, but did not make use of source-pair statistics. The SVMs used the source-pair statistics, plus categorical source-pair information as input features. Thus, the performance of these systems tends to support the hypothesis that source-pair information, and more specifically, source-pair similarity statistics, contains useful information for the link detection task. That is, the statistics not only differentiate the source pairs, but provide additional information to the classifier.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 381,
"end": 388,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 573,
"end": 582,
"text": "(Table 3)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Comparison of Combination Models",
"sec_num": "4.3.2"
},
{
"text": "We have presented a set of enhancements for improving story link detection over the best baseline systems. The enhancements include the combination of different similarity scores and statistical characterization of sourcepair information using machine learning techniques. We observed that the use of statistical characterization of source-pair information had a larger effect in improving the performance of our system than the specific set of similarity measures used. Comparing different methods for combining similarity scores and source-pair information, we observed that simple voting did not always provide improvement over the best cosine similarity based system, decision trees tended to provide better performance, and SVMs provided the best performance of all combination methods evaluated. Our method can be used as postprocessing to the methods developed by other researchers, such as topic-specific models, to create a system with even better performance. Our investigations have focused on one collection drawn from broadcast news and newswire stories in three languages; experiments on a variety of collections would allow for assessment of our results more generally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "http://www.ldc.upenn.edu/Projects/TDT3/email/email 402. html, accessedMar 11, 2004.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Detections, Bounds, and Timelines: UMass and TDT-3",
"authors": [
{
"first": "James",
"middle": [],
"last": "Allan",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Lavrenko",
"suffix": ""
},
{
"first": "Daniella",
"middle": [],
"last": "Malin",
"suffix": ""
},
{
"first": "Russell",
"middle": [],
"last": "Swan",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of Topic Detection and Tracking Workshop (TDT-3)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Allan, Victor Lavrenko, Daniella Malin, and Rus- sell Swan. 2000. Detections, Bounds, and Timelines: UMass and TDT-3. In Proceedings of Topic Detection and Tracking Workshop (TDT-3), Vienna, Virginia.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "UMass at TDT",
"authors": [
{
"first": "James",
"middle": [],
"last": "Allan",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Lavernko",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapti",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Topic Detection & Tracking Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Allan, Victor Lavernko, and Ramesh Nallapti. 2002. UMass at TDT 2002. Proceedings of the Topic Detection & Tracking Workshop.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Combining the Evidence of Multiple Query Representations for Information Retrieval",
"authors": [
{
"first": "Nicholas",
"middle": [
"J"
],
"last": "Belkin",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"B"
],
"last": "Kantor",
"suffix": ""
},
{
"first": "Edward",
"middle": [
"A"
],
"last": "Fox",
"suffix": ""
},
{
"first": "J",
"middle": [
"A"
],
"last": "Shaw",
"suffix": ""
}
],
"year": 1995,
"venue": "formation Processing and Management",
"volume": "33",
"issue": "",
"pages": "431--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas J. Belkin, Paul B. Kantor, Edward A. Fox, and J.A. Shaw. 1995. Combining the Evidence of Multiple Query Representations for Information Retrieval. In- formation Processing and Management, 33:3, pp. 431- 448.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Topic-Based Document Segmentation with Probabilistic Latent Semantic analysis",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Francine",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Tsochantaridis",
"suffix": ""
}
],
"year": 2002,
"venue": "International conference on Information and Knowledge Management (CIKM)",
"volume": "",
"issue": "",
"pages": "211--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants, Francine Chen, and Ioannis Tsochan- taridis. 2002. Topic-Based Document Segmentation with Probabilistic Latent Semantic analysis. In Interna- tional conference on Information and Knowledge Man- agement (CIKM), McLean, VA, pp. 211-218.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Classification and Regression Trees",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Breiman",
"suffix": ""
},
{
"first": "Jerome",
"middle": [
"H"
],
"last": "Friedman",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"A"
],
"last": "Olshen",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Stone",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leo Breiman, Jerome H. Friedman, Richard A. Olshen, and Charles J. Stone. 1984. Classification and Regres- sion Trees, Wasworth International Group.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Classifier Combination for Improved Lexical Disambiguation",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING/ACL",
"volume": "",
"issue": "",
"pages": "191--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill and Jun Wu. 1998. Classifier Combination for Improved Lexical Disambiguation. In Proceedings of COLING/ACL. pp. 191-195.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "CMU TDT Report. Slides at the TDT-2001 Meeting",
"authors": [
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Chun",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaime Carbonell, Yiming Yang, Ralf Brown, Chun Jin and Jian Zhang. 2001. CMU TDT Report. Slides at the TDT-2001 Meeting. http://www.itl.nist.gov/iaui/894.01/tests/tdt/tdt2001/ paperpres.htm (and select the CMU presentation)",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Story Link Detection and New Even Detection are Asymmetic",
"authors": [
{
"first": "Francine",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ayman",
"middle": [],
"last": "Farahat",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL 2003, Companion Volume",
"volume": "",
"issue": "",
"pages": "13--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francine Chen, Ayman Farahat and Thorsten Brants. 2003. Story Link Detection and New Even Detection are Asymmetic. Proceedings of HLT-NAACL 2003, Companion Volume, pp. 13-15.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The TDT-3 Text and Speech Corpus. Proceedings Topic Detection and Tracking Workshop",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Cieri",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Nii",
"middle": [],
"last": "Martey",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Cieri, David Graff, Nii Martey, and Stephanie Strassel. 2003. The TDT-3 Text and Speech Corpus. Proceedings Topic Detection and Tracking Workshop, 2000.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Support Vector Machines",
"authors": [
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nello Cristianini and John Shawe-Taylor. 2000. Support Vector Machines, Cambridge University Press, Cam- bridge, U.K.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Relevance Feedback and Personalization: A Language Modeling Perspective",
"authors": [
{
"first": "W",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Cronon-Townsend",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Lavrenko",
"suffix": ""
}
],
"year": 2001,
"venue": "DELOS Workshop: Personalization and Recommender Systems in Digital Libraries",
"volume": "",
"issue": "",
"pages": "49--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Bruce Croft, Stephen Cronon-Townsend, and Victor Lavrenko. 2001. Relevance Feedback and Personaliza- tion: A Language Modeling Perspective. In DELOS Workshop: Personalization and Recommender Sys- tems in Digital Libraries, pp. 49-54.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Ensemble Methods in Machine Learning",
"authors": [
{
"first": "G",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dietterich",
"suffix": ""
}
],
"year": 2000,
"venue": "Multiple Classier Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas G. Dietterich. 2000. Ensemble Methods in Ma- chine Learning. In Multiple Classier Systems, Cagliari, Italy.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Pattern Classification and Scene Analysis",
"authors": [
{
"first": "O",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"E"
],
"last": "Duda",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hart",
"suffix": ""
}
],
"year": 1973,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard O. Duda and Peter E. Hart. 1973. Pattern Classi- fication and Scene Analysis, John Wiley & Sons, Inc.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Making large-Scale SVM Learning Practical",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods -Support Vector Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 1999. Making large-Scale SVM Learning Practical. In Advances in Kernel Methods - Support Vector Learning, B. Schlkopf and C. Burges and A. Smola (ed.), MIT-Press.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Text Categorization with Support Vector Machines: Learning with Many Relevant Features",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the European Conference on Machine Learning (ECML)",
"volume": "",
"issue": "",
"pages": "137--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 1998. Text Categorization with Sup- port Vector Machines: Learning with Many Relevant Features. Proceedings of the European Conference on Machine Learning (ECML), Springer, pp. 137-142.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "On Combining Classifiers",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Kittler",
"suffix": ""
},
{
"first": "Mohamad",
"middle": [],
"last": "Hatef",
"suffix": ""
},
{
"first": "P",
"middle": [
"W"
],
"last": "Robert",
"suffix": ""
},
{
"first": "Jiri",
"middle": [],
"last": "Duin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Matas",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "20",
"issue": "3",
"pages": "226--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josef Kittler, Mohamad Hatef, Robert P.W. Duin, and Jiri Matas. 1998. On Combining Classifiers. IEEE Trans- actions on Pattern Analysis and Machine Intelligence, 20(3), pp. 226-239.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Relevance Models for Topic Detection and Tracking",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Lavrenko",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allan",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Deguzman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Laflamme",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Thomas",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of HLT-2002",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Lavrenko, James Allan, E. DeGuzman, D. LaFlamme, V. Pollard, and S. Thomas. 2002. Rele- vance Models for Topic Detection and Tracking. In Proceedings of HLT-2002, San Diego, CA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BoosTexter: A Boosting-based System for Text Categorization. Machine Learning",
"authors": [
{
"first": "Robert",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2000,
"venue": "TDT2002) The 2002 Topic Detection and Tracking Task Definition and Evaluation Plan",
"volume": "39",
"issue": "",
"pages": "135--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert E. Schapire and Yoram Singer. 2000. BoosTexter: A Boosting-based System for Text Categorization. Ma- chine Learning, 39(2/3), pp. 135-168. (TDT2002) The 2002 Topic Detection and Track- ing Task Definition and Evaluation Plan http://www.itl.nist.gov/iaui/894.01/tests/tdt/tdt2002/ evalplan.htm",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations",
"authors": [
{
"first": "H",
"middle": [],
"last": "Ian",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Witten",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian H. Witten and Eibe Frank. 1999. Data Mining: Practi- cal Machine Learning Tools and Techniques with Java Implementations, Morgan Kaufman.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": ""
},
"TABREF1": {
"text": "",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Source Pair Groups</td><td/></tr><tr><td/><td colspan=\"3\">asr:asr asr:text text:text</td></tr><tr><td>English:English</td><td>a</td><td>b</td><td>c</td></tr><tr><td>English:Arabic</td><td>d</td><td>e</td><td>f</td></tr><tr><td>English:Mandarin</td><td>g</td><td>h</td><td>i</td></tr><tr><td>Arabic:Arabic</td><td>j</td><td>k</td><td>l</td></tr><tr><td>Arabic:Mandarin</td><td>m</td><td>n</td><td>o</td></tr><tr><td>Mandarin:Mandarin</td><td>p</td><td>q</td><td>r</td></tr><tr><td colspan=\"4\">confidence, which serves as input to the evaluation pro-</td></tr><tr><td>gram.</td><td/><td/><td/></tr></table>"
},
"TABREF2": {
"text": "Topic-weighted Min Detection Cost for Different Systems",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">system min DET</td></tr><tr><td>A</td><td>0.2368</td></tr><tr><td>B</td><td>0.3439</td></tr><tr><td>C</td><td>0.3175</td></tr><tr><td>D</td><td>0.2606</td></tr><tr><td>E</td><td>0.2342</td></tr></table>"
},
"TABREF3": {
"text": "Topic-weighted Min Detection Cost: Combined Similarity Measures and Source-Pair Specific Information (baseline system performance shown in bold)",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">min DET Cost</td></tr><tr><td>similarity measures used</td><td colspan=\"2\">source-pair info used?</td></tr><tr><td/><td>no</td><td>yes</td></tr><tr><td>cos</td><td>0.2801</td><td>0.2532</td></tr><tr><td>normcos</td><td>0.2732</td><td>0.2533</td></tr><tr><td>Hel</td><td>0.3216</td><td>0.2657</td></tr><tr><td>Tan</td><td>0.3008</td><td>0.2748</td></tr><tr><td>cla</td><td>0.2706</td><td>0.2496</td></tr><tr><td>cos, Hel</td><td>0.2791</td><td>0.2467</td></tr><tr><td>normcos, cla</td><td>0.2631</td><td>0.2462</td></tr><tr><td>cos, normcos, cla</td><td>0.2626</td><td>0.2430</td></tr><tr><td>Hel, normcos, Tan</td><td>0.2714</td><td>0.2429</td></tr><tr><td>cos, normcos, Hel</td><td>0.2725</td><td>0.2421</td></tr><tr><td>cos, Hel, Tan, cla</td><td>0.2615</td><td>0.2452</td></tr><tr><td>cos, normcos, Hel, Tan</td><td>0.2736</td><td>0.2418</td></tr><tr><td>cos, normcos, Hel, cla</td><td>0.2614</td><td>0.2431</td></tr><tr><td>cos, normcos, Tan, cla</td><td>0.2623</td><td>0.2431</td></tr><tr><td>cos, normcos, Hel, Tan, cla</td><td>0.2608</td><td>0.2431</td></tr></table>"
},
"TABREF4": {
"text": "Precision and Recall: Combined Similarity Measures and Source-Pair Specific Information",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>similarity measures used</td><td>source pair info used?</td><td>precision</td><td>recall</td></tr><tr><td>cos</td><td>no</td><td>87.45</td><td>85.33</td></tr><tr><td>cla</td><td>yes</td><td>87.07</td><td>88.06</td></tr><tr><td>cos, Hel</td><td>yes</td><td>88.83</td><td>86.75</td></tr><tr><td>cos, normcos, Hel, Tan</td><td>yes</td><td>88.37</td><td>87.17</td></tr></table>"
},
"TABREF5": {
"text": "Topic-weighted Min Detection Cost: Different Learners for Combining Similarities",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>similarity measures used</td><td>voting</td><td>decision tree</td><td>SVM</td></tr><tr><td>cos, normcos, Hel</td><td>0.2802</td><td>0.2708</td><td>0.2421</td></tr><tr><td>cos, normcos, Hel, Tan</td><td>0.2810</td><td>0.2516</td><td>0.2418</td></tr><tr><td>cos, normcos, Hel, Tan, cla</td><td>0.2632</td><td>0.2574</td><td>0.2431</td></tr></table>"
}
}
}
}