{
"paper_id": "N10-1026",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:50:34.453013Z"
},
"title": "Improved Extraction Assessment through Better Language Models",
"authors": [
{
"first": "Arun",
"middle": [],
"last": "Ahuja",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Northwestern University",
"location": {
"settlement": "Evanston",
"postCode": "60208",
"region": "IL"
}
},
"email": "arun.ahuja@eecs.northwestern.edu"
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Northwestern University",
"location": {
"settlement": "Evanston",
"postCode": "60208",
"region": "IL"
}
},
"email": "ddowney@eecs.northwestern.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A variety of information extraction techniques rely on the fact that instances of the same relation are \"distributionally similar,\" in that they tend to appear in similar textual contexts. We demonstrate that extraction accuracy depends heavily on the accuracy of the language model utilized to estimate distributional similarity. An unsupervised model selection technique based on this observation is shown to reduce extraction and type-checking error by 26% over previous results, in experiments with Hidden Markov Models. The results suggest that optimizing statistical language models over unlabeled data is a promising direction for improving weakly supervised and unsupervised information extraction.",
"pdf_parse": {
"paper_id": "N10-1026",
"_pdf_hash": "",
"abstract": [
{
"text": "A variety of information extraction techniques rely on the fact that instances of the same relation are \"distributionally similar,\" in that they tend to appear in similar textual contexts. We demonstrate that extraction accuracy depends heavily on the accuracy of the language model utilized to estimate distributional similarity. An unsupervised model selection technique based on this observation is shown to reduce extraction and type-checking error by 26% over previous results, in experiments with Hidden Markov Models. The results suggest that optimizing statistical language models over unlabeled data is a promising direction for improving weakly supervised and unsupervised information extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Many weakly supervised and unsupervised information extraction techniques assess the correctness of extractions using the distributional hypothesis: the notion that words with similar meanings tend to occur in similar contexts (Harris, 1985) . A candidate extraction of a relation is deemed more likely to be correct when it appears in contexts similar to those of \"seed\" instances of the relation, where the seeds may be specified by hand (Pa\u015fca et al., 2006) , taken from an existing, incomplete knowledge base (Snow et al., 2006; Pantel et al., 2009) , or obtained in an unsupervised manner using a generic extractor (Banko et al., 2007) . We refer to this technique as Assessment by Distributional Similarity (ADS).",
"cite_spans": [
{
"start": 226,
"end": 240,
"text": "(Harris, 1985)",
"ref_id": "BIBREF4"
},
{
"start": 439,
"end": 459,
"text": "(Pa\u015fca et al., 2006)",
"ref_id": "BIBREF5"
},
{
"start": 512,
"end": 531,
"text": "(Snow et al., 2006;",
"ref_id": "BIBREF8"
},
{
"start": 532,
"end": 552,
"text": "Pantel et al., 2009)",
"ref_id": "BIBREF6"
},
{
"start": 619,
"end": 639,
"text": "(Banko et al., 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Typically, distributional similarity is computed by comparing co-occurrence counts of extractions and seeds with various contexts found in the corpus. Statistical Language Models (SLMs) include methods for more accurately estimating co-occurrence probabilities via back-off, smoothing, and clustering techniques (e.g. (Chen and Goodman, 1996; Rabiner, 1989; Bell et al., 1990) ). Because SLMs can be trained from only unlabeled text, they can be applied for ADS even when the relations of interest are not specified in advance (Downey et al., 2007) . Unlabeled text is abundant in large corpora like the Web, making nearly-ceaseless automated optimization of SLMs possible. But how fruitful is such an effort likely to be? To what extent does optimizing a language model over a fixed corpus lead to improvements in assessment accuracy?",
"cite_spans": [
{
"start": 318,
"end": 342,
"text": "(Chen and Goodman, 1996;",
"ref_id": "BIBREF2"
},
{
"start": 343,
"end": 357,
"text": "Rabiner, 1989;",
"ref_id": "BIBREF7"
},
{
"start": 358,
"end": 376,
"text": "Bell et al., 1990)",
"ref_id": "BIBREF1"
},
{
"start": 527,
"end": 548,
"text": "(Downey et al., 2007)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we show that an ADS technique based on SLMs is improved substantially when the language model it employs becomes more accurate. In a large-scale set of experiments, we quantify how language model perplexity correlates with ADS performance over multiple data sets and SLM techniques. The experiments show that accuracy over unlabeled data can be used for selecting among SLMs-for an ADS approach utilizing Hidden Markov Models, this results in an average error reduction of 26% over previous results in extraction and type-checking tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We begin by formally defining the extraction and typechecking tasks we consider, then discuss statistical language models and their utilization for extraction assessment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction Assessment with Language Models",
"sec_num": "2"
},
{
"text": "The extraction task we consider is formalized as follows: given a corpus, a target relation R, a list of seed instances S R , and a list of candidate extractions U R , the task is to order elements of U R such that correct instances for R are ranked above extraction errors. Let U Ri denote the set of the ith arguments of the extractions in U R , and let S Ri be defined similarly for the seed set S R . For relations of arity greater than one, we consider the typechecking task, an important sub-task of extraction (Downey et al., 2007) . The typechecking task is to rank extractions with arguments that are of the proper type for a relation above type errors. As an example, the extraction Founded(Bill Gates, Oracle) is type correct, but is not correct for the extraction task.",
"cite_spans": [
{
"start": 517,
"end": 538,
"text": "(Downey et al., 2007)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction Assessment with Language Models",
"sec_num": "2"
},
{
"text": "A Statistical Language Model (SLM) is a probability distribution P (w) over word sequences w = (w 1 , ..., w r ). The most common SLM techniques are n-gram models, which are Markov models in which the probability of a given word is dependent on only the previous n\u22121 words. The accuracy of an n-gram model of a corpus depends on two key factors: the choice of n, and the smoothing technique employed to assign probabilities to word sequences seen infrequently in training. We experiment with choices of n from 2 to 4, and two popular smoothing approaches, Modified Kneser-Ney (Chen and Goodman, 1996) and Witten-Bell (Bell et al., 1990) .",
"cite_spans": [
{
"start": 586,
"end": 600,
"text": "Goodman, 1996)",
"ref_id": "BIBREF2"
},
{
"start": 617,
"end": 636,
"text": "(Bell et al., 1990)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Language Models",
"sec_num": "2.1"
},
{
"text": "Unsupervised Hidden Markov Models (HMMs) are an alternative SLM approach previously shown to offer accuracy and scalability advantages over n-gram models in ADS (Downey et al., 2007) . An HMM models a sentence w as a sequence of observations w i each generated by a hidden state variable t i . Here, hidden states take values from {1, . . . , T }, and each hidden state variable is itself generated by some number k of previous hidden states. Formally, the joint distribution of a word sequence w given a corresponding state sequence t is:",
"cite_spans": [
{
"start": 160,
"end": 181,
"text": "(Downey et al., 2007)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Language Models",
"sec_num": "2.1"
},
{
"text": "P (w|t) = \u220f i P (w i |t i )P (t i |t i\u22121 , . . . , t i\u2212k ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Language Models",
"sec_num": "2.1"
},
{
"text": "The distributions on the right side of Equation 1 are learned from the corpus in an unsupervised manner using Expectation-Maximization, such that words distributed similarly in the corpus tend to be generated by similar hidden states (Rabiner, 1989 ).",
"cite_spans": [
{
"start": 234,
"end": 248,
"text": "(Rabiner, 1989",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Language Models",
"sec_num": "2.1"
},
{
"text": "The Assessment by Distributional Similarity (ADS) technique is to rank extractions in U R in decreasing order of distributional similarity to the seeds, as estimated from the corpus. In our experiments, we utilize an ADS approach previously proposed for HMMs (Downey et al., 2007) and adapt it to also apply to n-gram models, as detailed below. Define a context of an extraction argument e i to be a string containing the m words preceding and m words following an occurrence of e i in the corpus. Let C i = {c 1 , c 2 , ..., c |C i | } be the union of all contexts of extraction arguments e i and seed arguments s i for a given relation R. We create a probabilistic context vector for each extraction e i where the j-th dimension of the vector is the probability of the context given the extraction, P (c j |e i ), computed from the language model. 1 We rank the extractions in U R according to how similar their arguments' contextual distributions, P (c|e i ), are to those of the seed arguments. Specifically, extractions are ranked according to:",
"cite_spans": [
{
"start": 259,
"end": 280,
"text": "(Downey et al., 2007)",
"ref_id": "BIBREF3"
},
{
"start": 862,
"end": 863,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performing ADS with SLMs",
"sec_num": "2.2"
},
{
"text": "f (e) = \u2211 e i \u2208e KL( ( \u2211 w' \u2208S Ri P (c|w' ) ) / |S Ri | , P (c|e i )) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performing ADS with SLMs",
"sec_num": "2.2"
},
{
"text": "where KL represents KL Divergence, and the outer sum is taken over arguments e i of the extraction e. For HMMs, we alternatively rank extractions using the HMM state distributions P (t|e i ) in place of the probabilistic context vectors P (c|e i ). Our experiments show that state distributions are much more accurate for ADS than are HMM context vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performing ADS with SLMs",
"sec_num": "2.2"
},
{
"text": "In this section, we present experiments showing that SLM accuracy correlates strongly with ADS performance. We also show that SLM performance can be used for model selection, leading to an ADS technique that outperforms previous results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We experiment with a wide range of n-gram and HMM models. The n-gram models are trained using the SRILM toolkit (Stolcke, 2002). Training a variety of HMM configurations over a large corpus requires a scalable training architecture. We constructed a parallel HMM codebase using the Message Passing Interface (MPI), and trained the models on a supercomputing cluster. All language models were trained on a corpus of 2.8M sentences of Web text (about 60 million tokens). SLM performance is measured using the standard perplexity metric, and assessment accuracy is measured using area under the precision-recall curve (AUC), a standard metric for ranked lists of extractions. We evaluated performance on three distinct data sets. The first two data sets evaluate ADS for unsupervised information extraction, and were taken from (Downey et al., 2007) . The first, Unary, was an extraction task for unary relations (Company, Country, Language, Film) and the second, Binary, was a type-checking task for binary relations (Conquered, Founded, Headquartered, Merged). The 10 most frequent extractions served as bootstrapped seeds. The two test sets contained 361 and 265 extractions, respectively. The third data set, Wikipedia, evaluates ADS on weakly-supervised extraction, using seeds and extractions taken from Wikipedia 'List of' pages (Pantel et al., 2009) . Seed sets of various sizes (5, 10, 15 and 20) were randomly selected from each list, and we present results averaged over 10 random samplings. Other members of the seed list were added to a test set as correct extractions, and elements from other lists were added as errors. The data set included 2264 extractions across 36 unary relations, including Composers and US Internet Companies.",
"cite_spans": [
{
"start": 112,
"end": 126,
"text": "(Stolcke, 2002",
"ref_id": "BIBREF9"
},
{
"start": 812,
"end": 833,
"text": "(Downey et al., 2007)",
"ref_id": "BIBREF3"
},
{
"start": 1319,
"end": 1340,
"text": "(Pantel et al., 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Methodology",
"sec_num": "3.1"
},
{
"text": "The first question we investigate is whether optimizing individual language models leads to better performance in ADS. We measured the correlation between SLM perplexity and ADS performance as training proceeds in HMMs, and as n and the smoothing technique vary in the n-gram models. Table 1 shows that as the SLM becomes more accurate (i.e. as perplexity decreases), ADS performance increases. The correlation is strong (averaging -0.742) and is consistent across model configurations and data sets. The low positive correlation for the n-gram models on Wikipedia is likely due to a \"floor effect\"; the models have low performance overall on the difficult Wikipedia data set. The lowest-perplexity n-gram model (Mod Kneser-Ney smoothing with n=3, KN3) does exhibit the best IE performance, at 0.039 (the average performance of the HMM models is more than twice this, at 0.084). Figure 1 shows the relationship between SLM and ADS performance in detail for the best-performing HMM configuration.",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 291,
"text": "Table 1",
"ref_id": null
},
{
"start": 877,
"end": 885,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Optimizing Language Models for IE",
"sec_num": "3.2"
},
{
"text": "Different language models can be configured in different ways: for example, HMMs require choices for the hyperparameters k and T . Here, we show that SLM perplexity can be used to select a high-quality model configuration for ADS using only unlabeled data. We evaluate on the Unary and Binary data sets, since they have been employed in previous work on our corpora. Figure 2 shows that for HMMs, ADS performance increases as perplexity decreases across various model configurations (a similar relationship holds for n-gram models). A model selection technique that picks the HMM model with lowest perplexity (HMM 1-100) results in better ADS performance than previous results. As shown in Table 2, HMM 1-100 reduces error over the HMM-T model in (Downey et al., 2007) by 26%, on average. The experiments also reveal an important difference between the HMM and n-gram approaches. While KN3 is more accurate in SLM than our HMM models, it performs worse in ADS on average. For example, HMM 1-25 underperforms KN3 in perplexity, at 537.2 versus 227.1, but wins in ADS, 0.880 to 0.853. We hypothesize that this is because the latent state distributions in the HMMs provide a more informative distributional similarity measure. Indeed, when we compute distributional similarity for HMMs using probabilistic context vectors as opposed to state distributions, ADS performance for HMM 1-25 decreases to 5.8% below that of KN3.",
"cite_spans": [
{
"start": 747,
"end": 768,
"text": "(Downey et al., 2007)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 367,
"end": 375,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model Selection",
"sec_num": "3.3"
},
{
"text": "We presented experiments showing that estimating distributional similarity with more accurate statistical language models results in more accurate extraction assessment. We note that significantly larger, more powerful language models are possible beyond those evaluated here, which (based on the trajectory observed in Figure 2 ) may offer significant improvements in assessment accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 322,
"end": 330,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "For example, for context cj = \"I visited in July\" and extraction ei = \"Boston,\" P (cj|ei) is P(\"I visited Boston in July\") / P(\"Boston\"), where each string probability is computed using the language model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Open information extraction from the Web",
"authors": [
{
"first": "M",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Cafarella",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Broadhead",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2007,
"venue": "Procs. of IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Banko, M. Cafarella, S. Soderland, M. Broadhead, and O. Etzioni. 2007. Open information extraction from the Web. In Procs. of IJCAI.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Text Compression",
"authors": [
{
"first": "T",
"middle": [
"C"
],
"last": "Bell",
"suffix": ""
},
{
"first": "J",
"middle": [
"G"
],
"last": "Cleary",
"suffix": ""
},
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. C. Bell, J. G. Cleary, and I. H. Witten. 1990. Text Compression. Prentice Hall, January.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An empirical study of smoothing techniques for language modeling",
"authors": [
{
"first": "F",
"middle": [],
"last": "Stanley",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanley F. Chen and Joshua Goodman. 1996. An empir- ical study of smoothing techniques for language mod- eling. In Proc. of ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Sparse information extraction: Unsupervised language models to the rescue",
"authors": [
{
"first": "D",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Schoenmackers",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Downey, S. Schoenmackers, and O. Etzioni. 2007. Sparse information extraction: Unsupervised language models to the rescue. In Proc. of ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Distributional structure",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1985,
"venue": "The Philosophy of Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Harris. 1985. Distributional structure. In J. J. Katz, editor, The Philosophy of Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Names and similarities on the web: Fact extraction in the fast lane",
"authors": [
{
"first": "M",
"middle": [],
"last": "Pa\u015fca",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bigham",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lifchits",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Jain",
"suffix": ""
}
],
"year": 2006,
"venue": "Procs. of ACL/COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Pa\u015fca, D. Lin, J. Bigham, A. Lifchits, and A. Jain. 2006. Names and similarities on the web: Fact extrac- tion in the fast lane. In Procs. of ACL/COLING 2006.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Web-scale distributional similarity and entity set expansion",
"authors": [
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Crestan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Borkovsky",
"suffix": ""
},
{
"first": "A",
"middle": [
"M"
],
"last": "Popescu",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Vyas",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Pantel, E. Crestan, A. Borkovsky, A. M. Popescu, and V. Vyas. 2009. Web-scale distributional similarity and entity set expansion. In Proc. of EMNLP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A tutorial on Hidden Markov Models and selected applications in speech recognition",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the IEEE",
"volume": "77",
"issue": "",
"pages": "257--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. R. Rabiner. 1989. A tutorial on Hidden Markov Models and selected applications in speech recogni- tion. Proceedings of the IEEE, 77(2):257-286.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semantic taxonomy induction from heterogenous evidence",
"authors": [
{
"first": "R",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2006,
"venue": "COLING/ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Snow, D. Jurafsky, and A. Y. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In COLING/ACL 2006.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "SRILM -an extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ICSLP",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM -an extensible language modeling toolkit. In Proceedings of ICSLP, volume 2.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "HMM 1-100 Performance. Information Extraction performance (in AUC) increases as SLM accuracy improves (perplexity decreases).",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Model Selection for HMMs. SLM performance is a good predictor of extraction performance across model configurations.",
"uris": null
},
"TABREF2": {
"num": null,
"html": null,
"text": "Extraction Performance Results in AUC for Individual Relations. The lowest-perplexity HMM, 1-100, outperforms the HMM-T model from previous work.",
"type_str": "table",
"content": "<table/>"
}
}
}
}