| { |
| "paper_id": "S10-1018", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:27:54.256354Z" |
| }, |
| "title": "SUCRE: A Modular System for Coreference Resolution", |
| "authors": [ |
| { |
| "first": "Hamidreza", |
| "middle": [], |
| "last": "Kobdani", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Stuttgart", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "kobdani@ims.uni-stuttgart.de" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Stuttgart", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper presents SUCRE, a new software tool for coreference resolution and its feature engineering. It is able to separately do noun, pronoun and full coreference resolution. SUCRE introduces a new approach to the feature engineering of coreference resolution based on a relational database model and a regular feature definition language. SUCRE successfully participated in SemEval-2010 Task 1 on Coreference Resolution in Multiple Languages (Recasens et al., 2010) for gold and regular closed annotation tracks of six languages. It obtained the best results in several categories, including the regular closed annotation tracks of English and German.", |
| "pdf_parse": { |
| "paper_id": "S10-1018", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper presents SUCRE, a new software tool for coreference resolution and its feature engineering. It is able to separately do noun, pronoun and full coreference resolution. SUCRE introduces a new approach to the feature engineering of coreference resolution based on a relational database model and a regular feature definition language. SUCRE successfully participated in SemEval-2010 Task 1 on Coreference Resolution in Multiple Languages (Recasens et al., 2010) for gold and regular closed annotation tracks of six languages. It obtained the best results in several categories, including the regular closed annotation tracks of English and German.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In this paper, we introduce a new software tool for coreference resolution. Coreference resolution is the process of finding discourse entities (markables) referring to the same real-world entity or concept. In other words, this process groups the markables of a document into equivalence classes (coreference entities) so that all markables in an entity are coreferent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
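| { |
| "text": "The grouping into equivalence classes can be illustrated with a short sketch (an editorial illustration, not part of the original paper): given pairwise coreference links, union-find merges markables into entities.\n\n# Python sketch (hypothetical example markables)\nparent = {}\ndef find(x):\n    parent.setdefault(x, x)\n    while parent[x] != x:\n        x = parent[x]\n    return x\ndef union(a, b):\n    parent[find(a)] = find(b)\nfor a, b in [('Mr. Smith', 'he'), ('he', 'the director')]:\n    union(a, b)\n# find() now returns the same root for all three markables,\n# i.e. they form one coreference entity", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |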
| { |
| "text": "There are various publicly available systems that perform coreference resolution, such as BART (Versley et al., 2008) and GUITAR (Steinberger et al., 2007) . A considerable engineering effort is needed for the full coreference resolution task, and a significant part of this effort concerns feature engineering. Thus, a system which is able to extract the features based on a feature definition language can help the researcher reduce the implementation effort needed for feature extraction. Most methods of coreference resolution, if providing a baseline, usually use a feature set similar to (Soon et al., 2001) or (Ng and Cardie, 2002) and do the feature extraction in the preprocessing stage. SUCRE has been developed to provide a more flexible method for feature engineering of coreference resolution. It has a novel approach to model an unstructured text corpus in a structured framework by using a relational database model and a regular feature definition language to define and extract the features. Relational databases are a well-known technology for structured data modeling and are supported by a wide array of software and tools. Converting a text corpus to/from its equivalent relational database model is straightforward in our framework.", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 117, |
| "text": "(Versley et al., 2008)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 129, |
| "end": 155, |
| "text": "(Steinberger et al., 2007)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 594, |
| "end": 613, |
| "text": "(Soon et al., 2001)", |
| "ref_id": null |
| }, |
| { |
| "start": 617, |
| "end": 638, |
| "text": "(Ng and Cardie, 2002)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A regular language for feature definition is a very flexible method to extract different features from text. In addition to features defined directly in SUCRE, it accepts also externally extracted/generated features. Its modular architecture makes it possible to use any externally available classification method too. In addition to link features (features related to a markable pair), it is also possible to define other kinds of features: atomic word and markable features. This approach to feature engineering is suitable not only for knowledge-rich but also for knowledge-poor datasets. It is also language independent. The results of SUCRE in SemEval-2010 Task 1 show the promise of our framework.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The architecture of SUCRE has two main parts: preprocessing and coreference resolution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architecture", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In preprocessing the text corpus is converted to a relational database model. These are the main functionalities in this stage: ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architecture", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The Relational Database model of text thev corpus is an easy to generate format. Three tables are needed to have a minimum running system: Word, Markable and Link. Table 1 presents the database model of the text corpus. In the word table, Word-ID is the index of the word, starting from the beginning of the corpus. It is used as the primary key to uniquely identify each token. Document-ID, Paragraph-ID and Sentence-ID are each counted from the beginning of the corpus, and also act as the foreign keys pointing to the primary keys of the document, paragraph and sentence tables, which are optional (the system can also work without them). It is obvious that the raw text as well as any other format of the corpus can be generated from the word table. Any word features (Word-Feature-#X columns) can be defined and will then be added to the word table in preprocessing. In the markable table, Markable-ID is the primary key. Begin-Word-ID, End-Word-ID and Head-Word-ID refer to the word table. Like the word features, the markable features are not mandatory and in the preprocessing we can decide which features are added to the table. In the link table, Link-ID is the primary key; First-Markable-ID and Second-Markable-ID refer to the markable table.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 164, |
| "end": 171, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Relational Database Model of Text Corpus", |
| "sec_num": "2.1" |
| }, |
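| { |
| "text": "The three tables above can be written out as a schema sketch (an editorial illustration assuming SQLite; column names follow Table 1, and the optional feature columns are abbreviated to a single example):\n\nimport sqlite3\ncon = sqlite3.connect(':memory:')\ncon.executescript('''\nCREATE TABLE word(word_id INTEGER PRIMARY KEY, document_id INT, paragraph_id INT,\n                  sentence_id INT, token TEXT, word_feature_0 TEXT);\nCREATE TABLE markable(markable_id INTEGER PRIMARY KEY,\n                      begin_word_id INT REFERENCES word,\n                      end_word_id INT REFERENCES word,\n                      head_word_id INT REFERENCES word);\nCREATE TABLE link(link_id INTEGER PRIMARY KEY,\n                  first_markable_id INT REFERENCES markable,\n                  second_markable_id INT REFERENCES markable);\n''')", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relational Database Model of Text Corpus", |
| "sec_num": "2.1" |
| }, |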
| { |
| "text": "For training, the system generates a positive training instance for each adjacent coreferent markable pair and negative training instances for a markable m and all markables disreferent with m that occur before m (Soon et al., 2001 ). For decoding it generates all the possible links inside a window of 100 markables.", |
| "cite_spans": [ |
| { |
| "start": 213, |
| "end": 231, |
| "text": "(Soon et al., 2001", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Link Generator", |
| "sec_num": "2.2" |
| }, |
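| { |
| "text": "The training-link generation described above can be sketched as follows (an editorial illustration, not the authors' code; entity_of is an assumed mapping from each markable to its gold entity id):\n\ndef training_links(markables, entity_of):\n    # markables are in document order\n    links = []\n    for j, m in enumerate(markables):\n        for i in range(j - 1, -1, -1):\n            if entity_of[markables[i]] == entity_of[m]:\n                links.append((markables[i], m, 1))  # adjacent coreferent pair: positive\n                break\n            links.append((markables[i], m, 0))  # preceding non-coreferent markable: negative\n    return links", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Link Generator", |
| "sec_num": "2.2" |
| }, |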
| { |
| "text": "There are two main categories of features in SUCRE: Atomic Features and Link Features", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Link Feature Extractor", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "We first explain atomic features in detail and then turn to link features and the extraction method we use.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Link Feature Extractor", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Atomic Features: The current version of SUCRE supports the atomic features of words and markables but in the next versions we are going to extend it to sentences, paragraphs and documents. An atomic feature is an attribute. For example the position of the word in the corpus is an atomic word feature. Atomic word features are stored in the columns of the word table called Word-Feature-X.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Link Feature Extractor", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In addition to word position in the corpus, document number, paragraph number and sentence number, the following are examples of atomic word features which can be extracted in preprocessing: Part of speech tag, Grammatical Gender (male, female or neutral), Natural Gender (male or female), Number (e.g. singular, plural or both), Semantic Class, Type (e.g. pronoun types: personal, reflexive, demonstrative ...), Case (e.g. nominative, accusative, dative or genitive in German) and Pronoun Person (first, second or third). Other possible atomic markable features include: number of words in markable, named entity, alias, syntactic role and semantic class.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Link Feature Extractor", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "For sentences, the following could be extracted: number of words in the sentence and sentence type (e.g. simple, compound or complex). For paragraphs these features are possible: number of words and number of sentences in the paragraph. Finally, examples of document features include document type (e.g. news, article or book), number of words, sentences and paragraphs in the document.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Link Feature Extractor", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Link Features: Link features are defined over a pair of markables. For link feature extraction, the head words of the markables are usually used, but in some cases the head word may not be a suitable choice. For example, consider the two markables the books and a book. In both cases book is the head word, but to distinguish which markable is definite and which indefinite, the article must be taken into account. Now consider the two markables the university student from Germany and the university student from France. In this case, the head words and the first four words of each markable are the same but they can not be coreferent; this can be detected only by looking at the last words. Sometimes we need to consider all words in the two markables, or even define a feature for a markable as a unit. To cover all such cases we need a regular feature definition language with some keywords to select different word combinations of two markables. For this purpose, we define the following variables. m1 is the first markable in the pair. m1b, m1e and m1h are the first, last and head words of the first markable in the pair. m1a refers to all words of the first markable in the pair. m2, m2b, m2e, m2h and m2a have the same definitions as above but for the second markable in the pair.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Link Feature Extractor", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In addition to the above keywords there are some other keywords that this paper does not have enough space to mention (e.g. for accessing the constant values, syntax relations or roles). The currently available functions are: exact-and substring matching (in two forms: case-sensitive and case-insensitive), edit distance, alias, word relation, markable parse tree path, absolute value.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Link Feature Extractor", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Two examples of link features are as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Link Feature Extractor", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\u2022 (seqmatch(m1a, m2a) > 0) && (m1h.f 0 == f 0.N ) && (m2h.f 0 == f 0.N )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Link Feature Extractor", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "means that there is at least one exact match between the words of the markables and that the head words of both are nouns (f0 means Word-Feature-0, which is part of speech in our system).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Link Feature Extractor", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\u2022 (abs(m2b.stcnum \u2212 m1b.stcnum) == 0) && (m2h.f 3 == f 3.ref lexive) means that two markables are in the same sentence and that the type of the second markable head word is reflexive (f3 means Word-Feature-3, which is morphological type in our system).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Link Feature Extractor", |
| "sec_num": "2.3" |
| }, |
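| { |
| "text": "The first example feature above can be paraphrased in executable form (an editorial sketch; the markable representation and attribute names here are assumptions, with head_f0 standing for the head word's part-of-speech feature f0):\n\ndef seqmatch(words1, words2):\n    # number of exact word matches between the two markables\n    return sum(1 for w in words1 if w in words2)\n\ndef example_feature(m1, m2):\n    # m1, m2: dicts holding each markable's words and its head word's f0 value\n    return (seqmatch(m1['words'], m2['words']) > 0\n            and m1['head_f0'] == 'N' and m2['head_f0'] == 'N')", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Link Feature Extractor", |
| "sec_num": "2.3" |
| }, |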
| { |
| "text": "There are four classifiers integrated in SUCRE: Decision-Tree, Naive-Bayes, Support Vector Machine (Joachims, 2002) and Maximum-Entropy (Tsuruoka, 2006) . When we compared these classifiers, the best results, which are reported in Section 3, were achieved with the Decision-Tree.", |
| "cite_spans": [ |
| { |
| "start": 99, |
| "end": 115, |
| "text": "(Joachims, 2002)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 136, |
| "end": 152, |
| "text": "(Tsuruoka, 2006)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "In decoding, the coreference chains are created. SUCRE uses best-first clustering for this purpose. It searches for the best predicted antecedent from right-to-left starting from the end of the document. Table 2 shows the results of SUCRE and the best competitor system on the test portions of the six languages from SemEval-2010 Task 1. Four different evaluation metrics were used to rank the participating systems: MUC (Vilain et al., 1995) , B 3 (Bagga and Baldwin, 1998) , CEAF (Luo, 2005) and BLANC (Recasens and Hovy, in prep) . SUCRE has the best results in regular closed annotation track of English and German (for all metrics). Its results for gold closed annotation track of both English and German are the best in MUC and BLANC scoring metrics (MUC: English +27.1 German +32.5, BLANC: English +9.5 German +9.0) and for CEAF and B 3 (CEAF: English -1.3 German -4.8, B 3 : English -2.1 German -4.8); in comparison to the second ranked system, the performance is clearly better in the first case and slightly better in the second. This result shows that SUCRE has been optimized in a way that achieves good results on the four different scoring metrics. We view this good performance as a demonstration of the strength of SUCRE: our method of feature extraction, definition and tuning is uniform and can be optimized and applied to all languages and tracks.", |
| "cite_spans": [ |
| { |
| "start": 421, |
| "end": 442, |
| "text": "(Vilain et al., 1995)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 449, |
| "end": 474, |
| "text": "(Bagga and Baldwin, 1998)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 482, |
| "end": 493, |
| "text": "(Luo, 2005)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 504, |
| "end": 532, |
| "text": "(Recasens and Hovy, in prep)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 204, |
| "end": 211, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Decoding", |
| "sec_num": "2.5" |
| }, |
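| { |
| "text": "The best-first decoding step can be sketched as follows (an editorial illustration; classify stands in for the trained classifier's confidence that a pair is coreferent, and the threshold value is an assumption):\n\ndef best_first_antecedents(markables, classify, threshold=0.5):\n    antecedent = {}\n    for j in range(len(markables) - 1, 0, -1):  # start from the end of the document\n        score, best = max((classify(markables[i], markables[j]), i) for i in range(j))\n        if score > threshold:\n            antecedent[j] = best  # best predicted antecedent\n    return antecedent  # the coreference chains follow from these links", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding", |
| "sec_num": "2.5" |
| }, |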
| { |
| "text": "Results of SUCRE show a correlation between the MUC and BLANC scores (the best MUC scores of all tracks and the best BLANC scores in 11 tracks of a total 12), in our opinion this correlation is not because of the high similarity between MUC and BLANC, but it is because of the balanced scores. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this paper, we have presented a new modular system for coreference resolution. In comparison with the existing systems the most important advantage of our system is its flexible method of feature engineering based on relational database and a regular feature definition language. There are four classifiers integrated in SUCRE: Decision-Tree, Naive-Bayes, SVM and Maximum-Entropy. The system is able to separately do noun, pronoun and full coreference resolution. The system uses bestfirst clustering. It searches for the best predicted antecedent from right-to-left starting from the end of the document.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "4" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Algorithms for scoring coreference chains", |
| "authors": [ |
| { |
| "first": "Amit", |
| "middle": [], |
| "last": "Bagga", |
| "suffix": "" |
| }, |
| { |
| "first": "Breck", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference", |
| "volume": "", |
| "issue": "", |
| "pages": "563--566", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In In The First Interna- tional Conference on Language Resources and Eval- uation Workshop on Linguistics Coreference, pages 563-566.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Learning to Classify Text Using Support Vector Machines, Methods, Theory, and Algorithms", |
| "authors": [ |
| { |
| "first": "Thorsten", |
| "middle": [], |
| "last": "Joachims", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thorsten Joachims. 2002. Learning to Classify Text Using Support Vector Machines, Methods, Theory, and Algorithms. Kluwer/Springer.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "On coreference resolution performance metrics", |
| "authors": [ |
| { |
| "first": "Xiaoqiang", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "25--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaoqiang Luo. 2005. On coreference resolution per- formance metrics. In HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Process- ing, pages 25-32, Morristown, NJ, USA. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Improving machine learning approaches to coreference resolution", |
| "authors": [ |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "104--111", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vincent Ng and Claire Cardie. 2002. Improving ma- chine learning approaches to coreference resolution. In Proceedings of the ACL, pages 104-111.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "BLANC: Implementing the Rand Index for Coreference Evaluation", |
| "authors": [ |
| { |
| "first": "Marta", |
| "middle": [], |
| "last": "Recasens", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marta Recasens and Eduard Hovy. in prep. BLANC: Implementing the Rand Index for Coreference Eval- uation.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Task 1: Coreference resolution in multiple languages", |
| "authors": [ |
| { |
| "first": "-", |
| "middle": [], |
| "last": "Semeval", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 5th International Workshop on Semantic Evaluations (SemEval-2010)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "SemEval-2010 Task 1: Coreference resolution in multiple languages. In Proceedings of the 5th International Workshop on Semantic Evaluations (SemEval-2010), Uppsala, Sweden.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A machine learning approach to coreference resolution of noun phrases", |
| "authors": [], |
| "year": 2001, |
| "venue": "Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "521--544", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning ap- proach to coreference resolution of noun phrases. In Computational Linguistics, pages 521-544.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Two uses of anaphora resolution in summarization", |
| "authors": [ |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Steinberger", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| }, |
| { |
| "first": "Mijail", |
| "middle": [ |
| "A." |
| ], |
| "last": "Kabadjov", |
| "suffix": "" |
| }, |
| { |
| "first": "Karel", |
| "middle": [], |
| "last": "Jezek", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "In Information Processing and Management, Special issue on Summarization", |
| "volume": "", |
| "issue": "", |
| "pages": "1663--1680", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Josef Steinberger, Massimo Poesio, Mijail A. Kabad- jovb, and Karel Jezek. 2007. Two uses of anaphora resolution in summarization. In Information Pro- cessing and Management, Special issue on Summa- rization, pages 1663-1680.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A simple c++ library for maximum entropy classification. Tsujii laboratory", |
| "authors": [ |
| { |
| "first": "Yoshimasa", |
| "middle": [], |
| "last": "Tsuruoka", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoshimasa Tsuruoka. 2006. A simple c++ library for maximum entropy classification. Tsujii labora- tory, Department of Computer Science, University of Tokyo.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Bart: A modular toolkit for coreference resolution", |
| "authors": [ |
| { |
| "first": "Yannick", |
| "middle": [], |
| "last": "Versley", |
| "suffix": "" |
| }, |
| { |
| "first": "Simone", |
| "middle": [ |
| "Paolo" |
| ], |
| "last": "Ponzetto", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| }, |
| { |
| "first": "Vladimir", |
| "middle": [], |
| "last": "Eidelman", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Jern", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaofeng", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 46nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "9--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yannick Versley, Simone Paolo Ponzetto, Massimo Poesio, Vladimir Eidelman, Alan Jern, Jason Smith, and Xiaofeng Yang. 2008. Bart: A modular toolkit for coreference resolution. In Proceedings of the 46nd Annual Meeting of the Association for Com- putational Linguistics, pages 9-12.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A modeltheoretic coreference scoring scheme", |
| "authors": [ |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Vilain", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Burger", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Aberdeen", |
| "suffix": "" |
| }, |
| { |
| "first": "Dennis", |
| "middle": [], |
| "last": "Connolly", |
| "suffix": "" |
| }, |
| { |
| "first": "Lynette", |
| "middle": [], |
| "last": "Hirschman", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "MUC6 '95: Proceedings of the 6th conference on Message understanding", |
| "volume": "", |
| "issue": "", |
| "pages": "45--52", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In MUC6 '95: Proceedings of the 6th conference on Message understanding, pages 45-52, Morristown, NJ, USA. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "text": "Relational Database Model of Text Corpus4. Extracting atomic markable featuresAfter converting (modeling) the text corpus to the database, coreference resolution can be performed. Its functional components are:", |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>1. Relational Database Model of Text Corpus</td></tr><tr><td>2. Link Generator</td></tr><tr><td>3. Link Feature Extractor</td></tr><tr><td>4. Learning (Applicable on Train Data)</td></tr><tr><td>5. Decoding (Applicable on Test Data)</td></tr></table>", |
| "num": null |
| }, |
| "TABREF3": { |
| "text": "Results of SUCRE and the best competitor system. Bold F1 scores indicate that the result is the best SemEval result. MD: Markable Detection, ca: Catalan, de: German, en:English, es: Spanish, it: Italian, nl: Dutch", |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>", |
| "num": null |
| } |
| } |
| } |
| } |