| { |
| "paper_id": "C12-1028", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:23:53.378953Z" |
| }, |
| "title": "Analysis and Enhancement of Wikification for Microblogs with Context Expansion", |
| "authors": [], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Disambiguation to Wikipedia (D2W) is the task of linking mentions of concepts in text to their corresponding Wikipedia entries. Most previous work has focused on linking terms in formal texts (e.g. newswire) to Wikipedia. Linking terms in short informal texts (e.g. tweets) is difficult for systems and humans alike as they lack a rich disambiguation context. We first evaluate an existing Twitter dataset as well as the D2W task in general. We then test the effects of two tweet context expansion methods, based on tweet authorship and topic-based clustering, on a state-of-the-art D2W system and evaluate the results.", |
| "pdf_parse": { |
| "paper_id": "C12-1028", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Disambiguation to Wikipedia (D2W) is the task of linking mentions of concepts in text to their corresponding Wikipedia entries. Most previous work has focused on linking terms in formal texts (e.g. newswire) to Wikipedia. Linking terms in short informal texts (e.g. tweets) is difficult for systems and humans alike as they lack a rich disambiguation context. We first evaluate an existing Twitter dataset as well as the D2W task in general. We then test the effects of two tweet context expansion methods, based on tweet authorship and topic-based clustering, on a state-of-the-art D2W system and evaluate the results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Determining the correct meaning of each word in a natural language text is a prerequisite for proper understanding. Disambiguation to Wikipedia (D2W) (Mihalcea and Csomai, 2007), the process of linking each concept mention in a text to a concept referent (i.e. a Wikipedia page), is a framework that supports the word sense disambiguation (WSD) task. For example, consider the sentence, \"BP said Halliburton destroyed Gulf Spill evidence\". A D2W system should break the text into concept mentions and return a unique identifier (an article title, in the case of Wikipedia) for each concept. The intended meaning of each concept mention can be inferred in terms of its surface form and its context. (Table 1: Desired D2W output.) D2W may benefit both human end-users and natural language processing (NLP) systems. When a document is Wikified, a reader can more easily grasp its contents, as information about related topics is readily accessible 2 . From a system-to-system perspective, a disambiguated corpus has the meanings of many of its terms grounded in a structurally rich ontology, and indeed there is evidence that D2W output (Ratinov and Roth, 2012; Vitale et al., 2012) can improve NLP systems. Given a concept mention in a source text, and Wikipedia, D2W operates over a representation of the following:", |
| "cite_spans": [ |
| { |
| "start": 150, |
| "end": 177, |
| "text": "(Mihalcea and Csomai, 2007)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 1132, |
| "end": 1156, |
| "text": "(Ratinov and Roth, 2012;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 1157, |
| "end": 1177, |
| "text": "Vitale et al., 2012)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 700, |
| "end": 707, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1. the content of the text, and how its elements are related to the concept mention.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "2. the content of Wikipedia, and how its concepts are related to one another.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "3. how individual elements of the text are related to elements of Wikipedia.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "4. a method for generating candidate concepts for the concept mention.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Each of these items may be represented using the output of Natural Language Processing (NLP) techniques applied to the source text and Wikipedia, and/or an analysis of built-in structure (e.g. TF-IDF, Information Extraction techniques, relationships between documents, structural features of Wikipedia such as links, info boxes, and categories). Most successful D2W applications enumerate potential concept referents for a given concept mention based on the anchor text of already existing links within Wikipedia, as well as information from redirects and disambiguation pages. Context is extracted from throughout the document where a target concept mention occurs, which is then compared against Wikipedia content to narrow the hypothesis space of potential concepts. The task is therefore more challenging when concept mentions occur in short texts containing informal language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Over 300 million Twitter users generate over 400 million tweets (posts) daily 3 4 . The microblogging genre presents unique challenges for NLP tasks. Tweets are limited to 140 characters, and informal language is often used. Contextual evidence is important for accurate D2W, but for tweets it is scattered among various knowledge sources.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work we explore ways in which the disambiguation context of concept mentions in tweets can be enhanced. The novel contributions of the paper are as follows. First, we provide a qualitative analysis of a hand-annotated data set (Meij et al., 2012) and infer some properties of the contextual evidence most likely sought by annotators. Two sources of additional context useful for disambiguation are identified: tweets from the same author, and topically related tweets. Second, we evaluate the contribution of these additional context types to the performance of GLOW, a state-of-the-art D2W system (Ratinov et al., 2011).", |
| "cite_spans": [ |
| { |
| "start": 235, |
| "end": 254, |
| "text": "(Meij et al., 2012)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 611, |
| "end": 633, |
| "text": "(Ratinov et al., 2011)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The task of linking expressions to Wikipedia concepts has received increased attention over the past several years, whether as the linking of all concept mentions in a single text (Mihalcea and Csomai, 2007; Milne and Witten, 2008a,b; Kulkarni et al., 2009; He et al., 2011; Ratinov et al., 2011), the linking of a cluster of co-referent named entity mentions spread throughout different documents (Entity Linking) (McNamee and Dang, 2009; Ji et al., 2010, 2011; Zhang et al., 2011), or the linking of a whole tweet to a single concept (Genc et al., 2011). Most D2W work has been performed on newswire collections, and most work on tweets has been limited to a particular type of concept mention. For example, the Online Reputation Management Task (Amig\u00f3 et al., 2010) focused on filtering tweets containing a company name to extract only those tweets that were actually related to the company.", |
| "cite_spans": [ |
| { |
| "start": 172, |
| "end": 199, |
| "text": "(Mihalcea and Csomai, 2007;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 200, |
| "end": 226, |
| "text": "Milne and Witten, 2008a,b;", |
| "ref_id": null |
| }, |
| { |
| "start": 227, |
| "end": 249, |
| "text": "Kulkarni et al., 2009;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 250, |
| "end": 266, |
| "text": "He et al., 2011;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 267, |
| "end": 288, |
| "text": "Ratinov et al., 2011)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 408, |
| "end": 432, |
| "text": "(McNamee and Dang, 2009;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 433, |
| "end": 448, |
| "text": "Ji et al., 2010", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 449, |
| "end": 466, |
| "text": "Ji et al., , 2011", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 467, |
| "end": 486, |
| "text": "Zhang et al., 2011;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 541, |
| "end": 560, |
| "text": "(Genc et al., 2011)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 754, |
| "end": 774, |
| "text": "(Amig\u00f3 et al., 2010)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For an n-gram deemed a concept mention, most D2W systems define candidate target concepts as a subset of those that were ever linked to, within Wikipedia itself, using the n-gram in question as anchor text (though (Zhou et al., 2010) expanded this set using search engine click results). The relative frequency with which a given n-gram links to each target concept is referred to as its commonness distribution 5 . Disambiguation is then couched as reranking, computed based on similarity between the concept mention, along with its surrounding context, and a candidate concept. The systems of (Ferragina and Scaiella, 2010; Ratinov et al., 2011; Milne and Witten, 2008a; Cucerzan, 2007; Han and Zhao, 2009) take into account the coherence of all concepts linked to in a given document, based on concept similarity. (Meij et al., 2012) created the hand-labeled dataset that we use in our work. Their best performing system, based on random forests, outperforms commonness alone, though it does not ensure any global coherence over the concepts assigned to a given tweet.", |
| "cite_spans": [ |
| { |
| "start": 218, |
| "end": 237, |
| "text": "(Zhou et al., 2010)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 598, |
| "end": 628, |
| "text": "(Ferragina and Scaiella, 2010;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 629, |
| "end": 650, |
| "text": "Ratinov et al., 2011;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 651, |
| "end": 675, |
| "text": "Milne and Witten, 2008a;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 676, |
| "end": 691, |
| "text": "Cucerzan, 2007;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 692, |
| "end": 711, |
| "text": "Han and Zhao, 2009)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 820, |
| "end": 839, |
| "text": "(Meij et al., 2012)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Some TAC-KBP Entity Linking (Ji et al., 2011) systems utilized all entities in the context of a given query, disambiguating all entities simultaneously using a graph-based re-ranking algorithm (Fernandez et al., 2010; Radford et al., 2010; Cucerzan, 2011; Guo et al., 2011) or a collaborative/ensemble ranking algorithm (Pennacchiotti and Pantel, 2009; Chen and Ji, 2011; Kozareva et al., 2011) to ensure global consistency. (McNamee et al., 2011) demonstrated that co-occurring named entities are particularly helpful for Cross-lingual Entity Linking (CLEL). None of the TAC-KBP systems performed full-document D2W so as to include concept mentions of different types, including non-entities.", |
| "cite_spans": [ |
| { |
| "start": 28, |
| "end": 45, |
| "text": "(Ji et al., 2011)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 193, |
| "end": 217, |
| "text": "(Fernandez et al., 2010;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 218, |
| "end": 239, |
| "text": "Radford et al., 2010;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 240, |
| "end": 255, |
| "text": "Cucerzan, 2011;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 256, |
| "end": 273, |
| "text": "Guo et al., 2011;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 320, |
| "end": 352, |
| "text": "(Pennacchiotti and Pantel, 2009;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 353, |
| "end": 371, |
| "text": "Chen and Ji, 2011;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 372, |
| "end": 394, |
| "text": "Kozareva et al., 2011)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 425, |
| "end": 447, |
| "text": "(McNamee et al., 2011)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For a given concept mention, all-concept D2W work we are aware of makes use of context that is part of or derived from its containing document, whereas we explore ways to obtain supporting context in the form of additional (tweet) documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Although there is a consensus that WSD is best suited for evaluation in vivo (i.e. as a component of another system), a reliable gold standard data set for in vitro evaluation is desirable, even if the output is not intended for a human end-user (Navigli, 2009). While annotation reliability depends in part on robust guidelines designed to maximize inter-annotator agreement (IAA), IAA tends to degrade as the sense repository becomes more fine-grained (Navigli, 2009), as is the case in D2W. On one hand, if a D2W task is limited to named entities, and the set of mentions to be linked is given in advance, agreement can be rather high -e.g. 91.53%, 87.5%, and 92.98% were observed for Person, Geo-political, and Organization type entities in the TAC2010 data (Ji et al., 2010) -in spite of a sense repository which is a priori quite vast. In contrast, the task of linking whichever concept mentions appear important in a corpus of very small documents should prove difficult, as it is more demanding yet suffers from a dearth of contextual evidence. A D2W task may be characterized along two dimensions: whether concept mentions to be disambiguated are given in advance, and whether the target domain of concepts consists of all of Wikipedia or of a limited subset (e.g. only named entities). We refer to the task of linking whichever concept mentions appear important to a (largely) unrestricted domain of concepts (i.e. all Wikipedia pages) as open-ended concept linking.", |
| "cite_spans": [ |
| { |
| "start": 246, |
| "end": 261, |
| "text": "(Navigli, 2009)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 455, |
| "end": 470, |
| "text": "(Navigli, 2009)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 763, |
| "end": 780, |
| "text": "(Ji et al., 2010)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of human annotation task", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Annotating every word without regard to its syntactic or semantic category, or its prominence in the discourse, is probably unnecessary for any application (Navigli, 2009). The criteria for determining which concept mentions to annotate must be specified in terms of (1) the properties of the target domain of concepts, (2) whether a concept exists in the target domain, and (3) the extent to which a mention is deemed ambiguous. A concept mention can be said to lack a (Wikipedia) concept referent in two distinct ways: it may be deemed unlinkable because the string in question, in the context in question, does not refer to a valid concept (i.e. one that could, in principle, appear in Wikipedia). On the other hand, the mention may refer to a valid concept, but there is not yet a corresponding Wikipedia page (see (Lin et al., 2012) for further discussion). Similarly, a concept mention can qualify as ambiguous in two ways: it may obviously refer to some valid concept, but even if each candidate has a corresponding Wikipedia page, the intended concept may be impossible to determine; on the other hand, the (Wikipedia-independent) concept being referred to may be clear, but there may be more than one (Wikipedia) concept that constitutes a correct answer in accordance with the annotation guidelines (e.g. concepts for which article mergers have been suggested might be considered equivalent, for annotation purposes; cf. \"Gators\" and \"Pine nut\" in section 3.2 regarding taxonomic granularity). Concept mentions that unambiguously refer to a Wikipedia concept may still present difficulties. Specification of which concepts constitute valid targets must be done in terms of the property space of all concepts, which is arguably quite complex. In the case of D2W a concept's content derives not only from explicit facts (e.g. infobox, category, and link structure) but also from implicit ones (article text), and may be difficult to separate from personal knowledge of and experience with the (Wikipedia-independent) concept in question. Such a separation potentially limits annotation richness but may reduce inconsistency across annotators. Furthermore, determining which mentions to annotate depends not only on the properties of potential target concepts but on the prominence of the mention in question in the context in which it occurs. Perhaps a concept mentioned in passing, which does not pertain to the main point, should not be annotated. Finally, a concept might be relevant to an entire tweet though not denoted by any word or phrase therein. For example, 2011 Tohoku earthquake and tsunami is clearly related to the tweet, \"my thoughts and prayers go out to the Japanese people\". We are aware of no annotation schemes that account for all of these variables, and leave a more precise formulation to future work.", |
| "cite_spans": [ |
| { |
| "start": 156, |
| "end": 171, |
| "text": "(Navigli, 2009)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 820, |
| "end": 838, |
| "text": "(Lin et al., 2012)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of human annotation task", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Annotators use information from different sources when annotating a concept mention. When short and informal texts such as tweets are analyzed in isolation, identifying the context necessary to disambiguate the concept mentions therein is non-trivial. Informative context for a given concept mention might be derived from the mention alone, from within the tweet, or from within the author's other tweets. Information about the author in general, his or her interests, recent events in the author's life, and world knowledge may be informative as well. We inferred that annotators made use of several different sources of information, often simultaneously, and that world knowledge is supplemented by information acquired from Wikipedia during annotation. We aim to determine what sort of additional tweet context might have provided an improved disambiguation context 6 . In what follows we give examples in which annotators either (1) appeared to use, or (2) failed to take advantage of, a given type of contextual support, along with analysis. (Table 2: Context type used by annotators.) Regardless of context, that \"Hawks\" refers to a sports team is implied by \"Slump\" and the pattern \"Go ... !\", but \"Hawks\" may also refer to the teams Fukuoka Softbank Hawks or Chicago Blackhawks, in addition to the correct referent Atlanta Hawks. However, only the Atlanta Hawks have players named Jeff (Teague) and Damien (Wilkins), and knowing this requires either being a member of a subculture that possesses enough knowledge to make this distinction, or having searched for this information, which can be done with a Wikipedia search and very few clicks. That \"Gators\" refers to a sports team is implied by \"Go ... !\". Whether the mention can be reliably linked to Florida Gators men's basketball may depend on mentions in other tweets written by the same author. In the first supporting tweet, \"Sweet 16\" refers to NCAA Men's Division I Basketball Championship as opposed to Sweet Sixteen (birthday), as evidenced by the sports context; the situation is analogous for \"March Madness\" in the second supporting tweet. A candidate target like Sweet Sixteen (KHSAA State Basketball Championship), a less prominent basketball tournament, is ruled out by the presence of \"March Madness\" and \"Gators\" (as both are associated with only the NCAA tournament). In addition, time of publication and author attributes provide ample evidence, independent of these supporting tweets: the tweet date was March 18th, during the NCAA Division I Men's Basketball Tournament, and the author played basketball at the University of Florida. Commonness alone would not suffice, as \"Gators\" links most commonly to Florida Gators, the Wikipedia page about the University of Florida's athletics in general, which is not specific enough 7 . Some additional source of information is required to link to Florida Gators men's basketball. (Table 3: Context type not used by annotators.) \"Detroit Tigers\" is unambiguously associated with Detroit Tigers. The given annotation for \"nuts\" is Nut (fruit), which is reasonable, but Pine nut is more appropriate as it is the nut ingredient used in pesto according to Wikipedia 8 . Ben Rhodes was the deputy National Security Advisor (NSA) to Barack Obama in March of 2011. This is not clear from the tweet text, but supporting tweets each provide evidence in favor of the target Ben Rhodes (speechwriter). The American political context indicates the target concept for \"Clinton\" is either Bill Clinton or Hillary Rodham Clinton. To infer that Hillary Clinton went on such a trip at the time of publication requires either American political knowledge or access to the URL in the tweet.", |
| "cite_spans": [ |
| { |
| "start": 865, |
| "end": 866, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1043, |
| "end": 1050, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 2898, |
| "end": 2905, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Information potentially used by annotators", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We observe that world knowledge, including what can quickly be obtained by looking through Wikipedia, helps annotation. Many such on-the-fly inferences would be difficult to make automatically, thus additional textual context is needed in order to generate a more comprehensive disambiguation context. We consider two methods for providing such content: (1) disambiguating mentions in the context of all tweets in the dataset by the same author, and (2) disambiguating mentions in the context of all tweets in the same cluster (section 4.2.1) 9 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Information potentially used by annotators", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Some D2W systems aim to maximize the global coherence of their output, i.e., of the concepts linked to in a given source document. Essentially, some measure of relatedness among these concepts informs the selection process for a given concept mention. A relatedness metric based on the Wikipedia link structure can leverage the co-occurrence of concept mentions in a document to the extent that the relationships expressed therein are captured in the links between their referent concepts. Concept mentions in microblog messages often lack explicit supporting context; therefore, systems and annotators alike must look elsewhere for disambiguation context. We hypothesize that, given the enriched disambiguation context that the right additional tweets provide, a D2W system that optimizes its output for global coherence should perform better. In our experiments we do this in two ways: to a given tweet, we (1) append additional tweets by the same author, and (2) append tweets based on a clustering algorithm. We constrain the term disambiguation context in what follows to a set of concepts, each deemed a candidate referent of some concept mention in the source document. This definition is analogous to that used in previous sections; world knowledge, including that gained by reading tweets and examining Wikipedia, is represented approximately, via the extension of the disambiguation context that results from augmenting tweets with related tweets to create multi-tweet documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Global coherence", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Enforcing coherence constraints can be harmful. The system of (Milne and Witten, 2008a) performs poorly on the tweet dataset because it relies on unambiguous concept mentions for disambiguation, the guaranteed existence of which is implausible for the microblog genre (Meij et al., 2012). TAGME (Ferragina and Scaiella, 2010) begins with commonness but enforces global coherence through a \"voting\" scheme in which the score associated with an n-gram m and a target concept t is derived from the vote of each other n-gram m\u2032 in the tweet. The vote of m\u2032 is the average of the relatedness scores (Milne and Witten, 2008b) between each of its candidate concepts t\u2032 and t, weighted according to COMMONNESS(m\u2032, t\u2032), and though links may be pruned, this system performs poorly on the tweet dataset as well (Meij et al., 2012). GLOW (Ratinov et al., 2011), on the other hand, optimizes for global coherence using two supervised classifiers, and is conducive to a balanced disambiguation context, neither prohibitively small, nor large and noisy. Its notion of disambiguation context consists of the top candidates returned by a local model (described below) that, for a given concept mention, takes into account surrounding textual context while remaining agnostic to candidate concepts for surrounding mentions. A global model finalizes linking choices so as to optimize global coherence of the output. We chose to use GLOW because of its state-of-the-art performance on benchmark D2W datasets and its focus on a balanced disambiguation context.", |
| "cite_spans": [ |
| { |
| "start": 64, |
| "end": 89, |
| "text": "(Milne and Witten, 2008a)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 270, |
| "end": 289, |
| "text": "(Meij et al., 2012)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 298, |
| "end": 328, |
| "text": "(Ferragina and Scaiella, 2010)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 595, |
| "end": 620, |
| "text": "(Milne and Witten, 2008b)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 805, |
| "end": 824, |
| "text": "(Meij et al., 2012)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 832, |
| "end": 854, |
| "text": "(Ratinov et al., 2011)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Global coherence", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The pipeline consists of three phases: first a tweet document is generated, then the document is fed to the D2W system, and finally results are extracted from the D2W system output.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pipeline", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The first phase consists of grouping individual tweets into documents. We create tweet documents for each experimental case, as described in Table 4.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 141, |
| "end": 148, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tweet document creation", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "By file: Each document consists of a single tweet.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tweet document content By file", |
| "sec_num": null |
| }, |
| { |
| "text": "By author: Each document consists of all tweets by a given author.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tweet document content By file", |
| "sec_num": null |
| }, |
| { |
| "text": "By cluster: Each document consists of all tweets in the same cluster. (Table 4: Description of experimental cases.) All tweets are pre-processed such that URLs are removed, and the @ and # characters are removed from user mentions and hashtags, respectively. Tweets in documents are ordered chronologically by publication date, and those labeled ambiguous or non-referential are omitted.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 57, |
| "end": 64, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tweet document content By file", |
| "sec_num": null |
| }, |
| { |
| "text": "A number of well-known probabilistic topic modeling approaches, such as Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 1999) and Latent Dirichlet Allocation (LDA) (Blei et al., 2003), have been explored to discover topics in a set of documents. However, because tweets are short and lack context, these topic modeling approaches may not work well on them.", |
| "cite_spans": [ |
| { |
| "start": 117, |
| "end": 132, |
| "text": "(Hofmann, 1999)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 171, |
| "end": 190, |
| "text": "(Blei et al., 2003)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tweet document content By file", |
| "sec_num": null |
| }, |
| { |
| "text": "To overcome this difficulty, we explicitly smooth the topic distributions of tweets by building linkages between tweets, weighted by the cosine similarity of their TF-IDF vectors. A random walk-based approach is used to propagate the topic distribution probabilities across the linkages:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tweet document content By file", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\\hat{P}(z_k \\mid x_i) = \\sum_{x_j \\in X} w_{ji}\\, P(z_k \\mid x_j), \\qquad P(z_k \\mid x_i) \\leftarrow \\frac{(1-\\lambda)\\, P(z_k \\mid x_i) + \\lambda\\, \\hat{P}(z_k \\mid x_i)}{\\sum_i \\big[ (1-\\lambda)\\, P(z_k \\mid x_i) + \\lambda\\, \\hat{P}(z_k \\mid x_i) \\big]}", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Tweet document content By file", |
| "sec_num": null |
| }, |
| { |
| "text": "where P(z_k|x_i) is the probability of topic z_k for tweet x_i, w_{ji} is the similarity between x_i and x_j, and \u03bb is a parameter that controls the balance between the previous topic distribution and the propagated topic distribution. We utilize PLSA to initialize the topic distributions. We cluster tweets using this PLSA + Random Walk-based Propagation (PRP) method by assigning each tweet x_i to the topic z_k that maximizes P(z_k|x_i).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tweet document content By file", |
| "sec_num": null |
| }, |
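The smoothing step in equation (1) can be sketched in a few lines of NumPy. This is only an illustration: the toy similarity matrix, the blending/renormalization details, and the iteration count are assumptions, not values taken from the paper.

```python
import numpy as np

def propagate_topics(P, W, lam=0.5, iters=10):
    """Smooth tweet topic distributions by propagating them along
    TF-IDF cosine-similarity links, in the spirit of equation (1).
    lam and iters are illustrative assumptions."""
    for _ in range(iters):
        # P_hat[i] = sum_j w_ji * P[j]: each tweet mixes in its neighbors' topics
        P_hat = W.T @ P
        P_new = (1 - lam) * P + lam * P_hat
        # renormalize each row into a valid topic distribution
        P = P_new / P_new.sum(axis=1, keepdims=True)
    return P

# toy example: 3 tweets, 2 topics, symmetric cosine similarities
P0 = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
W = np.array([[0.0, 0.7, 0.1],
              [0.7, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
P = propagate_topics(P0, W)
clusters = P.argmax(axis=1)  # PRP clustering: assign each tweet to its top topic
```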
| { |
| "text": "In the second phase we use GLOW (Ratinov et al., 2011 ), a D2W system that disambiguates terms by attempting to optimize the global coherence of its output. Given a document d consisting of mentions M = {m 1 , . . . , m N }, the system output consists of an N -tuple of target concepts, \u0393 =< t 1 , . . . , t N >, a subset of all available concepts T = {t 1 , . . . , t |T | }. Formally, one element of T is a null concept t , such that linking m to t is akin to not linking m at all. Local feature functions \u03c6 assign < m, t > pairs a high score to the extent that the context surrounding m is similar to t, and are meant to measure the likelihood that m links to t irrespective of the concepts referred to by m's surrounding mentions. Global feature functions \u03c8 assign a high score to \u0393 to the extent that its contents are coherent. Coherence is calculated on a pairwise basis. Each global feature is either the Pointwise mutual information (PMI) or normalized Google distance (NGD) of a pair of concepts in the set, calculated in terms of the sets of concepts that either (1) link to each concept in the pair, (2) are linked to from each concept in the pair, or (3) are in the intersection of the sets defined in (1) and (2), for each concept in the pair 10 . Thus, GLOW attempts to solve the following optimization problem for a given document d:", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 53, |
| "text": "(Ratinov et al., 2011", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GLOW: a D2W system", |
| "sec_num": "4.2.2" |
| }, |
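As a rough illustration of the relatedness features described above, the standard normalized Google distance over link sets can be computed as follows. The link sets and total page count here are toy values, and GLOW's exact adaptation of NGD (see the cited footnote) may differ from this textbook form.

```python
import math

def ngd(links_a, links_b, n_total):
    """Normalized Google distance between two concepts, computed over
    the sets of pages linking to each (toy inputs; a sketch of one
    pairwise relatedness feature, not GLOW's exact implementation)."""
    a, b = len(links_a), len(links_b)
    ab = len(links_a & links_b)
    if ab == 0:
        return float("inf")  # no shared in-links: maximal distance
    return (math.log(max(a, b)) - math.log(ab)) / \
           (math.log(n_total) - math.log(min(a, b)))

# concepts sharing 2 of 3 in-links in a 100-page wiki are fairly close
d = ngd({1, 2, 3}, {2, 3, 4}, n_total=100)
```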
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u0393 * = arg max \u0393 [ N i=1 \u03c6(m i , t i ) + \u03c8(\u0393)]", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "GLOW: a D2W system", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "Where \u0393 * is the optimal output. This problem is NP hard, so inter-concept relatedness is calculated pairwise to reduce complexity, reformulating the problem as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GLOW: a D2W system", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u0393 * \u2248 arg max \u0393 N i\u22121 [\u03c6(m i , t i )] + t j \u2208\u0393 [\u03c8(t i , t j )]", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "GLOW: a D2W system", |
| "sec_num": "4.2.2" |
| }, |
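For very small inputs, the pairwise objective in equation (3) can be scored exhaustively. This is only a sketch of the objective itself: GLOW approximates the search with the two-stage ranker/linker described next, not by enumeration, and the local/global scoring functions below are toy stand-ins.

```python
from itertools import product

def best_assignment(mentions, candidates, phi, psi):
    """Brute-force the objective of equation (3) over all candidate
    assignments (feasible only for tiny toy inputs)."""
    best, best_score = None, float("-inf")
    for gamma in product(*(candidates[m] for m in mentions)):
        local = sum(phi(m, t) for m, t in zip(mentions, gamma))
        # pairwise coherence over every ordered pair of distinct targets
        coherent = sum(psi(ti, tj) for ti in gamma for tj in gamma if ti != tj)
        if local + coherent > best_score:
            best, best_score = gamma, local + coherent
    return best

phi = lambda m, t: 1.0 if t in ("A1", "B1") else 0.9       # toy local scores
psi = lambda x, y: 0.5 if {x, y} == {"A2", "B2"} else 0.0  # toy coherence
gamma = best_assignment(["m1", "m2"],
                        {"m1": ["A1", "A2"], "m2": ["B1", "B2"]}, phi, psi)
# global coherence between A2 and B2 overrides the slightly better local picks
```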
| { |
| "text": "The optimization is performed in two stages. First, in the ranker stage, \u0393 * is found but without allowing any mention to be linked to t . Next, in the linker stage, whether each mention's top candidate should be replaced by t is determined. In the system output, mentions linked to t have a negative linker score while others have a positive linker score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GLOW: a D2W system", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "For a given case, each tweet document d is fed to the D2W system separately, the output of which consists of mentions that were linked (including those ultimately linked to t and their associated target concepts). Each mention is associated with a linker score -the confidence associated with the choice to link that term -while each of its candidate target concepts is associated with a ranker score -the confidence associated with that particular concept. Thus for each linked mention m di we have its result tuple, R(m d i ) which consists of a linker score and a list of k targets, ordered according to their ranker score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extracting output", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "R(m d i ) =< ls(m d i ), (< t 1 m di , rs(t 1 m di ) >, . . . , < t k m di , rs(t k m di ) >) >", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Extracting output", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "We abbreviate the first and second elements of R(m_{di}) as R(m_{di})_{ls} and R(m_{di})_{rs}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extracting output", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "R(s d ) =< ma x R(m di )\u2208R s d R(m d i ) ls , R(m di )\u2208R s d R(m di ) rs >", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Extracting output", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "In other words for any surface string, we consider all target concepts and associated ranker scores, and associate the string with the highest linker score of any matching mention.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extracting output", |
| "sec_num": "4.2.3" |
| }, |
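The collapse in equation (5) amounts to a max over linker scores plus a pooling of ranker-scored targets. A minimal sketch, assuming each mention result is a `(linker_score, [(target, ranker_score), ...])` tuple (an illustrative representation, not the system's actual API):

```python
def collapse(mention_results):
    """Collapse all result tuples that share a surface string, per
    equation (5): keep the maximum linker score and pool every
    (target, ranker score) pair from the matching mentions."""
    ls = max(r[0] for r in mention_results)                 # best linker score
    rs = [pair for r in mention_results for pair in r[1]]   # pooled targets
    return ls, rs

ls, rs = collapse([(0.3, [("Rocket", 0.9)]),
                   (0.7, [("Houston_Rockets", 0.4)])])
```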
| { |
| "text": "Output aggregation is informed by two parameters: longest-n-gram, a binary parameter indicating whether or not the \"longest n-gram heuristic\" is used (as opposed to \"all terms\"), and a linker score threshold \u03bb. If the longest n-gram heuristic is used, then if both \"Houston Rockets\" and \"Rockets\" are disambiguated, for example, \"Rockets\" will be ignored. Finally, R(s d ) will only be included in the final output if R(s d ) ls > \u03bb.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extracting output", |
| "sec_num": "4.2.3" |
| }, |
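The two aggregation parameters can be sketched as a small filter. The dictionary layout and the substring test for "longest n-gram" are illustrative assumptions, not the system's actual interface:

```python
def aggregate(results, threshold, longest_ngram=True):
    """Filter collapsed D2W output: drop surface strings whose linker
    score fails the threshold, and optionally drop strings subsumed by
    a longer kept string (a simplification of the longest n-gram
    heuristic using plain substring containment)."""
    kept = {s: r for s, r in results.items() if r[0] > threshold}
    if longest_ngram:
        # drop any surface string strictly contained in a longer kept one
        kept = {s: r for s, r in kept.items()
                if not any(s != t and s in t for t in kept)}
    return kept

out = aggregate({"Houston Rockets": (0.8, [("Houston_Rockets", 0.9)]),
                 "Rockets": (0.5, [("Rocket", 0.6)]),
                 "lol": (-0.2, [("LOL", 0.3)])},
                threshold=-0.04)
# "Rockets" is subsumed by "Houston Rockets"; "lol" falls below the threshold
```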
| { |
| "text": "In this section we describe the dataset, provide a critical evaluation, and explain how system output is evaluated.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and scoring metric", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We use the dataset described in (Meij et al., 2012) , which we refer to as gold1. A random sample of verified twitter accounts were selected, and up to their 20 most recent tweets were extracted. The original dataset had 562 tweets, but due to tweets having been deleted, the dataset consists of 502 tweets from 28 authors. Annotators used an interface enabling them to read and annotate tweets, searching Wikipedia as needed, and were instructed to, where possible, indicate which concepts were \"contained in, meant by, or relevant\" to a particular tweet. Alternatively they were permitted to label tweets as ambiguous or as having referents outside of Wikipedia; 127 tweets were labeled as such and discarded 11 . The gold standard consists of the union of annotations from two annotators which amounts to 812 annotations (not including discarded tweets). URLs were removed entirely while mentions and hashtags were edited to remove leading @ and # characters respectively 12 .", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 51, |
| "text": "(Meij et al., 2012)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction, content, and annotation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Some system errors are the result of human annotation omissions (Meij et al., 2012) . There were 229 false positives when applying the GLOW to single tweets, using the longest n-gram heuristic, with the linker score threshold at -0.04. We looked at each one and rated it incorrect (110), partially correct (49), or correct (70). False positives deemed correct (FPDC) were labeled as follows: \"@\" (2), \"#\" (13), \"lol\" (5), \"replace\" (6), \"new\" (35), \"equivalent\" (9).", |
| "cite_spans": [ |
| { |
| "start": 64, |
| "end": 83, |
| "text": "(Meij et al., 2012)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System false positives", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The gold2 dataset is the result of adding all FPDC to gold1. For each FPDC type we provide representative system results followed by analysis. Table 5 gives some examples of each FPDC type, along with from the system output or gold1.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 143, |
| "end": 150, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "System false positives", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "FPDC labeled new consist of a mention that annotators previously did not link and a target concept deemed correct. Table 5 gives three examples; \"support\", in this case, is an example of an analogous annotation in gold1. In the first a song was omitted in one tweet whereas in another a song was linked, and similarly so for the dates in the second example and its counterpart. In the third, a governmental acronym and an associated term are omitted, whereas in its counterpart they are annotated.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 115, |
| "end": 122, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "System false positives", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "FPDC labeled replace consist of mentions that were originally annotated, but we believe the annotation provided by GLOW was significantly better. Table 5 contains three examples; \"support\", in this case, illustrates the change made by the system. In the first example some evidence was available in the tweet itself (though more conclusive evidence is available in the author's other tweets, as alluded to in section 3.2). In the second example note that Grammy Nominees is an album containing Grammy-nominated songs for a given year, but the URL in the tweet links to a page where only the album \"Infinite Arms\" can be purchased, revealing that the original annotation is incorrect (note that annotators did not have access to URLs in tweets). In the third the original annotation is too general. Note that the vast majority of false positives deemed partially correct are of this type.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 146, |
| "end": 153, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "System false positives", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "FPDC labeled eq are instances where GLOW's target was deemed equivalent to the target in the original annotation. Table 5 lists three such examples followed by justification. FPDC labeled @ were user mentions that were not annotated, even though the user is identifiable and is prominent enough to have a Wikipedia page. FPDC labeled # were hash marked mentions that were not annotated. FPDC labeled lol were mentions expressing that the user laughed, e.g. \"lol\", \"ROFL\", \"LMAO\", etc. Annotating such mentions depends on whether we want to annotate actions the user indicates he or she performs in conjunction with the tweet.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 114, |
| "end": 121, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "System false positives", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Note that these omissions and errors drawn from a subset of those mentions whose annotation was corrected by GLOW; however, other errors and omissions exist (e.g. when both humans and GLOW made mistakes). The purpose of this analysis is not to discredit the dataset. Classification of annotations or omissions as erroneous is highly subjective in that it depends on both the user's interpretation of the annotation guidelines, which in this case were rather open-ended, along with their own world knowledge. We believe the formation of guidelines and annotation methods that are more robust to such discrepancies is an important avenue of research. The situation in Libya is of great concern. NATO can act as an enabler and coordinator if and when member states will take action @ RT @user: Tweets to 6.5 million followers in the name of #girlseducation: Thanks @Shakira, @user and @user! URL Obama set to deliver a response on #Libya soon ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System false positives", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In this section we present and discuss experimental results. For each case we generate tweet documents (see section 4.2.1), each of which is fed to the D2W system, and final output is extracted from system output (see section 4.2.3). We calculate precision, recall, and MRR.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Output is evaluated against gold1 and gold2 (see section 5). Final output for a tweet document distinguishes identical mentions allowing each tweet to be associated with a list of targets. n/a n/a 28 28 n/a n/a longest ngram n/a n/a 28 50 n/a n/a ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation metric", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P = N S i |T (x i ) \u2229 G(x i )| N S (6) R = N G i |T (x i ) \u2229 G(x i )| N G (7) F 1 = 2PR P + R", |
| "eq_num": "(8)" |
| } |
| ], |
| "section": "Evaluation metric", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Where N S is the number of \u2329m, t\u232a pairs in the system output, each x i is a tweet, T (x) contains the top target concept from each mention in tweet x, G(x) contains each concept associated with x by an annotator, and N G is the total number of gold standard annotations. Mean Reciprocal Rank (MRR) is calculated over all gold annotation tuples \u2329x, t\u232a \u2208 G as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation metric", |
| "sec_num": "6.1" |
| }, |
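Equations (6)-(8) are micro-averaged over all tweets. A minimal sketch, assuming `system` and `gold` map tweet ids to sets of target concepts (an illustrative representation, not the paper's data format):

```python
def prf(system, gold):
    """Micro-averaged precision, recall, and F1 per equations (6)-(8)."""
    n_s = sum(len(t) for t in system.values())   # system <m, t> pairs
    n_g = sum(len(g) for g in gold.values())     # gold annotations
    # hits: system concepts that match a gold annotation for the same tweet
    hits = sum(len(system.get(x, set()) & g) for x, g in gold.items())
    p, r = hits / n_s, hits / n_g
    return p, r, 2 * p * r / (p + r)

# toy run: one spurious concept ("B") and one missed concept ("D")
p, r, f1 = prf({1: {"A", "B"}, 2: {"C"}}, {1: {"A"}, 2: {"C", "D"}})
```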
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "M RR = 1 |G| |G| i=1 1 r ank i", |
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Evaluation metric", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Where r ank i is r if < t r i , rs(t r i ) > is in R(s d ) rs , where t i is the target of the ith gold annotation < x i , t i >, and d is the document that contains x i . Otherwise, 1/r ank i = 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation metric", |
| "sec_num": "6.1" |
| }, |
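Equation (9) can be sketched directly, assuming the gold standard is a list of (tweet, target) pairs and the system provides a ranked candidate list per tweet (an illustrative layout, not the paper's data format):

```python
def mrr(gold, ranked):
    """Mean reciprocal rank per equation (9): for each gold annotation,
    credit 1/r where r is the target's 1-based rank in the system's
    candidate list for that tweet, and 0 if the target is absent."""
    total = 0.0
    for x, t in gold:
        cands = ranked.get(x, [])
        total += 1.0 / (cands.index(t) + 1) if t in cands else 0.0
    return total / len(gold)

score = mrr([(1, "A"), (2, "B"), (3, "C")],
            {1: ["A", "X"], 2: ["X", "B"], 3: []})
# 1/1 + 1/2 + 0, averaged over 3 annotations = 0.5
```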
| { |
| "text": "In order to investigate the most effective way to extend tweet context to improve D2W, we augmented single tweets using either the by author or by cluster methods (see Table 4) For the case of single tweets, each tweet was input one at a time into GLOW. For cases where tweets were aggregated, a document containing the tweets, delimited by a line break and in chronological order by publication date, was input into GLOW. Table 6 presents the results of applying these different methods to augment tweets. By author outperforms by cluster. Table 7 shows details for the top performing systems of each type. The systems that achieve the top Mean Reciprocal Rank (MRR), as well as MRR for the systems with the top F measure, are shown in Table 8 . The by file system performs the worst in each category. By author improves recall while by cluster improves precision. The Wilcoxon matched pairs signed rank text shows that improvement in f-measure from by file to by author method was significant (p < .01); improvement from by file to by cluster was significant as well (p < .013) 13 . The Adjusted Rand Index (ARI) is a measure of cluster similarity, corrected for chance. The ARI between the top author based and cluster based methods is low (.0128), indicating that there is very little overlap.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 168, |
| "end": 176, |
| "text": "Table 4)", |
| "ref_id": null |
| }, |
| { |
| "start": 423, |
| "end": 430, |
| "text": "Table 6", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 541, |
| "end": 548, |
| "text": "Table 7", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 737, |
| "end": 744, |
| "text": "Table 8", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Detailed results for the highest performing systems are shown in Table 7 . The differences in output moving from by file to by author systems consisted of 23 gains and 12 losses. Gains resulted for the following reasons: (i) because the top candidate was correct in both cases but in the by author case the linker score exceeded 0.0, but in the by file case it did not exceed -0.4; (ii) the top candidate was incorrect in the by file case but correct in the by author case; (iii) a surface-identical mention in another tweet either had a better linker score and/or it was linked to the correct target 14 . Some gains were deemed neutral (4) or bad (1), meaning that we deemed the change made incorrect, contrary to gold1. Examples of changes are illustrated in Table 9 and explained below. Losses were categorized in an analogous way. Table 9 : Gains from by file to by author system 13 We randomly split tweets into 17 groups, yielding 17 lists of annotations. We calculated F-measure for each group using both methods and the resulting F-measure pairs served as input to the test.", |
| "cite_spans": [ |
| { |
| "start": 884, |
| "end": 886, |
| "text": "13", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 65, |
| "end": 72, |
| "text": "Table 7", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 761, |
| "end": 768, |
| "text": "Table 9", |
| "ref_id": null |
| }, |
| { |
| "start": 835, |
| "end": 842, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "14 Gains are determined with respect to the gold standard. The best by file and by author systems had linker score thresholds of -0.4 and 0.0, respectively. The first change is due to additional supporting context in the author's other tweets, which include entities from modern politics (e.g. politician names and organizations). This additional context alleviates the noisy mention \"Allies\" which is strongly associated with World War II and hence Empire of Japan. In the second case the author had later mentioned \"Whistler\", a popular winter sports destination, near mentions of \"slopes\", \"snowboarding\", and \"jet lag\". In the third case, the author frequently mentions \"St. Louis\" in other tweets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "D2W systems that attempt to maximize the global coherence of output have been successful in formal genres, but the required supporting concept mentions are hidden in the Twitter domain. Our approach to this apparent data sparsity is orthogonal to that taken by (Meij et al., 2012) , who designed features in terms of individual n-grams and candidate concepts, rarely dependent on the entire tweet (5 out of 33), never attempting to achieve global coherence. We showed that for a given tweet, adding tweets based on both authorship and topical similarity provided GLOW sufficient information to enhance the disambiguation context for concept mentions therein, yielding statistically significant gains over the by file base.", |
| "cite_spans": [ |
| { |
| "start": 261, |
| "end": 280, |
| "text": "(Meij et al., 2012)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We have provided a qualitative analysis of an existing hand-labeled dataset, which raised questions about both definition and evaluation of the D2W task, elucidating various sources of difficulty. In future work we plan to generate comprehensive annotation and evaluation guidelines for D2W. Second, it is clear that sometimes there is more than one appropriate target concept for a given concept mention. In some cases two concepts are equally plausible targets (Devil vs. Satan for the n-gram \"the devil\"), while in other cases returning a concept slightly higher up in the is-a taxonomic structure would plausibly still be useful for downstream applications (e.g. returning Florida Gators instead of the more accurate Florida Gators men's basketball, given only \"go Gators!!\"). We plan to explore principled criteria for Wikipedia concept equivalence that go beyond the provided redirects, as well as evaluation methods that do not penalize such \"not so bad\" deviation from human annotation. Finally, we plan to evaluate the effects of expanding tweet context based on Twitter-centric features such as the mention/retweet structure and hashtags, as well as websites linked to from within tweets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We use \"concept\" in both the usual sense and to refer to a Wikipedia page about a concept. 2 http://en.wikipedia.org/wiki/Wikipedia:Glossary#Wikify.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://blog.twitter.com/2011/08/your-world-more-connected.html as of August 2011. 4 http://www.mediabistro.com/alltwitter/twitter400milliontweets_b23744 as of August 2012.5 For an n-gram m, conceptt \u2208 T , COM M ON N ESS(m, t) = c(m\u2192t) t \u2208T c(m\u2192t ) , where c(m \u2192 t) denotes the number of times m serves as a hyperlink to the concept t.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that in general by disambiguation context we mean all information that is applicable to the disambiguation task. Later in our description of GLOW 4.1 we take a narrower definition of this term.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We base this judgement on the Gricean maxim of quantity: \"Be as informative as required\" (c.f. http://plato.stanford.edu/entries/implicature/). We leave an analysis in this vane to future work.8 Pesto may be made with other nuts, but according to the article Pesto this does not correspond with the classic recipe. The existence of multiple correct options for candidate targets at varying taxonomic levels makes evaluation more difficult because some arbitrary choices about what constitutes \"close enough\" or \"specific enough\" must be made.9 Other dimensions in terms of which tweets could be clustered to filter out noise include hashtags, timestamps and the mention/retweet structure for the tweet in question. Unfortunately Twitter API restrictions render these extensions slightly less accessible for older tweets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "See(Ratinov et al., 2011) for a detailed explanation including the adaptations of PMI and NGD used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We acknowledge that ignoring non-referential tweets makes the task easier. Work that focuses on a system's ability to ignore irrelevant content is needed. Tweets were deemed ambiguous if annotators identified more than one correct answer, a case our system did not accommodate.12 @ and # characters were visible to human annotators, who were asked to ignore hash tagged terms unless their meaning is obvious; they were stripped during pre-processing. For further details and access to the dataset: http://ilps.science.uva.nl/resources/wsdm2012-adding-semantics-to-microblog-posts/(Meij et al., 2012).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was supported by the U.S. Army Research Laboratory under Cooperative Agreement No. W911NF-09-2-0053, the U.S. NSF Grants IIS-0953149 and IIS-1144111 and the U.S. DARPA BOLT program. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Weps-3 evaluation campaign: Overview of the online reputation management task", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Amig\u00f3", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Artiles", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Gonzalo", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Spina", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Corujo", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "CLEF 2010 (Notebook Papers/LABs/Workshops)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amig\u00f3, E., Artiles, J., Gonzalo, J., Spina, D., Liu, B., and Corujo, A. (2010). Weps-3 evaluation campaign: Overview of the online reputation management task. In CLEF 2010 (Notebook Papers/LABs/Workshops).", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Latent dirichlet allocation", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "M" |
| ], |
| "last": "Blei", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Jordan", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "I" |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "J. Mach. Learn. Res", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Blei, D. M., Ng, A. Y., and Jordan, M. I. (2003). Latent dirichlet allocation. J. Mach. Learn. Res., 3.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Collaborative ranking: A case study on entity linking", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proc. EMNLP2011", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, Z. and Ji, H. (2011). Collaborative ranking: A case study on entity linking. In Proc. EMNLP2011.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Large-scale named entity disambiguation based on wikipedia data", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Cucerzan", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "EMNLP-CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cucerzan, S. (2007). Large-scale named entity disambiguation based on wikipedia data. In EMNLP-CoNLL 2007.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Tac entity linking by performing full-document entity extraction and disambiguation", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Cucerzan", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proc. TAC 2011 Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cucerzan, S. (2011). Tac entity linking by performing full-document entity extraction and disambiguation. In Proc. TAC 2011 Workshop.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Webtlab: A cooccurence-based approach to kbp 2010 entity-linking task", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Fernandez", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "A" |
| ], |
| "last": "Fisteus", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Sanchez", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Martin", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. TAC 2010 Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fernandez, N., Fisteus, J. A., Sanchez, L., and Martin, E. (2010). Webtlab: A cooccurence-based approach to kbp 2010 entity-linking task. In Proc. TAC 2010 Workshop.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Tagme: on-the-fly annotation of short text fragments (by wikipedia entities)", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Ferragina", |
| "suffix": "" |
| }, |
| { |
| "first": "U", |
| "middle": [], |
| "last": "Scaiella", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 19th ACM international conference on Information and knowledge management", |
| "volume": "", |
| "issue": "", |
| "pages": "1625--1628", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ferragina, P. and Scaiella, U. (2010). Tagme: on-the-fly annotation of short text fragments (by wikipedia entities). In Proceedings of the 19th ACM international conference on Information and knowledge management, pages 1625-1628. ACM.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Discovering context: classifying tweets through a semantic transform based on wikipedia", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Genc", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Sakamoto", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "V" |
| ], |
| "last": "Nickerson", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 6th international conference on Foundations of augmented cognition: directing the future of adaptive systems, FAC'11", |
| "volume": "", |
| "issue": "", |
| "pages": "484--492", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Genc, Y., Sakamoto, Y., and Nickerson, J. V. (2011). Discovering context: classifying tweets through a semantic transform based on wikipedia. In Proceedings of the 6th international conference on Foundations of augmented cognition: directing the future of adaptive systems, FAC'11, pages 484-492.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A graph-based method for entity linking", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proc. IJCNLP2011", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guo, Y., Che, W., Liu, T., and Li, S. (2011). A graph-based method for entity linking. In Proc. IJCNLP2011.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A generative entity-mention model for linking entities with knowledge base", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proc. ACL2011", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Han, X. and Sun, L. (2011). A generative entity-mention model for linking entities with knowledge base. In Proc. ACL2011.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Collective entity linking in web text: A graph-based method", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proc. SIGIR2011", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Han, X., Sun, L., and Zhao, J. (2011). Collective entity linking in web text: A graph-based method. In Proc. SIGIR2011.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Named entity disambiguation by leveraging wikipedia semantic knowledge", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 18th ACM conference on Information and knowledge management", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Han, X. and Zhao, J. (2009). Named entity disambiguation by leveraging wikipedia seman- tic knowledge. In Proceedings of the 18th ACM conference on Information and knowledge management, CIKM 2009.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Generating links to background knowledge: A case study using narrative radiology reports", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "De Rijke", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Sevenster", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Van Ommering", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Qian", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 20th ACM international conference on Information and knowledge management", |
| "volume": "", |
| "issue": "", |
| "pages": "1867--1876", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "He, J., de Rijke, M., Sevenster, M., van Ommering, R., and Qian, Y. (2011). Generating links to background knowledge: A case study using narrative radiology reports. In Proceedings of the 20th ACM international conference on Information and knowledge management, pages 1867-1876. ACM.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Probabilistic latent semantic indexing", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Hofmann", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR '99", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hofmann, T. (1999). Probabilistic latent semantic indexing. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR '99.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Overview of the tac 2011 knowledge base population track", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Dang", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Text Analysis Conference (TAC)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ji, H., Grishman, R., and Dang, H. (2011). Overview of the tac 2011 knowledge base population track. In Text Analysis Conference (TAC) 2011.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Overview of the tac 2010 knowledge base population track", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Dang", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Griffitt", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Ellis", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Text Analysis Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ji, H., Grishman, R., Dang, H., Griffitt, K., and Ellis, J. (2010). Overview of the tac 2010 knowledge base population track. In Text Analysis Conference (TAC) 2010.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Class label enhancement via related instances", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Kozareva", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Voevodski", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Teng", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proc. EMNLP2011", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kozareva, Z., Voevodski, K., and Teng, S. (2011). Class label enhancement via related instances. In Proc. EMNLP2011.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Collective annotation of wikipedia entities in web text", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kulkarni", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Ramakrishnan", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Chakrabarti", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "KDD", |
| "volume": "", |
| "issue": "", |
| "pages": "457--466", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kulkarni, S., Singh, A., Ramakrishnan, G., and Chakrabarti, S. (2009). Collective annotation of wikipedia entities in web text. In KDD, pages 457-466.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "No noun phrase left behind: Detecting and typing unlinkable entities", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mausam", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "EMNLP-CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "893--903", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lin, T., Mausam, and Etzioni, O. (2012). No noun phrase left behind: Detecting and typing unlinkable entities. In EMNLP-CoNLL, pages 893-903.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Overview of the tac 2009 knowledge base population track", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "McNamee", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Dang", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Text Analysis Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McNamee, P. and Dang, H. (2009). Overview of the tac 2009 knowledge base population track. In Text Analysis Conference (TAC) 2009.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Cross-language entity linking", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "McNamee", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Mayfield", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Lawrie", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "W" |
| ], |
| "last": "Oard", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Doermann", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proc. IJCNLP2011", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McNamee, P., Mayfield, J., Lawrie, D., Oard, D. W., and Doermann, D. (2011). Cross-language entity linking. In Proc. IJCNLP2011.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Adding semantics to microblog posts", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Meij", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Weerkamp", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "De Rijke", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the fifth ACM international conference on Web search and data mining, WSDM '12", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Meij, E., Weerkamp, W., and de Rijke, M. (2012). Adding semantics to microblog posts. In Proceedings of the fifth ACM international conference on Web search and data mining, WSDM '12, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Wikify!: linking documents to encyclopedic knowledge", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Csomai", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "CIKM", |
| "volume": "7", |
| "issue": "", |
| "pages": "233--242", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mihalcea, R. and Csomai, A. (2007). Wikify!: linking documents to encyclopedic knowledge. In CIKM, volume 7, pages 233-242.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Learning to link with wikipedia", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Milne", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Witten", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceeding of the 17th ACM conference on Information and knowledge management", |
| "volume": "", |
| "issue": "", |
| "pages": "509--518", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Milne, D. and Witten, I. (2008a). Learning to link with wikipedia. In Proceeding of the 17th ACM conference on Information and knowledge management, pages 509-518. ACM.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "An effective, low-cost measure of semantic relatedness obtained from wikipedia links", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Milne", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Witten", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "the Wikipedia and AI Workshop of AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Milne, D. and Witten, I. (2008b). An effective, low-cost measure of semantic relatedness obtained from wikipedia links. In the Wikipedia and AI Workshop of AAAI.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Word Sense Disambiguation: a survey", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "ACM Computing Surveys", |
| "volume": "41", |
| "issue": "2", |
| "pages": "1--69", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Navigli, R. (2009). Word Sense Disambiguation: a survey. ACM Computing Surveys, 41(2):1- 69.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Entity extraction via ensemble semantics", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pennacchiotti", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proc. EMNLP2009", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pennacchiotti, M. and Pantel, P. (2009). Entity extraction via ensemble semantics. In Proc. EMNLP2009.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Cmcrc at tac10: Document-level entity linking with graph-based re-ranking", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Radford", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Hachey", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Nothman", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Honnibal", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Curran", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. TAC 2010 Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Radford, W., Hachey, B., Nothman, J., Honnibal, M., and Curran, J. R. (2010). Cmcrc at tac10: Document-level entity linking with graph-based re-ranking. In Proc. TAC 2010 Workshop.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Learning-based multi-sieve co-reference resolution with knowledge", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Ratinov", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proc. EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ratinov, L. and Roth, D. (2012). Learning-based multi-sieve co-reference resolution with knowledge. In Proc. EMNLP.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Local and global algorithms for disambiguation to wikipedia", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Ratinov", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Downey", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Anderson", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proc. of the Annual Meeting of the Association of Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ratinov, L., Roth, D., Downey, D., and Anderson, M. (2011). Local and global algorithms for disambiguation to wikipedia. In Proc. of the Annual Meeting of the Association of Computational Linguistics (ACL).", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Classification of short texts by deploying topical annotations", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Vitale", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Ferragina", |
| "suffix": "" |
| }, |
| { |
| "first": "U", |
| "middle": [], |
| "last": "Scaiella", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "ECIR", |
| "volume": "", |
| "issue": "", |
| "pages": "376--387", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vitale, D., Ferragina, P., and Scaiella, U. (2012). Classification of short texts by deploying topical annotations. In ECIR, pages 376-387.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "A wikipedia-lda model for entity linking with batch size changing", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "L" |
| ], |
| "last": "Tan", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proc. IJCNLP2011", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, W., Su, J., and Tan, C. L. (2011). A wikipedia-lda model for entity linking with batch size changing. In Proc. IJCNLP2011.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Resolving surface forms to wikipedia topics", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Nie", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Rouhani-Kalleh", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Vasile", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Gaffney", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10", |
| "volume": "", |
| "issue": "", |
| "pages": "1335--1343", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhou, Y., Nie, L., Rouhani-Kalleh, O., Vasile, F., and Gaffney, S. (2010). Resolving surface forms to wikipedia topics. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10, pages 1335-1343.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "type_str": "table", |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td>Type</td><td>Tweet text</td><td>Mention</td></tr><tr><td>Mention Alone</td><td>Are you a college kid who likes drinking, dressing up, and making irish immigrants roll in their graves? Then St. Patrick's Day is for you!</td><td>St. Patrick's Day</td></tr><tr><td>Within Tweet</td><td>Slump is over! Way to ball out Jeff and Damian. Much needed win. Go Hawks!!</td><td>Hawks</td></tr><tr><td>Within Author's Tweets</td><td>Go Gators!!! A1: Sweet 16! What a good feeling. Keep it going... Go Gators!!! A2: What's good everyone, catching up on these Tourney games and already see some upsets... March Madness! Go Gators!</td><td>Gators</td></tr></table>", |
| "num": null |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "html": null, |
| "text": "illustrates annotation errors; presumably, annotators did not take advantage of the type of context in question.", |
| "content": "<table><tr><td>Type</td><td>Tweet text</td><td>Mention</td></tr><tr><td>Mention Alone</td><td>So excited to announce I'll be singing \"God Bless America\" during the 7th Inning Stretch at the Detroit Tigers..</td><td>Detroit Tigers</td></tr><tr><td>Within Tweet</td><td>Making pesto! I had to soak my nuts for 3 hours</td><td>nuts</td></tr><tr><td>Within Author's Tweets</td><td>It was a pool report typo. Here is exact Rhodes quote: \"this is not gonna be a couple of weeks. It will be a period of days.\" A1: At a WH briefing here in Santiago, NSA spox Rhodes came with a litany of pushback on idea WH didn't consult with Congress. A2: Rhodes singled out a Senate resolution that passed on March 1st which denounced Khaddafy's atrocities. WH says UN rez incorporates it</td><td>Rhodes</td></tr><tr><td>URL Content</td><td>Awesome post from wolfblitzercnn: Behind the scenes on Clinton's Mideast trip -URL -#cnn</td><td>Clinton</td></tr></table>", |
| "num": null |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "html": null, |
| "text": "The output for each set of surface-identical mentions in d is then aggregated into one result tuple as follows. For a surface string s d associated with one or more mentions in d, the set of associated result tuples is denoted R s d . Then R(s d ), the result tuple for s d , is defined by:", |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF5": { |
| "type_str": "table", |
| "html": null, |
| "text": "A mention is underlined to indicate it was annotated.", |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF7": { |
| "type_str": "table", |
| "html": null, |
| "text": "Overview of different methods. Precision (P), recall (R), and F-measure (F1) are calculated on a by-tweet basis.", |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF8": { |
| "type_str": "table", |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td>System</td><td>Correct</td><td>Missed</td><td>Positives</td><td>Total Output</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>by file</td><td>307</td><td>505</td><td>228</td><td>535</td><td>0.5738</td><td>0.3781</td><td>0.4558</td></tr><tr><td>by author</td><td>318</td><td>494</td><td>193</td><td>511</td><td>0.6223</td><td>0.3916</td><td>0.4807</td></tr><tr><td>by cluster</td><td>309</td><td>503</td><td>180</td><td>489</td><td>0.6319</td><td>0.3805</td><td>0.4750</td></tr></table>", |
| "num": null |
| }, |
| "TABREF9": { |
| "type_str": "table", |
| "html": null, |
| "text": "Detailed results by system type using the optimal parameters for each", |
| "content": "<table><tr><td></td><td></td><td colspan=\"2\">MRR1</td><td colspan=\"2\">MRR2</td></tr><tr><td></td><td></td><td>Best Params</td><td>Best F Params</td><td>Best Params</td><td>Best F Params</td></tr><tr><td rowspan=\"2\">by File</td><td>All terms</td><td>44.20%</td><td>41.62%</td><td>43.77%</td><td>41.29%</td></tr><tr><td>Longest ngram</td><td>40.75%</td><td>39.70%</td><td>40.50%</td><td>39.53%</td></tr><tr><td rowspan=\"2\">by Author</td><td>All terms</td><td>45.82%</td><td>42.27%</td><td>45.44%</td><td>42.03%</td></tr><tr><td>Longest ngram</td><td>42.23%</td><td>40.21%</td><td>42.06%</td><td>40.05%</td></tr><tr><td rowspan=\"2\">by Cluster</td><td>All terms</td><td>44.89%</td><td>41.86%</td><td>44.42%</td><td>41.56%</td></tr><tr><td>Longest ngram</td><td>41.52%</td><td>39.35%</td><td>41.32%</td><td>39.25%</td></tr></table>", |
| "num": null |
| }, |
| "TABREF10": { |
| "type_str": "table", |
| "html": null, |
| "text": "Best MRR & MRR for parameters yielding best F1", |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF11": { |
| "type_str": "table", |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td>Tweet</td><td>By file</td><td>By author</td><td>Type</td></tr><tr><td>Japan is one of NATOs global partners. On behalf of our Allies I want to extend our heartfelt condolences to those who have lost loved ones</td><td>Empire of Japan</td><td>Japan</td><td>Good change</td></tr><tr><td>Ejoying myself in Whistler!</td><td>Whistler, British Columbia</td><td>Whistler, British Columbia</td><td>Greater LS for identical mention</td></tr><tr><td>RT @kmoxnews: Section of I-55 Closed Until Monday: I-55 will be closed in both directions between Carondelet and the 4500 block of...</td><td>Carondelet, St. Louis</td><td>Carondelet, St. Louis</td><td>Context</td></tr><tr><td>Obama says he doesn't expect harmful levels of radiation to hit the U.S. ... public health experts say no precautionary measures needed</td><td>Ionizing radiation</td><td>Radiation</td><td>Neutral change</td></tr><tr><td>Making pesto! I had to soak my nuts for 3 hours!</td><td>Pine nut</td><td>Nut (fruit)</td><td>Bad change</td></tr></table>", |
| "num": null |
| } |
| } |
| } |
| } |