| { |
| "paper_id": "Q19-1035", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:09:15.232456Z" |
| }, |
| "title": "Models of Generic, Habitual, and Episodic Statements", |
| "authors": [ |
| { |
| "first": "Venkata", |
| "middle": [], |
| "last": "Govindarajan", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Rochester", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Johns Hopkins University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [ |
| "Steven" |
| ], |
| "last": "White", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Rochester", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present a novel semantic framework for modeling linguistic expressions of generalizationgeneric, habitual, and episodic statements-as combinations of simple, real-valued referential properties of predicates and their arguments. We use this framework to construct a dataset covering the entirety of the Universal Dependencies English Web Treebank. We use this dataset to probe the efficacy of type-level and token-level information-including handengineered features and static (GloVe) and contextual (ELMo) word embeddings-for predicting expressions of generalization.", |
| "pdf_parse": { |
| "paper_id": "Q19-1035", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present a novel semantic framework for modeling linguistic expressions of generalizationgeneric, habitual, and episodic statements-as combinations of simple, real-valued referential properties of predicates and their arguments. We use this framework to construct a dataset covering the entirety of the Universal Dependencies English Web Treebank. We use this dataset to probe the efficacy of type-level and token-level information-including handengineered features and static (GloVe) and contextual (ELMo) word embeddings-for predicting expressions of generalization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Natural language allows us to convey not only information about particular individuals and events, as in Example (1), but also generalizations about those individuals and events, as in (2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(1) a. Mary ate oatmeal for breakfast today. b. The students completed their assignments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(2) a. Mary eats oatmeal for breakfast. b. The students always complete their assignments on time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This capacity for expressing generalization is extremely flexible-allowing for generalizations about the kinds of events that particular individuals are habitually involved in, as in (2), as well as characterizations about kinds of things, as in (3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(3) a. Bishops move diagonally. b. Soap is used to remove dirt.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Such distinctions between episodic statements (1), on the one hand, and habitual (2) and generic (or characterizing) statements (3), on the other, have a long history in both the linguistics and artificial intelligence literatures (see Carlson, 2011; Maienborn et al., 2011; Leslie and Lerner, 2016) . Nevertheless, few modern semantic parsers make a systematic distinction (cf. Abzianidze and Bos, 2017) . This is problematic, because the ability to accurately capture different modes of generalization is likely key to building systems with robust common sense reasoning (Zhang et al., 2017a; Bauer et al., 2018) : Such systems need some source for general knowledge about the world (McCarthy, 1960 (McCarthy, , 1980 (McCarthy, , 1986 Minsky, 1974; Schank and Abelson, 1975; Hobbs et al., 1987; Reiter, 1987) and natural language text seems like a prime candidate. It is also surprising, because there is no dearth of data relevant to linguistic expressions of generalization (Doddington et al., 2004; Cybulska and Vossen, 2014b; Friedrich et al., 2015) .", |
| "cite_spans": [ |
| { |
| "start": 236, |
| "end": 250, |
| "text": "Carlson, 2011;", |
| "ref_id": null |
| }, |
| { |
| "start": 251, |
| "end": 274, |
| "text": "Maienborn et al., 2011;", |
| "ref_id": null |
| }, |
| { |
| "start": 275, |
| "end": 299, |
| "text": "Leslie and Lerner, 2016)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 379, |
| "end": 404, |
| "text": "Abzianidze and Bos, 2017)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 573, |
| "end": 594, |
| "text": "(Zhang et al., 2017a;", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 595, |
| "end": 614, |
| "text": "Bauer et al., 2018)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 685, |
| "end": 700, |
| "text": "(McCarthy, 1960", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 701, |
| "end": 718, |
| "text": "(McCarthy, , 1980", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 719, |
| "end": 736, |
| "text": "(McCarthy, , 1986", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 737, |
| "end": 750, |
| "text": "Minsky, 1974;", |
| "ref_id": null |
| }, |
| { |
| "start": 751, |
| "end": 776, |
| "text": "Schank and Abelson, 1975;", |
| "ref_id": "BIBREF52" |
| }, |
| { |
| "start": 777, |
| "end": 796, |
| "text": "Hobbs et al., 1987;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 797, |
| "end": 810, |
| "text": "Reiter, 1987)", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 978, |
| "end": 1003, |
| "text": "(Doddington et al., 2004;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1004, |
| "end": 1031, |
| "text": "Cybulska and Vossen, 2014b;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1032, |
| "end": 1055, |
| "text": "Friedrich et al., 2015)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "One obstacle to further progress on generalization is that current frameworks tend to take standard descriptive categories as sharp classesfor example, EPISODIC, GENERIC, HABITUAL for statements and KIND, INDIVIDUAL for noun phrases. This may seem reasonable for sentences like (1a), where Mary clearly refers to a particular individual, or (3a), where Bishops clearly refers to a kind; but natural text is less forgiving (Grimm, 2014 (Grimm, , 2016 (Grimm, , 2018 . Consider the underlined arguments in (4): Do they refer to kinds or individuals?", |
| "cite_spans": [ |
| { |
| "start": 199, |
| "end": 215, |
| "text": "KIND, INDIVIDUAL", |
| "ref_id": null |
| }, |
| { |
| "start": 422, |
| "end": 434, |
| "text": "(Grimm, 2014", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 435, |
| "end": 449, |
| "text": "(Grimm, , 2016", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 450, |
| "end": 464, |
| "text": "(Grimm, , 2018", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(4) a. I will manage client expectations. b. The atmosphere may not be for everyone.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To remedy this, we propose a novel framework for capturing linguistic expressions of generalization. Taking inspiration from decompositional semantics (Reisinger et al., 2015; White et al., 2016) , we suggest that linguistic expressions of generalization should be captured in a continuous multi-label system, rather than a multi-class system. We do this by decomposing categories such as EPISODIC, HABITUAL, and GENERIC into simple referential properties of predicates and their arguments. Using this framework ( \u00a73), we develop an annotation protocol, which we validate ( \u00a74) and compare against previous frameworks ( \u00a75). We then deploy this framework ( \u00a76) to construct a new large-scale dataset of annotations covering the entire Universal Dependencies Nivre et al., 2015) English Web Treebank (UD-EWT; Bies et al., 2012; -yielding the Universal Decompositional Semantics-Genericity (UDS-G) dataset. 1 Through exploratory analysis of this dataset, we demonstrate that this multi-label framework is well-motivated ( \u00a77). We then present models for predicting expressions of linguistic generalization that combine hand-engineered type and token-level features with static and contextual learned representations ( \u00a78). We find that (i) referential properties of arguments are easier to predict than those of predicates; and that (ii) contextual learned representations contain most of the relevant information for both arguments and predicates ( \u00a79).", |
| "cite_spans": [ |
| { |
| "start": 151, |
| "end": 175, |
| "text": "(Reisinger et al., 2015;", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 176, |
| "end": 195, |
| "text": "White et al., 2016)", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 758, |
| "end": 777, |
| "text": "Nivre et al., 2015)", |
| "ref_id": null |
| }, |
| { |
| "start": 808, |
| "end": 826, |
| "text": "Bies et al., 2012;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 905, |
| "end": 906, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "c. Thanks again for great customer service!", |
| "sec_num": null |
| }, |
| { |
| "text": "Most existing annotation frameworks aim to capture expressions of linguistic generalization using multi-class annotation schemes. We argue that this reliance on multi-class annotation schemes is problematic on the basis of descriptive and theoretical work in the linguistics literature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "One of the earliest frameworks explicitly aimed at capturing expressions of linguistic generalization was developed under the ACE-2 program (Mitchell et al., 2003; Doddington et al., 2004 , and see Reiter and Frank, 2010) . This framework associates entity mentions with discrete labels for whether they refer to a specific member of the set in question (SPECIFIC) or any member of the set in question (GENERIC). The ACE-2005 Multilingual Training Corpus (Walker et al., 2006 extends these annotation guidelines, providing two additional classes: (i) negatively quantified entries (NEG) for referring to empty sets and (ii) underspecified entries (USP), where the referent is ambiguous between GENERIC and SPECIFIC.", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 163, |
| "text": "(Mitchell et al., 2003;", |
| "ref_id": null |
| }, |
| { |
| "start": 164, |
| "end": 187, |
| "text": "Doddington et al., 2004", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 198, |
| "end": 221, |
| "text": "Reiter and Frank, 2010)", |
| "ref_id": "BIBREF49" |
| }, |
| { |
| "start": 417, |
| "end": 425, |
| "text": "ACE-2005", |
| "ref_id": null |
| }, |
| { |
| "start": 426, |
| "end": 475, |
| "text": "Multilingual Training Corpus (Walker et al., 2006", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The existence of the USP label already portends an issue with multi-class annotation schemes, which have no way of capturing the well-known phenomena of taxonomic reference (see Carlson and Pelletier, 1995 , and references therein), abstract/event reference (Grimm, 2014 (Grimm, , 2016 (Grimm, , 2018 , and weak definites (Carlson et al., 2006) . For example, wines in (5) refers to particular kinds of wine; service in (6) refers to an abstract entity/event that could be construed as both particular-referring, in that it is the service at a specific restaurant, and kind-referring, in that it encompasses all service events at that restaurant; and bus in (7) refers to potentially multiple distinct buses that are grouped into a kind by the fact that they drive a particular line.", |
| "cite_spans": [ |
| { |
| "start": 178, |
| "end": 205, |
| "text": "Carlson and Pelletier, 1995", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 258, |
| "end": 270, |
| "text": "(Grimm, 2014", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 271, |
| "end": 285, |
| "text": "(Grimm, , 2016", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 286, |
| "end": 300, |
| "text": "(Grimm, , 2018", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 322, |
| "end": 344, |
| "text": "(Carlson et al., 2006)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(5) That vintner makes three different wines.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(6) The service at that restaurant is excellent. 7That bureaucrat takes the 90 bus to work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "This deficit is remedied to some extent in the ARRAU (Poesio et al., 2018 , and see Mathew, 2009; Louis and Nenkova, 2011) and ECB+ (Cybulska and Vossen, 2014a,b) corpora. The ARRAU corpus is mainly intended to capture anaphora resolution, but following the GNOME guidelines (Poesio, 2004) , it also annotates entity mentions for a GENERIC attribute, sensitive to whether the mention is in the scope of a relevant semantic operator (e.g., a conditional or quantifier) and whether the nominal refers to a type of object whose genericity is left underspecified, such as a substance. The ECB+ corpus is an extension of the EventCorefBank (ECB; Bejan and Harabagiu, 2010; Lee et al., 2012) , which annotates Google News texts for event coreference in accordance with the TimeML specification (Pustejovsky et al., 2003) , and is an improvement in the sense that, in addition to entity mentions, event mentions may be labeled with a GENERIC class.", |
| "cite_spans": [ |
| { |
| "start": 53, |
| "end": 73, |
| "text": "(Poesio et al., 2018", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 84, |
| "end": 97, |
| "text": "Mathew, 2009;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 98, |
| "end": 122, |
| "text": "Louis and Nenkova, 2011)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 132, |
| "end": 162, |
| "text": "(Cybulska and Vossen, 2014a,b)", |
| "ref_id": null |
| }, |
| { |
| "start": 275, |
| "end": 289, |
| "text": "(Poesio, 2004)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 668, |
| "end": 685, |
| "text": "Lee et al., 2012)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 788, |
| "end": 814, |
| "text": "(Pustejovsky et al., 2003)", |
| "ref_id": "BIBREF47" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The ECB+ approach is useful, since episodic, habitual, and generic statements can straightforwardly be described using combinations of event and entity mention labels. For example, in ECB+, episodic statements involve only non-generic entity and event mentions; habitual statements involve a generic event mention and at least one non-generic entity mention; and generic statements involve generic event mentions and at least one generic entity mention. This demonstrates the strength of decomposing statements into properties of the events and entities they describe; but there remain difficult issues arising from the fact that the decomposition does not go far enough. One is that, like ACE-2/2005 and ARRAU, ECB+ does not make it possible to capture taxonomic and abstract reference or weak definites; another is that, because ECB+ treats generics as mutually exclusive from other event classes, it is not possible to capture that events and states in those classes can themselves be particular or generic. This is well known for different classes of events, such as those determined by a predicate's lexical aspect (Vendler, 1957) ; but it is likely also important for distinguishing more particular stagelevel properties (e.g., availability (8)) from more generic individual-level properties (e.g., strength (9)) (Carlson, 1977) . This situation is improved upon in the Richer Event Descriptions (RED; O'Gorman et al., 2016) and Situation Entities (SitEnt; Friedrich and Palmer, 2014a,b; Friedrich et al., 2015; Friedrich and Pinkal, 2015b,a; Friedrich et al., 2016) frameworks, which annotate both NPs and entire clauses for genericity. In particular, SitEnt, which is used to annotate MASC (Ide et al., 2010) and Wikipedia, has the nice property that it recognizes the existence of abstract entities and lexical aspectual class of clauses' main verbs, along with habituality and genericity. 
This is useful because, in addition to decomposing statements using the genericity of the main referent and event, this framework recognizes that lexical aspect is an independent phenomenon. In practice, however, the annotations produced by this framework are mapped into a multi-class scheme containing only the high-level GENERIC-HABITUAL-EPISODIC distinction-alongside a conceptually independent distinction among illocutionary acts.", |
| "cite_spans": [ |
| { |
| "start": 1120, |
| "end": 1135, |
| "text": "(Vendler, 1957)", |
| "ref_id": "BIBREF57" |
| }, |
| { |
| "start": 1319, |
| "end": 1334, |
| "text": "(Carlson, 1977)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1402, |
| "end": 1407, |
| "text": "(RED;", |
| "ref_id": null |
| }, |
| { |
| "start": 1408, |
| "end": 1430, |
| "text": "O'Gorman et al., 2016)", |
| "ref_id": null |
| }, |
| { |
| "start": 1463, |
| "end": 1493, |
| "text": "Friedrich and Palmer, 2014a,b;", |
| "ref_id": null |
| }, |
| { |
| "start": 1494, |
| "end": 1517, |
| "text": "Friedrich et al., 2015;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 1518, |
| "end": 1548, |
| "text": "Friedrich and Pinkal, 2015b,a;", |
| "ref_id": null |
| }, |
| { |
| "start": 1549, |
| "end": 1572, |
| "text": "Friedrich et al., 2016)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 1698, |
| "end": 1716, |
| "text": "(Ide et al., 2010)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A potential argument in favor of mapping into a multi-class scheme is that, if it is sufficiently elaborated, the relevant decomposition may be recoverable. But regardless of such an elaboration, uncertainty about which class any particular entity or event falls into cannot be ignored. Some ex-amples may just not have categorically correct answers; and even if they do, annotator uncertainty and bias may obscure them. To account for this, we develop a novel annotation framework that both (i) explicitly captures annotator confidence about the different referential properties discussed above and (ii) attempts to correct for annotator bias using standard psycholinguistic methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We divide our framework into two protocols-the argument and predicate protocols-that probe properties of individuals and situations (i.e., events or states) referred to in a clause. Drawing inspiration from prior work in decompositional semantics (White et al., 2016) , a crucial aspect of our framework is that (i) multiple properties can be simultaneously true for a particular individual or situation (event or state); and (ii) we explicitly collect confidence ratings for each property. This makes our framework highly extensible, because further properties can be added without breaking a strict multi-class ontology.", |
| "cite_spans": [ |
| { |
| "start": 247, |
| "end": 267, |
| "text": "(White et al., 2016)", |
| "ref_id": "BIBREF60" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Framework", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Drawing inspiration from the prior literature on generalization discussed in \u00a71 and \u00a72, we focus on properties that lie along three main axes: whether a predicate or its arguments refer to (i) instantiated or spatiotemporally delimited (i.e., particular) situations or individuals; (ii) classes of situations (i.e., hypothetical situations) or kinds of individuals; and/or (iii) intangible (i.e., abstract concepts or stative situations).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Framework", |
| "sec_num": "3" |
| }, |
| { |
| "text": "This choice of axes is aimed at allowing our framework to capture not only the standard EPISODIC-HABITUAL-GENERIC distinction, but also phenomena that do not fit neatly into this distinction, such as taxonomic reference, abstract reference, and weak definites. The idea here is similar to prior decompositional semantics work on semantic protoroles (Reisinger et al., 2015; White et al., 2016 White et al., , 2017 , which associates categories like AGENT or PATIENT with sets of more basic properties, such as volitionality, causation, change-of-state, and so forth, and is similarly inspired by classic theoretical work (Dowty, 1991) .", |
| "cite_spans": [ |
| { |
| "start": 349, |
| "end": 373, |
| "text": "(Reisinger et al., 2015;", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 374, |
| "end": 392, |
| "text": "White et al., 2016", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 393, |
| "end": 413, |
| "text": "White et al., , 2017", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 621, |
| "end": 634, |
| "text": "(Dowty, 1991)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Framework", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In our framework, prototypical episodics, habituals, and generics correspond to sets of properties that the referents of a clause's head predicate and arguments have-namely, clausal categories are built up from properties of the predicates that head them along with those predicates' arguments. For instance, prototypical episodic statements, like those in (1), have arguments that only refer to particular, non-kind, non-abstract individuals and a predicate that refers to a particular event or (possibly) state; prototypical habitual statements, like those in (2) have arguments that refer to at least one particular, non-kind, non-abstract individual and a predicate that refers to a non-particular, dynamic event; and prototypical generics, like those in (3), have arguments that only refer to kinds of individuals and a predicate that refers to non-particular situations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Framework", |
| "sec_num": "3" |
| }, |
| { |
| "text": "It is important to note that these are all prototypical properties of episodic, habitual, or generic statements, in the same way that volitionality is a prototypical property of agents and change-ofstate is a prototypical property of patients. That is, our framework explicitly allows for bleed between categories because it only commits to the referential properties, not the categories themselves. It is this ambivalence toward sharp categories that also allows our framework to capture phenomena that fall outside the bounds of the standard three-way distinction. For instance, taxonomic reference, as in (5), and weak definites, as in (7), prototypically involve an argument being both particular-and kind-referring; stage-level properties, as in (8), prototypically involve particular, non-dynamic situations, while individual-level properties, as in (9), prototypically involve non-particular, nondynamic situations. Figure 1 shows examples of the argument protocol (top) and predicate protocol (bottom), whose implementation is based on the event factuality annotation protocol described by White et al. (2016) and Rudinger et al. (2018) . Annotators are presented with a sentence with one or many words highlighted, followed by statements pertaining to the highlighted words in the context of the sentence. They are then asked to fill in the statement with a binary response saying whether it does or does not hold and to give their confidence on a 5-point scale-not at all confident (1), not very confident (2), somewhat confident (3), very confident (4), and totally confident (5).", |
| "cite_spans": [ |
| { |
| "start": 1098, |
| "end": 1117, |
| "text": "White et al. (2016)", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 1122, |
| "end": 1144, |
| "text": "Rudinger et al. (2018)", |
| "ref_id": "BIBREF51" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 923, |
| "end": 931, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotation Framework", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To demonstrate the efficacy of our framework for use in bulk annotation (reported in \u00a76), we conduct a validation study on both our predicate and argument protocols. The aim of these studies is to establish that annotators display reasonable agreement when annotating for the properties in each protocol, relative to their reported confidence. We expect that, the more confident both annotators are in their annotation, the more likely it should be that annotators agree on those annotations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Framework Validation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "To ensure that the findings from our validation studies generalize to the bulk annotation setting, we simulate the bulk setting as closely as possible: (i) randomly sampling arguments and predicates for annotation from the same corpus we conduct the bulk annotation on UD-EWT; and (ii) allowing annotators to do as many or as few annotations as they would like. This design makes standard measures of interannotator agreement somewhat difficult to accurately compute, because different pairs of annotators may annotate only a small number of overlapping items (arguments/ predicates), so we turn to standard statistical methods from psycholinguistics to assist in estimation of interannotator agreement.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Framework Validation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We extracted predicates and their arguments from the gold UD parses from UD-EWT using PredPatt (White et al., 2016; Zhang et al., 2017b) . From the UD-EWT training set, we then randomly sampled 100 arguments from those headed by a DET, NUM, NOUN, PROPN, or PRON and 100 predicates from those headed by a ADJ, NOUN, NUM, DET, PROPN, PRON, VERB, or AUX.", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 115, |
| "text": "(White et al., 2016;", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 116, |
| "end": 136, |
| "text": "Zhang et al., 2017b)", |
| "ref_id": "BIBREF62" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate and argument extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "Annotators A total of 44 annotators were recruited from Amazon Mechanical Turk to annotate arguments; and 50 annotators were recruited to annotate predicates. In both cases, arguments and predicates were presented in batches of 10, with each predicate and argument annotated by 10 annotators.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate and argument extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "Confidence normalization Because different annotators use the confidence scale in different ways (e.g., some annotators use all five options while others only ever respond with totally confident (5)) we normalize the confidence ratings for each property using a standard ordinal scale normalization technique known as ridit scoring (Agresti, 2003) . In ridit scoring, ordinal labels are mapped to (0, 1) using the empirical cumulative distribution function of the ratings given by each annotator. Specifically, for the responses y (a) given by annotator a, ridit y (a) y", |
| "cite_spans": [ |
| { |
| "start": 332, |
| "end": 347, |
| "text": "(Agresti, 2003)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 531, |
| "end": 534, |
| "text": "(a)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate and argument extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "(a) i = ECDF y (a) y (a) i \u2212 1 + 0.5 \u00d7 ECDF y (a) y (a) i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate and argument extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "Ridit scoring has the effect of reweighting the importance of a scale label based on the frequency with which it is used. For example, insofar as an annotator rarely uses extreme values, such as not at all confident or totally confident, the annotator is likely signaling very low or very high confidence, respectively, when they are used; and insofar as an annotator often uses extreme values, the annotator is likely not signaling particularly low or particularly high confidence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate and argument extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "Interannotator Agreement (IAA) Common IAA statistics, such as Cohen's or Fleiss' \u03ba, rely on the ability to compute both an expected agreement p e and an observed agreement p o , with \u03ba \u2261 p o \u2212p e 1\u2212p e . Such a computation is relatively straightforward when a small number of annotators annotate many items, but when many annotators each annotate a small number of items pairwise, p e and p o can be difficult to estimate accurately, especially for annotators that only annotate a few items total. Further, there is no standard way to incorporate confidence ratings, like the ones we collect, into these IAA measures.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate and argument extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "To overcome these obstacles, we use a combination of mixed and random effects models (Gelman and Hill, 2014) , which are extremely common in the analysis of psycholinguistic data (Baayen, 2008) , to estimate p e and p o for each property. The basic idea behind using these models is to allow our estimates of p e and p o to be sensitive to the number of items annotators annotated as well as how annotators' confidence relates to agreement.", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 108, |
| "text": "(Gelman and Hill, 2014)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 179, |
| "end": 193, |
| "text": "(Baayen, 2008)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate and argument extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "To estimate p e for each property, we fit a random effects logistic regression to the binary responses for that property, with random intercepts for both annotator and item. The fixed intercept estimate\u03b2 0 for this model is an estimate of the log-odds that the average annotator would answer true on that property for the average item; and the random intercepts give the distribution of actual annotator (\u03c3 ann ) or item (\u03c3 item ) biases. Table 1 gives the estimates for each property. We note a substantial amount of variability in the bias different annotators have for answering true on many of these properties. This variability is evidenced by the fact that\u03c3 ann and\u03c3 item are similar across properties, and it suggests the need to adjust for annotator biases when analyzing these data, which we do both here and for our bulk annotation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 439, |
| "end": 446, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicate and argument extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "To compute p e from these estimates, we use a parametric bootstrap. On each replicate, we sample annotator biases b 1 , b 2 independently from N (\u03b2 0 ,\u03c3 ann ), then compute the expected probability of random agreement in the standard way:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate and argument extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03c0 1 \u03c0 2 + (1 \u2212 \u03c0 1 )(1 \u2212 \u03c0 2 ), where \u03c0 i = logit \u22121 (b i ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate and argument extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "We compute the mean across 9,999 such replicates to obtain p e , shown in Table 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 74, |
| "end": 81, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicate and argument extraction", |
| "sec_num": null |
| }, |
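The bootstrap step described above can be sketched as follows. This is a minimal illustration assuming the fitted fixed intercept and annotator random-effect standard deviation are available as plain floats (`beta0`, `sigma_ann`); the function name and use of NumPy's generator API are ours:

```python
import numpy as np

def expected_agreement(beta0: float, sigma_ann: float,
                       n_reps: int = 9999, seed: int = 0) -> float:
    """Parametric bootstrap for p_e: on each replicate, sample two annotator
    biases from N(beta0, sigma_ann), map them to response probabilities with
    the inverse logit, and compute the chance-agreement probability
    pi_1*pi_2 + (1 - pi_1)*(1 - pi_2); return the mean over replicates."""
    rng = np.random.default_rng(seed)
    b = rng.normal(beta0, sigma_ann, size=(n_reps, 2))
    pi = 1.0 / (1.0 + np.exp(-b))  # inverse logit
    agree = pi[:, 0] * pi[:, 1] + (1 - pi[:, 0]) * (1 - pi[:, 1])
    return float(agree.mean())
```

With no annotator variability (sigma_ann = 0) and a neutral intercept (beta0 = 0), both annotators answer true with probability 0.5 and chance agreement is exactly 0.5.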
| { |
| "text": "To estimate p o for each property in a way that takes annotator confidence into account, we first compute, for each pair of annotators, each item they both annotated, and each property they annotated that item on, whether or not they agree in their annotation. We then fit separate mixed effects logistic regressions for each property to this agreement variable, with a fixed intercept \u03b2 0 and slope \u03b2 conf for the product of the annotators' confidence for that item and random intercepts for both annotator and item. 2 We find, for all properties, that there is a reliable increase (i.e., a positive\u03b2 conf ) in agreement as annotators' confidence ratings go up (ps < 0.001). This corroborates our prediction that annotators should have higher agreement for things they are confident about. It also suggests the need to incorporate confidence ratings into the annotations our models are trained on, which we do in our normalization of the bulk annotation responses.", |
| "cite_spans": [ |
| { |
| "start": 518, |
| "end": 519, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate and argument extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "From the fixed effects, we can obtain an estimate of the probability of agreement for the average pair of annotators at each confidence level between 0 and 1. We compute two versions of \u03ba based on such estimates: \u03ba low , which corresponds to 0 confidence for at least one annotator in a pair, and \u03ba high , which corresponds to perfect confidence for both. Table 2 shows these \u03ba estimates.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 356, |
| "end": 363, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicate and argument extraction", |
| "sec_num": null |
| }, |
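Given the fixed-effects estimates, the confidence-conditioned observed agreement and the two \u03ba variants can be sketched as below. This is an illustrative reading of the procedure, assuming the fitted intercept and confidence slope are plain floats; the helper names are ours:

```python
import math

def agreement_probability(beta0: float, beta_conf: float,
                          conf_product: float) -> float:
    """Fixed-effects estimate of p_o for an average pair of annotators whose
    confidence product is conf_product (0 = lowest, 1 = highest)."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta_conf * conf_product)))

def kappa(p_o: float, p_e: float) -> float:
    """kappa = (p_o - p_e) / (1 - p_e)."""
    return (p_o - p_e) / (1.0 - p_e)

# kappa_low uses conf_product = 0 (zero confidence for at least one annotator);
# kappa_high uses conf_product = 1 (perfect confidence for both).
```

A positive confidence slope guarantees that the agreement probability, and hence \u03ba, is higher at conf_product = 1 than at conf_product = 0, matching the \u03ba_high > \u03ba_low pattern reported in the text.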
| { |
| "text": "As implied by reliably positive\u03b2 conf s, we see that \u03ba high is greater than \u03ba low for all properties. Further, with the exception of DYNAMIC, \u03ba high is generally comparable to the \u03ba estimates reported in annotations by trained annotators using a multi-class framework. For instance, compare the metrics in Table 2 to \u03ba ann in Table 3 (see \u00a75 for details), which gives the Fleiss' \u03ba metric for clause types in the SitEnt dataset (Friedrich et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 428, |
| "end": 452, |
| "text": "(Friedrich et al., 2016)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 306, |
| "end": 313, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 326, |
| "end": 333, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicate and argument extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "To demonstrate that our framework subsumes standard distinctions (e.g., EPISODIC v. HABITUAL v. GENERIC) we conduct a study comparing annotations assigned under our multi-label framework to those assigned under a framework that recognizes such multi-class distinctions. We choose the the SitEnt framework for this comparison, because 2 We use the product of annotator confidences because it is large when both annotators have high confidence and small when either annotator has low confidence and always remains between 0 (lowest confidence) and 1 (highest confidence). Table 3 : Predictability of standard ontology using our property set in a kernelized support vector classifier.", |
| "cite_spans": [ |
| { |
| "start": 334, |
| "end": 335, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 570, |
| "end": 577, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison to Standard Ontology", |
| "sec_num": "5" |
| }, |
| { |
| "text": "it assumes a categorical distinction between GENERIC, HABITUAL (their GENERALIZING), EPISODIC (their EVENTIVE), and STATIVE clauses (Friedrich and Palmer, 2014a,b; Friedrich et al., 2015; Friedrich and Pinkal, 2015b,a; Friedrich et al., 2016) . 3 SitEnt is also a useful comparison because it was constructed by highly trained annotators who had access to the entire document containing the clause being annotated, thus allowing us to assess both how much it matters that we use only very lightly trained annotators and do not provide document context.", |
| "cite_spans": [ |
| { |
| "start": 132, |
| "end": 163, |
| "text": "(Friedrich and Palmer, 2014a,b;", |
| "ref_id": null |
| }, |
| { |
| "start": 164, |
| "end": 187, |
| "text": "Friedrich et al., 2015;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 188, |
| "end": 218, |
| "text": "Friedrich and Pinkal, 2015b,a;", |
| "ref_id": null |
| }, |
| { |
| "start": 219, |
| "end": 242, |
| "text": "Friedrich et al., 2016)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 245, |
| "end": 246, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison to Standard Ontology", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Predicate and argument extraction For each of GENERIC, HABITUAL, STATIVE, and EVENTIVE, we randomly sample 100 clauses from SitEnt such that (i) that clause's gold annotation has that category; and (ii) all SitEnt annotators agreed on that annotation. We annotate the mainReferent of these clauses (as defined by SitEnt) in our argument protocol and the mainVerb in our predicate protocol, providing annotators only the sentence containing the clause.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison to Standard Ontology", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Annotators 42 annotators were recruited from Amazon Mechanical Turk to annotate arguments, and 45 annotators were recruited to annotate predicates-both in batches of 10, with each predicate and argument annotated by 5 annotators.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison to Standard Ontology", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Annotation normalization As noted in \u00a74, different annotators use the confidence scale differently and have different biases for responding true or false on different properties (see Table 1 ). To adjust for these biases, we construct a normalized score for each predicate and argument using mixed effects logistic regressions. These mixed effects models all had (i) a hinge loss with margin set to the normalized confidence rating; (ii) fixed effects for property (PARTICULAR, KIND, and ABSTRACT for arguments; PARTICULAR, HYPOTHETICAL, and DYNAMIC for predicates) token, and their interaction; and (iii) by-annotator random intercepts and random slopes for property with diagonal covariance matrices. The rationale behind (i) is that true should be associated with positive values; false should be associated with negative values; and the confidence rating should control how far from zero the normalized rating is, adjusting for the biases of annotators that responded to a particular item. The resulting response scale is analogous to current approaches to event factuality annotation (Lee et al., 2015; Stanovsky et al., 2017; Rudinger et al., 2018) . We obtain a normalized score from these models by setting the Best Linear Unbiased Predictors for the by-annotator random effects to zero and using the Best Linear Unbiased Estimators for the fixed effects to obtain a real-valued label for each token on each property. This procedure amounts to estimating a label for each property and each token based on the ''average annotator.'' Quantitative comparison To compare our annotations to the gold situation entity types from SitEnt, we train a support vector classifier with a radial basis function kernel to predict the situation entity type of each clause on the basis of the normalized argument property annotations for that clause's mainReferent and the normalized predicate property annotations for that clause's mainVerb. 
The hyperparameters for this support vector classifier were selected using exhaustive grid search over the regularization parameter \u03bb \u2208 {1, 10, 100, 1000} and bandwidth \u03c3 \u2208 10 \u22122 , 10 \u22123 , 10 \u22124 , 10 \u22125 in a 5-fold crossvalidation (CV). This 5-fold CV was nested within a 10-fold CV, from which we calculate metrics. Table 3 reports the precision, recall, and F-score computed using the held-out set in each fold of the 10-fold CV. For purposes of comparison, it also gives the Fleiss' \u03ba reported by Friedrich et al. (2016) for each property (\u03ba ann ) as well as Cohen's \u03ba between our model predictions on the held-out folds and the gold SitEnt annotations (\u03ba mod ). One way to think about \u03ba mod is that it tells us what agreement we would expect if we used our model as an annotator instead of highly trained humans.", |
| "cite_spans": [ |
| { |
| "start": 1089, |
| "end": 1107, |
| "text": "(Lee et al., 2015;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 1108, |
| "end": 1131, |
| "text": "Stanovsky et al., 2017;", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 1132, |
| "end": 1154, |
| "text": "Rudinger et al., 2018)", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 2434, |
| "end": 2457, |
| "text": "Friedrich et al. (2016)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 183, |
| "end": 190, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 2251, |
| "end": 2258, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison to Standard Ontology", |
| "sec_num": "5" |
| }, |
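The nested cross-validation setup described above can be sketched with scikit-learn. This is an illustrative sketch on synthetic data: the feature matrix is a stand-in for the normalized property annotations, and passing the paper's grid values to scikit-learn's C and gamma is an assumption about how (\u03bb, \u03c3) map onto that library's RBF parametrization:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold, cross_val_predict
from sklearn.svm import SVC

# Toy stand-in for the normalized property annotations
# (3 argument + 3 predicate properties per clause).
rng = np.random.RandomState(0)
X = rng.randn(120, 6)
y = rng.randint(0, 4, size=120)  # 4 situation entity types

# Inner 5-fold grid search over regularization and kernel width.
inner = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [1, 10, 100, 1000],
                "gamma": [1e-2, 1e-3, 1e-4, 1e-5]},
    cv=5,
)

# Outer 10-fold CV; held-out predictions from each fold feed the metrics.
preds = cross_val_predict(inner, X, y, cv=KFold(10, shuffle=True, random_state=0))
```

Fitting the grid search inside each outer fold keeps hyperparameter selection honest: the held-out fold used for metrics never influences the chosen (C, gamma).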
| { |
| "text": "We see that our model's agreement (\u03ba mod ) tracks interannotator agreement (\u03ba ann ) surprisingly well. Indeed, in some cases, such as for GENERIC, our model's agreement is within a few points of interannotator agreement. This pattern is surprising, because our model is based on annotations by very lightly trained annotators who have access to very limited context compared with the annotators of SitEnt, who receive the entire document in which a clause is found. Indeed, our model has access to even less context than it could otherwise have on the basis of our framework, since we only annotate one of the potentially many arguments occurring in a clause; thus, the metrics in Table 3 are likely somewhat conservative. This pattern may further suggest that, although having extra context for annotating complex semantic phenomena is always preferable, we still capture useful information by annotating only isolated sentences. Figure 2 shows the mean normalized value for each property in our framework broken out by clause type. As expected, we see that episodics tend to have particular-referring arguments and predicates, whereas generics tend to have kind-referring arguments and non-particular predicates. Also as expected, episodics and habituals tend to refer to situations that are more dynamic than statives and generics. But although it makes sense that generics would be, on average, near zero for dynamicity-since generics can be about both dynamic and non-dynamic situations-it is less clear why statives are not more negative. This pattern may arise in some way from the fact that there is relatively lower agreement on dynamicity, as noted in \u00a74.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 681, |
| "end": 688, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 931, |
| "end": 939, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison to Standard Ontology", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We use our annotation framework to collect annotations of predicates and arguments on UD-EWT using the PredPatt system-thus yielding the Universal Decompositional Semantics-Genericity (UDS-G) dataset. Using UD-EWT in conjunction with PredPatt has two main advantages over other similar corpora: (i) UD-EWT contains text from multiple genres-not just newswire-with gold standard Universal Dependency parses; and (ii) there are now a wide variety of other semantic annotations on top of UD-EWT that use the PredPatt standard (White et al., 2016; Rudinger et al., 2018; Vashishtha et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 523, |
| "end": 543, |
| "text": "(White et al., 2016;", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 544, |
| "end": 566, |
| "text": "Rudinger et al., 2018;", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 567, |
| "end": 591, |
| "text": "Vashishtha et al., 2019)", |
| "ref_id": "BIBREF56" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bulk Annotation", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Predicate and argument extraction PredPatt identifies 34,025 predicates and 56,246 arguments of those predicates from 16,622 sentences. Based on analysis of the data from our validation study ( \u00a74) and other pilot experiments (not reported here), we developed a set of heuristics for filtering certain tokens that PredPatt identifies as predicates and arguments, either because we found that there was little variability in the label assigned to particular subsets of tokens-for example, pronominal arguments (such as I, we, he, she, etc.) are almost always labeled particular, non-kind, and non-abstract (with the exception of you and they, which can be kind-referring)-or because it is not generally possible to answer questions about those tokens (e.g., adverbial predicates are excluded). Based on these filtering heuristics, we retain 37,146 arguments and 33,114 predicates for annotation. Table 4 compares these numbers against the resources described in \u00a72.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 895, |
| "end": 902, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Bulk Annotation", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Annotators We recruited 482 annotators from Amazon Mechanical Turk to annotate arguments, and 438 annotators were recruited to annotate predicates. Arguments and predicates in the UD-EWT validation and test sets were annotated by three annotators each; and those in the UD-EWT train set were annotated by one each. All annotations were performed in batches of 10.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bulk Annotation", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We use the annotation normalization procedure described in \u00a75, fit separately to our train and development splits, on the one hand, and our test split, on the other. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation normalization", |
| "sec_num": null |
| }, |
| { |
| "text": "Before presenting models for predicting our properties, we conduct an exploratory analysis to demonstrate that the properties of the dataset relate to other token-and type-level semantic properties in intuitive ways. Figure 3 plots the normalized ratings for the argument (left) and predicate (right) protocols. Each point corresponds to a token and the density plots visualize the number of points in a region.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 217, |
| "end": 225, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Exploratory Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Arguments We see that arguments have a slight tendency (Pearson correlation \u03c1 = \u22120.33) to refer to either a kind or a particular-for example, place in (10) falls in the lower right quadrant (particularreferring) and transportation in (11) falls in the upper left quadrant (kind-referring)-though there are a not insignificant number of arguments that refer to something that is both-for example, registration in (12) falls in the upper right quadrant.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "(10) I think this place is probably really great especially judging by the reviews on here.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "(11) What made it perfect was that they offered transportation so that [...] (12) Some places do the registration right at the hospital [...] We also see that there is a slight tendency for arguments that are neither particular-referring (\u03c1 = \u22120.28) nor kind-referring (\u03c1 = \u22120.11) to be abstract-referring-for example, power in (13) falls in the lower left quadrant (only abstractreferring)-but that there are some arguments that refer to abstract particulars and some that refer to abstract kinds-for example, both reputation 14and argument (15) are abstract, but reputation falls in the lower right quadrant, while argument falls in the upper left.", |
| "cite_spans": [ |
| { |
| "start": 71, |
| "end": 76, |
| "text": "[...]", |
| "ref_id": null |
| }, |
| { |
| "start": 136, |
| "end": 141, |
| "text": "[...]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "(13) Power be where power lies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "(14) Meanwhile, his reputation seems to be improving, although Bangs noted a ''pretty interesting social dynamic.'' (15) The Pew researchers tried to transcend the economic argument.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Predicates We see that there is effectively no tendency (\u03c1 = 0.00) for predicates that refer to particular situations to refer to dynamic eventsfor example, faxed in (16) falls in the upper right quadrant (particular-and dynamic-referring), while available in (17) falls in the lower right quadrant (particular-and non-dynamic-referring). We consider two forms of predicate and argument representations to predict the three attributes in our framework: hand-engineered features and learned features. For both, we contrast both type-level information and token-level information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Hand-engineered features We consider five sets of type-level hand-engineered features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "1. Concreteness Concreteness ratings for root argument lemmas in the argument protocol from the concreteness database (Brysbaert et al., 2014) and the mean, maximum, and minimum concreteness rating of a predicate's arguments in the predicate protocol.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 142, |
| "text": "(Brysbaert et al., 2014)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "2. Eventivity Eventivity and stativity ratings for the root predicate lemma in the predicate protocol and the predicate head of the root argument in the argument protocol from the LCS database (Dorr, 2001 ).", |
| "cite_spans": [ |
| { |
| "start": 193, |
| "end": 204, |
| "text": "(Dorr, 2001", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "3. VerbNet Verb classes from VerbNet (Schuler, 2005) for root predicate lemmas.", |
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 52, |
| "text": "(Schuler, 2005)", |
| "ref_id": "BIBREF53" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "4. FrameNet Frames evoked by root predicate lemmas in the predicate protocol and for both the root argument lemma and its predicate head in the argument protocol from FrameNet (Baker et al., 1998) .", |
| "cite_spans": [ |
| { |
| "start": 176, |
| "end": 196, |
| "text": "(Baker et al., 1998)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "The union of WordNet (Fellbaum, 1998) supersenses (Ciaramita and Johnson, 2003) for all WordNet senses the root argument or predicate lemmas can have.", |
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 37, |
| "text": "(Fellbaum, 1998)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 50, |
| "end": 79, |
| "text": "(Ciaramita and Johnson, 2003)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WordNet", |
| "sec_num": "5." |
| }, |
| { |
| "text": "And we consider two sets of token-level handengineered features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WordNet", |
| "sec_num": "5." |
| }, |
| { |
| "text": "1. Syntactic features POS tags, UD morphological features, and governing dependencies were extracted using PredPatt for the predicate/argument root and all of its dependents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WordNet", |
| "sec_num": "5." |
| }, |
| { |
| "text": "2. Lexical features Function words (determiners, modals, auxiliaries) in the dependents of the arguments and predicates.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WordNet", |
| "sec_num": "5." |
| }, |
| { |
| "text": "Learned features For our type-level learned features, we use the 42B uncased GloVe embeddings for the root of the annotated predicate or argument (Pennington et al., 2014) . For our tokenlevel learned features, we use 1,024-dimensional ELMo embeddings (Peters et al., 2018) . To obtain the latter, the UD-EWT sentences are passed as input to the ELMo three-layered biLM, and we extract the output of all three hidden layers for the root of the annotated predicates and arguments, giving us 3,072-dimensional vectors for each.", |
| "cite_spans": [ |
| { |
| "start": 146, |
| "end": 171, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 252, |
| "end": 273, |
| "text": "(Peters et al., 2018)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WordNet", |
| "sec_num": "5." |
| }, |
| { |
| "text": "Labeling models For each protocol, we predict the three normalized properties corresponding to the annotated token(s) using different subsets of the above features. The feature representation is used as the input to a multilayer perceptron with ReLU nonlinearity and L1 loss. The number of hidden layers and their sizes are hyperparameters that we tune on the development set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WordNet", |
| "sec_num": "5." |
| }, |
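The labeling model's forward computation and training criterion are easy to state concretely. The paper trains in pytorch; the following is a dependency-light NumPy sketch of the same computation (a one-hidden-layer variant, with our own function names), not the authors' implementation:

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP with ReLU: feature vectors in,
    three real-valued property scores per token out."""
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU nonlinearity
    return h @ W2 + b2

def l1_loss(pred, target):
    """Mean absolute error, the L1 training criterion described above."""
    return float(np.abs(pred - target).mean())
```

The hidden layer count and sizes are hyperparameters; in the sketch they are fixed by the shapes of W1 and W2.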
| { |
| "text": "Implementation For all experiments, we use stochastic gradient descent to train the multilayer perceptron parameters with the Adam optimizer (Kingma and Ba, 2015), using the default learning rate in pytorch (10 \u22123 ). We performed ablation experiments on the four major classes of features discussed above.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WordNet", |
| "sec_num": "5." |
| }, |
| { |
| "text": "Hyperparameters For each of the ablation experiments, we ran a hyperparameter grid search over hidden layer sizes (one or two hidden layers with sizes h 1 , h 2 \u2208 {512, 256, 128, 64, 32}; h 2 at most half of h 1 ), L2 regularization penalty l \u2208 0, 10 \u22125 , 10 \u22124 , 10 \u22123 , and the dropout probability d \u2208 {0.1, 0.2, 0.3, 0.4, 0.5}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WordNet", |
| "sec_num": "5." |
| }, |
| { |
| "text": "Development For all models, we train for at most 20 epochs with early stopping. At the end of each epoch, the L1 loss is calculated on the development set, and if it is higher than the previous epoch, we stop training, saving the parameter values from the previous epoch.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WordNet", |
| "sec_num": "5." |
| }, |
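The early-stopping rule just described can be sketched as a small driver loop. This is an illustrative skeleton, with `train_step` and `dev_loss` as stand-ins for the real epoch of training and the development-set L1 evaluation:

```python
def train_with_early_stopping(train_step, dev_loss, max_epochs=20):
    """Train for at most max_epochs; after each epoch, evaluate dev loss.
    If it rises relative to the best seen so far, stop and return the
    parameters saved from the previous (best) epoch.

    train_step(epoch) -> params after that epoch; dev_loss(params) -> float.
    """
    best_params, best_loss = None, float("inf")
    for epoch in range(max_epochs):
        params = train_step(epoch)
        loss = dev_loss(params)
        if loss > best_loss:  # dev loss went up: stop, keep previous params
            break
        best_params, best_loss = params, loss
    return best_params, best_loss
```

With a dev-loss trajectory of 3.0, 2.0, 2.5, training stops after the third epoch and the parameters from the second epoch are returned.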
| { |
| "text": "Evaluation Consonant with work in event factuality prediction, we report Pearson correlation (\u03c1) and proportion of mean absolute error (MAE) explained by the model, which we refer to as R1 on analogy with the variance explained R2 = \u03c1 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WordNet", |
| "sec_num": "5." |
| }, |
| { |
| "text": "R1 = 1 \u2212 MAE p model MAE p baseline", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WordNet", |
| "sec_num": "5." |
| }, |
| { |
| "text": "where MAE p baseline is always guessing the median for property p. We calculate R1 across properties (wR1) by taking the mean R1 weighted by the MAE for each property.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WordNet", |
| "sec_num": "5." |
| }, |
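The R1 metric and its weighted aggregate can be sketched directly from the definition above. This is an illustrative sketch; in particular, weighting by each property's baseline MAE is our reading of "weighted by the MAE for each property," not a detail the paper spells out:

```python
import numpy as np

def r1(y_true, y_pred):
    """Proportion of MAE explained relative to a median-guessing baseline:
    R1 = 1 - MAE_model / MAE_baseline."""
    mae_model = np.abs(y_true - y_pred).mean()
    mae_baseline = np.abs(y_true - np.median(y_true)).mean()
    return float(1.0 - mae_model / mae_baseline)

def weighted_r1(y_true_by_prop, y_pred_by_prop):
    """Mean R1 across properties, weighted by each property's baseline MAE
    (an assumption about the exact weighting)."""
    maes, r1s = [], []
    for yt, yp in zip(y_true_by_prop, y_pred_by_prop):
        maes.append(np.abs(yt - np.median(yt)).mean())
        r1s.append(r1(yt, yp))
    maes, r1s = np.array(maes), np.array(r1s)
    return float((maes * r1s).sum() / maes.sum())
```

Perfect predictions give R1 = 1, and a model that always guesses the median gives R1 = 0, mirroring how R2 behaves for variance explained.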
| { |
| "text": "These metrics together are useful, because \u03c1 tells us how similar the predictions are to the true values, ignoring scale, and R1 tells us how close the predictions are to the true values, after accounting for variability in the data. We focus mainly on differences in relative performance among our models, but for comparison, stateof-the-art event factuality prediction systems obtain \u03c1 \u2248 0.77 and R1 \u2248 0.57 for predicting event factuality on the predicates we annotate (Rudinger et al., 2018) . Table 5 contains the results on the test set for both the argument (top) and predicate (bottom) protocols. We see that (i) our models are generally better able to predict referential properties of arguments than those of predicates; (ii) for both predicates and arguments, contextual learned representations contain most of the relevant information for both arguments and predicates, though the addition of hand-engineered features can give a slight performance boost, particularly for the predicate properties; and (iii) the proportion of absolute error explained is significantly lower than what we might expect from the variance explained implied by the correlations. We discuss (i) and (ii) here, deferring discussion of (iii) to \u00a710. This seems likely to be a product of abstract reference being fairly strongly associated with particular lexical items, while most arguments can refer to particulars and kinds (and which they refer to is context-dependent). And in light of the relatively good performance of contextual learned features alone, it suggests that these contextual learned features-in contrast to the hand-engineered token-level features-are able to use this information coming from the lexical item.", |
| "cite_spans": [ |
| { |
| "start": 471, |
| "end": 494, |
| "text": "(Rudinger et al., 2018)", |
| "ref_id": "BIBREF51" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 497, |
| "end": 504, |
| "text": "Table 5", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "WordNet", |
| "sec_num": "5." |
| }, |
| { |
| "text": "Interestingly, however, the models with both contextual learned features (ELMo) and handengineered token-level features perform slightly better than those without the hand-engineered features across the board, suggesting that there is some (small) amount of contextual information relevant to generalization that the contextual learned features are missing. This performance boost may be diminished by improved contextual encoders, such as BERT (Devlin et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 445, |
| "end": 466, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "9" |
| }, |
| { |
| "text": "We see a pattern similar to the one observed for the argument properties mirrored in the predicate properties: Whereas type-level hand-engineered and learned features perform relatively poorly for properties such as IS.PARTICULAR and IS.HYPOTHETICAL, they are able to predict IS.DYNAMIC relatively well compared with the models with all features. The converse of this also holds: Token-level hand-engineered features are better able to predict IS.PARTICULAR and IS.HYPOTHETICAL, but perform relatively poorly on their own for IS.DYNAMIC.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate properties", |
| "sec_num": null |
| }, |
| { |
| "text": "One caveat here is that, unlike for IS.ABSTRACT, type-level learned features (GloVe) alone perform quite poorly for IS.DYNAMIC, and the difference between the models with only type-level handengineered features and the ones with only token-level hand-engineered features is less stark for IS.DYNAMIC than for IS.ABSTRACT. This may suggest that, though IS.DYNAMIC is relatively constrained by the lexical item, it may be more contextually determined than IS.ABSTRACT. Another Table 5. major difference between the argument properties and the predicate properties is that IS.PARTICULAR is much more difficult to predict than IS.HYPOTHETICAL. This contrasts with IS.PARTICULAR for arguments, which is easier to predict than IS.KIND. Figure 4 plots the true (normalized) property values for the argument (top) and predicate (bottom) protocols from the development set against the values predicted by the models highlighted in blue in Table 5 . Points are colored by the part-of-speech of the argument or predicate root.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 475, |
| "end": 483, |
| "text": "Table 5.", |
| "ref_id": "TABREF8" |
| }, |
| { |
| "start": 730, |
| "end": 738, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 930, |
| "end": 937, |
| "text": "Table 5", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicate properties", |
| "sec_num": null |
| }, |
| { |
| "text": "We see two overarching patterns. First, our models are generally reluctant to predict values outside the [\u22121, 1] range, despite the fact that there are not an insignificant number of true values outside this range. This behavior likely contributes to the difference we saw between the \u03c1 and R1 metrics, wherein R1 was generally worse than we would expect from \u03c1. This pattern is starkest for IS.PARTICULAR in the predicate protocol, where predictions are nearly all constrained to [0, 1] . Second, the model appears to be heavily reliant on part-of-speech information-or some semantic information related to part-of-speech-for making predictions. This behavior can be seen in the fact that, though common noun-rooted arguments get relatively variable predictions, pronoun-and proper noun-rooted arguments are almost always predicted to be particular, non-kind, non-abstract; and though verb-rooted predicates also get relatively variable predictions, common noun-, adjective-, and proper noun-rooted predicates are almost always predicted to be non-dynamic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "10" |
| }, |
| { |
| "text": "Argument protocol Proper nouns tend to refer to particular, non-kind, non-abstract entities, but they can be kind-referring, which our models miss: iPhone in (20) and Marines in (19) were predicted to have low kind scores and high particular scores, while annotators labeled these arguments as non-particular and kind-referring. The same holds for pronouns. As mentioned in \u00a76, we filtered out several pronominal arguments, but certain pronouns-like you, they, yourself, and themselves-were not filtered because they can have both particular- and kind-referring uses. Our models fail to capture instances where pronouns are labeled kind-referring (e.g., you in (21) and (22)), consistently predicting low IS.KIND scores, likely because such uses are rare in our data. This behavior is not seen with common nouns: The model correctly predicts common nouns in certain contexts as non-particular, non-abstract, and kind-referring (e.g., food in (23) and men in (24)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "10" |
| }, |
| { |
| "text": "(23) Kitchen puts out good food [...] (24) just saying most men suck! Predicate protocol As in the argument protocol, general trends associated with part-of-speech are exaggerated by the model. We noted in \u00a77 that annotators tend to annotate hypothetical predicates as non-particular and vice versa (\u03c1 = \u22120.25), but the model's predictions are anti-correlated to a much greater extent (\u03c1 = \u22120.79). For example, annotators are more willing to say a predicate can refer to particular, hypothetical situations (25) or non-particular, non-hypothetical situations (26).", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 37, |
| "text": "[...]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "10" |
| }, |
| { |
| "text": "(25) Read the entire article [...] (26) it s illegal to sell stolen property, even if you don't know its stolen.", |
| "cite_spans": [ |
| { |
| "start": 29, |
| "end": 34, |
| "text": "[...]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "10" |
| }, |
| { |
| "text": "The model also had a bias toward predicting particular predicates to also be dynamic (\u03c1 = 0.34)-a correlation not present among annotators. For instance, is closed in (27) was annotated as particular but non-dynamic, yet the model predicted it to be particular and dynamic; and helped in (28) was annotated as non-particular and dynamic, but the model predicted particular and dynamic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "10" |
| }, |
| { |
| "text": "(27) library is closed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "10" |
| }, |
| { |
| "text": "(28) I have a new born daughter and she helped me with a lot.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "10" |
| }, |
| { |
| "text": "We have proposed a novel semantic framework for modeling linguistic expressions of generalization as combinations of simple, real-valued referential properties of predicates and their arguments. We used this framework to construct a dataset covering the entirety of the Universal Dependencies English Web Treebank and probed the ability of both hand-engineered and learned type- and token-level features to predict the annotations in this dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "11" |
| }, |
| { |
| "text": "Data, code, protocol implementation, and task instructions provided to annotators are available at decomp.io.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "SitEnt additionally assumes three other classes, contrasting with the four above: IMPERATIVE, QUESTION, and REPORT. We ignore clauses labeled with these categories.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank three anonymous reviewers and Chris Potts for useful comments on this paper as well as Scott Grimm and the FACTS.lab at the University of Rochester for useful comments on the framework and protocol design. This research was supported by the University of Rochester, JHU HLTCOE, and DARPA AIDA. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Towards universal semantic tagging", |
| "authors": [ |
| { |
| "first": "Lasha", |
| "middle": [], |
| "last": "Abzianidze", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Bos", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "IWCS 2017, 12th International Conference on Computational Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lasha Abzianidze and Johan Bos. 2017. Towards universal semantic tagging. In IWCS 2017, 12th International Conference on Computational Semantics, Short papers.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Categorical Data Analysis, 482", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Agresti", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan Agresti. 2003. Categorical Data Analysis, 482. John Wiley & Sons.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Analyzing Linguistic Data: A Practical Introduction to Statistics using R", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [ |
| "H" |
| ], |
| "last": "Baayen", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R.H. Baayen. 2008. Analyzing Linguistic Data: A Practical Introduction to Statistics using R. Cambridge University Press.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The Berkeley FrameNet Project", |
| "authors": [ |
| { |
| "first": "Collin", |
| "middle": [ |
| "F" |
| ], |
| "last": "Baker", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [ |
| "J" |
| ], |
| "last": "Fillmore", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "B" |
| ], |
| "last": "Lowe", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "86--90", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics -Volume 1, pages 86-90, Montreal.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Commonsense for generative multihop question answering tasks", |
| "authors": [ |
| { |
| "first": "Lisa", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Yicheng", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "4220--4230", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lisa Bauer, Yicheng Wang, and Mohit Bansal. 2018. Commonsense for generative multi-hop question answering tasks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4220-4230, Brussels.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Unsupervised event coreference resolution with rich linguistic features", |
| "authors": [ |
| { |
| "first": "Cosmin", |
| "middle": [ |
| "Adrian" |
| ], |
| "last": "Bejan", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanda", |
| "middle": [], |
| "last": "Harabagiu", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1412--1422", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cosmin Adrian Bejan and Sanda Harabagiu. 2010. Unsupervised event coreference resolution with rich linguistic features. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1412-1422, Uppsala.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "English Web Treebank LDC2012T13. Linguistic Data Consortium", |
| "authors": [ |
| { |
| "first": "Ann", |
| "middle": [], |
| "last": "Bies", |
| "suffix": "" |
| }, |
| { |
| "first": "Justin", |
| "middle": [], |
| "last": "Mott", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Warner", |
| "suffix": "" |
| }, |
| { |
| "first": "Seth", |
| "middle": [], |
| "last": "Kulick", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ann Bies, Justin Mott, Colin Warner, and Seth Kulick. 2012. English Web Treebank LDC2012T13. Linguistic Data Consortium, Philadelphia, PA.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Concreteness ratings for 40 thousand generally known English word lemmas", |
| "authors": [ |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Brysbaert", |
| "suffix": "" |
| }, |
| { |
| "first": "Amy", |
| "middle": [ |
| "Beth" |
| ], |
| "last": "Warriner", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Kuperman", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Behavior Research Methods", |
| "volume": "46", |
| "issue": "3", |
| "pages": "904--911", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46(3):904-911.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Weak definite noun phrases", |
| "authors": [ |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Carlson", |
| "suffix": "" |
| }, |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Sussman", |
| "suffix": "" |
| }, |
| { |
| "first": "Natalie", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Tanenhaus", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of NELS 36", |
| "volume": "", |
| "issue": "", |
| "pages": "179--196", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Greg Carlson, Rachel Sussman, Natalie Klein, and Michael Tanenhaus. 2006. Weak definite noun phrases. In Proceedings of NELS 36, pages 179-196, Amherst, MA.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Reference to Kinds in English", |
| "authors": [ |
| { |
| "first": "Greg", |
| "middle": [ |
| "N" |
| ], |
| "last": "Carlson", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Greg N. Carlson. 1977. Reference to Kinds in English. Ph.D. thesis, University of Massachusetts, Amherst.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "The Generic Book", |
| "authors": [ |
| { |
| "first": "Gregory", |
| "middle": [ |
| "N." |
| ], |
| "last": "Carlson", |
| "suffix": "" |
| }, |
| { |
| "first": "Francis", |
| "middle": [ |
| "Jeffry" |
| ], |
| "last": "Pelletier", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gregory N. Carlson and Francis Jeffry Pelletier. 1995. The Generic Book. The University of Chicago Press.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Supersense tagging of unknown nouns in WordNet", |
| "authors": [ |
| { |
| "first": "Massimiliano", |
| "middle": [], |
| "last": "Ciaramita", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "168--175", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Massimiliano Ciaramita and Mark Johnson. 2003. Supersense tagging of unknown nouns in WordNet. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 168-175.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Guidelines for ECB+ annotation of events and their coreference", |
| "authors": [ |
| { |
| "first": "Agata", |
| "middle": [], |
| "last": "Cybulska", |
| "suffix": "" |
| }, |
| { |
| "first": "Piek", |
| "middle": [], |
| "last": "Vossen", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Agata Cybulska and Piek Vossen. 2014a. Guidelines for ECB+ annotation of events and their coreference. Technical Report NWR-2014-1, VU University Amsterdam.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Using a sledgehammer to crack a nut? Lexical diversity and event coreference resolution", |
| "authors": [ |
| { |
| "first": "Agata", |
| "middle": [], |
| "last": "Cybulska", |
| "suffix": "" |
| }, |
| { |
| "first": "Piek", |
| "middle": [], |
| "last": "Vossen", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Agata Cybulska and Piek Vossen. 2014b. Using a sledgehammer to crack a nut? Lexical diversity and event coreference resolution. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), Reykjavik.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Universal Stanford dependencies: A cross-linguistic typology", |
| "authors": [ |
| { |
| "first": "Marie-Catherine De", |
| "middle": [], |
| "last": "Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Dozat", |
| "suffix": "" |
| }, |
| { |
| "first": "Natalia", |
| "middle": [], |
| "last": "Silveira", |
| "suffix": "" |
| }, |
| { |
| "first": "Katri", |
| "middle": [], |
| "last": "Haverinen", |
| "suffix": "" |
| }, |
| { |
| "first": "Filip", |
| "middle": [], |
| "last": "Ginter", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", |
| "volume": "", |
| "issue": "", |
| "pages": "4585--4592", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marie-Catherine De Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D. Manning. 2014. Universal Stanford dependencies: A cross-linguistic typology. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 4585-4592, Reykjavik.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "4171--4186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, MN.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "The Automatic Content Extraction (ACE) program-Tasks, data, and evaluation", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "R" |
| ], |
| "last": "Doddington", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [ |
| "A" |
| ], |
| "last": "Przybocki", |
| "suffix": "" |
| }, |
| { |
| "first": "Lance", |
| "middle": [ |
| "A" |
| ], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephanie", |
| "middle": [], |
| "last": "Strassel", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [ |
| "M" |
| ], |
| "last": "Weischedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George R. Doddington, Alexis Mitchell, Mark A. Przybocki, Lance A. Ramshaw, Stephanie Strassel, and Ralph M. Weischedel. 2004. The Automatic Content Extraction (ACE) program-Tasks, data, and evaluation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04), Lisbon.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Thematic proto-roles and argument selection", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Dowty", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Language", |
| "volume": "67", |
| "issue": "3", |
| "pages": "547--619", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Dowty. 1991. Thematic proto-roles and argument selection. Language, 67(3):547-619.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "WordNet: An Electronic Lexical Database", |
| "authors": [ |
| { |
| "first": "Christiane", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database, MIT Press, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Automatic prediction of aspectual class of verbs in context", |
| "authors": [ |
| { |
| "first": "Annemarie", |
| "middle": [], |
| "last": "Friedrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "517--523", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annemarie Friedrich and Alexis Palmer. 2014a. Automatic prediction of aspectual class of verbs in context. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 517-523, Baltimore, MD.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Situation entity annotation", |
| "authors": [ |
| { |
| "first": "Annemarie", |
| "middle": [], |
| "last": "Friedrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of LAW VIII -The 8th Linguistic Annotation Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "149--158", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annemarie Friedrich and Alexis Palmer. 2014b. Situation entity annotation. In Proceedings of LAW VIII -The 8th Linguistic Annotation Workshop, pages 149-158, Dublin.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Annotating genericity: A survey, a scheme, and a corpus", |
| "authors": [ |
| { |
| "first": "Annemarie", |
| "middle": [], |
| "last": "Friedrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Melissa", |
| "middle": [ |
| "Peate" |
| ], |
| "last": "S\u00f8rensen", |
| "suffix": "" |
| }, |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Pinkal", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of The 9th Linguistic Annotation Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "21--30", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annemarie Friedrich, Alexis Palmer, Melissa Peate S\u00f8rensen, and Manfred Pinkal. 2015. Annotating genericity: A survey, a scheme, and a corpus. In Proceedings of The 9th Linguistic Annotation Workshop, pages 21-30, Denver, CO.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Situation entity types: Automatic classification of clause-level aspect", |
| "authors": [ |
| { |
| "first": "Annemarie", |
| "middle": [], |
| "last": "Friedrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Pinkal", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1757--1768", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annemarie Friedrich, Alexis Palmer, and Manfred Pinkal. 2016. Situation entity types: Automatic classification of clause-level aspect. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1757-1768, Berlin.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Automatic recognition of habituals: A threeway classification of clausal aspect", |
| "authors": [ |
| { |
| "first": "Annemarie", |
| "middle": [], |
| "last": "Friedrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Pinkal", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2471--2481", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annemarie Friedrich and Manfred Pinkal. 2015a. Automatic recognition of habituals: A three-way classification of clausal aspect. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2471-2481, Lisbon.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Discourse-sensitive automatic identification of generic expressions", |
| "authors": [ |
| { |
| "first": "Annemarie", |
| "middle": [], |
| "last": "Friedrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Pinkal", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "1272--1281", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annemarie Friedrich and Manfred Pinkal. 2015b. Discourse-sensitive automatic identification of generic expressions. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1272-1281, Beijing.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Data Analysis using Regression and Multilevel-Hierarchical Models", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Gelman", |
| "suffix": "" |
| }, |
| { |
| "first": "Jennifer", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Gelman and Jennifer Hill. 2014. Data Analysis using Regression and Multilevel-Hierarchical Models. Cambridge University Press, New York City.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Individuating the abstract", |
| "authors": [ |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Grimm", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of Sinn und Bedeutung 18", |
| "volume": "", |
| "issue": "", |
| "pages": "182--200", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Scott Grimm. 2014. Individuating the abstract. In Proceedings of Sinn und Bedeutung 18, pages 182-200, Bayonne and Vitoria-Gasteiz.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Crime investigations: The countability profile of a delinquent noun", |
| "authors": [ |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Grimm", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.4148/1944-3676.1111" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Scott Grimm. 2016. Crime investigations: The countability profile of a delinquent noun. Baltic International Yearbook of Cognition, Logic and Communication, 11. doi:10.4148/1944-3676.1111.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Grammatical number and the scale of individuation", |
| "authors": [ |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Grimm", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Language", |
| "volume": "94", |
| "issue": "3", |
| "pages": "527--574", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Scott Grimm. 2018. Grammatical number and the scale of individuation. Language, 94(3): 527-574.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Commonsense metaphysics and lexical semantics", |
| "authors": [ |
| { |
| "first": "Jerry", |
| "middle": [ |
| "R" |
| ], |
| "last": "Hobbs", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Croft", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Davies", |
| "suffix": "" |
| }, |
| { |
| "first": "Douglas", |
| "middle": [], |
| "last": "Edwards", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenneth", |
| "middle": [], |
| "last": "Laws", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Computational Linguistics", |
| "volume": "13", |
| "issue": "3-4", |
| "pages": "241--250", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jerry R. Hobbs, William Croft, Todd Davies, Douglas Edwards, and Kenneth Laws. 1987. Commonsense metaphysics and lexical semantics. Computational Linguistics, 13(3-4):241-250.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "The manually annotated sub-corpus: A community resource for and by the people", |
| "authors": [ |
| { |
| "first": "Nancy", |
| "middle": [], |
| "last": "Ide", |
| "suffix": "" |
| }, |
| { |
| "first": "Christiane", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| }, |
| { |
| "first": "Collin", |
| "middle": [], |
| "last": "Baker", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Passonneau", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the ACL 2010 Conference Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "68--73", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nancy Ide, Christiane Fellbaum, Collin Baker, and Rebecca Passonneau. 2010. The manually annotated sub-corpus: A community resource for and by the people. In Proceedings of the ACL 2010 Conference Short Papers, pages 68-73, Uppsala.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [ |
| "P." |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of 3rd International Conference on Learning Representations (ICLR 2015)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Joint entity and event coreference resolution across documents", |
| "authors": [ |
| { |
| "first": "Heeyoung", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Marta", |
| "middle": [], |
| "last": "Recasens", |
| "suffix": "" |
| }, |
| { |
| "first": "Angel", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "489--500", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Heeyoung Lee, Marta Recasens, Angel Chang, Mihai Surdeanu, and Dan Jurafsky. 2012. Joint entity and event coreference resolution across documents. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 489-500, Jeju Island.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Event detection and factuality assessment with non-expert supervision", |
| "authors": [ |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Artzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Yejin", |
| "middle": [], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1643--1648", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenton Lee, Yoav Artzi, Yejin Choi, and Luke Zettlemoyer. 2015. Event detection and factuality assessment with non-expert supervi- sion. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1643-1648, Lisbon.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Generic generalizations", |
| "authors": [ |
| { |
| "first": "Sarah-Jane", |
| "middle": [], |
| "last": "Leslie", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Lerner", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "The Stanford Encyclopedia of Philosophy", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sarah-Jane Leslie and Adam Lerner. 2016. Generic generalizations, Edward N. Zalta, editor, The Stanford Encyclopedia of Phil- osophy, Winter 2016 edition. Metaphysics Research Lab, Stanford University.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Automatic identification of general and specific sentences by leveraging discourse annotations", |
| "authors": [ |
| { |
| "first": "Annie", |
| "middle": [], |
| "last": "Louis", |
| "suffix": "" |
| }, |
| { |
| "first": "Ani", |
| "middle": [], |
| "last": "Nenkova", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "605--613", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annie Louis and Ani Nenkova. 2011. Automatic identification of general and specific sentences by leveraging discourse annotations. In Proceed- ings of 5th International Joint Conference on Natural Language Processing, pages 605-613, Chiang Mai.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Semantics: An International Handbook of Natural Language Meaning", |
| "authors": [], |
| "year": 2011, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Claudia Maienborn, Klaus von Heusinger, and Paul Portner, editors. 2011. Semantics: An International Handbook of Natural Language Meaning, volume 2. Mouton de Gruyter, Berlin.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Supervised categorization of habitual versus episodic sentences", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [ |
| "A" |
| ], |
| "last": "Mathew", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas A. Mathew. 2009. Supervised catego- rization of habitual versus episodic sentences. Master's thesis, Georgetown University.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Programs with Common Sense, RLE and MIT Computation Center", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Mccarthy", |
| "suffix": "" |
| } |
| ], |
| "year": 1960, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John McCarthy. 1960. Programs with Common Sense, RLE and MIT Computation Center.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Circumscription-A form of nonmonotonic reasoning", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Mccarthy", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "Artificial Intelligence", |
| "volume": "13", |
| "issue": "1-2", |
| "pages": "27--39", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John McCarthy. 1980. Circumscription-A form of nonmonotonic reasoning. Artificial Intelli- gence, 13(1-2):27-39.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Applications of circumscription to formalizing common sense knowledge", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Mccarthy", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Artificial Intelligence", |
| "volume": "28", |
| "issue": "", |
| "pages": "89--116", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John McCarthy. 1986. Applications of cir- cumscription to formalizing common sense knowledge. Artificial Intelligence, 28:89-116.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "GloVe: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Deep contextualized word representations", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Neumann", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Iyyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Gardner", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "2227--2237", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep con- textualized word representations. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, LA.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Discourse annotation and semantic annotation in the GNOME corpus", |
| "authors": [ |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 2004 ACL Workshop on Discourse Annotation", |
| "volume": "", |
| "issue": "", |
| "pages": "72--79", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Massimo Poesio. 2004. Discourse annotation and semantic annotation in the GNOME corpus. In Proceedings of the 2004 ACL Workshop on Discourse Annotation, pages 72-79, Barcelona.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Anaphora resolution with the ARRAU corpus", |
| "authors": [ |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| }, |
| { |
| "first": "Yulia", |
| "middle": [], |
| "last": "Grishina", |
| "suffix": "" |
| }, |
| { |
| "first": "Varada", |
| "middle": [], |
| "last": "Kolhatkar", |
| "suffix": "" |
| }, |
| { |
| "first": "Nafise", |
| "middle": [], |
| "last": "Moosavi", |
| "suffix": "" |
| }, |
| { |
| "first": "Ina", |
| "middle": [], |
| "last": "Roesiger", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Roussel", |
| "suffix": "" |
| }, |
| { |
| "first": "Fabian", |
| "middle": [], |
| "last": "Simonjetz", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Uma", |
| "suffix": "" |
| }, |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| }, |
| { |
| "first": "Juntao", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Heike", |
| "middle": [], |
| "last": "Zinsmeister", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the First Workshop on Computational Models of Reference, Anaphora and Coreference", |
| "volume": "", |
| "issue": "", |
| "pages": "11--22", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Massimo Poesio, Yulia Grishina, Varada Kolhatkar, Nafise Moosavi, Ina Roesiger, Adam Roussel, Fabian Simonjetz, Alexandra Uma, Olga Uryupina, Juntao Yu, and Heike Zinsmeister. 2018. Anaphora resolution with the ARRAU corpus. In Proceedings of the First Workshop on Computational Models of Reference, Anaphora and Coreference, pages 11-22, New Orleans, LA.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "The TIMEBANK corpus", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Pustejovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Hanks", |
| "suffix": "" |
| }, |
| { |
| "first": "Roser", |
| "middle": [], |
| "last": "Sauri", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "See", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Gaizauskas", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Setzer", |
| "suffix": "" |
| }, |
| { |
| "first": "Dragomir", |
| "middle": [], |
| "last": "Radev", |
| "suffix": "" |
| }, |
| { |
| "first": "Beth", |
| "middle": [], |
| "last": "Sundheim", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Day", |
| "suffix": "" |
| }, |
| { |
| "first": "Lisa", |
| "middle": [], |
| "last": "Ferro", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcia", |
| "middle": [], |
| "last": "Lazo", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of Corpus Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "647--656", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, and Marcia Lazo. 2003. The TIMEBANK corpus. In Proceedings of Corpus Linguistics, pages 647-656, Lancaster.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Semantic protoroles", |
| "authors": [ |
| { |
| "first": "Drew", |
| "middle": [], |
| "last": "Reisinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Rudinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Francis", |
| "middle": [], |
| "last": "Ferraro", |
| "suffix": "" |
| }, |
| { |
| "first": "Craig", |
| "middle": [], |
| "last": "Harman", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyle", |
| "middle": [], |
| "last": "Rawlins", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "3", |
| "issue": "", |
| "pages": "475--488", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic proto- roles. Transactions of the Association for Computational Linguistics, 3:475-488.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Identifying generic noun phrases", |
| "authors": [ |
| { |
| "first": "Nils", |
| "middle": [], |
| "last": "Reiter", |
| "suffix": "" |
| }, |
| { |
| "first": "Anette", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "40--49", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nils Reiter and Anette Frank. 2010. Identifying generic noun phrases. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 40-49, Uppsala.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Nonmonotonic reasoning", |
| "authors": [ |
| { |
| "first": "Raymond", |
| "middle": [], |
| "last": "Reiter", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Annual Review of Computer Science", |
| "volume": "2", |
| "issue": "", |
| "pages": "147--186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Raymond Reiter. 1987. Nonmonotonic reasoning, In J. F. Traub, N. J. Nilsson, and B. J. Grosz, editors, Annual Review of Computer Science, volume 2, pages 147-186. Annual Reviews Inc.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "Neural models of factuality", |
| "authors": [ |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Rudinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [ |
| "Steven" |
| ], |
| "last": "White", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "731--744", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rachel Rudinger, Aaron Steven White, and Benjamin Van Durme. 2018. Neural models of factuality. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 731-744, New Orleans, LA.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "Scripts, plans, and knowledge", |
| "authors": [ |
| { |
| "first": "Roger", |
| "middle": [ |
| "C" |
| ], |
| "last": "Schank", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "P" |
| ], |
| "last": "Abelson", |
| "suffix": "" |
| } |
| ], |
| "year": 1975, |
| "venue": "Proceedings of the 4th International Joint Conference on Artificial Intelligence", |
| "volume": "1", |
| "issue": "", |
| "pages": "151--157", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roger C. Schank and Robert P. Abelson. 1975. Scripts, plans, and knowledge. In Proceedings of the 4th International Joint Conference on Artificial Intelligence -Volume 1, pages 151-157.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "VerbNet: A broadcoverage, comprehensive verb lexicon", |
| "authors": [ |
| { |
| "first": "Karin Kipper", |
| "middle": [], |
| "last": "Schuler", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karin Kipper Schuler. 2005. VerbNet: A broad- coverage, comprehensive verb lexicon. Ph.D. thesis, Computer and Information Science Department, Universiy of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "A gold standard dependency corpus for English", |
| "authors": [ |
| { |
| "first": "Natalia", |
| "middle": [], |
| "last": "Silveira", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Dozat", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Catherine", |
| "middle": [], |
| "last": "De Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Bowman", |
| "suffix": "" |
| }, |
| { |
| "first": "Miriam", |
| "middle": [], |
| "last": "Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Chris Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Eval- uation (LREC'14).", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "Integrating deep linguistic features in factuality prediction over unified datasets", |
| "authors": [ |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Stanovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Judith", |
| "middle": [], |
| "last": "Eckle-Kohler", |
| "suffix": "" |
| }, |
| { |
| "first": "Yevgeniy", |
| "middle": [], |
| "last": "Puzikov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| }, |
| { |
| "first": "Iryna", |
| "middle": [], |
| "last": "Gurevych", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "352--357", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gabriel Stanovsky, Judith Eckle-Kohler, Yevgeniy Puzikov, Ido Dagan, and Iryna Gurevych. 2017. Integrating deep linguistic fea- tures in factuality prediction over unified data- sets. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguis- tics (Volume 2: Short Papers), pages 352-357, Vancouver.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "Fine-grained temporal relation extraction", |
| "authors": [ |
| { |
| "first": "Siddharth", |
| "middle": [], |
| "last": "Vashishtha", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [ |
| "Steven" |
| ], |
| "last": "White", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Siddharth Vashishtha, Benjamin Van Durme, and Aaron Steven White. 2019. Fine-grained temporal relation extraction. arXiv, cs.CL/ 1902.01390v2.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "Verbs and times. Philosophical", |
| "authors": [ |
| { |
| "first": "Zeno", |
| "middle": [], |
| "last": "Vendler", |
| "suffix": "" |
| } |
| ], |
| "year": 1957, |
| "venue": "Review", |
| "volume": "66", |
| "issue": "2", |
| "pages": "143--160", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeno Vendler. 1957. Verbs and times. Philo- sophical Review, 66(2):143-160.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "ACE 2005 multilingual training corpus LDC2006T06. Linguistic Data Consortium", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Walker", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephanie", |
| "middle": [], |
| "last": "Strassel", |
| "suffix": "" |
| }, |
| { |
| "first": "Julie", |
| "middle": [], |
| "last": "Medero", |
| "suffix": "" |
| }, |
| { |
| "first": "Kazuaki", |
| "middle": [], |
| "last": "Maeda", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus LDC2006T06. Linguistic Data Consortium, Philadelphia, PA.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "The semantic proto-role linking model", |
| "authors": [ |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Steven White", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyle", |
| "middle": [], |
| "last": "Rawlins", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 15th Conference of the European Chapter", |
| "volume": "2", |
| "issue": "", |
| "pages": "92--98", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aaron Steven White, Kyle Rawlins, and Benjamin Van Durme. 2017. The semantic proto-role linking model. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, volume 2, pages 92-98, Valencia.", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "Universal decompositional semantics on universal dependencies", |
| "authors": [ |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Steven White", |
| "suffix": "" |
| }, |
| { |
| "first": "Drew", |
| "middle": [], |
| "last": "Reisinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Keisuke", |
| "middle": [], |
| "last": "Sakaguchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Vieira", |
| "suffix": "" |
| }, |
| { |
| "first": "Sheng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Rudinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyle", |
| "middle": [], |
| "last": "Rawlins", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1713--1723", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aaron Steven White, Drew Reisinger, Keisuke Sakaguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Universal decompositional semantics on universal dependencies. In Pro- ceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1713-1723, Austin, TX.", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "Ordinal common-sense inference", |
| "authors": [ |
| { |
| "first": "Sheng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Rudinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Duh", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "5", |
| "issue": "", |
| "pages": "379--395", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sheng Zhang, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2017a. Ordinal common-sense inference. Transactions of the Association for Computational Linguistics, 5:379-395.", |
| "links": null |
| }, |
| "BIBREF62": { |
| "ref_id": "b62", |
| "title": "An evaluation of PredPatt and Open IE via stage 1 semantic role labeling", |
| "authors": [ |
| { |
| "first": "Sheng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Rudinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "IWCS 2017, 12th International Conference on Computational Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sheng Zhang, Rachel Rudinger, and Benjamin Van Durme. 2017b. An evaluation of PredPatt and Open IE via stage 1 semantic role labeling. In IWCS 2017, 12th International Conference on Computational Semantics, Short papers, Montpellier.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "Those firemen are available. (9) Those firemen are strong.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "text": "Examples of argument protocol (top) and predicate protocol (bottom).", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "text": "Mean property value for each clause type.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "text": "Distribution of normalized annotations in argument (left) and predicate (right) protocols.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "text": "True (normalized) property values for argument (top) and predicate (bottom) protocols in the development set plotted against values predicted by models highlighted in blue in", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "text": "The US Marines took most of Fallujah Wednesday, but still face[...] (20) I'm writing an essay...and I need to know if the iPhone was the first Smart Phone.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF6": { |
| "uris": null, |
| "text": "21) I like Hayes Street Grill....another plus, it's right by Civic Center, so you can take a romantic walk around the Opera House, City Hall, Symphony Auditorium[...] (22) What would happen if you flew the flag of South Vietnam in Modern day Vietnam?", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "type_str": "table", |
| "num": null, |
| "text": "Property\u03b2 0\u03c3ann\u03c3item Bias (log-odds) for answering true.", |
| "html": null, |
| "content": "<table><tr><td>Argument</td><td>Is.Particular Is.Kind Is.Abstract</td><td>0.49 1.15 1.76 \u22120.31 1.23 1.34 \u22121.29 1.27 1.70</td></tr><tr><td>Predicate</td><td colspan=\"2\">Is.Particular Is.Dynamic Is.Hypothetical \u22120.78 1.24 0.90 0.98 0.91 0.72 0.24 0.82 0.59</td></tr></table>" |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "num": null, |
| "text": "Interannotator agreement scores.", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF5": { |
| "type_str": "table", |
| "num": null, |
| "text": "Survey of genericity annotated corpora for English, including our new corpus (in bold).", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF8": { |
| "type_str": "table", |
| "num": null, |
| "text": "Correlation (\u03c1) and MAE explained (R1) on test split for argument (top) and predicate (bottom) protocols. Bolded numbers give the best result in the column; the models highlighted in blue are the ones analyzed in \u00a710.Argument properties While type-level handengineered and learned features perform relatively poorly for properties such as IS.PARTICULAR and IS.KIND for arguments, they are able to predict IS.ABSTRACT relatively well compared to the models with all features. The converse of this also holds: Token-level hand-engineered features are better able to predict IS.PARTICULAR and IS.KIND, but perform relatively poorly on their own for", |
| "html": null, |
| "content": "<table/>" |
| } |
| } |
| } |
| } |