| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:11:05.219019Z" |
| }, |
| "title": "AutoAspect: Automatic Annotation of Tense and Aspect for Uniform Meaning Representations", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Colorado at Boulder Boulder", |
| "location": { |
| "region": "CO", |
| "country": "USA" |
| } |
| }, |
| "email": "daniel.chen-1@colorado.edu" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Colorado at Boulder Boulder", |
| "location": { |
| "region": "CO", |
| "country": "USA" |
| } |
| }, |
| "email": "martha.palmer@colorado.edu" |
| }, |
| { |
| "first": "Meagan", |
| "middle": [], |
| "last": "Vigus", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of New Mexico Albuquerque", |
| "location": { |
| "region": "NM", |
| "country": "USA" |
| } |
| }, |
| "email": "mvigus@unm.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present AutoAspect, a novel, rule-based annotation tool for labeling tense and aspect. The pilot version annotates English data. The aspect labels are designed specifically for Uniform Meaning Representations (UMR), an annotation schema that aims to encode crosslingual semantic information. The annotation tool combines syntactic and semantic cues to assign aspects on a sentence-by-sentence basis, following a sequence of rules that each output a UMR aspect. Identified events proceed through the sequence until they are assigned an aspect. We achieve a recall of 76.17% for identifying UMR events and an accuracy of 62.57% on all identified events, with high precision values for 2 of the aspect labels.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present AutoAspect, a novel, rule-based annotation tool for labeling tense and aspect. The pilot version annotates English data. The aspect labels are designed specifically for Uniform Meaning Representations (UMR), an annotation schema that aims to encode crosslingual semantic information. The annotation tool combines syntactic and semantic cues to assign aspects on a sentence-by-sentence basis, following a sequence of rules that each output a UMR aspect. Identified events proceed through the sequence until they are assigned an aspect. We achieve a recall of 76.17% for identifying UMR events and an accuracy of 62.57% on all identified events, with high precision values for 2 of the aspect labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "As the field of Natural Language Processing advances, there are increasing demands for more sophisticated applications and richer representations. Abstract Meaning Representations (AMR; Banarescu et al. 2013) , and their more recent crosslingual incarnation as Uniform Meaning Representations (UMR; Van Gysel et al. 2021) , are a response to that demand. AMR/UMRs provide an abstract, directed acyclic graph representation of a complete sentence, focusing on the underlying \"who\" did \"what\" to \"whom\" elements of the events being described. The more information that can be associated with those events, in terms of whether they have been completed, or whether they have achieved their intended results, the better.", |
| "cite_spans": [ |
| { |
| "start": 186, |
| "end": 208, |
| "text": "Banarescu et al. 2013)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 293, |
| "end": 298, |
| "text": "(UMR;", |
| "ref_id": null |
| }, |
| { |
| "start": 299, |
| "end": 321, |
| "text": "Van Gysel et al. 2021)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The increased richness of UMR Tense, Aspect and Modality annotations, as described below, can more clearly identify the completion and achievement of events in a cross-lingual context, provid-ing a firmer baseline for comparing typologically distinct languages. Automating such a complex semantic processing task provides valuable qualitative and temporal crosslingual features that applications like translation models and virtual assistants can utilize to more accurately capture the semantic nuances of events. Given the substantial amounts of English AMR annotation, the question immediately arises of how to efficiently add these new annotation features to pre-existing English AMRs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper describes an implementation of an automatic system that relies on VerbNet, a rich lexical resource, as the basis for categorizing event descriptions according to the Aspect guidelines discussed below. Our initial results are quite promising, and there are obvious next steps to take.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Previous automatic annotation models operate on different definitions of aspect. In Friedrich et al. (2016) , clauses are annotated for situation entity types, which capture some of the same semantic distinctions as the UMR aspect annotation scheme, including state and habitual. Friedrich et al. (2016) also include modal distinctions in their annotation, such as questions and imperatives. Unlike Friedrich et al. (2016) , the UMR aspect annotation distinguishes between different types of dynamic (non-stative) clauses (Activity, Endeavor, and Performance); these are all annotated as Event in Friedrich et al. (2016) . Friedrich and Gateva (2017) annotate a binary telicity distinction: telic vs. atelic. The UMR aspect annotation annotates a three-way distinction for non-stative events, which takes both the qualitative and temporal dimensions of event semantics into account.", |
| "cite_spans": [ |
| { |
| "start": 84, |
| "end": 107, |
| "text": "Friedrich et al. (2016)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 280, |
| "end": 303, |
| "text": "Friedrich et al. (2016)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 399, |
| "end": 422, |
| "text": "Friedrich et al. (2016)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 597, |
| "end": 620, |
| "text": "Friedrich et al. (2016)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 623, |
| "end": 650, |
| "text": "Friedrich and Gateva (2017)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The UMR aspect annotation consists of a feature assigned to events that indicates their internal qualitative and temporal structure (Van Gysel et al., 2019 , 2021 . Every node classified as an \"event\" in UMR receives an aspect annotation. UMR defines \"event\" based on the typological prototype of a verb as described in Croft (2001) , and Croft (in press). UMR events exhibit either the prototypical information-packaging of a verb, predication (as opposed to modification and reference), or the prototypical semantic class of a verb, a process (as opposed to a property or an entity).", |
| "cite_spans": [ |
| { |
| "start": 132, |
| "end": 155, |
| "text": "(Van Gysel et al., 2019", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 156, |
| "end": 162, |
| "text": ", 2021", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 320, |
| "end": 332, |
| "text": "Croft (2001)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The aspectual distinctions made in UMR are based on Croft (2012)'s two-dimensional analysis of aspect and build on the aspect annotations from Donatelli et al. (2018 Donatelli et al. ( , 2019 . Croft (2012) analyzes aspectual structure as having both a temporal dimension and a qualitative dimension; the temporal dimension measures out the event's unfolding over time and the qualitative dimension measures out the change that occurs (or does not occur) during the event.", |
| "cite_spans": [ |
| { |
| "start": 143, |
| "end": 165, |
| "text": "Donatelli et al. (2018", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 166, |
| "end": 191, |
| "text": "Donatelli et al. ( , 2019", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 194, |
| "end": 206, |
| "text": "Croft (2012)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The UMR aspectual distinctions do not have a direct correspondence with either specific verbs or specific constructions in a language; instead, they annotate the aspectual structure of an event in its context. In order to ensure the maximum cross-linguistic comparability of annotation values, UMR uses lattices of compatible annotation values for certain annotation categories, including aspect (Van Gysel et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 396, |
| "end": 420, |
| "text": "(Van Gysel et al., 2019)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In addition to the lattices, UMR also considers a certain level of specificity in annotation as the base level, corresponding to distinctions that tend to be straightforward to annotate in a majority of languages. For aspect, there are four base-level annotation values: STATE, ACTIVITY, ENDEAVOR, and PERFORMANCE. In addition, there is a HA-BITUAL value. The annotation of these categories manually relies on a series of decisions that distinguish event types.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "First, event nominals are annotated as PROCESS, one of the more coarse-grained categories on the lattice. Processes in reference (i.e., event/action nominals) lack grammatical clues as to their aspectual structure in English and many other languages; therefore, they are simply annotated as PROCESS.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Next, the HABITUAL value applies to all events that are repeated on a regular basis, regardless of the internal aspectual structure of each individual (repeated) event. This means that further aspec-tual distinctions (including between PROCESS and STATE) are collapsed for habitual events.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The rest of the aspectual values make a distinction between stative and non-stative events. STATE is the base-level value that captures all stative events; non-stative events are divided into ACTIV-ITY, ENDEAVOR, and PERFORMANCE.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "UMR defines stative events as those in which no change occurs on the qualitative dimension during the event (Vendler, 1967; Croft, 2012) . In addition to prototypical stative events, UMR extends the STATE value to \"nonverbal predication\" (He is a teacher. / There is a sandwich in the kitchen.), events modalized by ability modals (This car can go 100mph.), and modal complement-taking predicates (She wants to eat at noon.).", |
| "cite_spans": [ |
| { |
| "start": 108, |
| "end": 123, |
| "text": "(Vendler, 1967;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 124, |
| "end": 136, |
| "text": "Croft, 2012)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In addition, UMR uses the STATE value for a class of events termed \"inactive actions\" in Croft (2012) . Inactive actions are semantically intermediate between states and processes; this class includes verbs of sensation, perception, cognition, emotion, and position. These types of events can variably be construed as either states or processes, even within the same language. Since the construal of an inactive action can be hard to ascertain in context, UMR annotates all inactive actions with the STATE value.", |
| "cite_spans": [ |
| { |
| "start": 89, |
| "end": 101, |
| "text": "Croft (2012)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The rest of the base level aspect annotations (ACTIVITY, ENDEAVOR, PERFORMANCE) work to characterize non-stative events (PROCESSES in UMR). These events are defined as those that involve change on the qualitative dimension.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The distinction between the ACTIVITY annotation value and the ENDEAVOR and PERFORMANCE values is based on whether the event is still ongoing at Document Creation Time (DCT), or whether it has ended. Events annotated as ACTIVITY indicate that the event may be ongoing at DCT.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Both ENDEAVOR and PERFORMANCE characterize non-stative events that have ended prior to DCT. The ENDEAVOR and PERFORMANCE values differ from each other in whether the event has been terminated or completed. Events annotated as EN-DEAVOR signal that the event has terminated, without reaching completion. This means that the event has not reached a distinct result state on the qualitative dimension. The PERFORMANCE label indicates that an event has been completed, reaching a distinct result state. The UMR PERFORMANCE value corresponds to Vendler's achievements and accomplishments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "UMR also incudes a number of more finegrained aspectual types on its aspect lattice (see Van Gysel et al. 2019). Many of these finegrained aspect values make distinctions that crosscut the base-level annotation values. These include whether an event is punctual or durative, which cross-cuts the ENDEAVOR/PERFORMANCE distinction. For durative ENDEAVORS and PER-FORMANCES, in addition to all ACTIVITIES, there is also the cross-cutting distinction of incremental vs. nonincremental change (Dowty, 1991; Croft, 2012) .", |
| "cite_spans": [ |
| { |
| "start": 488, |
| "end": 501, |
| "text": "(Dowty, 1991;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 502, |
| "end": 514, |
| "text": "Croft, 2012)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The base-level UMR aspectual values were selected because they capture the most salient pieces of aspectual structure and they can be consistently annotated, even in languages with minimal grammatical aspect marking like English. The presence or absence of change on the qualitative dimension is captured by the PROCESS vs. STATE distinction. Boundedness on the temporal dimension is captured for processes by the distinction between ACTIVITY and ENDEAVOR/PERFORMANCE. Finally, boundedness on the qualitative dimension is captured by the distinction between ENDEAVOR and PERFORMANCE.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Information", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The rule-based classifier 1 follows the sequential manual annotation steps as closely as possible, immediately exiting the sequence as soon as an annotation label has been assigned by a numbered step. The annotation loop processes text by sentence, such that every identified event in a sentence receives an annotation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rule-Based Classification", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Step 1a syntactically separates annotation branches into verbs and event nominals. Since all event nominals are assigned the same aspect by Step 1b, the event nominals branch does not pass through the sequence of rules and is analyzed separately. The verbs branch is explored by allowing any tokens that the spaCy English Web Core Large NLP parser (Honnibal et al., 2021) marks as having a part-of-speech (POS) corresponding to any of the Penn Treebank VERB tags 2 to continue on to the sequence of rule-based decisions (Steps 2a-8).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Step 1a: Syntactic Split", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We experimented with various parsers for extracting verbal events, which included running the ClearTAC parser (Myers and Palmer, 2019) and iterating through the list of events SemParse (Gung, 2020; Gung and Palmer, 2021) generates for the input sentence. At this stage, we are able to extract more verbal events using the spaCy NLP parser.", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 134, |
| "text": "(Myers and Palmer, 2019)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 185, |
| "end": 197, |
| "text": "(Gung, 2020;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 198, |
| "end": 220, |
| "text": "Gung and Palmer, 2021)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Step 1a: Syntactic Split", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "While SemParse misses some verbal events, it is the sole parser that can identify nominal tokens that correspond to VerbNet frames. With this functionality, a derived nominal like explosion shares the same VerbNet ID as its parent verb explode. It can thus be recognized by SemParse as an event nominal. Thus, to follow the event nominals annotation branch, we run SemParse separately and only extract spans of text in the sentence that constitute event nominals.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Step 1b: Event Nominals Branch", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We restrict identification of event nominals to spans of text that do not contain any Penn Treebank VERB tags. Thus, the span historic visit is identified as an event nominal because both tokens have a non-verbal POS: adjective and singular noun, respectively. But the span to lay the groundwork does not get identified as an event nominal because lay has a verbal POS and will thus be handled in the verbal annotation branch. All event nominals receive the PROCESS label from Step 1b, and thus exit the annotation search.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Step 1b: Event Nominals Branch", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The event nominals branch is explored only after the verbs annotation branch terminates. The final list of events and corresponding aspects collected for each sentence combines the outputs of the verbs and event nominals annotation branches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Step 1b: Event Nominals Branch", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The remaining steps all take place in the verbs annotation branch.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps 2-3: Verbs Branch", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Step 2a handles non-verbal predication, which analyzes all English copula forms that are followed by predicate nominals, adjectives, and locationals. It assigns all copula forms the aspect STATE, with the condition that the copula does not function as a helping verb, i.e. no verb form directly follows it. Note that UMR annotates the nominal or adjectival predicate rather than the copula, so the sentence \"He is friendly.\" results in a UMR aspect annotation for the event friendly rather than the verbal token is. We compare the aspect results accordingly, not requiring the token that receives the label from AutoAspect to correspond exactly to the predicate token(s) representing the UMR event.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps 2-3: Verbs Branch", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Step 2b annotates for STATE as well, and it does so based on VerbNet class, requiring a semantic processing tool. We run SemParse and extract the VerbNet senses corresponding to the verbal tokens in the sentence. For each VerbNet sense y in the pre-determined list of VerbNet class IDs that are labeled STATE, we match the SemParse sense x to y by backing off to the most coarse-grained class. For example, the SemParse VerbNet class ID for the lemma want is x = want-32.1-1-1, but the specified class in the pre-determined list is y = want-32.1. We run a regular expressions match to ensure that the class ID y is contained within the span of the more fine-grained class ID x, confirming that x and y belong to the same class. Any verb event that does not receive a STATE label from Steps 2a and 2b receives the umbrella label PROCESS and continues on to Step 4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps 2-3: Verbs Branch", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Steps 4-8 subcategorize the umbrella label PRO-CESS carried over from Step 2b into the labels AC-TIVITY, PERFORMANCE, and ENDEAVOR.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps 4-8: Verb Branch", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Steps 4 and 5 continue to classify based on Verb-Net class.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps 4-8: Verb Branch", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Step 4 assigns all verbs that receive a participial 3 Penn Treebank label from spaCy the label ACTIVITY. Verbs with inceptive and continuative auxiliary verbs 4 like started and continued also receive the ACTIVITY label.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps 4-8: Verb Branch", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Step 5 assigns all completive auxilaries 5 -like finished -the label PERFORMANCE and all terminative auxiliaries 6like stopped -the label ENDEAVOR.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps 4-8: Verb Branch", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Step 6 assigns verbs that occur in clauses with container adverbials -marked by the preposition in -the label PERFORMANCE.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps 4-8: Verb Branch", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Step 6 also assigns verbs that occur in clauses with durative adverbials -marked by the preposition for -the label EN-DEAVOR. We use the spaCy dependency parser to check that those prepositions occur in the same clause as the verb being analyzed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps 4-8: Verb Branch", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Step 7 annotates verbs that occur with non-result paths as ENDEAVOR, marked by prepositions like around, along, and past. A non-result path is defined for the tool as the occurrence of a preposition immediately after the verb it is a dependent of, e.g. in the sentence \"He walked along the river.\", 3 VBG (present participle or gerund), VBN (past participle) 4 begin-55.1, continue-55.3, and sustain-55.6 5 complete-55.2 6 stop-55.4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps 4-8: Verb Branch", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "where along occurs immediately after the main verb walked.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps 4-8: Verb Branch", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Finally, Step 8 assigns any verbal event that has not yet received an annotation the label PERFOR-MANCE.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Steps 4-8: Verb Branch", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Many of the annotation steps are themselves complex semantic processing tasks. Therefore, the performance of this rule-based model depends heavily on the limits of the classifier components being utilized. Given the lack of substantial training data that would be required for a machine learningdriven implementation, we focus on specific examples that reveal the linguistic blind spots of our model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The following results were generated from running AutoAspect on four gold standard news articles from the LDC REFLEX English core set of newswire and web text documents (Strassel and Tracey, 2016 ) that were annotated by our team. Table 1 shows results for the critical subtask of event identification, which yielded a recall of 76.17%, a reasonably high recall for a pilot study with limited gold data. For this specific task, recall is the only appropriate measure for statistical analysis, because precision penalizes the model for annotating an event that is not present in the gold data. Given the variability in using syntactic and semantic cues in the annotation manual, it is more appropriate to see how much of the human-identified labels it is capturing.", |
| "cite_spans": [ |
| { |
| "start": 169, |
| "end": 195, |
| "text": "(Strassel and Tracey, 2016", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Results", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For example, AutoAspect analyzes the initial split by analyzing Penn TreeBank verbs separately from Penn TreeBank nominals. In the phrase investigation of bombing campaign, it is important to note when AutoAspect only identifies the investigation event and fails to identify the campaign event. However, a sentence like \"I think the court would be highly politicized.\" does not assign an aspect to the verb think, but to the event be_politicized. Since AutoAspect will analyze every token labeled with a Penn TreeBank VERB part-of-speech, it would be disingenuous to penalize the model for labeling an event that is subsumed by a UMR event purposefully abstracting away from strictly syntactic cues. In Table 2 , we show the counts for each type of incorrect label that AutoAspect assigned to an event. This allows us to identify both the linguistic errors made by the model and which errors it makes more frequently. Further development of the tool can thus identify the error-triggering linguistic inputs and improve how a specific rule-based component processes those linguistic inputs. Table 2 also depicts high precision for STATE and PERFORMANCE, indicating that SemParse is successfully matching VerbNet states to verb tokens and that gold PERFORMANCE tokens are generally able to avoid triggering Steps 2a-7. We did obtain poor precision for HABITUAL and ACTIVITY, showing that the model is too accepting of false positive inputs in Steps 3 and 4.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 703, |
| "end": 710, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 1090, |
| "end": 1097, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotation Results", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The following linguistic inputs consistently led to errors in the output of AutoAspect. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linguistic Analysis", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Failure to detect event nominals made up 45.5% of errors. Table 3 depicts event nominals that SemParse did not detect and Table 4 depicts event nominals that SemParse did detect. Sentence C is notable because it involves a nominal found in a dialogic omission of the main verb. SemParse still fails to identify pleasure as an event in the sentence \"It's been a pleasure.\", citing it as an attributive argument of the main verb, the event seem-109-1-1.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 58, |
| "end": 65, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 122, |
| "end": 129, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Less Explicitly Deverbal Event Nominals:", |
| "sec_num": "1." |
| }, |
| { |
| "text": "These examples indicate that abstracting away from syntactic cues like having a main verb remains a difficult NLP task. SemParse is trained on Unified PropBank corpora, mapping nominal and adjectival predicates to VerbNet roles. Since VerbNet roles are syntactically defined for verbs, mappings exist for linking sentences like \"John has a fear of spiders\" to \"John fears spiders\" and \"John is afraid of spiders\" (Gung, 2020) . Thus, an event like campaign in Sentence E will not be identified because SemParse currently only identifies event nominals/adjectivals that function as arguments of the main verb of the sentence, contemplated. A human annotator can identify another argument structure where the noun phrase headed by investigation has its own argument roles that could be identified as event nominals, but SemParse identification requires clear sentential structure as input and tends to be more limited to the main verb.", |
| "cite_spans": [ |
| { |
| "start": 413, |
| "end": 425, |
| "text": "(Gung, 2020)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Less Explicitly Deverbal Event Nominals:", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Even then, event nominal identification is not guaranteed, given that survivors and genocide in Sentence B go undetected, despite being the direct object of the main verb told.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Less Explicitly Deverbal Event Nominals:", |
| "sec_num": "1." |
| }, |
| { |
| "text": "One possibility is that SemParse handles more explicitly deverbal nominals better. In Table 3 , less explicitly deverbal nominals like signature and gesture are undetected. Table 4 shows that the deriving suffix -tion appears to make for more readibly detectable event nominals in decision, opposition, and investigation, all core arguments of their main verb. In F, like gesture, the nominal visit shares an identical form with its verbal lemma, but SemParse identifies visit and not gesture. Notably, the event nominal in F also does not occur as part of the core arguments of the main verb is, and was successfully identified as its own nominal phrase.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 86, |
| "end": 94, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 174, |
| "end": 181, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Less Explicitly Deverbal Event Nominals:", |
| "sec_num": "1." |
| }, |
| { |
| "text": "But SemParse also missed objections in D and agreement in E, both nominals that have a transparently derivative suffix that attaches to the verb lemma. Both of those event nominals occur in adjunct clauses that are not core arguments of the main verb and themselves do not contain a verb.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Less Explicitly Deverbal Event Nominals:", |
| "sec_num": "1." |
| }, |
| { |
| "text": "2. Dialogic Sentences: In addition to Sentence C, UMR annotates dialogic sentences like \"One last question.\" as a single event that labels the adjective: be_last. The current syntactic split of verbs and nominals does not allow AutoAspect to label predicative nominals and adjectives that lack a main verb. Additionally, multi-sentence coreference is common in dialogue, as in the sentences \"Is this case likely to strain US-Russian relations? I'm afraid it might.\", where an event from the previous clause (strain) is elided in a successive clause.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Less Explicitly Deverbal Event Nominals:", |
| "sec_num": "1." |
| }, |
| { |
| "text": "3. Present Tense Verbs: Table 5 depicts mislabeled and correctly labeled present tense verbs. Mislabeling present tense verbs as HABITUAL was the most common error made by the model aside from the main subtask of event identification, making up 22% of errors. Mislabeling verbs in the present participle form as ACTIVITY was another consistent error, occurring at Step 4. The most common gold label for these erroneous HABITUAL and ACTIVITY labels was PERFORMANCE. For example, in Sentence L, returns is mislabeled as HABITUAL. The future tense verb will spend in the first clause changes the aspect for the successive clauses of L, i.e., the clause \"...before he returns home with his wife Sherry\".", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 24, |
| "end": 31, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Less Explicitly Deverbal Event Nominals:", |
| "sec_num": "1." |
| }, |
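The fallthrough behavior described above can be sketched in a few lines of Python. This is a simplified illustration of the rule sequence, not the actual AutoAspect code; the function name and the defaulting behavior are assumptions based on the paper's description:

```python
# Simplified sketch of the rule fallthrough: present-tense forms
# (Penn tags VBP/VBZ) default to HABITUAL and present participles (VBG)
# to ACTIVITY, so events whose gold label is PERFORMANCE never reach
# the later steps. This is an illustration, not the AutoAspect code.

def assign_aspect(pos_tag):
    if pos_tag in ("VBP", "VBZ"):  # non-3rd / 3rd person singular present
        return "HABITUAL"
    if pos_tag == "VBG":           # gerund or present participle (Step 4)
        return "ACTIVITY"
    return "PERFORMANCE"           # toy default for all other verb forms

# "returns" in Sentence L is tagged VBZ, so it receives HABITUAL even
# though the future-tense context makes the gold label PERFORMANCE.
assert assign_aspect("VBZ") == "HABITUAL"
```

Because the decision is made from the verb form alone, the future-tense context set up by will spend in the first clause of L never reaches the rule.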
| { |
| "text": "The prevalence of gold PERFORMANCE labels indicates that Steps 3 and 4 are prematurely assigning an aspect rather than letting certain present participles continue through the sequence all the way to Step 8. However, AutoAspect also correctly labeled some present tense forms, as seen in Sentences K and N. Experimenting with tense and aspect annotations from the ClearTAC parser resulted in even more false positives for HABITUAL and ACTIVITY. Reducing the number of false positives for HABITUAL and ACTIVITY necessitates building a semantic parser that can distinguish between sentences like L with multi-tense contexts and sentences like M with dialogic contexts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Less Explicitly Deverbal Event Nominals:", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The AutoAspect decision-making for container and durative adverbials in Step 6, as well as the non-resultative paths in Step 7, currently only checks whether specific prepositions like in and for appear as dependents of the main verb. Thus, for the sentence \"They also said this court did not give the lawyers for the defense due procedure.\", AutoAspect incorrectly assigns the verb give the aspect ENDEAVOR, because the prepositional phrase for the defense is incorrectly parsed by the spaCy dependency parser as a dependent of the verb give.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Container and Durative Adverbials:", |
| "sec_num": "4." |
| }, |
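A minimal sketch of this purely syntactic check. The hand-built token list stands in for a spaCy dependency parse, and the helper name and data shape are hypothetical:

```python
# Sketch of the Step 6/7 behavior described above: AutoAspect only tests
# whether "in" or "for" occurs as a prepositional dependent of the main
# verb, with no semantic disambiguation of the preposition.

CONTAINER_OR_DURATIVE_PREPS = {"in", "for"}

def has_adverbial_prep(verb_dependents):
    """Purely syntactic test: is any dependent an 'in' or 'for' token?"""
    return any(tok in CONTAINER_OR_DURATIVE_PREPS for tok in verb_dependents)

# spaCy attaches "for" (from "for the defense") to "give", so this test
# fires even though the PP is benefactive, not durative, and "give" is
# wrongly routed toward ENDEAVOR.
deps_of_give = ["not", "lawyers", "for", "procedure"]
assert has_adverbial_prep(deps_of_give)
```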
| { |
| "text": "Future work can incorporate semantic processing of prepositional phrases to help AutoAspect refine its analysis of prepositional dependencies. One such system is SNACS, which outputs disambiguated supersenses like TIME and DURATIVE that could match the semantic properties of the container and durative adverbials.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Container and Durative Adverbials:", |
| "sec_num": "4." |
| }, |
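A sketch of how supersense output could gate the adverbial rules. The supersense strings here are illustrative stand-ins, not the exact SNACS inventory:

```python
# Illustrative gate using SNACS-style disambiguated supersenses: fire
# the container/durative rule only when the preposition's supersense is
# temporal, instead of matching the preposition string alone.

TEMPORAL_SUPERSENSES = {"TIME", "DURATION"}

def is_temporal_adverbial(prep, supersense):
    """Require both a candidate preposition and a temporal supersense."""
    return prep in {"in", "for"} and supersense in TEMPORAL_SUPERSENSES

# "for the defense" (benefactive) is now correctly rejected, while a
# true durative like "for three hours" would still pass.
assert not is_temporal_adverbial("for", "BENEFICIARY")
assert is_temporal_adverbial("for", "DURATION")
```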
| { |
| "text": "15 STATIVE verbs such as expected and contemplated were mislabeled as PERFORMANCE. This implies that the pre-defined list of VerbNet class IDs that correspond to state events does not yet fully cover the range of stative verbs, and/or that SemParse is not able to find all the VerbNet classes. One solution is to pursue development of a stativity annotator, such as SitEnt (Friedrich et al., 2016), which provides training data that annotates verbs as stative, dynamic, or neither.", |
| "cite_spans": [ |
| { |
| "start": 373, |
| "end": 397, |
| "text": "(Friedrich et al., 2016)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stativity:", |
| "sec_num": "5." |
| }, |
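The list-based stativity check can be sketched as follows, seeding the set with a few VerbNet class IDs from the appendix list; the function name is hypothetical and the logic is simplified:

```python
# Sketch of the stativity check: assign STATE when the verb's VerbNet
# class ID is on the pre-defined list (a few entries from the appendix
# are shown). Gaps in this list let stative verbs like "expected" fall
# through the remaining steps and end up as PERFORMANCE.

STATIVE_VERBNET_CLASSES = {
    "exist-47.1", "cling-22.5", "see-30.1", "peer-30.3",
    "spatial_configuration-47.6", "sound_existence-47.4",
}

def aspect_from_class(verbnet_class):
    """Return STATE for listed classes; None means the event keeps
    moving through the remaining rule steps."""
    if verbnet_class in STATIVE_VERBNET_CLASSES:
        return "STATE"
    return None

assert aspect_from_class("see-30.1") == "STATE"
# A class absent from the list (ID here is illustrative) is not caught.
assert aspect_from_class("expect-99") is None
```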
| { |
| "text": "The overall task of classifying SitEnt types achieved an accuracy of 76% using a CRF model with hand-crafted feature sets. A key difference is that the SitEnt features are accessed simultaneously by the model, while AutoAspect follows the sequential UMR annotation steps. Future research could develop a feature-based model to achieve an accuracy comparable to SitEnt, which was also evaluated on larger corpora (Brown and MASC) than the 4 gold documents on which AutoAspect was scored.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stativity:", |
| "sec_num": "5." |
| }, |
| { |
| "text": "The linguistic blind spots discovered in analyzing the results highlight many areas for future development of the AutoAspect annotator. It is clear that the primarily syntax-driven rules are unable to capture semantic properties like stativity, disambiguation of prepositional phrases, and genre of text. Processing input sentence by sentence also prevents AutoAspect from properly analyzing multi-sentence events. The tool currently has a recall of 76.17% for identifying events and a 62.57% accuracy on identified events. As more gold UMR data is processed, further linguistic blind spots can be identified for development past the pilot version.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "All code can be found at https://github.com/dchensta/AutoAspect. 2 VB (base form), VBD (past tense), VBG (gerund or present participle), VBN (past participle), VBP (non-3rd person singular present), and VBZ (3rd person singular present).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Tenses: present, past, future; Aspects: progressive, perfect, perfect progressive", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We gratefully acknowledge the support of NSF 1764048 RI: Medium: Collaborative Research: Developing a Uniform Meaning Representation for Natural Language Processing. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF or the U.S. government. We thank James Gung and Ghazaleh Kazeminejad for their valuable assistance in teaching us how to use the SemParse tool. We thank Skatje Myers for assisting us with the ClearTAC tool.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 exist-47.1 \u2022 bulge-47.5.3 \u2022 meander-47.7 \u2022 contiguous_location-47.8 \u2022 terminus-47.9 \u2022 put_spatial-9.2-1 \u2022 cling-22.5 \u2022 entity_specific_modes_being-47.2 \u2022 light_emission-43.1 \u2022 smell_emission-43.3 \u2022 sound_emission-43.2 \u2022 sound_existence-47.4 \u2022 substance_emission-43.4-1 \u2022 swarm-47.5.1-1 \u2022 animal_sounds-38 \u2022 carve-21.2-1 \u2022 modes_of_being_with_motion-47.3 \u2022 snooze-40.4 \u2022 body_internal_states-40.6 \u2022 spatial_configuration-47.6 \u2022 peer-30.3 \u2022 see-30.1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Abstract meaning representation for sembanking", |
| "authors": [ |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Banarescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Bonial", |
| "suffix": "" |
| }, |
| { |
| "first": "Shu", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| }, |
| { |
| "first": "Madalina", |
| "middle": [], |
| "last": "Georgescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kira", |
| "middle": [], |
| "last": "Griffitt", |
| "suffix": "" |
| }, |
| { |
| "first": "Ulf", |
| "middle": [], |
| "last": "Hermjakob", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 7th linguistic annotation workshop and interoperability with discourse", |
| "volume": "", |
| "issue": "", |
| "pages": "178--186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th linguistic annotation workshop and interoperability with dis- course, pages 178-186.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Radical construction grammar", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Croft", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William Croft. 2001. Radical construction grammar. Oxford University Press, Oxford.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Verbs: Aspect and causal structure", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Croft", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William Croft. 2012. Verbs: Aspect and causal struc- ture. Oxford University Press, Oxford.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "press. Morphosyntax: Constructions of the World's Languages", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Croft", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William Croft. in press. Morphosyntax: Constructions of the World's Languages.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Annotation of tense and aspect semantics for sentential AMR", |
| "authors": [ |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Donatelli", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Regan", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Croft", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions", |
| "volume": "", |
| "issue": "", |
| "pages": "96--108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lucia Donatelli, Michael Regan, William Croft, and Nathan Schneider. 2018. Annotation of tense and as- pect semantics for sentential AMR. In Proceedings of the Joint Workshop on Linguistic Annotation, Mul- tiword Expressions and Constructions (LAW-MWE- CxG-2018), pages 96-108.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Tense and aspect semantics for sentential amr", |
| "authors": [ |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Donatelli", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Croft", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Regan", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Society for Computation in Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "346--348", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lucia Donatelli, Nathan Schneider, William Croft, and Michael Regan. 2019. Tense and aspect semantics for sentential amr. Proceedings of the Society for Computation in Linguistics, 2(1):346-348.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Thematic proto-roles and argument selection. Language", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Dowty", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "", |
| "volume": "67", |
| "issue": "", |
| "pages": "547--619", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Dowty. 1991. Thematic proto-roles and argu- ment selection. Language, 67:547-619.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Classification of telicity using cross-linguistic annotation projection", |
| "authors": [ |
| { |
| "first": "Annemarie", |
| "middle": [], |
| "last": "Friedrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Damyana", |
| "middle": [], |
| "last": "Gateva", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2559--2565", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D17-1271" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annemarie Friedrich and Damyana Gateva. 2017. Classification of telicity using cross-linguistic anno- tation projection. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing, pages 2559-2565, Copenhagen, Den- mark. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Situation entity types: automatic classification of clause-level aspect", |
| "authors": [ |
| { |
| "first": "Annemarie", |
| "middle": [], |
| "last": "Friedrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Pinkal", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1757--1768", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annemarie Friedrich, Alexis Palmer, and Manfred Pinkal. 2016. Situation entity types: automatic clas- sification of clause-level aspect. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1757-1768.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Abstraction, Sense Distinctions and Syntax in Neural Semantic Role Labeling", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Gung", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Gung. 2020. Abstraction, Sense Distinctions and Syntax in Neural Semantic Role Labeling. Ph.D. thesis, University of Colorado at Boulder.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Predicate representations and polysemy in verbnet semantic parsing", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Gung", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "14th International Conference on Computational Semantics (IWCS)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Gung and Martha Palmer. 2021. Predicate repre- sentations and polysemy in verbnet semantic parsing. In 14th International Conference on Computational Semantics (IWCS), online.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "spaCy: Industrial-strength Natural Language Processing in Python", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Honnibal, Ines Montani, Sofie Van Lan- deghem, and Adriane Boyd. 2021. spaCy: Industrial-strength Natural Language Processing in Python.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Cleartac: Verb tense, aspect, and form classification using neural nets", |
| "authors": [ |
| { |
| "first": "Skatje", |
| "middle": [], |
| "last": "Myers", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the First International Workshop on Designing Meaning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "136--140", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Skatje Myers and Martha Palmer. 2019. Cleartac: Verb tense, aspect, and form classification using neural nets. In Proceedings of the First International Work- shop on Designing Meaning Representations, pages 136-140.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Comprehensive supersense disambiguation of english prepositions and possessives", |
| "authors": [ |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| }, |
| { |
| "first": "Jena", |
| "middle": [ |
| "D" |
| ], |
| "last": "Hwang", |
| "suffix": "" |
| }, |
| { |
| "first": "Vivek", |
| "middle": [], |
| "last": "Srikumar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Prange", |
| "suffix": "" |
| }, |
| { |
| "first": "Austin", |
| "middle": [], |
| "last": "Blodgett", |
| "suffix": "" |
| }, |
| { |
| "first": "Sarah", |
| "middle": [ |
| "R" |
| ], |
| "last": "Moeller", |
| "suffix": "" |
| }, |
| { |
| "first": "Aviram", |
| "middle": [], |
| "last": "Stern", |
| "suffix": "" |
| }, |
| { |
| "first": "Adi", |
| "middle": [], |
| "last": "Bitan", |
| "suffix": "" |
| }, |
| { |
| "first": "Omri", |
| "middle": [], |
| "last": "Abend", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1805.04905" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nathan Schneider, Jena D Hwang, Vivek Srikumar, Jakob Prange, Austin Blodgett, Sarah R Moeller, Aviram Stern, Adi Bitan, and Omri Abend. 2018. Comprehensive supersense disambiguation of en- glish prepositions and possessives. arXiv preprint arXiv:1805.04905.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Lorelei language packs: Data, tools, and resources for technology development in low resource languages", |
| "authors": [ |
| { |
| "first": "Stephanie", |
| "middle": [], |
| "last": "Strassel", |
| "suffix": "" |
| }, |
| { |
| "first": "Jennifer", |
| "middle": [], |
| "last": "Tracey", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", |
| "volume": "", |
| "issue": "", |
| "pages": "3273--3280", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephanie Strassel and Jennifer Tracey. 2016. Lorelei language packs: Data, tools, and resources for tech- nology development in low resource languages. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3273-3280.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Cross-linguistic semantic annotation: Reconciling the language-specific and the universal", |
| "authors": [ |
| { |
| "first": "Jens", |
| "middle": [ |
| "E L" |
| ], |
| "last": "Van Gysel", |
| "suffix": "" |
| }, |
| { |
| "first": "Meagan", |
| "middle": [], |
| "last": "Vigus", |
| "suffix": "" |
| }, |
| { |
| "first": "Pavlina", |
| "middle": [], |
| "last": "Kalm", |
| "suffix": "" |
| }, |
| { |
| "first": "Sook-Kyung", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Regan", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Croft", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the First International Workshop on Designing Meaning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "1--14", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W19-3301" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jens E. L. Van Gysel, Meagan Vigus, Pavlina Kalm, Sook-kyung Lee, Michael Regan, and William Croft. 2019. Cross-linguistic semantic annotation: Recon- ciling the language-specific and the universal. In Proceedings of the First International Workshop on Designing Meaning Representations, pages 1-14, Florence, Italy.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Designing a uniform meaning representation for natural language processing", |
| "authors": [ |
| { |
| "first": "Jens", |
| "middle": [ |
| "E L" |
| ], |
| "last": "Van Gysel", |
| "suffix": "" |
| }, |
| { |
| "first": "Meagan", |
| "middle": [], |
| "last": "Vigus", |
| "suffix": "" |
| }, |
| { |
| "first": "Jayeol", |
| "middle": [], |
| "last": "Chun", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenneth", |
| "middle": [], |
| "last": "Lai", |
| "suffix": "" |
| }, |
| { |
| "first": "Sarah", |
| "middle": [], |
| "last": "Moeller", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiarui", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "O'", |
| "middle": [], |
| "last": "Tim", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Gorman", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Cowell", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Croft", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "K\u00fcnstliche Intelligenz", |
| "volume": "", |
| "issue": "", |
| "pages": "1--18", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jens E. L. Van Gysel, Meagan Vigus, Jayeol Chun, Kenneth Lai, Sarah Moeller, Jiarui Yao, Tim O'Gorman, Andrew Cowell, William Croft, Jan Haji\u010d, Chu-Ren Huang, James H. Martin, Stephan Oepen, Martha Palmer, James Pustejovsky, Rosa Vallejos, and Nianwen Xue. 2021. Designing a uniform meaning representation for natural language processing. K\u00fcnstliche Intelligenz, pages 1-18.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Linguistics in philosophy, chapter Verbs and times", |
| "authors": [ |
| { |
| "first": "Zeno", |
| "middle": [], |
| "last": "Vendler", |
| "suffix": "" |
| } |
| ], |
| "year": 1967, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeno Vendler. 1967. Linguistics in philosophy, chapter Verbs and times. Cornell University Press, Ithaca.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Flow Chart of Automatic Classification" |
| }, |
| "TABREF1": { |
| "html": null, |
| "content": "<table><tr><td colspan=\"5\">Of the 235 gold events spread throughout the 4 gold files, the model failed to identify 56, leaving 179 identified events. Of those 179, the model correctly labeled 112, yielding an accuracy of 62.57%.</td></tr><tr><td>Error Label</td><td>FP</td><td>TP</td><td># Gold</td><td>Precision</td></tr><tr><td>HABITUAL</td><td>27</td><td>3</td><td>5</td><td>10</td></tr><tr><td>STATE</td><td>7</td><td>43</td><td>71</td><td>86</td></tr><tr><td>ACTIVITY</td><td>12</td><td>6</td><td>9</td><td>33.33</td></tr><tr><td>PROCESS</td><td>1</td><td>4</td><td>56</td><td>N/A</td></tr><tr><td>PERFORMANCE</td><td>15</td><td>56</td><td>94</td><td>78.87</td></tr><tr><td>ENDEAVOR</td><td>5</td><td>0</td><td>0</td><td>0</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "text": "Recall for event identification subtask and performance of model on all gold events that were identified by the model, across all aspect labels." |
| }, |
| "TABREF2": { |
| "html": null, |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table", |
| "text": "Counts and precision of each type of mislabeled annotation. For each aspect label, FP and TP counts are from the 179 events that AutoAspect successfully detected, number of gold events is from the 235 total gold events." |
| }, |
| "TABREF4": { |
| "html": null, |
| "content": "<table><tr><td>Sentence</td><td>UMR Events</td></tr><tr><td>G. Part of the purpose of this his-</td><td>visit, be_part,</td></tr><tr><td>toric visit is to lay the ground-</td><td>lay</td></tr><tr><td>work...</td><td/></tr><tr><td>H. It is his decision.</td><td>decide</td></tr><tr><td>I. But the opposition here in the</td><td>opposition, in-</td></tr><tr><td>United States is intense.</td><td>tense</td></tr><tr><td>J. Carla Delponte briefly con-</td><td>contemplate, in-</td></tr><tr><td>templated an investigation of</td><td>vestigate, cam-</td></tr><tr><td>NATO's bombing campaign.</td><td>paign</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "text": "Missed event nominal corresponding to a gold UMR event in the sentence is bolded and italicized." |
| }, |
| "TABREF5": { |
| "html": null, |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table", |
| "text": "Accurately identified span containing a UMR event nominal is bolded." |
| }, |
| "TABREF6": { |
| "html": null, |
| "content": "<table><tr><td>ACTIVITY</td><td>ACTIVITY</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "text": "Sentence UMR Event AutoAspect Gold Aspect K. ...the Pardon Commission, after it has made its decision, sends it to the President... send HABITUAL HABITUAL L. He will spend the next several days at the medical center there before he returns home with his wife Sherry. return HABITUAL PERFORMANCE M. Marsha, thank you very much for speaking with us. speak ACTIVITY PERFORMANCE N. ...they were afraid of the spy mania rising in Russia... rise" |
| }, |
| "TABREF7": { |
| "html": null, |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table", |
| "text": "Annotation of present tense verb forms." |
| } |
| } |
| } |
| } |