| { |
| "paper_id": "P98-1015", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:17:31.906364Z" |
| }, |
| "title": "Semi-Automatic Recognition of Noun Modifier Relationships", |
| "authors": [ |
| { |
| "first": "Ken", |
| "middle": [], |
| "last": "Barker", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Ottawa", |
| "location": { |
| "settlement": "Ottawa", |
| "postCode": "K1N 6N5", |
| "country": "Canada" |
| } |
| }, |
| "email": "kbarker@site.uottawa.ca" |
| }, |
| { |
| "first": "Stan", |
| "middle": [], |
| "last": "Szpakowicz", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Ottawa", |
| "location": { |
| "settlement": "Ottawa", |
| "postCode": "K1N 6N5", |
| "country": "Canada" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Semantic relationships among words and phrases are often marked by explicit syntactic or lexical clues that help recognize such relationships in texts. Within complex nominals, however, few overt clues are available. Systems that analyze such nominals must compensate for the lack of surface clues with other information. One way is to load the system with lexical semantics for nouns or adjectives. This merely shifts the problem elsewhere: how do we define the lexical semantics and build large semantic lexicons? Another way is to find constructions similar to a given complex nominal, for which the relationships are already known. This is the way we chose, but it too has drawbacks. Similarity is not easily assessed, similar analyzed constructions may not exist, and if they do exist, their analysis may not be appropriate for the current nominal. We present a semi-automatic system that identifies semantic relationships in noun phrases without using precoded noun or adjective semantics. Instead, partial matching on previously analyzed noun phrases leads to a tentative interpretation of a new input. Processing can start without prior analyses, but the early stage requires user interaction. As more noun phrases are analyzed, the system learns to find better interpretations and reduces its reliance on the user. In experiments on English technical texts the system correctly identified 60-70% of relationships automatically.", |
| "pdf_parse": { |
| "paper_id": "P98-1015", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Semantic relationships among words and phrases are often marked by explicit syntactic or lexical clues that help recognize such relationships in texts. Within complex nominals, however, few overt clues are available. Systems that analyze such nominals must compensate for the lack of surface clues with other information. One way is to load the system with lexical semantics for nouns or adjectives. This merely shifts the problem elsewhere: how do we define the lexical semantics and build large semantic lexicons? Another way is to find constructions similar to a given complex nominal, for which the relationships are already known. This is the way we chose, but it too has drawbacks. Similarity is not easily assessed, similar analyzed constructions may not exist, and if they do exist, their analysis may not be appropriate for the current nominal. We present a semi-automatic system that identifies semantic relationships in noun phrases without using precoded noun or adjective semantics. Instead, partial matching on previously analyzed noun phrases leads to a tentative interpretation of a new input. Processing can start without prior analyses, but the early stage requires user interaction. As more noun phrases are analyzed, the system learns to find better interpretations and reduces its reliance on the user. In experiments on English technical texts the system correctly identified 60-70% of relationships automatically.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Any system that extracts knowledge from text cannot ignore complex noun phrases. In technical domains especially, noun phrases carry much of the information. Part of that information is contained in words; cataloguing the semantics of single words for computational purposes is a difficult task that has received much attention. But part of the information in noun phrases is contained in the relationships between components.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We have built a system for noun modifier relationship (NMR) analysis that assigns semantic relationships in complex noun phrases. Syntactic analysis finds noun phrases in a sentence and provides a flat list of premodifiers and postmodifying prepositional phrases and appositives. The NMR analyzer first brackets the flat list of premodifiers into modifier-head pairs. Next, it assigns NMRs to each pair. NMRs are also assigned to the relationships between the noun phrase and each postmodifying phrase.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A head noun along with a noun premodifier is often called a noun compound. Syntactically a noun compound acts as a noun: a modifier or a head may again be a compound. The NMR analyzer deals with the semantics of a particular kind of compound, namely those that are transparent and endocentric.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noun Compounds", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The meaning of a transparent compound can be derived from the meaning of its elements. For example, laser printer is transparent (a printer that uses a laser). Guinea pig is opaque: there is no obvious direct relationship to guinea or to pig.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noun Compounds", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "An endocentric compound is a hyponym of its head. Desktop computer is endocentric because it is a kind of computer. Bird brain is exocentric because it does not refer to a kind of brain, but rather to a kind of person (whose brain resembles that of a bird).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noun Compounds", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Since the NMR analyzer is intended for technical texts, the restriction to transparent endocentric compounds should not limit the utility of the system. Our experiments have found no opaque or exocentric compounds in the test texts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noun Compounds", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Most of the research on relationships between nouns and modifiers deals with noun compounds, but these relationships also hold between nouns and adjective premodifiers or postmodifying prepositional phrases. Lists of semantic labels have been proposed, based on the theory that a compound expresses one of a small number of covert semantic relations. Levi (1978) argues that semantics and word formation make noun-noun compounds a heterogeneous class. She removes opaque compounds and adds nominal non-predicating adjectives. For this class Levi offers nine semantic labels. According to her theory, these labels represent underlying predicates deleted during compound formation. George (1987) disputes the claim that Levi's non-predicating adjectives never appear in predicative position. Warren (1978) describes a multi-level system of semantic labels for noun-noun relationships. Warren (1984) extends the earlier work to cover adjective premodifiers as well as nouns. The similarity of the two lists suggests that many adjectives and premodifying nouns can be handled by the same set of semantic relations.", |
| "cite_spans": [ |
| { |
| "start": 351, |
| "end": 362, |
| "text": "Levi (1978)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 680, |
| "end": 693, |
| "text": "George (1987)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 790, |
| "end": 803, |
| "text": "Warren (1978)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 883, |
| "end": 896, |
| "text": "Warren (1984)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Relations in Noun Phrases", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Programs that uncover the relationships in modifier-noun compounds often base their analysis on the semantics of the individual words (or a composition thereof). Such systems assume the existence of some semantic lexicon. Leonard's system (1984) assigns semantic labels to noun-noun compounds based on a dictionary that includes taxonomic and meronymic (part-whole) information, information about the syntactic behaviour of nouns and about the relationships between nouns and verbs. Finin (1986) produces multiple semantic interpretations of modifier-noun compounds. The interpretations are based on precoded semantic class information and domain-dependent frames describing the roles that can be associated with certain nouns. Ter Stal's system (1996) identifies concepts in text and unifies them with structures extracted from a hand-coded lexicon containing syntactic information, logical form templates and taxonomic information.", |
| "cite_spans": [ |
| { |
| "start": 222, |
| "end": 245, |
| "text": "Leonard's system (1984)", |
| "ref_id": null |
| }, |
| { |
| "start": 482, |
| "end": 494, |
| "text": "Finin (1986)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 735, |
| "end": 751, |
| "text": "'s system (1996)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recognizing Semantic Relations", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In an attempt to avoid the hand-coding required in other systems, Vanderwende (1993) automatically extracts semantic features of nouns from online dictionaries. Combinations of features imply particular semantic interpretations of the relationship between two nouns in a compound. For each NMR, we give a paraphrase and example modifier-noun compounds. Following the tradition in the study of noun compound semantics, the paraphrases act as definitions and can be used to check the acceptability of different interpretations of a compound; they also help with interpretation during user interactions (as illustrated in section 6). In the analyzer, awkward paraphrases with adjectives could be improved by replacing adjectives with their WordNet pertainyms (Miller, 1990), giving, for example, \"charity benefits from charitable donation\" instead of \"charitable benefits from charitable donation\".", |
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 84, |
| "text": "Vanderwende (1993)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 806, |
| "end": 820, |
| "text": "(Miller, 1990)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recognizing Semantic Relations", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Before assigning NMRs, the system must bracket the head noun and the premodifier sequence into modifier-head pairs. Example (2) shows the bracketing for noun phrase (1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noun Modifier Bracketing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(1) dynamic high impedance microphone", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noun Modifier Bracketing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(2) (dynamic ((high impedance) microphone))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noun Modifier Bracketing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The bracketing problem for noun-noun-noun compounds has been investigated by Liberman & Sproat (1992), Pustejovsky et al. (1993), Resnik (1993) and Lauer (1995), among others. Since the NMR analyzer must handle premodifier sequences of any length with both nouns and adjectives, it requires more general techniques. Our semi-automatic bracketer (Barker, 1998) allows for any number of adjective or noun premodifiers. After bracketing, each non-atomic element of a bracketed pair is considered a subphrase of the original phrase. The subphrases for the bracketing in (2) appear in (3), (4) and (5).", |
| "cite_spans": [ |
| { |
| "start": 123, |
| "end": 129, |
| "text": "(1993)", |
| "ref_id": null |
| }, |
| { |
| "start": 132, |
| "end": 145, |
| "text": "Resnik (1993)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 150, |
| "end": 162, |
| "text": "Lauer (1995)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 346, |
| "end": 360, |
| "text": "(Barker, 1998)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noun Modifier Bracketing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(3) high impedance (4) high_impedance microphone (5) dynamic high_impedance_microphone. Each subphrase consists of a modifier (possibly compound, as in (4)) and a head (possibly compound, as in (5)). The NMR analyzer assigns an NMR to the modifier-head pair that makes up each subphrase.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noun Modifier Bracketing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Once an NMR has been assigned, the system must store the assignment to help automate future processing. Instead of memorizing complete noun phrases (or even complete subphrases) and analyses, the system reduces compound modifiers and compound heads to their own local heads and stores these reduced pairs with their assigned NMR. This allows it to analyze different noun phrases that have only reduced pairs in common with previous phrases. For example, (6) and (7) have the reduced pair (8) in common. If (6) has already been analyzed, its analysis can be used to assist in the analysis of (7)--see section 5.1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noun Modifier Bracketing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(6) (dynamic ((high impedance) microphone)) (7) (dynamic (cardioid (vocal microphone))) (8) (dynamic microphone)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noun Modifier Bracketing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Assigning NMRs", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noun Modifier Bracketing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Three kinds of construction require NMR assignments: the modifier-head pairs from the bracketed premodifier sequence; postmodifying prepositional phrases; appositives. These three kinds of input can be generalized to a single form--a triple consisting of modifier, head and marker (M, H, Mk). For premodifiers, Mk is the symbol nil, since no lexical item links the premodifier to the head. For postmodifying prepositional phrases Mk is the preposition. For appositives, Mk is the symbol appos. The (M, H, Mk) triples for examples (9), (10) and (11) appear in (12). The analyzer uses previously assigned triples to find candidate NMRs to present to the user for approval. Appositives are automatically assigned Equative.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noun Modifier Bracketing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The distance between two triples is a measure of the degree to which their modifiers, heads and markers match. Table 3 gives the eight different values for distance used by NMR analysis. The analyzer looks for previous triples at the lower distances before attempting to find triples at higher distances. For example, it will try to find identical triples before trying to find triples whose markers do not match.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 111, |
| "end": 118, |
| "text": "Table 3", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Distance Between Triples", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Several things about the distance measures require explanation. First, a preposition is more similar to a nil marker than to a different preposition. Unlike a different preposition, the nil marker is not known to be different from the marker in an overtly marked pair.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distance Between Triples", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Next, no evidence suggests that triples with matching M are more similar or less similar than triples with matching H (distances 3 and 6).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distance Between Triples", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Triples with matching prepositional marker (distance 4) are considered more similar than triples with matching M or H only. A preposition is an overt indicator of the relationship between M and H (see Quirk, 1985: chapter 9), so a correlation is more likely between the preposition and the NMR than between a given M or H and the NMR.", |
| "cite_spans": [ |
| { |
| "start": 201, |
| "end": 212, |
| "text": "Quirk, 1985", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distance Between Triples", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "If the current triple has a prepositional marker not seen in any previous triple (distance 5), the system finds candidate NMRs in its NMR marker dictionary. This dictionary was constructed from a list of about 50 common atomic and phrasal prepositions. The various meanings of each preposition were mapped to NMRs by hand. Since the list of prepositions is small, dictionary construction was not a difficult knowledge engineering task (requiring just twenty hours of work by a secondary school student).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distance Between Triples", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The lists of candidate NMRs consist of all those NMRs previously assigned to (M, H, Mk) triples at a minimum distance from the triple under analysis. If the minimum distance was 3 or 6, there may be two candidate lists: LM contains the NMRs previously assigned to triples with matching M, LH with matching H. The analyzer attempts to choose a set R of candidates to suggest to the user as the best NMRs for the current triple. If there is one list L of candidate NMRs, R contains the NMR (or NMRs) that occur most frequently in L. For two lists LM and LH, R could be found in several ways. We could take R to contain the most frequent NMRs in LM ∪ LH. This absolute frequency approach has a bias towards NMRs in the larger of the two lists. Alternatively, the system could prefer NMRs with the highest relative frequency in their lists. If there is less variety in the NMRs in LM than in LH, M might be a more consistent indicator of NMR than H. Consider example (12). R would contain the NMR(s) with the highest score. This combined formula was used in the experiment described in section 7.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Best NMRs", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Since NMR analysis deals with endocentric compounds, we can recover a taxonomic relationship from triples with a nil marker. Consider example (13) and its reduced pairs in (14):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Premodifiers as Classifiers", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "(13) ((laser printer) stand) (14) (laser printer) (printer stand)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Premodifiers as Classifiers", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "These pairs produce the following output:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Premodifiers as Classifiers", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "laser_printer_stand isa stand laser_printer isa printer", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Premodifiers as Classifiers", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The NMR analyzer is intended to start processing from scratch. A session begins with no previous triples to match against the triple at hand. To compensate for the lack of previous analyses, the system relies on the help of a user, who supplies the correct NMR when the system cannot determine it automatically. In order to supply the correct NMR, or even to determine if the suggested NMR is correct, the user must be familiar with the NMR definitions. To minimize the burden of this requirement, all interactions use the modifier and head of the current phrase in the paraphrases from section 3. Furthermore, if the appropriate NMR is not among those suggested by the system, the user can request the complete list of paraphrases with the current modifier and head. Figure 1 shows the interaction for phrases (15)-(18). The system starts with no previously analyzed phrases. The NMR marker dictionary maps the preposition of to twelve NMRs: Agent, Cause, Content, Equative, Located, Material, Object, Possessor, Property, Result, Source, Topic.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 768, |
| "end": 776, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "User Interaction", |
| "sec_num": "6" |
| }, |
| { |
| "text": "(15) small gasoline engine (16) the repair of diesel engines (17) diesel engine repair shop (18) an auto repair center. User input is shown in bold and underlined. At any prompt the user may type 'list' to view the complete list of NMR paraphrases for the current modifier and head.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Example", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "We present the results of evaluating the NMR analyzer in the context of a large knowledge acquisition experiment (see Barker et al., 1998). The NMR analyzer is one part of a larger interactive semantic analysis system. The experiment evaluated the semantic analysis of Atkinson (1990). We refer to it as the small engines experiment. Other experiments have shown similar results. We consider three evaluation criteria. First, we evaluate the analyzer's ability to learn to make better suggestions to the user as more noun phrases are analyzed. Second, we evaluate its coverage by comparing the number of relationships assigned with the total number of such relationships in the text (i.e., the number it should have assigned). Third, we assess the burden that semi-automatic analysis places on the user.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 138, |
| "text": "Barker et al., 1998)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 270, |
| "end": 285, |
| "text": "Atkinson (1990)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Since the system starts with no previous noun phrase analyses, the user is responsible for supplying NMRs at the beginning of a session. To measure the rate of learning, we compare the cumulative number of assignments required from the user to the cumulative number of correct assignments suggested by the system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improvement in System Performance", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "In the small engines experiment, 886 modifier-noun pairs were assigned an NMR. We consider the system's assignment correct when the correct label is among its suggestions. According to this definition, 608 of the 886 NMRs (69%) were assigned correctly by the system. For most of these assignments (97.5%) the system offered a single suggestion. It had multiple (on average 3.3) suggestions only 22 times.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improvement in System Performance", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "Figure 2 shows the cumulative number of NMR assignments supplied by the user versus those determined correctly by the system. After about 100 assignments, the system was able to make the majority of assignments automatically. The curves in the figure show that the system learns to make better suggestions as more phrases are analyzed.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Improvement in System Performance", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "The NMR analyzer depends on a parser to find noun phrases in a text. If parsing is not 100% successful, the analyzer will not see all noun phrases in the input text. It is not feasible to find manually the total number of relationships in a text--even in one of only a few hundred sentences. To measure coverage, we sampled 100 modifier-noun pairs at random from the small engines text and found that 87 of them appeared in the analyzer's output. At 95% confidence, we can say that the system extracted between 79.0% and 92.2% of the modifier-noun relationships in the text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NMR Coverage", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "User burden is a fairly subjective criterion. To measure burden, we assigned an \"onus\" rating to each interaction during the small engines experiment. The onus is a number from 0 to 3. A rating of 0 means that the correct NMR was obvious, whether suggested by the system or supplied by the user. A rating of 1 means that selecting an NMR required a few moments of reflection. A rating of 2 means that the interaction required serious thought, but we were ultimately able to choose an NMR. A rating of 3 means that even after much contemplation, we were unable to agree on an NMR. The average onus rating was 0.1 for NMR interactions in the small engines experiment. 808 of the 886 NMR assignments received an onus rating of 0; 71 had a rating of 1; 7 received a rating of 2. No interactions were rated at onus level 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "User Burden", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "Although the list of NMRs was inspired by the relationships found commonly in others' lists, it has not undergone a more rigorous validation (such as the one described in Barker et al., 1997). In section 5.2 we discussed different approaches to choosing NMRs from two lists of candidates. We have implemented and compared five different techniques for choosing the best NMRs, but experimental results are inconclusive as to which techniques are better. We should seek a more theoretically sound approach, followed by further experimentation.", |
| "cite_spans": [ |
| { |
| "start": 167, |
| "end": 187, |
| "text": "Barker et al., 1997)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work", |
| "sec_num": "8" |
| }, |
| { |
| "text": "The NMR analyzer currently allows its stored triples (and associated NMRs) to be saved in a file at the end of a session. Any number of such files can be reloaded at the beginning of subsequent sessions, \"seeding\" the new sessions. It is necessary to establish the extent to which the triples and assignments from one text or domain are useful in the analysis of noun phrases from another domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work", |
| "sec_num": "8" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work is supported by the Natural Sciences and Engineering Research Council of Canada.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Mechanics of Small Engines", |
| "authors": [ |
| { |
| "first": "Henry", |
| "middle": [ |
| "F" |
| ], |
| "last": "Atkinson", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Atkinson, Henry F. (1990). Mechanics of Small Engines. New York: Gregg Division, McGraw-Hill.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A Trainable Bracketer for Noun Modifiers", |
| "authors": [ |
| { |
| "first": "Ken", |
| "middle": [], |
| "last": "Barker", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "The Twelfth Canadian Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Barker, Ken (1998). \"A Trainable Bracketer for Noun Modifiers\". The Twelfth Canadian Conference on Artificial Intelligence.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Systematic Construction of a Versatile Case System", |
| "authors": [ |
| { |
| "first": "Ken", |
| "middle": [], |
| "last": "Barker", |
| "suffix": "" |
| }, |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Copeck", |
| "suffix": "" |
| }, |
| { |
| "first": "Sylvain", |
| "middle": [], |
| "last": "Delisle", |
| "suffix": "" |
| }, |
| { |
| "first": "Stan", |
| "middle": [], |
| "last": "Szpakowicz", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Journal of Natural Language Engineering", |
| "volume": "3", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Barker, Ken, Terry Copeck, Sylvain Delisle & Stan Szpakowicz (1997). \"Systematic Construction of a Versatile Case System.\" Journal of Natural Language Engineering 3(4), December 1997.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Test-Driving TANKA: Evaluating a Semi-Automatic System of Text Analysis for Knowledge Acquisition", |
| "authors": [ |
| { |
| "first": "Ken", |
| "middle": [], |
| "last": "Barker", |
| "suffix": "" |
| }, |
| { |
| "first": "Sylvain", |
| "middle": [], |
| "last": "Delisle", |
| "suffix": "" |
| }, |
| { |
| "first": "Stan", |
| "middle": [], |
| "last": "Szpakowicz", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "The Twelfth Canadian Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Barker, Ken, Sylvain Delisle & Stan Szpakowicz (1998). \"Test-Driving TANKA: Evaluating a Semi-Automatic System of Text Analysis for Knowledge Acquisition.\" The Twelfth Canadian Conference on Artificial Intelligence.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Constraining the Interpretation of Nominal Compounds in a Limited Context", |
| "authors": [ |
| { |
| "first": "Timothy", |
| "middle": [ |
| "W" |
| ], |
| "last": "Finin", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Analyzing Language in Restricted Domains: Sublanguage Description and Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "163--173", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Finin, Timothy W. (1986). \"Constraining the Interpretation of Nominal Compounds in a Limited Context.\" In Analyzing Language in Restricted Domains: Sublanguage Description and Processing, R. Grishman & R. Kittredge, eds., Lawrence Erlbaum, Hillsdale, pp. 163-173.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "On \"Nominal Non-Predicating", |
| "authors": [ |
| { |
| "first": "Steffi", |
| "middle": [], |
| "last": "George", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Adjectives in English. Frankfurt am Main", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George, Steffi (1987). On \"Nominal Non-Predicating\" Adjectives in English. Frankfurt am Main: Peter Lang.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Corpus Statistics Meet the Noun Compound: Some Empirical Results", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Lauer", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. Cambridge", |
| "volume": "", |
| "issue": "", |
| "pages": "47--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lauer, Mark (1995). \"Corpus Statistics Meet the Noun Compound: Some Empirical Results.\" Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. Cambridge. 47-54.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "The Interpretation of English Noun Sequences on the Computer", |
| "authors": [ |
| { |
| "first": "Rosemary", |
| "middle": [], |
| "last": "Leonard", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leonard, Rosemary (1984). The Interpretation of Eng- lish Noun Sequences on the Computer. Amsterdam: North-Holland.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "The Syntax and Semantics of Complex Nominals", |
| "authors": [ |
| { |
| "first": "Judith", |
| "middle": [ |
| "N" |
| ], |
| "last": "Levi", |
| "suffix": "" |
| } |
| ], |
| "year": 1978, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Levi, Judith N. (1978). The Syntax and Semantics of Complex Nominals. New York: Academic Press.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Stress and Structure of Modified Noun Phrases", |
| "authors": [ |
| { |
| "first": "Mark & Richard", |
| "middle": [], |
| "last": "Liberman", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sproat", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Lexical Matters (CSLI Lecture Notes, 24). Stanford: Center for the Study of Language and Information", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liberman, Mark & Richard Sproat (1992). \"Stress and Structure of Modified Noun Phrases.\" Lexical Mat- ters (CSLI Lecture Notes, 24). Stanford: Center for the Study of Language and Information.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "WordNet: An On-Line Lexical Database", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "A" |
| ], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "International Journal of Lexicography", |
| "volume": "3", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miller, George A., ed. (1990). \"WordNet: An On-Line Lexical Database.\" International Journal of Lexicog- raphy 3(4).", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Lexical Semantic Techniques for Corpus Analysis", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Pustejovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Bergler", |
| "suffix": "" |
| }, |
| { |
| "first": "& P", |
| "middle": [], |
| "last": "Anick", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "331--358", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pustejovsky, James, S. Bergler & P. Anick (1993). \"Lexical Semantic Techniques for Corpus Analysis.\" Computational Linguistics 19(2). 331-358.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A Comprehensive Grammar of the English Language", |
| "authors": [ |
| { |
| "first": "Randolph", |
| "middle": [], |
| "last": "Quirk", |
| "suffix": "" |
| }, |
| { |
| "first": "Sidney", |
| "middle": [], |
| "last": "Greenbaum", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Leech", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Svartvik", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "Geoffrey Leech & Jan Svartvik", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Quirk, Randolph, Sidney Greenbaum, Geoffrey Leech & Jan Svartvik (1985). A Comprehensive Grammar of the English Language. London: Longman.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Selection and Information: A Class-Based Approach to Lexical Relationships", |
| "authors": [ |
| { |
| "first": "Philip", |
| "middle": [ |
| "Stuart" |
| ], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Resnik, Philip Stuart (1993). \"Selection and Informa- tion: A Class-Based Approach to Lexical Relation- ships.\" Ph.D. thesis, IRCS Report 93-42, University of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Automated Interpretation of Nominal Compounds in a Technical Domain", |
| "authors": [], |
| "year": 1996, |
| "venue": "ter Stal", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "ter Stal, Wilco (1996). \"Automated Interpretation of Nominal Compounds in a Technical Domain.\" Ph.D. thesis, University of Twente, The Netherlands.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "SENS: The System for Evaluating Noun Sequences", |
| "authors": [ |
| { |
| "first": "Lucy", |
| "middle": [], |
| "last": "Vanderwende", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "161--173", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vanderwende, Lucy (1993). \"SENS: The System for Evaluating Noun Sequences.\" In Natural Language Processing: The PLNLP Approach, K. Jensen, G. Heidorn & S. Richardson, eds., Kluwer Academic Publishers, Boston, pp. 161-173.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Semantic Patterns of Noun-Noun Compounds. G/Steborg: Acta Universitatis Gothoburgensis", |
| "authors": [ |
| { |
| "first": "Beatrice", |
| "middle": [], |
| "last": "Warren", |
| "suffix": "" |
| } |
| ], |
| "year": 1978, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Warren, Beatrice (1978). Semantic Patterns of Noun- Noun Compounds. G/Steborg: Acta Universitatis Gothoburgensis.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Classifying Adjectives. GSteborg", |
| "authors": [ |
| { |
| "first": "Beatrice", |
| "middle": [], |
| "last": "Warren", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Warren, Beatrice (1984). Classifying Adjectives. GSte- borg: Acta Universitatis Gothoburgensis.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Cumulative NMR assignments", |
| "uris": null |
| }, |
| "TABREF0": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"2\">lists the NMRs used by our analyzer. The</td></tr><tr><td colspan=\"2\">list is based on similar lists found in literature on</td></tr><tr><td colspan=\"2\">the semantics of noun compounds. It may evolve</td></tr><tr><td colspan=\"2\">as experimental evidence suggests changes.</td></tr><tr><td>Agent (agt)</td><td>Material (matr)</td></tr><tr><td>Beneficiary (benf)</td><td>Object (obj)</td></tr><tr><td>Cause (caus)</td><td>Possessor (poss)</td></tr><tr><td>Container (ctn)</td><td>Product (prod)</td></tr><tr><td>Content (cont)</td><td>Property (prop)</td></tr><tr><td>Destination (dest)</td><td>Purpose (purp)</td></tr><tr><td>Equative (equa)</td><td>Result (resu)</td></tr><tr><td>Instrument (inst)</td><td>Source (src)</td></tr><tr><td>Located (led)</td><td>Time (time)</td></tr><tr><td>Location (loc)</td><td>Topic (top)</td></tr></table>", |
| "text": "", |
| "html": null |
| }, |
| "TABREF1": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "The noun modifier relationships", |
| "html": null |
| }, |
| "TABREF2": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "modifier is also head composer arranger, player coach Instrument: modifier is used in compound electron microscope, diesel engine, laser printer Located: modifier is located at compound building site, home town, solar system Location: modifier is the location of compound lab printer, internal combustion, desert storm Material: compound is made of modifier carbon deposit, gingerbread man, water vapottr Object: modifier is acted on by compound engine repair, horse doctor Possessor: modifier has compound national debt, student loan, company car Product: modifier is a product of compound automobile factory, light bulb, colour printer", |
| "html": null |
| }, |
| "TABREF3": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>(9) monitor cable plug</td></tr><tr><td>(10) large piece of chocolate cake</td></tr><tr><td>(11) my brother, a friend to all young people</td></tr></table>", |
| "text": "To assign an NMR to a triple (M, H, Mk), the system looks for previous triples whose distance to the current triple is minimal. The NMRs assigned to previous similar triples comprise lists of candidate NMRs. The analyzer then finds what it considers the best NMR from these lists of candi-", |
| "html": null |
| }, |
| "TABREF4": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>: (M, H, Mk) triples for (9), (I0) and</td></tr></table>", |
| "text": "", |
| "html": null |
| }, |
| "TABREF5": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>dist</td><td>current triple</td><td>previous triple</td><td colspan=\"2\">example</td></tr><tr><td>0</td><td>(M, H, Mk)</td><td>(M, H, Mk)</td><td>wall beside a garden</td><td>wall beside a garden</td></tr><tr><td>1</td><td>(M, H, <prep>)</td><td>(M, H, nil)</td><td>wall beside a garden</td><td>garden wall</td></tr><tr><td>2</td><td>(M, H, Mk)</td><td>(M, H,_)</td><td>wall beside a garden</td><td>wall around a garden</td></tr><tr><td>3</td><td>(M, H, Mk)</td><td>(M, _, Mk) or (_, H, Mk)</td><td>pile of garbage</td><td>pile of sweaters</td></tr><tr><td>4</td><td>(M, H, <prep>)</td><td>( .... <prep>)</td><td>pile of garbage</td><td>house of bricks</td></tr><tr><td>5</td><td>(M, H, <prep>)</td><td>(_ .... )</td><td colspan=\"2\">ice in the cup nmrm(in, [ctn,inst, loc,src,time])</td></tr><tr><td>6</td><td>(M, H, Mk)</td><td>(M .... ) or (_, H, _)</td><td>wall beside a garden</td><td>garden fence</td></tr><tr><td>7</td><td>(M, H, Mk)</td><td>( ..... )</td><td>wall beside a garden</td><td>pile of garbage</td></tr></table>", |
| "text": "Compounds with the modifier front may always have been assigned Location. Compounds with", |
| "html": null |
| }, |
| "TABREF6": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"3\">frequency in LM u LH. But if Location has the</td></tr><tr><td colspan=\"3\">highest relative frequency, this method correctly</td></tr><tr><td colspan=\"3\">assigns Location to (12). There is a potential bias,</td></tr><tr><td colspan=\"3\">however, for smaller lists (a single N-MR in a list</td></tr><tr><td colspan=\"3\">always has the highest relative frequency).</td></tr><tr><td colspan=\"3\">To avoid these biases, we could combine ab-</td></tr><tr><td colspan=\"3\">solute and relative frequencies. Each NMR i is</td></tr><tr><td colspan=\"2\">assigned a score si calculated as:</td><td/></tr><tr><td>freq(i ~ Lu) 2</td><td colspan=\"2\">freq(i e LH) 2</td></tr><tr><td>s, =</td><td>+</td><td>IL.I</td></tr></table>", |
| "text": "Measures of distance between triples the head line may have been assigned many different NMRs. If line has been seen as a head more often than front as a modifier, one of the NMRs assigned to line may have the highest absolute", |
| "html": null |
| }, |
| "TABREF7": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Please enter a valid NMR label:</td><td>inst</td></tr><tr><td>Do you accept the NMR Instrument:</td><td/></tr><tr><td>gasoline is used in gasoline__engine</td><td>Y_</td></tr><tr><td>There is a relationship between</td><td/></tr><tr><td>small and small_gasoline_engine.</td><td/></tr><tr><td>Please enter a valid NMR label:</td><td>prop</td></tr><tr><td>Do you accept the NMR Property:</td><td/></tr><tr><td>small_gasoline__engine is small</td><td>Y</td></tr><tr><td>Phrase (16): the repair of diesel engines</td><td/></tr><tr><td>There is a relationship between</td><td/></tr><tr><td>diesel and diesel_engine.</td><td/></tr><tr><td>NMR Analyzer's best suggestions for this input:</td><td/></tr><tr><td>(1) prop: diesel_engine is diesel</td><td/></tr><tr><td>(2) inst: diesel is used in diesel_engine</td><td/></tr><tr><td>Please enter a number between 1 and 2:</td><td>_2</td></tr><tr><td>Do you accept the NMR Instrument:</td><td/></tr><tr><td>diesel is used in diesel_engine</td><td>Y</td></tr><tr><td>There is a relationship between</td><td/></tr><tr><td>diesel_engine and repair.</td><td/></tr><tr><td>NMR Analyzer's best suggestions for this input:</td><td/></tr><tr><td>(1) agt: repairis performed by dieselengine</td><td/></tr><tr><td>(2) caus: diesel_engine causes repair</td><td/></tr><tr><td>(7) obj: diesel_engine is acted on by repair</td><td/></tr><tr><td>(12) top: repairis concerned with diesel_engine</td><td/></tr><tr><td>Please enter a number between 1 and 12:</td><td>7</td></tr><tr><td>Do you accept the NMR Object:</td><td/></tr><tr><td>diesel_en~lin e is acted on by repair</td><td>Y</td></tr><tr><td>Phrase (17): diesel engine repair shop</td><td/></tr><tr><td>Do you accept the NMR Instrument:</td><td/></tr><tr><td>diesel is used in diesel_engine</td><td>Y__</td></tr><tr><td>Do you accept the NMR Object:</td><td/></tr><tr><td colspan=\"2\">diesel_engine is acted on by diesel_engine_.repair Y</td></tr><tr><td>There is a relationship 
between</td><td/></tr><tr><td colspan=\"2\">diesel_ engine_repair and diesel_enginerepair_shop.</td></tr><tr><td>Please enter a valid NMR label:</td><td>purp</td></tr><tr><td>Do you accept the NMR Purpose:</td><td/></tr><tr><td>dieselengine_repair__shop is meant for</td><td/></tr><tr><td>dieseC engine_repair</td><td>Y</td></tr><tr><td>Phrase (18): an auto repair center</td><td/></tr><tr><td>Do you accept the NMR Object:</td><td/></tr><tr><td>auto is acted on by auto_repair</td><td>Y</td></tr><tr><td>Do you accept the NMR Purpose:</td><td/></tr><tr><td>auto_repair_ centeris meant for auto_repair</td><td>Y</td></tr><tr><td colspan=\"2\">Figure I: NMR analysis interaction for (15)-(18)</td></tr></table>", |
| "text": ": small gasoline engineThere is a relationship between gasoline and gasoline_engine.", |
| "html": null |
| } |
| } |
| } |
| } |