{
"paper_id": "P94-1044",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:19:01.199919Z"
},
"title": "GRADED UNIFICATION: A FRAMEWORK FOR INTERACTIVE PROCESSING",
"authors": [
{
"first": "Albert",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania Philadelphia",
"location": {
"region": "Pennsylvania",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "An extension to classical unification, called graded unification is presented. It is capable of combining contradictory information. An interactive processing paradigm and parser based on this new operator are also presented.",
"pdf_parse": {
"paper_id": "P94-1044",
"_pdf_hash": "",
"abstract": [
{
"text": "An extension to classical unification, called graded unification is presented. It is capable of combining contradictory information. An interactive processing paradigm and parser based on this new operator are also presented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Improved understanding of the nature of knowledge used in human language processing suggests the feasibility of interactive models in computational linguistics (CL). Recent psycholinguistic work such as (Stowe, 1989; Trueswell et al., 1994) has documented rapid employment of semantic information to guide human syntactic processing. In addition, corpus-based stochastic modelling of lexical patterns (see Weischedel et al., 1993) may provide information about word sense frequency of the kind advocated since (Ford et al., 1982) . Incremental employment of such knowledge to resolve syntactic ambiguity is a natural step towards improved cognitive accuracy and efficiency in CL models.",
"cite_spans": [
{
"start": 203,
"end": 216,
"text": "(Stowe, 1989;",
"ref_id": "BIBREF6"
},
{
"start": 217,
"end": 240,
"text": "Trueswell et al., 1994)",
"ref_id": "BIBREF7"
},
{
"start": 406,
"end": 430,
"text": "Weischedel et al., 1993)",
"ref_id": null
},
{
"start": 510,
"end": 529,
"text": "(Ford et al., 1982)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "This exercise will, however, pose difficulties for the classical ('hard') constraint-based paradigm. As illustrated by the Trueswell et al. (1994) results, this view of constraints is too rigid to handle the kinds of effects at hand. These experiments used pairs of locally ambiguous reduced relative clauses such as: 1) the man recognized by the spy took off down the street 2) the van recognized by the spy took off down the street The verb recognized is ambiguously either a past participial form or a past tense form. Eye tracking showed that subjects resolved the ambiguity rapidly (before reading the by-phrase) in 2) but not in 1) 1. The conclusion they draw is that subjects use knowledge about thematic roles to guide syntactic decisions. Since van, which is inanimate, makes a good Theme but a poor Agent for recognized, the past participial analysis in 2) is reinforced and the main clause (past tense) suppressed. Being animate, man performs either thematic role well, allowing the main clause reading to remain *I thank Christy Doran, Jason Eisner, Jeff Reynar, and John Trueswell for valuable comments. I am grateful to Ewan Klein and the Centre for Cognitive Science, Edinburgh, where most of this work was conducted, and also acknowledge the support of DARPA grant N00014-90-J-1863.",
"cite_spans": [
{
"start": 123,
"end": 146,
"text": "Trueswell et al. (1994)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "1In fact, ambiguity effects were often completely eliminated in examples like 2), with reading times matching those for the unambiguous case: 3) the man/van that was recognized by the spy ... plausible until the disambiguating by-phrase is encountered. At this point, readers of 1) displayed confusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Semantic constraints do appear to be at work here. However, the effects observed by Trueswell et al. are graded. Verb-complement combinations occupy a continuous spectrum of \"thematic fit\", which influences reading times. This likely stems from the variance of verbs with respect to the thematic roles they allow (e.g., Agent, Instrument, Patient, etc.) and the syntactic positions of these.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "The upshot of such observations is that classical unification (see Shieber, 1986) , which has served well as the combinatory mechanism in classical constraint-based parsers, is too brittle to withstand this onslaught of uncertainty.",
"cite_spans": [
{
"start": 67,
"end": 81,
"text": "Shieber, 1986)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "This paper presents an extension to classical unification, called graded unification. Graded unification combines two feature structures, and returns a strength which reflects the compatibility of the information encoded by the two structures. Thus, two structures which could not unify via classical unification may unify via graded unification, and all combinatory decisions made during processing are endowed with a level of goodness. The operator is similar in spirit to the operators of fuzzy logic (see Kapcprzyk, 1992) , which attempts to provide a calculus for reasoning in uncertain domains. Another related approach is the \"Unification Space\" model of Kempen & Vosse (1989) , which unifies through a process of simulated annealing, and also uses a notion of unification strength.",
"cite_spans": [
{
"start": 509,
"end": 525,
"text": "Kapcprzyk, 1992)",
"ref_id": "BIBREF3"
},
{
"start": 662,
"end": 683,
"text": "Kempen & Vosse (1989)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "A parser has been implemented which combines constituents via graded unification and whose decisions are influenced by unification strengths. The result is a paradigm of incremental processing, which maintains a feature-based system of knowledge representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Though the employment of graded unification engenders a new processing style, the system's architecture parallels that of a conventional unification-based parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": null
},
{
"text": "The feature structures which encode the grammar in this system are conventional feature structures augmented by the association of priorities with each atomic-valued feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structures: Prioritized Features",
"sec_num": null
},
{
"text": "Prioritizing features allows them to vary in terms of influence over the strength of unification. The priority of an atomic-valued feature fi in a feature structure X will be denoted by Pri(fi, X). The effect of feature prioritization is clarified in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structures: Prioritized Features",
"sec_num": null
},
{
"text": "Given two feature structures, the graded unification mechanism (Ua) computes two results, a unifying structure and a unification strength.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graded Unification",
"sec_num": null
},
{
"text": "Structural Unification Graded unification builds structure exactly as classical unification except in the case of atomic unification, where it deviates crucially.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graded Unification",
"sec_num": null
},
{
"text": "Atoms in this framework are weighted disjunctive values. The weight associated with a disjunct is viewed as the confidence with which the processor believes that disjunct to be the 'correct' value. Figures l(a) and l(b) depict atoms (where l(a) is \"truly atomic\" because it contains only one disjunct). Atomic unification creates a mixture of its two argument atoms as follows. When two atoms are unified, the set union of their disjuncts is collected in the result. For each disjunct in the result, the associated weight becomes the average of the weights associated with that disjunct in the two argument atoms. Figure l(c) shows an example unification of two atoms. The result is an atom which is 'believed' to be SG (singular), but could possibly be PL (plural).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graded Unification",
"sec_num": null
},
{
"text": "The unification strength (denoted t3aStrength) is a weighted average of atomic unification strengths, defined in terms of two sums, the actual compatibility and the perfect compatibility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unification Strength",
"sec_num": null
},
{
"text": "If A and B are non-atomic feature structures to be unified, then the following holds:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unification Strength",
"sec_num": null
},
{
"text": "I laStrength(A, B) = ActualCornpatibility(A,B) Per ] ectC ornpatibility( A,B ) \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unification Strength",
"sec_num": null
},
{
"text": "The actual compatibility is the sum, over all atomic-valued features fi in A or B, of the following terms: UGStrength(viA, viB) * (Pri(fi, A) + Pri(fi, B)) if fi is shared by A and B; Pri(fi, A) if fi occurs only in A; Pri(fi, B) if fi occurs only in B,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unification Strength",
"sec_num": null
},
{
"text": "where i indexes all atomic-valued features in A or B, and v;a and ViB are the values of fi in A and B respectively. The perfect compatibility is computed by a formula identical to this except that UaStrength is set to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unification Strength",
"sec_num": null
},
{
"text": "If A and B are atomic, then IIGStrenglh (A, B) is the total weight of disjuncts shared by A and B:",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 46,
"text": "(A, B)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unification Strength",
"sec_num": null
},
{
"text": "tJcStrength(A,B) = ~-~i Min(wiA, WiB)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unification Strength",
"sec_num": null
},
{
"text": "where i indexes all disjuncts di shared by A and B, and wia and wiB are the weights of di in A and B respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unification Strength",
"sec_num": null
},
{
"text": "By taking atomic unification strengths into account, the actual compatibility provides a raw measure of the extent to which two feature structures agree. By ignoring unification strengths (assuming a value of 1.o), the perfect compatibility is an idealization of the actual compatibility; it is what the actual compatibility would be if the two structures were able to unify via classical unification. Thus, unification strength is always a value between 0 and 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unification Strength",
"sec_num": null
},
{
"text": "The parser is a modified unification-based chart parser. Chart edges are assigned activation levels, which represent the 'goodness' of (or confidence in) their associated analyses. Each new edge is activated according to the strength of the unification which licenses its creation and the activations of its constituent edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parser: Activated Chart Edges",
"sec_num": null
},
{
"text": "Without some strict limit on its operation, graded unification will overgenerate wildly. Two mechanisms exist to constrain graded unification. First, if a particular unification completes with strength below a specified unification threshold, it fails. Second, if a new edge is constructed with activation below a specified activation threshold, it is not allowed to enter the chart, and is suspended.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraining Graded Unification",
"sec_num": null
},
{
"text": "Parsing Strategy The chart is initialized to contain one inactive edge for each lexical entry of each word in the input. Lexical edges are currently assigned an initial activation of 1.o.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraining Graded Unification",
"sec_num": null
},
{
"text": "The chart can then be expanded in two ways:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraining Graded Unification",
"sec_num": null
},
{
"text": "1. An active edge may be extended by unifying its first unseen constituent with the LrlS of an inactive edge. (The weights wi sum to 1.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraining Graded Unification",
"sec_num": null
},
{
"text": "EDGE3 enters the chart only if its activation exceeds the activation threshold. Rule invocation is depicted in figure 3 . The first needed constituent in EDGE1 is unified with the LHS of aULE1. EDGE2 is created to begin searching for C. The new edge's activation is again a function of unification strength and other activations:",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 119,
"text": "figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Constraining Graded Unification",
"sec_num": null
},
{
"text": "activ 3 ---wl \u2022 UGSTRENGTH(C, C') 9-w2 \u2022 activl + w 3 . activ2 E~E~ I A -- B o/C~ RULEI [_IGOr-------------'/ [ C ' -- D E ~ EDGE2 ~'J~\" ~ o D E Figure 3: Top Down Rule Invocation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraining Graded Unification",
"sec_num": null
},
{
"text": "The activation levels of grammar rule edges, like those for lexical edges, are currently pegged to 1.o.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraining Graded Unification",
"sec_num": null
},
{
"text": "A Framework for Interactive Processing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraining Graded Unification",
"sec_num": null
},
{
"text": "The system described above provides a flexible framework for the interactive use of non-syntactic knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraining Graded Unification",
"sec_num": null
},
{
"text": "Animacy and Thematic Roles Knowledge about animacy and its important function in the filling of thematic roles can be modelled as a binary feature, ANIMATE. A (active voice) verb can strongly 'want' an animate Agent by specifying that its subject be [ANIMATE Jr] and assigning a high priority to the feature ANIMATE. Thus, any parse combining this verb with an inanimate subject will suffer in terms of unification strength. A noun can be strongly animate by having a high weight associated with the positive value of ANIMATE. Animacy has been encoded in a toy grammar. However, principled settings for the priority of this feature are left to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraining Graded Unification",
"sec_num": null
},
{
"text": "Corpus-based part-of-speech (POS) statistics can also be naturally incorporated into the current model. It is proposed here that a Viterbi decoder could be used to generate the likelihoods of the n best POS tags for a given word in the input string. Lexical chart edges would then be initially activated to levels proportional to the predicted likelihoods of their associated tags. Since these activations will be propagated to larger edges, parses involving predicted word senses would consequently be given a head start in a race of activations. Attractively, this strategy allows a fuller use of statistical information than one which uses the information simply to deterministically choose the n best tags, which are then treated as equally likely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Information from Corpora",
"sec_num": null
},
{
"text": "A crucial feature of this framework is its potential for modelling the interaction between sources of information like the two above when they disagree. Sentences 1} and 2) again provide illustration. In such sentences, knowledge about word sense frequency supports the wrong analysis, and semantic constraints must be employed to achieve the correct (human) performance. Intuitively, the raw frequency (without considering context) of the past tense form of recognized is higher than that of the past participial. POS taggers, despite considering local context, consistently mis-tag the verb in reduced relatives. The absence of a disambiguating relativizer (e.g., that) is one obvious source of difficulty here. But even the ostensibly disambiguating preposition by, is itself ambiguous, since it might introduce a manner or locative phrase consistent with the main clause analysis. 2 Modelling human performance in such contexts requires allowing thematic information to compete against and defeat word frequency information. The current model allows such competition, as follows. POS information may incorrectly predict the main clause analysis, boosting the lexical edge associated with the past tense, and thereby boosting the main clause parse. However, the unification combining the past tense form of recognized with an inanimate subject (van) will be weak, due to the constraints encoded in the verb's lexical entry. Since the activations of constituent edges depend on the strengths of the unifications used to build them, the main clause parse Will lose activation. The parse combining the past participial with an inanimate subject (Theme) will suffer no losses, allowing it to overtake the incorrect parse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction of Diverse Information",
"sec_num": null
},
{
"text": "Assigning feature priorities and activation thresholds in this model will certainly be a considerable task. It is hoped that principled and automated methods can be found for assigning values to these variables. One promising idea is to glean information about patterns of subcategorization and thematic roles from annotated corpora. Annotation of such information has been suggested as a future direction for the Treebank project (Marcus el al., 1993) . It should be noted that learning such information will require more training data (hence larger corpora) than learning to tag part of speech. In addition, psycholinguistic studies such as the large norming study 3 of MacDonald and Pearlmutter (described in Trueswell et al., 1994) may prove useful in encoding thematic information in small lexicons.",
"cite_spans": [
{
"start": 431,
"end": 452,
"text": "(Marcus el al., 1993)",
"ref_id": null
},
{
"start": 712,
"end": 735,
"text": "Trueswell et al., 1994)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "The Mental Representation of Grammatical l:telations",
"authors": [
{
"first": "",
"middle": [],
"last": "Kaplan",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "727--796",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaplan (1982). A Competence Based Theory of Syntactic Closure. In Bresnan, J. (Ed.), The Mental Representation of Grammatical l:telations (pp. 727-796). MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Incremental Syntactic Tree Formation in Human Sentence Processing: a Cognitive Architecture Based on Activation Decay and Simulated Annealing",
"authors": [
{
"first": "O",
"middle": [],
"last": "Kempen",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Vosse",
"suffix": ""
}
],
"year": 1989,
"venue": "Connection Science",
"volume": "1",
"issue": "3",
"pages": "273--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kempen, O. and T. Vosse (1989). Incremental Syntactic Tree Formation in Human Sentence Processing: a Cognitive Architecture Based on Activa- tion Decay and Simulated Annealing. Connection Science, 1(3), 273-290.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "(gd.) The Encyclopedia of Artificial Intelligence",
"authors": [
{
"first": "J",
"middle": [],
"last": "Kapcprzyk",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kapcprzyk, J. (1992). Fuzzy Sets and Fuzzy Logic. In Shapiro, S. (gd.) The Encyclopedia of Artificial Intelligence. John Wiley 8z Sons., New York.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Building a Large Annotated Corpus of English: The Penn Treebank",
"authors": [
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Markiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus, M., B. Santorini, and M Markiewicz (1993). Building a Large An- notated Corpus of English: The Penn Treebank. Computational Lin- guistics, 19(2), 1993.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An Introduction to Unification-Based Approaches to Grammar",
"authors": [
{
"first": "S",
"middle": [],
"last": "Shieber",
"suffix": ""
}
],
"year": 1986,
"venue": "CSLI Lecture Notes",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shieber, S. (1986). An Introduction to Unification-Based Approaches to Grammar. CSLI Lecture Notes, Chicago University Press, Chicago.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Thematic Structures and Sentence Comprehension",
"authors": [
{
"first": "L",
"middle": [],
"last": "Stowe",
"suffix": ""
}
],
"year": 1989,
"venue": "Linguistic Structure in Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stowe, L. (1989). Thematic Structures and Sentence Comprehension. In Carlsonp G. and M. Tanenhaus (Eds.) Linguistic Structure in Language Processing Kluwer Academic Publishers.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semantic Influences on Parsing: Use of Thematic Role Information in Syntactic Ambiguity B.esolutlon",
"authors": [
{
"first": "J",
"middle": [],
"last": "Trueswell",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "T~nnenh&us",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Garnsey",
"suffix": ""
}
],
"year": 1994,
"venue": "Journal of Memory and Language",
"volume": "33",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trueswell, J., M. T~nnenh&us, S. Garnsey (1994). Semantic Influences on Parsing: Use of Thematic Role Information in Syntactic Ambiguity B.es- olutlon. Journal of Memory and Language, 33, In Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "=In fact, the utility of byis neutralized in the case of POS tagging, since prepositions are uniformly tagged (e.g., using the tag IN in the Penn Treebank; see Marcus et al., 1993). 3These studies attempt to establish thematic patterns by asking large numbers of subjects to answer questions like \"How typical is it for a van to be recognized by someone",
"authors": [
{
"first": "J",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmucci",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Meteer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "P~amshaw",
"suffix": ""
}
],
"year": 1993,
"venue": "with a rating between 1 and 7",
"volume": "19",
"issue": "",
"pages": "359--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schwartz, J. Palmucci, M. Meteer, and L. P~amshaw (1993). Coping with Ambiguity and Unknown Words through Proba- bilistic Models. Computational Linguistics, 19(2), 359-382. =In fact, the utility of byis neutralized in the case of POS tagging, since prepositions are uniformly tagged (e.g., using the tag IN in the Penn Treebank; see Marcus et al., 1993). 3These studies attempt to establish thematic patterns by asking large numbers of subjects to answer questions like \"How typical is it for a van to be recognized by someone?\" with a rating between 1 and 7.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Figure h Examples of Atoms",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "2. A new active edge may be created by unifying theLHS of a rule with the first unseen constituent of some active edge in the chart (top down rule invocation).",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Extension of an Active Edge by an Inactive EdgeFigure 2depicts the extension of the active EDGE1 with the inactive EDGE2. The characters represent feature structures, and the ovular nodes on the right end of each edge represent activation level. The parser tries to unify C', the mother node of EDGE2, with C, the first needed constituent of EDGE1. If this unification succeeds, the parser builds the extended edge, EDGE3 (where C Ua C' produces C\"). The activation of the new edge is a function of the strength of the unification and the current activations of EDGE1 and EDGE2:",
"uris": null
}
}
}
}