| { |
| "paper_id": "P91-1047", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:03:41.577255Z" |
| }, |
| "title": "Discovering the Lexical Features of a Language", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Pennsylvania Philadelphia", |
| "location": { |
| "postCode": "19104", |
| "region": "PA" |
| } |
| }, |
| "email": "emaihbrill@unagi.cis.upenn.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "", |
| "pdf_parse": { |
| "paper_id": "P91-1047", |
| "_pdf_hash": "", |
| "abstract": [], |
| "body_text": [ |
| { |
| "text": "This paper examines the possibility of automatically discovering the lexieal features of a language. There is strong evidence that the set of possible lexical features which can be used in a language is unbounded, and thus not innate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Lakoff [Lakoff 87 ] describes a language in which the feature -I-woman-or-fire-ordangerons-thing exists. This feature is based upon ancient folklore of the society in which it is used. If the set of possible lexieal features is indeed unbounded, then it cannot be part of the innate Universal Grammar and must be learned. Even if the set is not unbounded, the child is still left with the challenging task of determining which features are used in her language. If a child does not know a priori what lexical features are used in her language, there are two sources for acquiring this information: semantic and syntactic cues. A learner using semantic cues could recognize that words often refer to objects, actions, and properties, and from this deduce the lexical features: noun, verb and adjective. Pinker [Pinker 89] proposes that a combination of semantic cues and innate semantic primitives could account for the acquisition of verb features. He believes that the child can discover semantic properties of a verb by noticing the types of actions typically taking place when the verb is uttered. Once these properties are known, says Pinker, they can be used to reliably predict the distributional behavior of the verb. However, Gleitman [Gleitman 90] presents evidence that semantic cues axe not sufficient for a child to acquire verb features and believes that the use of this semantic information in conjunction with information about the subcategorization properties of the verb may be sufficient for learning verb features.", |
| "cite_spans": [ |
| { |
| "start": 7, |
| "end": 17, |
| "text": "[Lakoff 87", |
| "ref_id": null |
| }, |
| { |
| "start": 809, |
| "end": 820, |
| "text": "[Pinker 89]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper takes Gleitman's suggestion to the extreme, in hope of determining whether syntactic cues may not just aid in feature discovery, but may be all that is necessary. We present evidence for the sufficiency of a strictly syntax-based model for discovering Described below is a fully implemented program which takes a corpus of text as input and outputs a fairly accurate word class list for the language in question. Each word class corresponds to a lexical feature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The program runs in O(n 3) time and O(n 2) space, where n is the number of words in the lexicon.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The program is based upon a Markov model. A Markov model is defined as: An important property of Markov models is that they have no memory other than that stored in the current state. In other words, where X(t) is the value given by the model at time t, In the model we use, there is a unique state for each word in the lexicon. We are not concerned with initial state probabilities. Transition probabilities represent the probability that word b will follow a and are estimated by examining a large corpus of text. To estimate the transition probability from state a to state b:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discovering Lexical Features", |
| "sec_num": "2" |
| }, |
| { |
| "text": "1. Count the number of times b follows a in the corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discovering Lexical Features", |
| "sec_num": "2" |
| }, |
| { |
| "text": "the corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Divide this value by the number of times a occurs in", |
| "sec_num": "2." |
| }, |
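The two estimation steps above amount to computing relative bigram frequencies. A minimal sketch (an illustration, not the paper's actual implementation), assuming a pre-tokenized corpus:

```python
from collections import defaultdict

def transition_probs(corpus_tokens):
    """Estimate P(b | a) as count(a followed by b) / count(a).

    Note: the denominator counts occurrences of `a` as a predecessor,
    which matches the paper's step 2 except for the corpus-final token."""
    bigram = defaultdict(int)
    unigram = defaultdict(int)
    for a, b in zip(corpus_tokens, corpus_tokens[1:]):
        bigram[(a, b)] += 1
        unigram[a] += 1
    return {(a, b): c / unigram[a] for (a, b), c in bigram.items()}

tokens = "the dog ran the dog slept".split()
probs = transition_probs(tokens)
```

Here `probs[("the", "dog")]` is 1.0, since "dog" follows both occurrences of "the" in the toy corpus.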
| { |
| "text": "Such a model is clearly insufficient for expressing the grammar of a natural language. However, there is a great deal of information encoded in such a model about the distributional behavior of words with respect to a very local context, namely the context of immediately adjacent words. For a particular word, this information is captured in the set of transitions and transition probabilities going into and out of the state representing the word in the Markov model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Divide this value by the number of times a occurs in", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Once the transition probabilities of the model have been estimated, it is possible to discover word classes. If two states are sufficiently similar with respect to the transitions into and out of them, then it is assumed that the states are equivalent. The set of all sufficiently similar states forms a word class. By varying the level considered to be sufficiently similar, different levels of word classes can be discovered. For instance, when only highly similar states are considered equivalent, one might expect animate nouns to form a class. When the similarity requirement is relaxed, this class may expand into the class of all nouns. Once word classes are found, lexical features can be extracted by assuming that there is a feature of the language which accounts for each word class. Below is an example actually generated by the program:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Divide this value by the number of times a occurs in", |
| "sec_num": "2." |
| }, |
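One way to realize "sufficiently similar states" is to compare, for each word, the vector of its incoming and outgoing transition probabilities, merging words whose vectors exceed a similarity threshold. The sketch below uses cosine similarity and greedy single-link merging; the paper does not specify its similarity measure, so these particular choices are illustrative assumptions:

```python
import numpy as np

def word_classes(words, trans, threshold):
    """Cluster words whose transition-probability vectors (outgoing and
    incoming contexts concatenated) have cosine similarity > threshold."""
    idx = {w: i for i, w in enumerate(words)}
    n = len(words)
    vec = np.zeros((n, 2 * n))
    for (a, b), p in trans.items():
        vec[idx[a], idx[b]] = p          # outgoing context of a
        vec[idx[b], n + idx[a]] = p      # incoming context of b
    # Union-find style merging of sufficiently similar states.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i
    norms = np.linalg.norm(vec, axis=1)
    for i in range(n):
        for j in range(i + 1, n):
            if norms[i] and norms[j]:
                sim = vec[i] @ vec[j] / (norms[i] * norms[j])
                if sim > threshold:
                    parent[find(j)] = find(i)
    classes = {}
    for i, w in enumerate(words):
        classes.setdefault(find(i), []).append(w)
    return list(classes.values())
```

Lowering `threshold` coarsens the classes, mirroring the progression from (HE, SHE) to the full class of nominative pronouns described above.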
| { |
| "text": "With very strict state similarity requirements, HE and SHE form a class. As the similarity requirement is relaxed, the class grows to include I, forming the class of singular nominative pronouns. Upon further relaxation, THEY and WE form a class. Next, (HE, SHE, I) and (THEY, WE) collapse into a single class, the class of nominative pronouns. YOU and IT collapse into the class of pronouns which are both nominative and accusative. Note that next, YOU and IT merge with the class of nominative pronouns. This is because the program currently deals with bimodals by eventually assigning them to the class whose characteristics they exhibit most strongly. For another example of this, see HER below. This work is still in progress, and a number of different directions are being pursued. We are currently attempting to automatically acquire the suffixes of a language, and then trying to class words based upon how they distribute with respect to suffixes.", |
| "cite_spans": [ |
| { |
| "start": 253, |
| "end": 265, |
| "text": "(HE, SHE, I)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Divide this value by the number of times a occurs in", |
| "sec_num": "2." |
| }, |
| { |
| "text": "One problem with this work is that it is difficult to judge results. One can eye the results and see that the lexical features found seem to be correct, but how can we judge that the features are indeed the correct ones? How can one set of hypothesized features meaningfully be compared to another set? We are currently working on an information-theoretic metric, similar to that proposed by Jelinek [Jelinek 90] for scoring probabilistic context-free grammars, to score the quality of hypothesized lexical feature sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Divide this value by the number of times a occurs in", |
| "sec_num": "2." |
| } |
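One concrete instantiation of such a metric, offered here only as an illustrative assumption (it is not Jelinek's exact proposal), is to score a hypothesized feature set by the held-out likelihood of a class-based bigram model built from it:

```python
import math

def class_bigram_log2prob(tokens, word_to_class, class_trans, emit):
    """Log2 likelihood of a token sequence under a class-based bigram model,
    approximating P(w_t | w_{t-1}) by P(c_t | c_{t-1}) * P(w_t | c_t).
    Higher held-out likelihood suggests better word classes."""
    lp = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        cp, cc = word_to_class[prev], word_to_class[cur]
        lp += math.log2(class_trans[(cp, cc)] * emit[cur])
    return lp
```

Two class assignments can then be compared on the same held-out text: the one giving the higher score compresses the data better.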
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Frequency Analysis o.f English Usage: Le~c.icon and Grammar", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Francis", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Kucera", |
| "suffix": "" |
| } |
| ], |
| "year": 1982, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "82] Francis, W. and H. Kucera. (1982) Frequency Anal- ysis o.f English Usage: Le~c.icon and Grammar.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "The Structural Sources of Verb Meanings", |
| "authors": [ |
| { |
| "first": "Lila", |
| "middle": [], |
| "last": "G|eitman", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Language Acquisition", |
| "volume": "", |
| "issue": "1", |
| "pages": "3--55", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G|eitman, Lila. (1990) \"The Structural Sources of Verb Meanings.\" Language Acquisition, Voltmae 1, pp. 3-55.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Basic Methods of Probahilistic Context Free Grvannmrs", |
| "authors": [ |
| { |
| "first": "Structural", |
| "middle": [], |
| "last": "Lingulstics", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Chicago", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Jellnek", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "D L" |
| ], |
| "last": "Lafferty & R", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "I.B.M. Technical Report", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Structural Lingulstics. Chicago: University of Chicago Press. [Jelinek 90] Jellnek, F., J.D. Lafferty & R.L. Mercer. (1990) \"Basic Methods of Probahilistic Context Free Grvannmrs.\" I.B.M. Technical Report, RC 16374.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Women, Fire and Dangerous Things: What Categories Reveal About the Mind", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Lakoff", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Pinker, S. Learnability and Cognition", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lakoff, G. (1987) Women, Fire and Dangerous Things: What Categories Reveal About the Mind. Chicago: University of Chicago Press. [Pinker 89] Pinker, S. Learnability and Cognition. Cambridge: MIT Press.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "state probabilities init(x) --3. Transition probabilities trans(x,~)", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "text": "t) = ~tt [ X(t --1) = at-l)", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "TABREF0": { |
| "text": "and mass nouns. But while meaning specifies the core of a word class, it does not specify precisely what can and cannot be a member of a class.For instance, furniture is a mass noun in English, but is a count noun in French. While the meaning of furniture cannot be sufficient for determining whether it is a count or mass noun, the distribution of the word Call.", |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"2\">the lexical features of a language. The work is based</td></tr><tr><td colspan=\"2\">upon the hypothesis that whenever two words are se-</td></tr><tr><td colspan=\"2\">mantically dissimilar, this difference will manifest it-</td></tr><tr><td>self in the syntax via</td><td>lexical distribution (in a sense,</td></tr><tr><td>playing out the notion</td><td>of distributional analysis [Harris</td></tr><tr><td>51]). Most, if not all,</td><td>features have a semantic basis.</td></tr><tr><td>For instance, there is</td><td>a clear semantic difference be-</td></tr><tr><td>tween most count</td><td/></tr><tr><td>-89-</td><td/></tr><tr><td>C0031 PRI.</td><td/></tr></table>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF1": { |
| "text": "This algorithm was run on a Markov model trained on the Brown Corpus, a corpus of approximately one million words[Francis 82]. The results, although preliminary, are very encouraging. These are a few of the word classes found by the program:", |
| "type_str": "table", |
| "content": "<table><tr><td>\u2022 CAME WENT</td></tr><tr><td>\u2022 THEM ME HIM US</td></tr><tr><td>\u2022 HER HIS</td></tr><tr><td>\u2022 FOR ON BY IN WITH FROM AT</td></tr><tr><td>\u2022 THEIR MY OUR YOUR ITS</td></tr><tr><td>\u2022 ANY MANY EACH SOME</td></tr><tr><td>\u2022 MAY WILL COULD MIGHT WOULD CAN</td></tr><tr><td>SHOULD MUST</td></tr><tr><td>\u2022 FIRST LAST</td></tr><tr><td>\u2022 LITTLE MUCH</td></tr><tr><td>\u2022 MEN PEOPLE MAN</td></tr></table>", |
| "num": null, |
| "html": null |
| } |
| } |
| } |
| } |