{
"paper_id": "P95-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:34:05.237228Z"
},
"title": "Acquiring a Lexicon from Unsegmented Speech",
"authors": [
{
"first": "Carl",
"middle": [],
"last": "De Marcken",
"suffix": "",
"affiliation": {},
"email": "cgdemarc@ai.mit.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present work-in-progress on the machine acquisition of a lexicon from sentences that are each an unsegmented phone sequence paired with a primitive representation of meaning. A simple exploratory algorithm is described, along with the direction of current work and a discussion of the relevance of the problem for child language acquisition and computer speech recognition.",
"pdf_parse": {
"paper_id": "P95-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "We present work-in-progress on the machine acquisition of a lexicon from sentences that are each an unsegmented phone sequence paired with a primitive representation of meaning. A simple exploratory algorithm is described, along with the direction of current work and a discussion of the relevance of the problem for child language acquisition and computer speech recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We are interested in how a lexicon of discrete words can be acquired from continuous speech, a problem fundamental both to child language acquisition and to the automated induction of computer speech recognition systems; see (Olivier, 1968; Wolff, 1982; Cartwright and Brent, 1994) for previous computational work in this area. For the time being, we approximate the problem as induction from phone sequences rather than acoustic pressure, and assume that learning takes place in an environment where simple semantic representations of the speech intent are available to the acquisition mechanism.",
"cite_spans": [
{
"start": 225,
"end": 240,
"text": "(Olivier, 1968;",
"ref_id": "BIBREF3"
},
{
"start": 241,
"end": 253,
"text": "Wolff, 1982;",
"ref_id": "BIBREF5"
},
{
"start": 254,
"end": 281,
"text": "Cartwright and Brent, 1994)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For example, we approximate the greater problem as that of learning from inputs like Phon. Input: /~raebltslne~ b~W t/ Sem. Input:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "{ BOAT A IN RABBIT THE BE }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(The rabbit's in a boat.) where the semantic input is an unordered set of identifiers corresponding to word paradigms. Obviously the artificial pseudo-semantic representations make the problem much easier: we experiment with them as a first step, somewhere between learning language \"from a radio\" and providing an unambiguous textual transcription, as might be used for training a speech recognition system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal is to create a program that, after training on many such pairs, can segment a new phonetic utterance into a sequence of morpheme identifiers. Such output could be used as input to many grammar acquisition programs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We have implemented a simple algorithm as an exploratory effort. It maintains a single dictionary, a set of words. Each word consists of a phone sequence and a set of sememes (semantic symbols). Initially, the dictionary is empty. When presented with an utterance, the algorithm goes through the following sequence of actions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Simple Prototype",
"sec_num": "2"
},
{
"text": "\u2022 It attempts to cover (\"parse\") the utterance phones and semantic symbols with a sequence of words from the dictionary, each word offset a certain distance into the phone sequence, with words potentially overlapping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Simple Prototype",
"sec_num": "2"
},
{
"text": "\u2022 It then creates new words that account for uncovered portions of the utterance, and adjusts words from the parse to better fit the utterance. \u2022 Finally, it reparses the utterance with the old dictionary and the new words, and adds the new words to the dictionary if the resulting parse covers the utterance well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Simple Prototype",
"sec_num": "2"
},
{
"text": "Occasionally, the program removes rarely-used words from the dictionary, and removes words which can themselves be parsed. The general operation of the program should be made clearer by the following two examples. In the first, the program starts with an empty dictionary, early in the acquisition process, and receives the simple utterance/nina/{ NINA } (a child's name). Naturally, it is unable to parse the input. Having successfully parsed the input, it adds the new word to the dictionary. Later in the acquisition process, it encounters the sentence you kicked off the sock, when the dictionary contains (among other words) /yu/ { YOU }, /~a/ { THE }, and /rsuk/ { SOCK }. /sak/ { SOCK } to the dictionary. /rsuk/ { SOCK }, not used in this analysis, is eventually discarded from the dictionary for lack of use. /klkt~f/{ KICK OFF } is later found to be parsable into two subwords, and also discarded. One can view this procedure as a variant of the expectation-maximization (Dempster et al., 1977) procedure, with the parse of each utterance as the hidden variables. There is currently no preference for which words are used in a parse, save to minimize mismatches and unparsed portions of the input, but obviously a word grammar could be learned in conjunction with this acquisition process, and used as a disambiguation step.",
"cite_spans": [
{
"start": 981,
"end": 1004,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Simple Prototype",
"sec_num": "2"
},
{
"text": "To test the algorithm, we used 34438 utterances from the Childes database of mothers' speech to children (MacWhinney and Snow, 1985; Suppes, 1973) . These text utterances were run through a publicly available text-to-phone engine. A semantic dictionary was created by hand, in which each root word from the utterances was mapped to a corresponding sememe. Various forms of a root (\"see\", \"saw\", \"seeing\") all map to the same sememe, e.g., SEE . Semantic representations for a given utterance are merely unordered sets of sememes generated by taking the union of the sememe for each word in the utterance. Figure 1 contains the first 6 utterances from the database.",
"cite_spans": [
{
"start": 105,
"end": 132,
"text": "(MacWhinney and Snow, 1985;",
"ref_id": "BIBREF2"
},
{
"start": 133,
"end": 146,
"text": "Suppes, 1973)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 605,
"end": 613,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tests and Results",
"sec_num": "3"
},
{
"text": "We describe the results of a single run of the algorithm, trained on one exposure to each of the 34438 utterances, containing a total of 2158 different stems. The final dictionary contains 1182 words, where some entries are different forms of a common stem. 82 of the words in the dictionary have never been used in a good parse. We eliminate these words, leaving 1100. Figure 2 presents some entries in the final dictionary, and figure 3 presents all 21 (2%) of the dictionary entries that might be reasonably considered mistakes. Figure 1 : The first 6 utterances from the Childes database used to test the algorithm. upon a new word like ring, /rig/, use the /I~/ {} to account for most of the sound, and build a new word /r/ { RING } to cover the rest; witness something in figure 3. Most other semantically-empty affixes (plural /s/ for instance) are also properly hypothesized and disallowed, but the dictionary learns multiple entries to account for them (/eg/ \"egg\" and /egz/ \"eggs\"). The system learns synonyms (\"is\", \"was\", \"am\", ...) and homonyms (\"read\", \"red\"; \"know\", \"no\") without difficulty.",
"cite_spans": [],
"ref_spans": [
{
"start": 370,
"end": 378,
"text": "Figure 2",
"ref_id": null
},
{
"start": 532,
"end": 540,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tests and Results",
"sec_num": "3"
},
{
"text": "Removing the restriction on empty semantics, and also setting the semantics of the function words a, an, the, that and of to {}, the most common empty words learned are given in figure 4. The ring problem surfaces: among other words learned are now /k/ { CAR } and /br/ { BRING }. To fix such problems, it is obvious that more constraints on morpheme order must be incorporated into the parsing process, perhaps in the form of a statistical grammar acquired simultaneously with the dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phones",
"sec_num": null
},
{
"text": "The algorithm described above is extremely simple, as was the input fed to it. In particular,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Current Directions",
"sec_num": "4"
},
{
"text": "\u2022 The input was phonetically oversimplified, each word pronounced the same way each time it occurred, regardless of environment. There was no phonological noise and no cross-word effects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Current Directions",
"sec_num": "4"
},
{
"text": "\u2022 The semantic representations were not only noise free and unambiguous, but corresponded directly to the words in the utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Current Directions",
"sec_num": "4"
},
{
"text": "To better investigate more realistic formulations of the acquisition problem, we are extending our coverage to actual phonetic transcriptions of speech, by allowing for various phonological processes and noise, and by building in probabilistic models of morphology and syntax. We are further reducing the information present in the semantic input by removing all function word symbols and merging various content symbols to encompass several word paradigms. We hope to transition to phonemic input produced by a phoneme-based speech recognizer in the near future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Current Directions",
"sec_num": "4"
},
{
"text": "Finally, we are instituting an objective test measure: rather than examining the dictionary directly, we will compare segmentation and morphemelabeling to textual transcripts of the input speech. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Current Directions",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "This research is supported by NSF grant 9217041-ASC and ARPA under the HPCC program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": " Figure 2 : Dictionary entries. The left 10 are the 10 words used most frequently in good parses. The right 10 were selected randomly from the 1100 entries. Figure 3 : All of the significant dictionary errors. Some of them, like /J'iz/, are conglomerations that should have been divided. Others, like /t/, /wo/, and /don/ demonstrate how the system compensates for the morphological irregularity of English contractions. The /I~/ problem is discussed in the text; misanalysis of the role of /I~/ also manifests itself on something.",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 9,
"text": "Figure 2",
"ref_id": null
},
{
"start": 157,
"end": 165,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "The most obvious error visible in figure 3 is the suffix -ing (/I~/), which should have an empty sememe set. Indeed, such a word is properly hypothesized, but a special mechanism prevents semantically empty words from being added to the dictionary. Without this mechanism, the system would chance",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "/iv/{ BE } /z~/ { YOU }",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Segmenting speech without a lexicon: Evidence for a bootstrapping model of lexical acquisition",
"authors": [
{
"first": "Timothy",
"middle": [
"Andrew"
],
"last": "Cartwright",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"R"
],
"last": "Brent",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. of the 16th Annual Meeting of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Andrew Cartwright and Michael R. Brent. 1994. Segmenting speech without a lexicon: Evi- dence for a bootstrapping model of lexical acqui- sition. In Proc. of the 16th Annual Meeting of the Cognitive Science Society, Hillsdale, New Jersey.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Maximum likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the Royal Statistical Society, B",
"volume": "",
"issue": "39",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B(39):1-38.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The child language data exchange system",
"authors": [
{
"first": "B",
"middle": [],
"last": "Macwhinney",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Snow",
"suffix": ""
}
],
"year": 1985,
"venue": "Journal of Child Language",
"volume": "12",
"issue": "",
"pages": "271--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. MacWhinney and C. Snow. 1985. The child lan- guage data exchange system. Journal of Child Language, 12:271-296.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Stochastic Grammars and Language Acquisition Mechanisms",
"authors": [
{
"first": "Donald Cort",
"middle": [],
"last": "Olivier",
"suffix": ""
}
],
"year": 1968,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donald Cort Olivier. 1968. Stochastic Grammars and Language Acquisition Mechanisms. Ph.D. thesis, Harvard University, Cambridge, Mas- sachusetts.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The semantics of children's language",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Suppes",
"suffix": ""
}
],
"year": 1973,
"venue": "American Psychologist",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Suppes. 1973. The semantics of children's language. American Psychologist.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Language acquisition, data compression and generalization",
"authors": [
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Wolff",
"suffix": ""
}
],
"year": 1982,
"venue": "Language and Communication",
"volume": "2",
"issue": "1",
"pages": "57--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Gerald Wolff. 1982. Language acquisition, data compression and generalization. Language and Communication, 2(1):57-89.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "The most common semantically empty words in the final dictionary."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "5"
}
}
}
}