| { |
| "paper_id": "P12-1020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:28:22.089437Z" |
| }, |
| "title": "Bootstrapping a Unified Model of Lexical and Phonetic Acquisition", |
| "authors": [ |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Elsner", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "melsner0@gmail.com" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "During early language acquisition, infants must learn both a lexicon and a model of phonetics that explains how lexical items can vary in pronunciation-for instance \"the\" might be realized as [Di] or [D@]. Previous models of acquisition have generally tackled these problems in isolation, yet behavioral evidence suggests infants acquire lexical and phonetic knowledge simultaneously. We present a Bayesian model that clusters together phonetic variants of the same lexical item while learning both a language model over lexical items and a log-linear model of pronunciation variability based on articulatory features. The model is trained on transcribed surface pronunciations, and learns by bootstrapping, without access to the true lexicon. We test the model using a corpus of child-directed speech with realistic phonetic variation and either gold standard or automatically induced word boundaries. In both cases modeling variability improves the accuracy of the learned lexicon over a system that assumes each lexical item has a unique pronunciation.", |
| "pdf_parse": { |
| "paper_id": "P12-1020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "During early language acquisition, infants must learn both a lexicon and a model of phonetics that explains how lexical items can vary in pronunciation-for instance \"the\" might be realized as [Di] or [D@]. Previous models of acquisition have generally tackled these problems in isolation, yet behavioral evidence suggests infants acquire lexical and phonetic knowledge simultaneously. We present a Bayesian model that clusters together phonetic variants of the same lexical item while learning both a language model over lexical items and a log-linear model of pronunciation variability based on articulatory features. The model is trained on transcribed surface pronunciations, and learns by bootstrapping, without access to the true lexicon. We test the model using a corpus of child-directed speech with realistic phonetic variation and either gold standard or automatically induced word boundaries. In both cases modeling variability improves the accuracy of the learned lexicon over a system that assumes each lexical item has a unique pronunciation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Infants acquiring their first language confront two difficult cognitive problems: building a lexicon of word forms, and learning basic phonetics and phonology. The two tasks are closely related: knowing what sounds can substitute for one another helps in clustering together variant pronunciations of the same word, while knowing the environments in which particular words can occur helps determine which sound changes are meaningful and which are not (Feldman et al., 2009) . For instance, if an infant who already knows the word ju \"you\" encounters a new word jd, they must decide whether it is a new lexical item or a variant of the word they already know. Evidence for the correct conclusion comes from the pronunciation (many English vowels are reduced to [d] in unstressed positions) and the context-if the next word is \"want\", \"you\" is a plausible choice. To date, most models of infant language learning have focused on either lexicon-building or phonetic learning in isolation. For example, many models of word segmentation implicitly or explicitly build a lexicon while segmenting the input stream of phonemes into word tokens; in nearly all cases the phonemic input is created from an orthographic transcription using a phonemic dictionary, thus abstracting away from any phonetic variability (Brent, 1999; Venkataraman, 2001; Swingley, 2005; Goldwater et al., 2009, among others) . As illustrated in Figure 1 , these models attempt to infer line (a) from line (d). However, (d) is an idealization: real speech has variability, and behavioral evidence suggests that infants are still learning about the phonetics and phonology of their language even after beginning to segment words, rather than learning to neutralize the variations first and acquiring the lexicon afterwards (Feldman et al., 2009, and references therein) .", |
| "cite_spans": [ |
| { |
| "start": 464, |
| "end": 474, |
| "text": "al., 2009)", |
| "ref_id": null |
| }, |
| { |
| "start": 761, |
| "end": 764, |
| "text": "[d]", |
| "ref_id": null |
| }, |
| { |
| "start": 1304, |
| "end": 1317, |
| "text": "(Brent, 1999;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1318, |
| "end": 1337, |
| "text": "Venkataraman, 2001;", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 1338, |
| "end": 1353, |
| "text": "Swingley, 2005;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 1354, |
| "end": 1391, |
| "text": "Goldwater et al., 2009, among others)", |
| "ref_id": null |
| }, |
| { |
| "start": 1788, |
| "end": 1834, |
| "text": "(Feldman et al., 2009, and references therein)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1412, |
| "end": 1420, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Based on this evidence, a more realistic model of early language acquisition should propose a method of inferring the intended forms (Figure 1a ) from the unsegmented surface forms (1c) while also learning a model of phonetic variation relating the intended and surface forms (a) and (b). Previous models with similar goals have learned from an artificial corpus with a small vocabulary (Driesen et al., 2009; R\u00e4s\u00e4nen, 2011) or have modeled variability only in vowels (Feldman et al., 2009) ; to our knowledge, this paper is the first to use a naturalistic infant-directed corpus while modeling variability in all segments, and to incorporate word-level context (a bigram language model). Our main contribution is a joint lexicalphonetic model that infers intended forms from segmented surface forms; we test the system using input with either gold standard word boundaries or boundaries induced by an existing unsupervised segmentation model (Goldwater et al., 2009) . We show that in both cases modeling variability improves the accuracy of the learned lexicon over a system that assumes each intended form has a unique surface form.", |
| "cite_spans": [ |
| { |
| "start": 387, |
| "end": 409, |
| "text": "(Driesen et al., 2009;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 410, |
| "end": 424, |
| "text": "R\u00e4s\u00e4nen, 2011)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 468, |
| "end": 490, |
| "text": "(Feldman et al., 2009)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 943, |
| "end": 967, |
| "text": "(Goldwater et al., 2009)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 133, |
| "end": 143, |
| "text": "(Figure 1a", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our model is conceptually similar to those used in speech recognition and other applications: we assume the intended tokens are generated from a bigram language model and then distorted by a noisy channel, in particular a log-linear model of phonetic variability. But unlike speech recognition, we have no intended-form, surface-form training pairs to train the phonetic model, nor even a dictionary of intended-form strings to train the language model. Instead, we initialize the noise model using feature weights based on universal linguistic principles (e.g., a surface phone is likely to share articulatory features with the intended phone) and use a bootstrapping process to iteratively infer the intended forms and retrain the language model and noise model. While we do not claim that the particular inference mechanism we use is cognitively plausible, our positive results further support the claim that infants can and do acquire phonetics and the lexicon in concert.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our work is inspired by the lexical-phonetic model of Feldman et al. (2009) . They extend a model for clustering acoustic tokens into phonetic categories (Vallabha et al., 2007) by adding a lexical level that simultaneously clusters word tokens (which contain the acoustic tokens) into lexical entries. Including the lexical level improves the model's phonetic categorization, and a follow-up study on artificial language learning (Feldman, 2011) supports the claim that human learners use lexical knowledge to distinguish meaningful from unimportant phonetic contrasts. Feldman et al. (2009) use a real-valued representation for vowels (formant values), but assume no variability in consonants, and treat each word token independently. In contrast, our model uses a symbolic representation for sounds, but models variability in all segment types and incorporates a bigram word-level language model. To our knowledge, the only other lexicon-building systems that also learn about phonetic variability are those of Driesen et al. (2009) and R\u00e4s\u00e4nen (2011). These systems learn to represent lexical items and their variability from a discretized representation of the speech stream, but they are tested on an artificial corpus with only 80 vocabulary items that was constructed so as to \"avoid strong word-to-word dependencies\" (R\u00e4s\u00e4nen, 2011). Here, we use a naturalistic corpus, demonstrating that lexical-phonetic learning is possible in this more general setting and that word-level context information is important for doing so.", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 75, |
| "text": "Feldman et al. (2009)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 154, |
| "end": 177, |
| "text": "(Vallabha et al., 2007)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 431, |
| "end": 446, |
| "text": "(Feldman, 2011)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 571, |
| "end": 592, |
| "text": "Feldman et al. (2009)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1014, |
| "end": 1035, |
| "text": "Driesen et al. (2009)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Several other related systems work directly from the acoustic signal and many of these do use naturalistic corpora. However, they do not learn at both the lexical and phonetic/acoustic level. For example, Park and Glass (2008) , Aimetti (2009) , Jansen et al. (2010) , and McInnes and Goldwater (2011) present lexicon-building systems that use hard-coded acoustic similarity measures rather than learning about variability, and they only extract and cluster a few frequent words. On the phonetic side, Varadarajan et al. (2008) and describe systems that learn phone-like units but without the benefit of top-down information.", |
| "cite_spans": [ |
| { |
| "start": 205, |
| "end": 226, |
| "text": "Park and Glass (2008)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 229, |
| "end": 243, |
| "text": "Aimetti (2009)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 246, |
| "end": 266, |
| "text": "Jansen et al. (2010)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 502, |
| "end": 527, |
| "text": "Varadarajan et al. (2008)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A final line of related work is on word segmentation. In addition to the models mentioned in Section 1, which use phonemic input, a few models of word segmentation have been tested using phonetic input (Fleck, 2008; Rytting, 2007; Daland and Pierrehumbert, 2010) . However, they do not cluster segmented word tokens into lexical items (none of these models even maintains an explicit lexicon), nor do they model or learn from phonetic variation in the input.", |
| "cite_spans": [ |
| { |
| "start": 202, |
| "end": 215, |
| "text": "(Fleck, 2008;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 216, |
| "end": 230, |
| "text": "Rytting, 2007;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 231, |
| "end": 262, |
| "text": "Daland and Pierrehumbert, 2010)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our lexical-phonetic model is defined using the standard noisy channel framework: first a sequence of intended word tokens is generated using a language model, and then each token is transformed by a probabilistic finite-state transducer to produce the observed surface sequence. In this section, we present the model in a hierarchical Bayesian framework to emphasize its similarity to existing models, in particular those of Feldman et al. (2009) and Goldwater et al. (2009) . In our actual implementation, however, we use approximation and MAP point estimates to make our inference process more tractable; we discuss these simplifications in Section 4.", |
| "cite_spans": [ |
| { |
| "start": 426, |
| "end": 447, |
| "text": "Feldman et al. (2009)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 452, |
| "end": 475, |
| "text": "Goldwater et al. (2009)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical-phonetic model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our observed data consists of a (segmented) sequence of surface words s 1 . . . s n . We wish to recover the corresponding sequence of intended words x 1 . . . x n . As shown in Figure 2 , s i is produced from x i by a transducer T : s i \u223c T (x i ), which models phonetic changes. Each x i is sampled from a distribution \u03b8 which represents word frequencies, and its left and right context words, l i and r i , are drawn from distributions conditioned on x i , in order to capture information about the environments in which x i appears:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 178, |
| "end": 186, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lexical-phonetic model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "l i \u223c P L (x i ), r i \u223c P R (x i ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical-phonetic model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Because the number of word types is not known in advance, \u03b8 is drawn from a Dirichlet process DP (\u03b1), and P L (x) and P R (x) have Pitman-Yor priors with concentration parameter 0 and discount d (Teh, 2006 ).", |
| "cite_spans": [ |
| { |
| "start": 195, |
| "end": 205, |
| "text": "(Teh, 2006", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical-phonetic model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our generative model of x i is unusual for two reasons. First, we treat each x i independently rather than linking them via a Markov chain. This makes the model deficient, since l i overlaps with x i\u22121 and so forth, generating each token twice. During inference, however, we will never compute the joint probability of all the data at once, only the probabilities of subsets of the variables with particular intended word forms u and v. As long as no two of these words are adjacent, the deficiency will have no effect. We make this independence assumption for computational reasons-when deciding whether to merge u and v into a single lexical entry, we compute the change in estimated probability for their contexts, but not the effect on other words for which u and v themselves appear as context words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical-phonetic model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Also unusual is that we factor the joint probability (l, x, r) as p(x)p(l|x)p(r|x) rather than as a leftto-right chain p(l)p(x|l)p(r|x). Given our independence assumption above, these two quantities are mathematically equivalent, so the difference matters only because we are using smoothed estimates. Our factorization leads to a symmetric treatment of left and right contexts, which simplifies implementation: we can store all the context parameters locally as P L (\u2022|x) rather than distributed over various P (x|\u2022).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical-phonetic model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Next, we explain our transducer T . A weighted finite-state transducer (WFST) is a variant of a finitestate automaton (Pereira et al., 1994) that reads an input string symbol-by-symbol and probabilistically produces an output string; thus it can be used to specify a conditional probability on output strings given an input. Our WFST ( Figure 3 ) computes a weighted edit distance, and is implemented using OpenFST (Allauzen et al., 2007) . It contains a state for each triplet of (previous, current, next) phones; conditioned on this state, it emits a character output which can be thought of as a possible surface realization of current in its particular environment. The output can be the empty string , in which case current is deleted. The machine can also insert characters at any point in the string, by transitioning to an insert state (previous, , current) and then returning while emitting some new character.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 140, |
| "text": "(Pereira et al., 1994)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 415, |
| "end": 438, |
| "text": "(Allauzen et al., 2007)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 336, |
| "end": 344, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lexical-phonetic model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The transducer is parameterized by the probabilities of the arcs. For instance, all arcs leaving the state (\u2022, h, i) consume the character h and emit some character c with probability p(c|\u2022, h, i). Following Figure 3 : The fragment of the transducer responsible for input string [Di] \"the\". \"...\" represents an output arc for each possible character, including the empty string ; \u2022 is the word boundary marker. Dreyer et al. (2008) , we parameterize these distributions with a log-linear model. The model features are based on articulatory phonetics and distinguish three dimensions of sound production: voicing, place of articulation and manner of articulation.", |
| "cite_spans": [ |
| { |
| "start": 411, |
| "end": 431, |
| "text": "Dreyer et al. (2008)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 208, |
| "end": 216, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lexical-phonetic model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Features are generated from four positional templates ( Figure 4 ): (curr)\u2192out, (prev, curr)\u2192out, (curr, next)\u2192out and (prev, curr, next)\u2192out. Each template is instantiated once per articulatory dimension, with prev, curr, next and out replaced by their values for that dimension: for instance, there are two voicing values, voiced and unvoiced 1 and the (curr)\u2192out template for [h] producing [d] would be instantiated as (voiced)\u2192voiced. To capture trends specific to particular sounds, each template is instantiated again using the actual symbol for curr and articulatory values for everything else (e.g., [h]\u2192unvoiced). An additional template, \u2192out, captures the marginal frequency of the output symbol. There are also faithfulness features, same-sound, same-voice, same-place and same-manner which check if curr is exactly identical to out or shares the exact value of a particular feature.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 56, |
| "end": 64, |
| "text": "Figure 4", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lexical-phonetic model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our choice of templates and features is based on standard linguistic principles: we expect that changing only a single articulatory dimension will be more acceptable than changing several, and that the articulatory dimensions of context phones are important because of assimilatory and dissimilatory processes (Hayes, 2011) . In modern phonetics and phonology, these generalizations are usually expressed as Optimality Theory constraints; log-linear models such as ours have previously been used to implement stochas- tic Optimality Theory models (Goldwater and Johnson, 2003; Hayes and Wilson, 2008) .", |
| "cite_spans": [ |
| { |
| "start": 310, |
| "end": 323, |
| "text": "(Hayes, 2011)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 547, |
| "end": 576, |
| "text": "(Goldwater and Johnson, 2003;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 577, |
| "end": 600, |
| "text": "Hayes and Wilson, 2008)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical-phonetic model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Global optimization of the model posterior is difficult; instead we use Viterbi EM (Spitkovsky et al., 2010; Allahverdyan and Galstyan, 2011). We begin with a simple initial transducer and alternate between two phases: clustering together surface forms, and reestimating the transducer parameters. We iterate this procedure until convergence (when successive clustering phases find nearly the same set of merges); this tends to take about 5 or 6 iterations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In our clustering phase, we improve the model posterior as much as possible by greedily making type merges, where, for a pair of intended word forms u and v, we replace all instances of x i = u with x i = v. We maintain the invariant that each intended word form's most common surface form must be itself; this biases the model toward solutions with low distortion in the transducer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We write the change in the log posterior probability of the model resulting from a type merge of u to v as \u2206(u, v), which factors into two terms, one depending on the surface string and the transducer, and the other depending on the string of intended words. In order to ensure that each intended word form's most common surface form is itself, we define \u2206(u, v) = \u2212\u221e if u is more common than v.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We write the log probability of x being transduced to s as T (s|x). If we merge u into v, we no longer need to produce any surface forms from u, but instead we must derive them from v. If #(\u2022) counts the occurrences of some event in the current state of the model, the transducer component of \u2206 is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2206 T = s #(x i =u, s i =s)(T (s|v) \u2212 T (s|u)) (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "This term is typically negative, voting against a merge, since u is more similar to itself than to v.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The language modeling term relating to the intended string again factors into multiple components. The probability of a particular l i , x i , r i can be broken into p(x i )p(l i |x i )p(r i |x i ) according to the model. We deal first with the p(x i ) unigram term, considering all tokens where x i \u2208 {u, v} and computing the probability p u = p(x i = u|x i \u2208 {u, v}). By definition of a Dirichlet process, the marginal over a subset of the variables will be Dirichlet, so for \u03b1 > 1 we have the MAP estimate:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p u = #(x i =u) + \u03b1 \u2212 1 #(x i \u2208 {u, v}) + 2(\u03b1 \u2212 1)", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "p v = p(x i = v|x i \u2208 {u, v}) is computed similarly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "If we decide to merge u into v, however, the probability p(x i = v|x i \u2208 {u, v}) becomes 1. The change in log-probability resulting from the merge is closely related to the entropy of the distribution:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2206 U = \u2212#(x i =u) log(p u )\u2212#(x i =v) log(p v )", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "This change must be positive and favors merging. Next, we consider the change in probability from the left contexts (the derivations for right contexts are equivalent). If u and v are separate words, we generate their left contexts from different distributions p(l|u) and p(l|v), while if they are merged, we must generate all the contexts from the same distribution p(l|{u, v}). This change is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2206 L = l #(l, u){log(p(l|{u, v})) \u2212 log(p(l|u)} + l #(l, v){log(p(l|{u, v})) \u2212 log(p(l|v)}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In a full Bayesian model, we would integrate over the parameters of these distributions; instead, we use Kneser-Ney smoothing (Kneser and Ney, 1995) which has been shown to approximate the MAP solution of a hierarchical Pitman-Yor model (Teh, 2006; Goldwater et al., 2006) . The Kneser-Ney discount 2 d is a tunable parameter of our system, and controls whether the term favors merging or not. If d is small, p(l|u) and p(l|v) are close to their maximumlikelihood estimates, and \u2206 L is similar to a Jensen-Shannon divergence; it is always negative and discourages mergers. As d increases, however, p(l|u) for rare words approaches the prior distribution; in this case, merging two words may result in better posterior parameters than estimating both separately, since the combined estimate loses less mass to discounting.", |
| "cite_spans": [ |
| { |
| "start": 126, |
| "end": 148, |
| "text": "(Kneser and Ney, 1995)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 237, |
| "end": 248, |
| "text": "(Teh, 2006;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 249, |
| "end": 272, |
| "text": "Goldwater et al., 2006)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Because neither the transducer nor the language model are perfect models of the true distribution, they can have incompatible dynamic ranges. Often, the transducer distribution is too peaked; to remedy this, we downweight the transducer probability by \u03bb, a parameter of our model, which we set to .5. Downweighting of the acoustic model versus the LM is typical in speech recognition (Bahl et al., 1980) .", |
| "cite_spans": [ |
| { |
| "start": 384, |
| "end": 403, |
| "text": "(Bahl et al., 1980)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "To summarize, the full change in posterior is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2206(u, v) = \u2206 U + \u2206 L + \u2206 R + \u03bb\u2206 T", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "There are four parameters. The transducer regularization r = 1 and unigram prior \u03b1 = 2, which we set ad-hoc, have little impact on performance. The Kneser-Ney discount d = 2 and transducer downweight \u03bb = .5 have more influence and were tuned on development data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring merges", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In the clustering phase, we start with an initial solution in which each surface form is its own intended pronunciation and iteratively improve this solution by merging together word types, picking (approximately) the best merger at each point. We begin by computing a set of candidate mergers for each surface word type u. This step saves time by quickly rejecting mergers which are certain to get very low transducer scores. We reject a pair u, v if the difference in their length is greater than 4, or if both words are longer than 4 segments, but, when we consider them as unordered bags of segments, the Dice coefficient between them is less than .5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Clustering algorithm", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For each word u and all its candidates v, we compute \u2206(u, v) as in Equation 4. We keep track of the",
| "cite_spans": [],
| "ref_spans": [],
| "eq_spans": [],
| "section": "Clustering algorithm",
| "sec_num": "4.2"
| },
| {
| "text": "Algorithm 1 (our clustering phase). Input: vocabulary of surface forms u; C(u), the candidate intended forms of u. Output: intend(u), the intended form of u. foreach u \u2208 vocab do // initialization: v*(u) \u2190 argmax_{v \u2208 C(u)} \u2206(u, v); \u2206*(u) \u2190 \u2206(u, v*(u)); intend(u) \u2190 u; add u to queue Q with priority \u2206*(u). while top(Q) > \u2212\u221e do: u \u2190 pop(Q); recompute v*(u), \u2206*(u); if \u2206*(u) > 0 then // merge u with best merger v = v*(u): intend(u) \u2190 v; update \u2206(x, u) \u2200x : v*(x) = u; remove u from C(x) \u2200x; update \u2206(x, v) \u2200x : v*(x) = v; update \u2206(v, x) \u2200x \u2208 C(v); if updated \u2206 > \u2206* for any words then reset \u2206*, v* for those words // (these updates can increase a word's priority from \u2212\u221e); else // reject but leave in queue: \u2206*(u) \u2190 \u2212\u221e.",
| "cite_spans": [],
| "ref_spans": [],
| "eq_spans": [],
| "section": "Clustering algorithm",
| "sec_num": "4.2"
| },
| { |
| "text": "current best target v * (u) and best score \u2206 * (u), using a priority queue. At each step of the algorithm, we pop the u with the current best \u2206 * (u), recompute its scores, and then merge it with v * (u) if doing so would improve the model posterior. In an exact algorithm, we would then need to recompute most of the other scores, since merging u and v * (u) affects other words for which u and v * (u) are candidates, and also words for which they appear in the context set. However, recomputing all these scores would be extremely time-consuming. 3 Therefore, we recompute scores for only those words where the previous best merger was either u or v * (u). (If the best merge would not improve the probability, we reject it, but since its score might increase if we merge v * (u), we leave u in the queue, setting its \u2206 score to \u2212\u221e; this score will be updated if we merge v * (u).) Since we recompute the exact scores \u2206(u, v) immediately before merging u, the algorithm is guaranteed never to reduce the posterior probability. It can potentially make changes in the wrong order, since not all the \u2206s are recomputed in each step, but most changes do not affect one another, so performing them out of order has no impact. Empirically, we find that mutually exclusive changes (usually of the form (u, v) and (v, w)) tend to differ enough in initial score that they are evaluated in the correct order.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Clustering algorithm", |
| "sec_num": "4.2" |
| }, |
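The lazy best-first merging described above can be sketched as follows (a simplified illustration, not the authors' implementation: `delta(u, v)` is assumed to return the change in log posterior from mapping u to v, and the fine-grained update bookkeeping of Algorithm 1 is collapsed into recomputing a popped entry's score):

```python
import heapq

def greedy_merge(vocab, candidates, delta):
    """Best-first clustering: repeatedly merge a surface form u into its
    best-scoring candidate intended form while some merge improves the
    posterior. Heap entries may be stale, so scores are verified on pop."""
    intend = {u: u for u in vocab}

    def best(u):
        # only forms that are still their own intended form can absorb u
        opts = [v for v in candidates[u] if intend[v] == v and v != u]
        if not opts:
            return None, float("-inf")
        v = max(opts, key=lambda w: delta(u, w))
        return v, delta(u, v)

    heap = []
    for u in vocab:
        _, s = best(u)
        heapq.heappush(heap, (-s, u))  # max-heap via negated scores

    while heap:
        neg_s, u = heapq.heappop(heap)
        if intend[u] != u:
            continue                        # u was already merged away
        v, s = best(u)                      # recompute the exact score before acting
        if s == float("-inf"):
            continue                        # no live candidates remain
        if s != -neg_s:
            heapq.heappush(heap, (-s, u))   # stale entry: requeue with fresh score
            continue
        if s > 0:
            intend[u] = v                   # merge u into its best intended form
        # if s <= 0 the merge is rejected; the paper instead parks u at -inf
        # so that a later merge of v*(u) can revive it
    return intend
```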
| { |
| "text": "To train the transducer on a set of mappings between surface and intended forms, we find the maximum-probability state sequence for each mapping (another application of Viterbi EM) and extract features for each state and its output. Learning weights is then a maximum-entropy problem, which we solve using Orthant-wise Limited-memory Quasi-Newton. 4 To construct our initial transducer, we first learn weights for the marginal distribution on surface sounds by training the max-ent system with only the bias features active. Next, we manually set weights (Table 1) for insertions and deletions, which do not appear on the surface, and for faithfulness features. Other features get an initial weight of 0.",
| "cite_spans": [
| {
| "start": 348,
| "end": 349,
| "text": "4",
| "ref_id": null
| }
| ],
| "ref_spans": [
| {
| "start": 555,
| "end": 564,
| "text": "(Table 1)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training the transducer", |
| "sec_num": "4.3" |
| }, |
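The initialization described above amounts to a small feature-weight table plus a log-linear score per edit operation. A toy sketch (the weights follow Table 1, but representing an operation as a list of active feature names is our illustrative assumption, not the paper's actual feature extraction):

```python
import math

# Hand-set initial weights from Table 1; bias ("output-is-x") weights would
# instead be fit to the marginal distribution of surface sounds.
INIT_WEIGHTS = {
    "same-sound": 5.0,
    "same-place": 2.0, "same-voice": 2.0, "same-manner": 2.0,
    "insertion": -3.0,
}

def op_score(active_features, weights):
    """Unnormalized log-linear score of one edit operation: exp of the sum
    of active feature weights (unlisted features start at weight 0)."""
    return math.exp(sum(weights.get(f, 0.0) for f in active_features))
```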
| { |
| "text": "Our corpus is a processed version of the Bernstein-Ratner corpus (Bernstein-Ratner, 1987) from CHILDES (MacWhinney, 2000) , which contains orthographic transcriptions of parent-child dyads with infants aged 13-23 months. Brent and Cartwright (1996) created a phonemic version of this corpus by extracting all infant-directed utterances and converting them to a phonemic transcription using a dictionary. This version, which contains 9790 utterances (33399 tokens, 1321 types), is now standard for word segmentation, but contains no phonetic variability.",
| "cite_spans": [ |
| { |
| "start": 103, |
| "end": 121, |
| "text": "(MacWhinney, 2000)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 221, |
| "end": 248, |
| "text": "Brent and Cartwright (1996)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Since producing a close phonetic transcription of this data would be impractical, we instead construct an approximate phonetic version using information from the Buckeye corpus (Pitt et al., 2007). Buckeye is a corpus of adult-directed conversational American English, and has been phonetically transcribed [Table 1: Initial transducer weights. output-is-x: marginal p(x); output-is-0; same-sound: 5; same-{place, voice, manner}: 2; insertion: -3.]",
| "cite_spans": [
| {
| "start": 177,
| "end": 196,
| "text": "(Pitt et al., 2007)",
| "ref_id": "BIBREF28"
| }
| ],
| "ref_spans": [
| {
| "start": 308,
| "end": 315,
| "text": "Table 1",
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\"about\" ahbawt:15, bawt:9, ihbawt:4, ahbawd:4, ihbawd:4, ahbaat:2, baw:1, ahbaht:1, erbawd:1, bawd:1, ahbaad:1, ahpaat:1, bah:1, baht:1, ah:1, ahbahd:1, ehbaat:1, ahbaed:1, ihbaht:1, baot:1 \"wanna\" waanah:94, waanih:37, wahnah:16, waan:13, wahneh:8, wahnih:5, wahney:3, waanlih:3, wehnih:2, waaneh:2, waonih:2, waaah:1, wuhnih:1, wahn:1, waantah:1, waanaa:1, wowiy:1, waaih:1, wah:1, waaniy:1 Table 2 : Empirical distribution of pronunciations of \"about\" and \"wanna\" in our dataset.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 393, |
| "end": 400, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "by hand to indicate realistic pronunciation variability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "To create our phonetic corpus, we replace each phonemic word in the Bernstein-Ratner-Brent corpus with a phonetic pronunciation of that word sampled from the empirical distribution of pronunciations in Buckeye (Table 2 ). If the word never occurs in Buckeye, we use the original phonemic version. Our corpus is not completely realistic as a sample of child-directed speech. Since each pronunciation is sampled independently, it lacks coarticulation and prosodic effects, and the distribution of pronunciations is derived from adult-directed rather than child-directed speech. Nonetheless, it represents phonetic variability more realistically than the Bernstein-Ratner-Brent corpus, while still maintaining the lexical characteristics of infant-directed speech (as compared to the Buckeye corpus, with its much larger vocabulary and more complex language model).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 210, |
| "end": 218, |
| "text": "(Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
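The corpus construction step can be sketched as follows (an illustrative sketch, not the authors' preprocessing code; `pron_counts` maps each word to its empirical pronunciation counts, as in Table 2):

```python
import random

def phonetic_corpus(utterances, pron_counts, seed=0):
    """Replace each phonemic word with a surface pronunciation sampled from
    its empirical distribution; words unseen in the pronunciation corpus
    keep their phonemic form."""
    rng = random.Random(seed)
    out = []
    for utt in utterances:
        surface = []
        for word in utt:
            dist = pron_counts.get(word)
            if dist:
                prons = list(dist)
                counts = [dist[p] for p in prons]
                # sample one pronunciation, weighted by its corpus count
                surface.append(rng.choices(prons, weights=counts)[0])
            else:
                surface.append(word)  # fall back to the phonemic form
        out.append(surface)
    return out
```

Because each token is sampled independently, the construction deliberately ignores coarticulation and prosodic context, as noted above.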
| { |
| "text": "We conduct our development experiments on the first 8000 input utterances, holding out the remaining 1790 for evaluation. For evaluation experiments, we run the system on all 9790 utterances, reporting scores on only the last 1790.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We evaluate our results by generalizing the three segmentation metrics from Goldwater et al. (2009): word boundary F-score, word token F-score, and lexicon (word type) F-score. [Figure 5: System scores (token F and lexicon F) over 5 iterations.]",
| "cite_spans": [
| {
| "start": 76,
| "end": 99,
| "text": "Goldwater et al. (2009)",
| "ref_id": "BIBREF18"
| }
| ],
| "ref_spans": [
| {
| "start": 178,
| "end": 186,
| "text": "Figure 5",
| "ref_id": null
| } |
| ], |
| "eq_spans": [], |
| "section": "Metrics", |
| "sec_num": "5.2" |
| }, |
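All three metrics reduce to precision/recall/F over sets of predicted versus gold items (boundary positions, word tokens, or lexicon types); a minimal sketch, with boundaries indexed by segment position:

```python
def f_score(predicted, gold):
    """Precision/recall/F over two sets (boundaries, tokens, or types)."""
    tp = len(predicted & gold)
    if not predicted or not gold:
        return 0.0
    p, r = tp / len(predicted), tp / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

def boundary_set(utterance_words):
    """Word-internal boundary positions for one segmented utterance
    (utterance-final position is excluded, as is standard)."""
    positions, i = set(), 0
    for w in utterance_words[:-1]:
        i += len(w)
        positions.add(i)
    return positions
```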
| { |
| "text": "In our first set of experiments we evaluate how well our system clusters together surface forms derived from the same intended form, assuming gold standard word boundaries. We do not evaluate the induced intended forms directly against the gold standard intended forms-we want to evaluate cluster memberships and not labels. Instead we compute a one-to-one mapping between our induced lexical items and the gold standard, maximizing the agreement between the two (Haghighi and Klein, 2006) . Using this mapping, we compute mapped token F-score 5 and lexicon F-score.",
| "cite_spans": [ |
| { |
| "start": 463, |
| "end": 489, |
| "text": "(Haghighi and Klein, 2006)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Metrics", |
| "sec_num": "5.2" |
| }, |
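A one-to-one mapping of this kind can be computed exactly as a maximum-weight bipartite matching; the greedy approximation below is a common, simpler stand-in (illustrative, not the authors' procedure; `overlap` counts tokens shared by an induced cluster and a gold label):

```python
def one_to_one_map(overlap):
    """Greedy 1-1 mapping between induced clusters and gold labels.
    `overlap[(induced, gold)]` is the number of shared tokens; pairs are
    taken in decreasing order of overlap, each side used at most once."""
    pairs = sorted(overlap.items(), key=lambda kv: -kv[1])
    mapping, used_gold = {}, set()
    for (ind, gold), _count in pairs:
        if ind not in mapping and gold not in used_gold:
            mapping[ind] = gold
            used_gold.add(gold)
    return mapping
```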
| { |
| "text": "In our second set of experiments, we use unknown word boundaries and evaluate the segmentations. We report the standard word boundary F and unlabeled word token F as well as mapped F. The unlabeled token score counts correctly segmented tokens, whether assigned a correct intended form or not.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Metrics", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We first run our system with known word boundaries (Table 3) . As a baseline, we treat every surface token as its own intended form (none). This baseline has fairly high accuracy; 65% of word tokens receive the most common pronunciation for their intended form. 6 As an upper bound, we find the best intended form for each surface type (type ubound). This correctly resolves 91% of tokens; the remaining error is due to homophones (surface types corresponding to more than one intended form). We also test our system using an oracle transducer (oracle trans.)-the transducer estimated from the upper-bound mapping. This scores 83%, showing that our articulatory feature set captures most, but not all, of the available information. At the beginning of bootstrapping, our system (init) scores 75%, but this improves to 79% after five iterations of reestimation (system). Most learning occurs in the first two or three iterations ( Figure 5 ). To determine the importance of different parts of our system, we run a few ablation tests on development data. Context information is critical to obtain a good solution; setting \u2206 L and \u2206 R to 0 lowers our dev token F-score from 83% to 75%. Initializing all feature weights to 0 yields a poor initial solution (18% dev token F instead of 75%), but after learning the result is only slightly lower than using the weights in Table 1 (78% rather than 80%), showing that the system is quite robust to initialization.",
| "cite_spans": [
| {
| "start": 262,
| "end": 263,
| "text": "6",
| "ref_id": null
| }
| ],
| "ref_spans": [
| {
| "start": 51,
| "end": 60,
| "text": "(Table 3)",
| "ref_id": "TABREF1"
| },
| {
| "start": 930,
| "end": 938,
| "text": "Figure 5",
| "ref_id": null
| },
| {
| "start": 1365,
| "end": 1372,
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Known word boundaries", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "As a simple extension of our model to the case of unknown word boundaries, we interleave it with an existing model of word segmentation, dpseg (Goldwater et al., 2009) . 7 In each iteration, we run the segmenter, then bootstrap our model for five iterations on the segmented output. We then concatenate the intended word sequence proposed by our model to produce the next iteration's segmenter input.",
| "cite_spans": [
| {
| "start": 143,
| "end": 167,
| "text": "(Goldwater et al., 2009)",
| "ref_id": null
| },
| {
| "start": 170,
| "end": 171,
| "text": "7", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unknown word boundaries", |
| "sec_num": "5.4" |
| }, |
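The interleaving loop can be sketched as follows (hypothetical function signatures; dpseg and the lexical-phonetic model are separate systems in the paper, stubbed here as `segment` and `cluster_bootstrap`):

```python
def interleave(corpus, segment, cluster_bootstrap, rounds=2, inner_iters=5):
    """Alternate a word segmenter with lexical-phonetic clustering.
    Each round: segment the text, bootstrap the clustering model on the
    segmented output, then rebuild the segmenter's next input from the
    proposed intended (canonical) word sequence."""
    text = corpus
    for _ in range(rounds):
        utterances = segment(text)  # induce word boundaries
        intend = cluster_bootstrap(utterances, iters=inner_iters)
        text = [" ".join(intend.get(w, w) for w in utt) for utt in utterances]
    return text
```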
| { |
| "text": "Phonetic variation is known to reduce the performance of dpseg (Fleck, 2008; Boruta et al., 2011) and our experiments confirm this (Table 4) . Using induced word boundaries also makes it harder to recover the lexicon (Table 5) , lowering the baseline F-score from 67% to 43%. Nevertheless, our system improves the lexicon F-score to 46%, with token F rising from 44% to 49%, demonstrating the system's ability to work without gold word boundaries. Unfortunately, performing multiple iterations between the segmenter and lexical-phonetic learner has little further effect; we hope to address this issue in future work.",
| "cite_spans": [ |
| { |
| "start": 63, |
| "end": 76, |
| "text": "(Fleck, 2008;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 77, |
| "end": 97, |
| "text": "Boruta et al., 2011)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 131, |
| "end": 140, |
| "text": "(Table 4)", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 217, |
| "end": 226, |
| "text": "(Table 5)", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Unknown word boundaries", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "We have presented a noisy-channel model that simultaneously learns a lexicon, a bigram language model, and a model of phonetic variation, while using only the noisy surface forms as training data. It is the first model of lexical-phonetic acquisition to include word-level context and to be tested on an infant-directed corpus with realistic phonetic variability. Whether trained using gold standard or automatically induced word boundaries, the model recovers lexical items more effectively than a system that assumes no phonetic variability; moreover, the use of word-level context is key to the model's success. Ultimately, we hope to extend the model to jointly infer word boundaries along with lexical-phonetic knowledge, and to work directly from acoustic input. However, we have already shown that lexical-phonetic learning from a broad-coverage corpus is possible, supporting the claim that infants acquire lexical and phonetic knowledge simultaneously.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We use seven place values and five manner values (stop, nasal stop, fricative, vowel, other). Empty segments like and \u2022 are assigned a special value \"no-value\" for all features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We use one discount, rather than several as in modified KN.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The transducer scores can be cached since they depend only on surface forms, but the language model scores cannot.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We use the implementation of Andrew and Gao (2007) with an l2 regularizer and regularization parameter r = 1; although this could be tuned, in practice it has little effect on results.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "When using the gold word boundaries, the precision and recall are equal and this is the same as the accuracy; in segmentation experiments the two differ, because with fewer segmentation boundaries, the system proposes fewer tokens. Only correctly segmented tokens which are also mapped to the correct form count as matches. 6 The lexicon recall is not quite 100% because one rare word appears only as a homophone of another word.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was supported by EPSRC grant EP/H050442/1 to the second author. 7 dpseg1.2 from http://homepages.inf.ed.ac.uk/sgwater/resources.html",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Modelling early language acquisition skills: Towards a general statistical learning mechanism", |
| "authors": [ |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Aimetti", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Student Research Workshop at EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guillaume Aimetti. 2009. Modelling early language acquisition skills: Towards a general statistical learning mechanism. In Proceedings of the Student Research Workshop at EACL.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Comparative analysis of Viterbi training and ML estimation for HMMs", |
| "authors": [ |
| { |
| "first": "Armen", |
| "middle": [], |
| "last": "Allahverdyan", |
| "suffix": "" |
| }, |
| { |
| "first": "Aram", |
| "middle": [], |
| "last": "Galstyan", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Advances in Neural Information Processing Systems (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Armen Allahverdyan and Aram Galstyan. 2011. Compar- ative analysis of Viterbi training and ML estimation for HMMs. In Advances in Neural Information Processing Systems (NIPS).", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "OpenFst: A general and efficient weighted finite-state transducer library", |
| "authors": [ |
| { |
| "first": "Cyril", |
| "middle": [], |
| "last": "Allauzen", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Riley", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Schalkwyk", |
| "suffix": "" |
| }, |
| { |
| "first": "Wojciech", |
| "middle": [], |
| "last": "Skut", |
| "suffix": "" |
| }, |
| { |
| "first": "Mehryar", |
| "middle": [], |
| "last": "Mohri", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Ninth International Conference on Implementation and Application of Automata", |
| "volume": "4783", |
| "issue": "", |
| "pages": "11--23", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wo- jciech Skut, and Mehryar Mohri. 2007. OpenFst: A general and efficient weighted finite-state trans- ducer library. In Proceedings of the Ninth Interna- tional Conference on Implementation and Application of Automata, (CIAA 2007), volume 4783 of Lecture Notes in Computer Science, pages 11-23. Springer. http://www.openfst.org.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Scalable training of L1-regularized log-linear models", |
| "authors": [ |
| { |
| "first": "Galen", |
| "middle": [], |
| "last": "Andrew", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "ICML '07", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Galen Andrew and Jianfeng Gao. 2007. Scalable training of L1-regularized log-linear models. In ICML '07.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Language-model/acoustic-channelmodel balance mechanism", |
| "authors": [ |
| { |
| "first": "Lalit", |
| "middle": [], |
| "last": "Bahl", |
| "suffix": "" |
| }, |
| { |
| "first": "Raimo", |
| "middle": [], |
| "last": "Bakis", |
| "suffix": "" |
| }, |
| { |
| "first": "Frederick", |
| "middle": [], |
| "last": "Jelinek", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "", |
| "volume": "23", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lalit Bahl, Raimo Bakis, Frederick Jelinek, and Robert Mercer. 1980. Language-model/acoustic-channel- model balance mechanism. Technical disclosure bul- letin Vol. 23, No. 7b, IBM, December.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "The phonology of parentchild speech", |
| "authors": [ |
| { |
| "first": "Nan", |
| "middle": [], |
| "last": "Bernstein-Ratner", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Children's Language", |
| "volume": "6", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nan Bernstein-Ratner. 1987. The phonology of parent- child speech. In K. Nelson and A. van Kleeck, editors, Children's Language, volume 6. Erlbaum, Hillsdale, NJ.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Testing the robustness of online word segmentation: effects of linguistic diversity and phonetic variation", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Boruta", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Peperkamp", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Crabb\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "ACL HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. Boruta, S. Peperkamp, B. Crabb\u00e9, E. Dupoux, et al. 2011. Testing the robustness of online word segmenta- tion: effects of linguistic diversity and phonetic varia- tion. ACL HLT 2011, page 1.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Distributional regularity and phonotactic constraints are useful for segmentation", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Brent", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Cartwright", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Cognition", |
| "volume": "61", |
| "issue": "", |
| "pages": "93--125", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Brent and Timothy Cartwright. 1996. Distribu- tional regularity and phonotactic constraints are useful for segmentation. Cognition, 61:93-125.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "An efficient, probabilistically sound algorithm for segmentation and word discovery", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Michael", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Brent", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Machine Learning", |
| "volume": "34", |
| "issue": "", |
| "pages": "71--105", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael R. Brent. 1999. An efficient, probabilistically sound algorithm for segmentation and word discovery. Machine Learning, 34:71-105, February.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Learning diphone-based segmentation", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Daland", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "B" |
| ], |
| "last": "Pierrehumbert", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Cognitive Science", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Daland and J.B. Pierrehumbert. 2010. Learning diphone-based segmentation. Cognitive Science.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Latent-variable modeling of string transductions with finite-state methods", |
| "authors": [ |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Dreyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [ |
| "R" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08", |
| "volume": "", |
| "issue": "", |
| "pages": "1080--1089", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Markus Dreyer, Jason R. Smith, and Jason Eisner. 2008. Latent-variable modeling of string transductions with finite-state methods. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08, pages 1080-1089, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Adaptive non-negative matrix factorization in a computational model of language acquisition", |
| "authors": [ |
| { |
| "first": "Joris", |
| "middle": [], |
| "last": "Driesen", |
| "suffix": "" |
| }, |
| { |
| "first": "Hugo", |
| "middle": [], |
| "last": "Louis Ten Bosch", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Van Hamme", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Interspeech", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joris Driesen, Louis ten Bosch, and Hugo Van hamme. 2009. Adaptive non-negative matrix factorization in a computational model of language acquisition. In Proceedings of Interspeech.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Templatic features for modeling phoneme acquisition", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Beraud-Sudreau", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Sagayama", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 33rd Annual Cognitive Science Society", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Dupoux, G. Beraud-Sudreau, and S. Sagayama. 2011. Templatic features for modeling phoneme acquisition. In Proceedings of the 33rd Annual Cognitive Science Society.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Learning phonetic categories by learning a lexicon", |
| "authors": [ |
| { |
| "first": "Naomi", |
| "middle": [], |
| "last": "Feldman", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Griffiths", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Morgan", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 31st Annual Conference of the Cognitive Science Society (CogSci)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Naomi Feldman, Thomas Griffiths, and James Morgan. 2009. Learning phonetic categories by learning a lexi- con. In Proceedings of the 31st Annual Conference of the Cognitive Science Society (CogSci).", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Interactions between word and speech sound categorization in language acquisition", |
| "authors": [ |
| { |
| "first": "Naomi", |
| "middle": [], |
| "last": "Feldman", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Naomi Feldman. 2011. Interactions between word and speech sound categorization in language acquisition. Ph.D. thesis, Brown University.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Lexicalized phonotactic word segmentation", |
| "authors": [ |
| { |
| "first": "Margaret", |
| "middle": [ |
| "M" |
| ], |
| "last": "Fleck", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "130--138", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Margaret M. Fleck. 2008. Lexicalized phonotactic word segmentation. In Proceedings of ACL-08: HLT, pages 130-138, Columbus, Ohio, June. Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Learning OT constraint rankings using a maximum entropy model", |
| "authors": [ |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the Stockholm Workshop on Variation within Optimality Theory", |
| "volume": "", |
| "issue": "", |
| "pages": "111--120", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sharon Goldwater and Mark Johnson. 2003. Learning OT constraint rankings using a maximum entropy model. In J. Spenader, A. Eriksson, and Osten Dahl, editors, Proceedings of the Stockholm Workshop on Variation within Optimality Theory, pages 111-120, Stockholm. Stockholm University.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Interpolating between types and tokens by estimating power-law generators", |
| "authors": [ |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Griffiths", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Advances in Neural Information Processing Systems (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sharon Goldwater, Tom Griffiths, and Mark Johnson. 2006. Interpolating between types and tokens by esti- mating power-law generators. In Advances in Neural Information Processing Systems (NIPS) 18.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "A Bayesian framework for word segmentation: Exploring the effects of context", |
| "authors": [ |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [ |
| "L" |
| ], |
| "last": "Griffiths", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "46th Annual Meeting of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "398--406", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark John- son. 2009. A Bayesian framework for word segmen- tation: Exploring the effects of context. In In 46th Annual Meeting of the ACL, pages 398-406.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Prototype-driven learning for sequence models", |
| "authors": [ |
| { |
| "first": "Aria", |
| "middle": [], |
| "last": "Haghighi", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "320--327", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 320-327, New York City, USA, June. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "A maximum entropy model of phonotactics and phonotactic learning", |
| "authors": [ |
| { |
| "first": "Bruce", |
| "middle": [], |
| "last": "Hayes", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Linguistic Inquiry", |
| "volume": "39", |
| "issue": "3", |
| "pages": "379--440", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bruce Hayes and Colin Wilson. 2008. A maximum en- tropy model of phonotactics and phonotactic learning. Linguistic Inquiry, 39(3):379-440.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Introductory Phonology", |
| "authors": [ |
| { |
| "first": "Bruce", |
| "middle": [], |
| "last": "Hayes", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bruce Hayes. 2011. Introductory Phonology. John Wiley and Sons.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Towards spoken term discovery at scale with zero resources", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Jansen", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Church", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Hermansky", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of Interspeech", |
| "volume": "", |
| "issue": "", |
| "pages": "1676--1679", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Jansen, K. Church, and H. Hermansky. 2010. Towards spoken term discovery at scale with zero resources. In Proceedings of Interspeech, pages 1676-1679.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Improved backing-off for Mgram language modeling", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Kneser", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proc. ICASSP '95", |
| "volume": "", |
| "issue": "", |
| "pages": "181--184", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Kneser and H. Ney. 1995. Improved backing-off for M- gram language modeling. In Proc. ICASSP '95, pages 181-184, Detroit, MI, May.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "The CHILDES Project: Tools for Analyzing Talk", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Macwhinney", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. MacWhinney. 2000. The CHILDES Project: Tools for Analyzing Talk. Vol 2: The Database. Lawrence Erlbaum Associates, Mahwah, NJ, 3rd edition.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Unsupervised extraction of recurring words from infantdirected speech", |
| "authors": [ |
| { |
| "first": "Fergus", |
| "middle": [ |
| "R" |
| ], |
| "last": "Mcinnes", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 33rd Annual Conference of the Cognitive Science Society", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fergus R. McInnes and Sharon Goldwater. 2011. Un- supervised extraction of recurring words from infant- directed speech. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Unsupervised pattern discovery in speech", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "S" |
| ], |
| "last": "Park", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Glass", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "IEEE Transactions on Audio, Speech and Language Processing", |
| "volume": "16", |
| "issue": "", |
| "pages": "186--197", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. S. Park and J. R. Glass. 2008. Unsupervised pat- tern discovery in speech. IEEE Transactions on Audio, Speech and Language Processing, 16:186-197.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Weighted rational transductions and their application to human language processing", |
| "authors": [ |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Riley", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Sproat", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fernando Pereira, Michael Riley, and Richard Sproat. 1994. Weighted rational transductions and their ap- plication to human language processing. In HLT.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Buckeye corpus of conversational speech", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mark", |
| "suffix": "" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Pitt", |
| "suffix": "" |
| }, |
| { |
| "first": "Keith", |
| "middle": [], |
| "last": "Dilley", |
| "suffix": "" |
| }, |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Kiesling", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [], |
| "last": "Raymond", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Hume", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Fosler-Lussier", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark A. Pitt, Laura Dilley, Keith Johnson, Scott Kies- ling, William Raymond, Elizabeth Hume, and Eric Fosler-Lussier. 2007. Buckeye corpus of conversa- tional speech (2nd release).", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "A computational model of word segmentation from continuous speech using transitional probabilities of atomic acoustic events", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Okko R\u00e4s\u00e4nen", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Cognition", |
| "volume": "120", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Okko R\u00e4s\u00e4nen. 2011. A computational model of word segmentation from continuous speech using transitional probabilities of atomic acoustic events. Cognition, 120(2):28.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Preserving Subsegmental Variation in Modeling Word Segmentation (Or, the Raising of Baby Mondegreen)", |
| "authors": [ |
| { |
| "first": "Anton", |
| "middle": [], |
| "last": "Rytting", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anton Rytting. 2007. Preserving Subsegmental Varia- tion in Modeling Word Segmentation (Or, the Raising of Baby Mondegreen). Ph.D. thesis, The Ohio State University.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Viterbi training improves unsupervised dependency parsing", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Valentin", |
| "suffix": "" |
| }, |
| { |
| "first": "Hiyan", |
| "middle": [], |
| "last": "Spitkovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Alshawi", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "9--17", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, Daniel Jurafsky, and Christopher D. Manning. 2010. Viterbi training improves unsupervised dependency parsing. In Pro- ceedings of the Fourteenth Conference on Computa- tional Natural Language Learning, pages 9-17, Up- psala, Sweden, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Statistical clustering and the contents of the infant vocabulary", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Swingley", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Cognitive Psychology", |
| "volume": "50", |
| "issue": "", |
| "pages": "86--132", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Swingley. 2005. Statistical clustering and the contents of the infant vocabulary. Cognitive Psychology, 50:86- 132.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "A hierarchical Bayesian language model based on Pitman-Yor processes", |
| "authors": [ |
| { |
| "first": "Yee Whye", |
| "middle": [], |
| "last": "Teh", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "985--992", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yee Whye Teh. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 985-992, Sydney, Australia, July. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Unsupervised learning of vowel categories from infant-directed speech", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [ |
| "K" |
| ], |
| "last": "Vallabha", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mcclelland", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pons", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "F" |
| ], |
| "last": "Werker", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Amano", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the National Academy of Sciences", |
| "volume": "104", |
| "issue": "33", |
| "pages": "13273--13278", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G.K. Vallabha, J.L. McClelland, F. Pons, J.F. Werker, and S. Amano. 2007. Unsupervised learning of vowel categories from infant-directed speech. Proceedings of the National Academy of Sciences, 104(33):13273- 13278.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Unsupervised learning of acoustic sub-word units", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Varadarajan", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Khudanpur", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "165--168", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Varadarajan, S. Khudanpur, and E. Dupoux. 2008. Un- supervised learning of acoustic sub-word units. In Pro- ceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, pages 165-168. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "A statistical model for word discovery in transcribed speech", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Venkataraman", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Computational Linguistics", |
| "volume": "27", |
| "issue": "3", |
| "pages": "351--372", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Venkataraman. 2001. A statistical model for word discovery in transcribed speech. Computational Lin- guistics, 27(3):351-372.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "The utterances you want one? want a cookie? represented (a) using a canonical phonemic encoding for each word and (b) as they might be pronounced phonetically. Lines (c) and (d) remove the word boundaries (but not utterance boundaries) from (b) and (a), respectively.", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "text": "Our generative model of the surface tokens s from intended tokens x, which occur with left and right contexts l and r.", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "text": "Some features generated for (\u2022, D, i) \u2192 d. Each black factor node corresponds to a positional template. The features instantiated for the (curr)\u2192out and \u2192out template are shown in full, and we show some of the features for the (curr,next)\u2192out template.", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "html": null, |
| "content": "<table><tr><td/><td>Boundaries</td><td/><td colspan=\"3\">Unlabeled Tokens</td></tr><tr><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr><tr><td colspan=\"6\">no var. 90.1 80.3 84.9 74.5 68.7 71.5</td></tr><tr><td colspan=\"6\">w/var. 70.4 93.5 80.3 56.5 69.7 62.4</td></tr></table>", |
| "type_str": "table", |
| "text": "Results on 1790 utterances (known boundaries).", |
| "num": null |
| }, |
| "TABREF2": { |
| "html": null, |
| "content": "<table><tr><td/><td colspan=\"3\">Mapped Tokens</td><td colspan=\"3\">Lexicon (types)</td></tr><tr><td/><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr><tr><td colspan=\"7\">none 39.8 49.0 43.9 37.7 49.1 42.6</td></tr><tr><td>init</td><td colspan=\"6\">42.2 52.0 56.5 50.1 40.8 45.0</td></tr><tr><td>sys</td><td colspan=\"6\">44.2 54.5 48.8 48.6 43.1 45.7</td></tr></table>", |
| "type_str": "table", |
| "text": "Degradation in dpseg segmentation performance caused by pronunciation variation.", |
| "num": null |
| }, |
| "TABREF3": { |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "text": "Results on 1790 utterances (induced boundaries).", |
| "num": null |
| } |
| } |
| } |
| } |