| { |
| "paper_id": "N19-1007", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:57:00.713764Z" |
| }, |
| "title": "Measuring the Perceptual Availability of Phonological Features During Language Acquisition Using Unsupervised Binary Stochastic Autoencoders", |
| "authors": [ |
| { |
| "first": "Cory", |
| "middle": [], |
| "last": "Shain", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The Ohio State University", |
| "location": {} |
| }, |
| "email": "shain.3@osu.edu" |
| }, |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Elsner", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "melsner@ling.osu.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper, we deploy binary stochastic neural autoencoder networks as models of infant language learning in two typologically unrelated languages (Xitsonga and English). We show that the drive to model auditory percepts leads to latent clusters that partially align with theory-driven phonemic categories. We further evaluate the degree to which theory-driven phonological features are encoded in the latent bit patterns, finding that some (e.g. [\u00b1approximant]) are well represented by the network in both languages, while others (e.g. [\u00b1spread glottis]) are less so. Together, these findings suggest that many reliable cues to phonemic structure are immediately available to infants from bottom-up perceptual characteristics alone, but that these cues must eventually be supplemented by top-down lexical and phonotactic information to achieve adult-like phone discrimination. Our results also suggest differences in degree of perceptual availability between features, yielding testable predictions as to which features might depend more or less heavily on top-down cues during child language acquisition.",
| "pdf_parse": { |
| "paper_id": "N19-1007", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper, we deploy binary stochastic neural autoencoder networks as models of infant language learning in two typologically unrelated languages (Xitsonga and English). We show that the drive to model auditory percepts leads to latent clusters that partially align with theory-driven phonemic categories. We further evaluate the degree to which theory-driven phonological features are encoded in the latent bit patterns, finding that some (e.g. [\u00b1approximant]) are well represented by the network in both languages, while others (e.g. [\u00b1spread glottis]) are less so. Together, these findings suggest that many reliable cues to phonemic structure are immediately available to infants from bottom-up perceptual characteristics alone, but that these cues must eventually be supplemented by top-down lexical and phonotactic information to achieve adult-like phone discrimination. Our results also suggest differences in degree of perceptual availability between features, yielding testable predictions as to which features might depend more or less heavily on top-down cues during child language acquisition.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Distinctive features like [\u00b1voice] and [\u00b1sonorant] have been a core construct of phonological theory for many decades (Trubetskoy, 1939; Jakobson et al., 1951; Chomsky and Halle, 1968; Clements, 1985) . They have been used in automatic speech recognition (Livescu and Glass, 2004) , and psycholinguistic evidence suggests that they are cognitively available during language acquisition (Kuhl, 1980; White and Morgan, 2008) . Nonetheless, distinctive features are not directly observed by humans; they are abstractions that must be inferred from dense perceptual information (sound waves) during language acquisition and comprehension, which raises questions about how they are learned and recognized. In adults, phonological comprehension is aided by top-down lexical and phonotactic (i.e. sound sequencing) constraints. For example, the classic phonemic restoration effect (Warren, 1970) shows that adults infer missing phonemes from context with such ease that they often fail to notice when acoustic cues to phone identity are erased. However, infants first learning their phonemic categories have not yet acquired reliable top-down lexical and phonotactic models and must rely more heavily on bottom-up perceptual information. To a learner faced with the immense challenge of discovering structure in dense perceptual input, do theory-driven phonological features \"stand out\" or are they swamped by noise? In this paper, we address this question using an unsupervised computational acquisition model. Previous models of phonological category induction have emphasized the importance of top-down information (information about the contexts in which phonemes occur) (Peperkamp et al., 2006; Swingley, 2009; Feldman et al., 2009a Feldman et al., , 2013a Moreton and Pater, 2012a,b; Martin et al., 2013; Pater and Moreton, 2014; Frank et al., 2014; Doyle et al., 2014; Doyle and Levy, 2016) . But to prevent the acquisition process from being circular, the learner cannot operate solely on top-down information -the acoustic signal must provide some evidence for the phonemic categories. We hypothesize that the same must be true for at least some phonological features (e.g. [\u00b1nasal] , [\u00b1lateral] ), but previous work on unsupervised speech processing has inferred phonological structure from spoken utterances using either (1) discrete transition-based architectures (Varadarajan et al., 2008; Jansen and Church, 2011; Lee and Glass, 2012) , which do not attempt to discover featurally-related natural classes, or (2) continuous deep neural (Kamper et al., , 2017a Renshaw et al., 2015) architectures, whose internal representations are difficult to interpret. Furthermore, these approaches do not separate the contributions of top-down sequential information from bottom-up acoustic properties of segments, making it difficult to assess the relative importance of these information sources throughout the acquisition process.",
| "cite_spans": [ |
| { |
| "start": 26, |
| "end": 34, |
| "text": "[\u00b1voice]", |
| "ref_id": null |
| }, |
| { |
| "start": 39, |
| "end": 50, |
| "text": "[\u00b1sonorant]", |
| "ref_id": null |
| }, |
| { |
| "start": 118, |
| "end": 136, |
| "text": "(Trubetskoy, 1939;", |
| "ref_id": "BIBREF114" |
| }, |
| { |
| "start": 137, |
| "end": 159, |
| "text": "Jakobson et al., 1951;", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 160, |
| "end": 184, |
| "text": "Chomsky and Halle, 1968;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 185, |
| "end": 200, |
| "text": "Clements, 1985)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 255, |
| "end": 280, |
| "text": "(Livescu and Glass, 2004)", |
| "ref_id": "BIBREF75" |
| }, |
| { |
| "start": 386, |
| "end": 398, |
| "text": "(Kuhl, 1980;", |
| "ref_id": "BIBREF67" |
| }, |
| { |
| "start": 399, |
| "end": 422, |
| "text": "White and Morgan, 2008)", |
| "ref_id": "BIBREF121" |
| }, |
| { |
| "start": 874, |
| "end": 888, |
| "text": "(Warren, 1970)", |
| "ref_id": "BIBREF118" |
| }, |
| { |
| "start": 1667, |
| "end": 1691, |
| "text": "(Peperkamp et al., 2006;", |
| "ref_id": "BIBREF95" |
| }, |
| { |
| "start": 1692, |
| "end": 1707, |
| "text": "Swingley, 2009;", |
| "ref_id": "BIBREF111" |
| }, |
| { |
| "start": 1708, |
| "end": 1729, |
| "text": "Feldman et al., 2009a", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 1730, |
| "end": 1753, |
| "text": "Feldman et al., , 2013a", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 1754, |
| "end": 1781, |
| "text": "Moreton and Pater, 2012a,b;", |
| "ref_id": null |
| }, |
| { |
| "start": 1782, |
| "end": 1802, |
| "text": "Martin et al., 2013;", |
| "ref_id": "BIBREF79" |
| }, |
| { |
| "start": 1803, |
| "end": 1827, |
| "text": "Pater and Moreton, 2014;", |
| "ref_id": "BIBREF93" |
| }, |
| { |
| "start": 1828, |
| "end": 1847, |
| "text": "Frank et al., 2014;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 1848, |
| "end": 1867, |
| "text": "Doyle et al., 2014;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 1868, |
| "end": 1889, |
| "text": "Doyle and Levy, 2016)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 2175, |
| "end": 2183, |
| "text": "[\u00b1nasal]", |
| "ref_id": null |
| }, |
| { |
| "start": 2186, |
| "end": 2196, |
| "text": "[\u00b1lateral]", |
| "ref_id": null |
| }, |
| { |
| "start": 2368, |
| "end": 2394, |
| "text": "(Varadarajan et al., 2008;", |
| "ref_id": "BIBREF116" |
| }, |
| { |
| "start": 2395, |
| "end": 2419, |
| "text": "Jansen and Church, 2011;", |
| "ref_id": "BIBREF57" |
| }, |
| { |
| "start": 2420, |
| "end": 2440, |
| "text": "Lee and Glass, 2012)", |
| "ref_id": "BIBREF71" |
| }, |
| { |
| "start": 2538, |
| "end": 2561, |
| "text": "(Kamper et al., , 2017a", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 2562, |
| "end": 2583, |
| "text": "Renshaw et al., 2015)", |
| "ref_id": "BIBREF103" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "By contrast, our model attends exclusively to phone-internal acoustic patterns using a deep neural autoencoder with a discrete embedding space composed of binary stochastic neurons (BSNs) (Rosenblatt, 1958; Hinton, 2012; Bengio et al., 2013; Courbariaux et al., 2016) . BSNs allow us to exploit (1) the interpretability of discrete representations, (2) the decomposability of phone segments into phonological features, and (3) the power of deep neural function approximators to relate percepts and their representations. Since every token is labeled with a binary latent code, it is possible to evaluate the model's recovery not only of phonological categories but also of phonological features. Featural representations can encode distributional facts about which processes apply to which classes of sounds in ways that cross-cut the phonological space, rather than simply grouping each segment with a set of similar neighbors (LeCun et al., 2015) . By focusing on the acoustic properties of sounds themselves rather than their sequencing in context, our model enables exploration of two questions about the data available to young learners whose training signal must primarily be extracted from bottom-up perceptual information: (1) to what extent can phoneme categories emerge from a drive to model auditory percepts, and (2) how perceptually available are theory-driven phonological features (that is, how easily can they be extracted directly from low-level acoustic percepts)?",
| "cite_spans": [ |
| { |
| "start": 188, |
| "end": 206, |
| "text": "(Rosenblatt, 1958;", |
| "ref_id": "BIBREF106" |
| }, |
| { |
| "start": 207, |
| "end": 220, |
| "text": "Hinton, 2012;", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 221, |
| "end": 241, |
| "text": "Bengio et al., 2013;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 242, |
| "end": 267, |
| "text": "Courbariaux et al., 2016)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 932, |
| "end": 952, |
| "text": "(LeCun et al., 2015)", |
| "ref_id": "BIBREF70" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our results show (a) that phonemic categories emerge naturally but imperfectly from perceptual reconstruction and (b) that theory-driven features differ in their degree of perceptual availability. Together, these findings suggest that many reliable cues to phonemic structure are immediately available to infants from bottom-up perceptual characteristics alone, but that these cues may eventually need to be supplemented by top-down lexical and phonotactic information to achieve adult-like phone discrimination (Feldman et al., 2013a; Pater and Moreton, 2014) . Our findings also suggest hypotheses as to precisely which kinds of phonological features are more or less perceptually available and therefore might depend more or less heavily on top-down cues for acquisition. Such differences might suggest relative timelines at which different features might be appropriated in support of phonemic, phonotactic, and lexical generalization, providing a rich set of testable hypotheses about child language acquisition.",
| "cite_spans": [ |
| { |
| "start": 512, |
| "end": 535, |
| "text": "(Feldman et al., 2013a;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 536, |
| "end": 560, |
| "text": "Pater and Moreton, 2014)", |
| "ref_id": "BIBREF93" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The present paper has a strong connection to recent work on unsupervised speech processing, especially the Zerospeech 2015 (Versteegh et al., 2015) and 2017 (Dunbar et al., 2017) shared tasks. Participating systems (Badino et al., 2015; Renshaw et al., 2015; Agenbag and Niesler, 2015; Baljekar et al., 2015; R\u00e4s\u00e4nen et al., 2015; Lyzinski et al., 2015; Zeghidour et al., 2016; Heck et al., 2016; Srivastava and Shrivastava, 2016; Kamper et al., 2017b; Yuan et al., 2017; Heck et al., 2017; Shibata et al., 2017; Ansari et al., 2017a,b) perform unsupervised ABX discrimination and/or spoken term discovery on the basis of unlabeled speech alone. The design and evaluation of these and related systems (Kamper et al., , 2017a Elsner and Shain, 2017; R\u00e4s\u00e4nen et al., 2018) are oriented toward word-level modeling. As such, our focus on the perceptual availability of phonological features is orthogonal to but complementary with this line of research. Since distinctive features are important for indexing lexical contrasts, especially between highly confusable words (e.g. onset voicing alone distinguishes sap and zap in English), studying the perceptual availability of distinctive features to an unsupervised learner may help improve the design and analysis of low-resource speech processing systems.",
| "cite_spans": [ |
| { |
| "start": 123, |
| "end": 147, |
| "text": "(Versteegh et al., 2015)", |
| "ref_id": "BIBREF117" |
| }, |
| { |
| "start": 157, |
| "end": 178, |
| "text": "(Dunbar et al., 2017)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 215, |
| "end": 236, |
| "text": "(Badino et al., 2015;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 237, |
| "end": 258, |
| "text": "Renshaw et al., 2015;", |
| "ref_id": "BIBREF103" |
| }, |
| { |
| "start": 259, |
| "end": 285, |
| "text": "Agenbag and Niesler, 2015;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 286, |
| "end": 308, |
| "text": "Baljekar et al., 2015;", |
| "ref_id": null |
| }, |
| { |
| "start": 309, |
| "end": 330, |
| "text": "R\u00e4s\u00e4nen et al., 2015;", |
| "ref_id": "BIBREF101" |
| }, |
| { |
| "start": 331, |
| "end": 353, |
| "text": "Lyzinski et al., 2015;", |
| "ref_id": "BIBREF76" |
| }, |
| { |
| "start": 354, |
| "end": 377, |
| "text": "Zeghidour et al., 2016;", |
| "ref_id": "BIBREF126" |
| }, |
| { |
| "start": 378, |
| "end": 396, |
| "text": "Heck et al., 2016;", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 397, |
| "end": 430, |
| "text": "Srivastava and Shrivastava, 2016;", |
| "ref_id": "BIBREF110" |
| }, |
| { |
| "start": 431, |
| "end": 452, |
| "text": "Kamper et al., 2017b;", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 453, |
| "end": 471, |
| "text": "Yuan et al., 2017;", |
| "ref_id": "BIBREF125" |
| }, |
| { |
| "start": 472, |
| "end": 490, |
| "text": "Heck et al., 2017;", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 491, |
| "end": 512, |
| "text": "Shibata et al., 2017;", |
| "ref_id": "BIBREF108" |
| }, |
| { |
| "start": 513, |
| "end": 536, |
| "text": "Ansari et al., 2017a,b)", |
| "ref_id": null |
| }, |
| { |
| "start": 701, |
| "end": 724, |
| "text": "(Kamper et al., , 2017a", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 725, |
| "end": 748, |
| "text": "Elsner and Shain, 2017;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 749, |
| "end": 770, |
| "text": "R\u00e4s\u00e4nen et al., 2018)", |
| "ref_id": "BIBREF102" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unsupervised Speech Processing", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "To our knowledge, the task most closely related to the current paper is unsupervised phone discovery. Some studies in this tradition segment speech into phone-like units without clustering them (Dusan and Rabiner, 2006; Qiao et al., 2008) , while others cluster small subsets of presegmented sounds (usually vowels) using parametric models (mixture-of-Gaussians) (Vallabha et al., 2007; Feldman et al., 2013a; Antetomaso et al., 2017) . Further work combines these tasks and extends the approach to cover the entire acoustic space (Lee and Glass, 2012) . However, for a variety of reasons, the Lee and Glass (2012) model does not straightforwardly support evaluation of the perceptual availability of phonological features. First, they do not quantitatively evaluate the discovered phoneme clusters. Second, the model incorporates phonotactics through transition probabilities, making it difficult to disentangle the contributions of top-down and bottom-up information to the learning process. Third, the clustering model is not feature-based, but instead consists of atomic categories, each defining a distinct generative process for acoustics. This design is at odds with the widely held view in linguistic theory that phonemes are not inscrutable atoms of the phonological grammar, but instead labels for bundles of features that define natural classes (Clements, 1985) . Our approach is therefore more appropriate to the question at hand.",
| "cite_spans": [ |
| { |
| "start": 194, |
| "end": 219, |
| "text": "(Dusan and Rabiner, 2006;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 220, |
| "end": 238, |
| "text": "Qiao et al., 2008)", |
| "ref_id": "BIBREF100" |
| }, |
| { |
| "start": 363, |
| "end": 386, |
| "text": "(Vallabha et al., 2007;", |
| "ref_id": "BIBREF115" |
| }, |
| { |
| "start": 387, |
| "end": 409, |
| "text": "Feldman et al., 2013a;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 410, |
| "end": 434, |
| "text": "Antetomaso et al., 2017)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 532, |
| "end": 553, |
| "text": "(Lee and Glass, 2012)", |
| "ref_id": "BIBREF71" |
| }, |
| { |
| "start": 595, |
| "end": 615, |
| "text": "Lee and Glass (2012)", |
| "ref_id": "BIBREF71" |
| }, |
| { |
| "start": 1357, |
| "end": 1373, |
| "text": "(Clements, 1985)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unsupervised Speech Processing", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "There is a great deal of evidence that many phonological contrasts are perceptually available from a very early stage (Eimas et al., 1971; Moffitt, 1971; Trehub, 1973; Jusczyk and Derrah, 1987; Eimas et al., 1987) . However, studies of infant phone discrimination typically use carefully-enunciated laboratory stimuli, which have been shown to be substantially easier to discriminate than phones in naturalistic utterances (Feldman et al., 2013a; Antetomaso et al., 2017) . It is thus likely that inferring phone categories from acoustic evidence is a persistently challenging task, and studies have found language-specific tuning of the speech perception system from fetal stages (Moon et al., 2013) through the first year (Kuhl et al., 1992; Werker and Tees, 1984) and even all the way into the preteen years (Hazan and Barrett, 2000) . Experiments show that these contrasts are expressed, not simply as oppositions between particular categories, but as a featural system, even in early infancy. Evidence of featural effects has been found in the phone discrimination patterns of both adults (Chl\u00e1dkov\u00e1 et al., 2015) and infants (Kuhl, 1980; Hillenbrand, 1985; White and Morgan, 2008) . Studies have also shown that infants generalize new distinctions along featural dimensions (Maye et al., 2008b; Cristi\u00e0 et al., 2011) . Given infants' early detection and use of some featural contrasts, we hypothesize that there is strong evidence in the acoustic signal for these distinctions, which may then bootstrap the acquisition of phonotactic and lexical patterns (Beckman and Edwards, 2000) .", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 138, |
| "text": "(Eimas et al., 1971;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 139, |
| "end": 153, |
| "text": "Moffitt, 1971;", |
| "ref_id": "BIBREF86" |
| }, |
| { |
| "start": 154, |
| "end": 167, |
| "text": "Trehub, 1973;", |
| "ref_id": "BIBREF112" |
| }, |
| { |
| "start": 168, |
| "end": 193, |
| "text": "Jusczyk and Derrah, 1987;", |
| "ref_id": "BIBREF58" |
| }, |
| { |
| "start": 194, |
| "end": 213, |
| "text": "Eimas et al., 1987)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 423, |
| "end": 446, |
| "text": "(Feldman et al., 2013a;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 447, |
| "end": 471, |
| "text": "Antetomaso et al., 2017)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 681, |
| "end": 700, |
| "text": "(Moon et al., 2013)", |
| "ref_id": "BIBREF87" |
| }, |
| { |
| "start": 724, |
| "end": 743, |
| "text": "(Kuhl et al., 1992;", |
| "ref_id": "BIBREF68" |
| }, |
| { |
| "start": 744, |
| "end": 766, |
| "text": "Werker and Tees, 1984)", |
| "ref_id": "BIBREF120" |
| }, |
| { |
| "start": 811, |
| "end": 836, |
| "text": "(Hazan and Barrett, 2000)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 1094, |
| "end": 1118, |
| "text": "(Chl\u00e1dkov\u00e1 et al., 2015)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 1131, |
| "end": 1143, |
| "text": "(Kuhl, 1980;", |
| "ref_id": "BIBREF67" |
| }, |
| { |
| "start": 1144, |
| "end": 1162, |
| "text": "Hillenbrand, 1985;", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 1163, |
| "end": 1186, |
| "text": "White and Morgan, 2008)", |
| "ref_id": "BIBREF121" |
| }, |
| { |
| "start": 1280, |
| "end": 1300, |
| "text": "(Maye et al., 2008b;", |
| "ref_id": "BIBREF81" |
| }, |
| { |
| "start": 1301, |
| "end": 1322, |
| "text": "Cristi\u00e0 et al., 2011)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 1561, |
| "end": 1588, |
| "text": "(Beckman and Edwards, 2000)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distinctive Features and Phonology Acquisition", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Experiments also suggest asymmetries in the perceptual availability of features. For example, a consonant-vowel distinction appears to be an important early foothold in phonology acquisition: vowel/consonant discrimination emerges early in infant speech processing (Dehaene-Lambertz and Dehaene, 1994) , language-specificity in perception follows different timecourses for consonants (Werker and Tees, 1984) and vowels (Kuhl et al., 1992) , and vowels and consonants play distinct roles in lexical access vs. rule discovery in children (Nazzi, 2005; Pons and Toro, 2010; Hochmann et al., 2011) . Young infants have also been shown to be sensitive to voicing contrasts (Lasky et al., 1975; Aslin et al., 1981; Maye et al., 2008b) . Features that distinguish consonant-like from vowel-like segments or voiced from unvoiced segments may thus be highly available to young learners. Infants struggle by comparison with other kinds of phone discrimination tasks, including certain stop-fricative contrasts (Polka et al., 2001 ) and certain place distinctions within nasal (Narayan et al., 2010) and sibilant (Nittrouer, 2001; Cristi\u00e0 et al., 2011) segments. Even adults struggle with fricative place discrimination from strictly acoustic cues (McGuire and Babel, 2012). Similar asymmetries emerge from our unsupervised learner, as shown in Section 4.2. Our computational acquisition model complements this experimental research in several ways. First, its internal representations, unlike those of human infants, are open to detailed analysis, even when exposed to naturalistic language stimuli. Second, we can perform cross-linguistic comparisons using readily available corpora without requiring access to a pool of human subjects in each language community. Third, our model provides global and graded quantification of the perceptual availability of distinctive features in natural speech, permitting us to explore relationships between features in a way that is difficult to do through experiments on infants, which are generally constrained to same-different contrasts over a small set of manipulations.",
| "cite_spans": [ |
| { |
| "start": 265, |
| "end": 301, |
| "text": "(Dehaene-Lambertz and Dehaene, 1994)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 384, |
| "end": 399, |
| "text": "Tees, 1984) and", |
| "ref_id": "BIBREF120" |
| }, |
| { |
| "start": 400, |
| "end": 426, |
| "text": "vowels (Kuhl et al., 1992)", |
| "ref_id": null |
| }, |
| { |
| "start": 524, |
| "end": 537, |
| "text": "(Nazzi, 2005;", |
| "ref_id": "BIBREF91" |
| }, |
| { |
| "start": 538, |
| "end": 558, |
| "text": "Pons and Toro, 2010;", |
| "ref_id": "BIBREF98" |
| }, |
| { |
| "start": 559, |
| "end": 581, |
| "text": "Hochmann et al., 2011)", |
| "ref_id": "BIBREF53" |
| }, |
| { |
| "start": 656, |
| "end": 676, |
| "text": "(Lasky et al., 1975;", |
| "ref_id": "BIBREF69" |
| }, |
| { |
| "start": 677, |
| "end": 696, |
| "text": "Aslin et al., 1981;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 697, |
| "end": 716, |
| "text": "Maye et al., 2008b)", |
| "ref_id": "BIBREF81" |
| }, |
| { |
| "start": 987, |
| "end": 1006, |
| "text": "(Polka et al., 2001", |
| "ref_id": "BIBREF97" |
| }, |
| { |
| "start": 1089, |
| "end": 1106, |
| "text": "(Nittrouer, 2001;", |
| "ref_id": "BIBREF92" |
| }, |
| { |
| "start": 1107, |
| "end": 1128, |
| "text": "Cristi\u00e0 et al., 2011)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distinctive Features and Phonology Acquisition", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The reconstruction objective used here is not merely a convenient supervision signal. There is reason to believe that people actively model their perceptual worlds (Mamassian et al., 2002; Feldman, 2012; Singer et al., 2018; Yan et al., 2018) , and autoassociative structures have been found in several brain areas (Treves and Rolls, 1991; Rolls and Treves, 1998) . There is also evidence that phonetic comprehension and production can be acquired symbiotically through a sensorimotor loop relating acoustic perception and articulator movements (Houde and Jordan, 1998; Fadiga et al., 2002; Watkins et al., 2003; Wilson et al., 2004; Pulverm\u00fcller et al., 2006; Kr\u00f6ger et al., 2009; Bolhuis et al., 2010; Kr\u00f6ger and Cao, 2015; Bekolay, 2016) . Finally, evidence suggests that working memory limitations impose compression pressures on the perceptual system that favor sparse representations of dense acoustic percepts (Baddeley and Hitch, 1974) and may guide infant language acquisition (Baddeley et al., 1998; Elsner and Shain, 2017) . It is thus reasonable to suppose that perceptual reconstruction -such as that implemented by an autoencoder architecture -is immediately available as a learning signal to infants who still lack reliable guidance from phonotactics or the lexicon.", |
| "cite_spans": [ |
| { |
| "start": 164, |
| "end": 188, |
| "text": "(Mamassian et al., 2002;", |
| "ref_id": "BIBREF78" |
| }, |
| { |
| "start": 189, |
| "end": 203, |
| "text": "Feldman, 2012;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 204, |
| "end": 224, |
| "text": "Singer et al., 2018;", |
| "ref_id": "BIBREF109" |
| }, |
| { |
| "start": 225, |
| "end": 242, |
| "text": "Yan et al., 2018)", |
| "ref_id": "BIBREF124" |
| }, |
| { |
| "start": 315, |
| "end": 339, |
| "text": "(Treves and Rolls, 1991;", |
| "ref_id": "BIBREF113" |
| }, |
| { |
| "start": 340, |
| "end": 363, |
| "text": "Rolls and Treves, 1998)", |
| "ref_id": "BIBREF104" |
| }, |
| { |
| "start": 545, |
| "end": 569, |
| "text": "(Houde and Jordan, 1998;", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 570, |
| "end": 590, |
| "text": "Fadiga et al., 2002;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 591, |
| "end": 612, |
| "text": "Watkins et al., 2003;", |
| "ref_id": "BIBREF119" |
| }, |
| { |
| "start": 613, |
| "end": 633, |
| "text": "Wilson et al., 2004;", |
| "ref_id": "BIBREF123" |
| }, |
| { |
| "start": 634, |
| "end": 660, |
| "text": "Pulverm\u00fcller et al., 2006;", |
| "ref_id": "BIBREF99" |
| }, |
| { |
| "start": 661, |
| "end": 681, |
| "text": "Kr\u00f6ger et al., 2009;", |
| "ref_id": "BIBREF65" |
| }, |
| { |
| "start": 682, |
| "end": 703, |
| "text": "Bolhuis et al., 2010;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 704, |
| "end": 725, |
| "text": "Kr\u00f6ger and Cao, 2015;", |
| "ref_id": "BIBREF64" |
| }, |
| { |
| "start": 726, |
| "end": 740, |
| "text": "Bekolay, 2016)", |
| "ref_id": null |
| }, |
| { |
| "start": 917, |
| "end": 943, |
| "text": "(Baddeley and Hitch, 1974)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 986, |
| "end": 1009, |
| "text": "(Baddeley et al., 1998;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1010, |
| "end": 1033, |
| "text": "Elsner and Shain, 2017)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cognition and the BSN Autoencoder", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Our use of BSNs follows the spirit of the earliest work on artificial neural networks (Rosenblatt, 1958 ). Rosenblatt's perceptron was designed to study learning and decision-making in the brain and therefore used binary neurons to model the discrete firing behavior of their biological counterparts. This tradition has been replaced in deep learning research with differentiable activation functions that support supervised learning through backpropagation of error but are less biologically plausible. Our work takes advantage of the development of effective estimators for the gradients of discrete neurons (Hinton, 2012; Bengio et al., 2013; Courbariaux et al., 2016; Chung et al., 2017) to wed these two traditions, exploiting BSNs to encode the learner's latent representation of auditory percepts and deep networks to map between percepts and their latent representations. In addition to the greater similarity of BSNs to biological neurons, the use of discrete featural representations is motivated by experimental evidence that human phone perception (including that of infants) is both featural (White and Morgan, 2008; Chl\u00e1dkov\u00e1 et al., 2015) and categorical (Liberman et al., 1961; Eimas et al., 1987; Harnad, 2003; Feldman et al., 2009b) .",
| "cite_spans": [ |
| { |
| "start": 86, |
| "end": 103, |
| "text": "(Rosenblatt, 1958", |
| "ref_id": "BIBREF106" |
| }, |
| { |
| "start": 610, |
| "end": 623, |
| "text": "Hinton, 2012;", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 624, |
| "end": 644, |
| "text": "Bengio et al., 2013;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 645, |
| "end": 670, |
| "text": "Courbariaux et al., 2016;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 671, |
| "end": 690, |
| "text": "Chung et al., 2017)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1104, |
| "end": 1128, |
| "text": "(White and Morgan, 2008;", |
| "ref_id": "BIBREF121" |
| }, |
| { |
| "start": 1129, |
| "end": 1152, |
| "text": "Chl\u00e1dkov\u00e1 et al., 2015)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 1169, |
| "end": 1192, |
| "text": "(Liberman et al., 1961;", |
| "ref_id": "BIBREF72" |
| }, |
| { |
| "start": 1193, |
| "end": 1212, |
| "text": "Eimas et al., 1987;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 1213, |
| "end": 1226, |
| "text": "Harnad, 2003;", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 1227, |
| "end": 1249, |
| "text": "Feldman et al., 2009b)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cognition and the BSN Autoencoder", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Experiments reported here use an 8-bit binary segment encoding. Eight bits is the the lower bound on binary encodings that are sufficiently expressive to capture all segmental contrasts in any known language (Mielke, 2009) . Although theory-driven taxonomies generally contain more than eight distinctive features, these taxonomies are known to be highly redundant (Cherry et al., 1953) . For example, the phonological featurization of the Xitsonga segments analyzed in our experiments contains 26 theory-driven features (Hayes, 2011; Hall et al., 2016) , yielding up to 2 26 = 67108864 distinct segment categories, far more than the number of known segment types in Xitsonga or even the number of training instances in our data. By entailment, any representation that can identify all segment types in a language can also identify all featural contrasts that discriminate those types, regardless of how the feature space is factored. For this reason, we consider a phonological feature to be represented if it can be detected by an arbitrary function of the latent bits (Section 4.2), without assuming that the true and discovered feature spaces will factor identically.", |
| "cite_spans": [ |
| { |
| "start": 208, |
| "end": 222, |
| "text": "(Mielke, 2009)", |
| "ref_id": "BIBREF84" |
| }, |
| { |
| "start": 365, |
| "end": 386, |
| "text": "(Cherry et al., 1953)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 521, |
| "end": 534, |
| "text": "(Hayes, 2011;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 535, |
| "end": 553, |
| "text": "Hall et al., 2016)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cognition and the BSN Autoencoder", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Our study shares an interest in phonological features with previous work in automatic speech recognition attempting to discover mappings between acoustics and hand-labeled featural representations (Liu, 1996; Bitar and Espy-Wilson, 1996; Frankel and King, 2001; Kirchhoff et al., 2002; Livescu and Glass, 2004; Mitra et al., 2011, inter alia) . While these results provide evidence that such a mapping is indeed learnable in an oracle setting, they rely on a supervision signal (direct annotation of the target representations) to which children do not have access. Our unsupervised approach measures perceptual availability of features in a more realistic learning scenario.", |
| "cite_spans": [ |
| { |
| "start": 197, |
| "end": 208, |
| "text": "(Liu, 1996;", |
| "ref_id": "BIBREF74" |
| }, |
| { |
| "start": 209, |
| "end": 237, |
| "text": "Bitar and Espy-Wilson, 1996;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 238, |
| "end": 261, |
| "text": "Frankel and King, 2001;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 262, |
| "end": 285, |
| "text": "Kirchhoff et al., 2002;", |
| "ref_id": "BIBREF63" |
| }, |
| { |
| "start": 286, |
| "end": 310, |
| "text": "Livescu and Glass, 2004;", |
| "ref_id": "BIBREF75" |
| }, |
| { |
| "start": 311, |
| "end": 342, |
| "text": "Mitra et al., 2011, inter alia)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised Acoustic Feature Learning", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "The simulated learner used in this study is a deep neural autoencoder with an 8-bit layer of BSNs as its principle information bottleneck, depicted in Figure 1 . The model processes a given phone segment by encoding the segment's acoustic informa- tion into a bit pattern and then reconstructing the acoustic information from the encoded bit pattern.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 151, |
| "end": 159, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Acoustics E 1 E 2 . . . E e Encoder Segment Encoding Decoder D 1 D 2 . . . D d", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "It is thus incentivized to use the latent bits in a systematic featural manner, encoding similar segments in similar ways. The encoder and decoder are both deep feedfoward residual networks (He et al., 2016) . 1 To enable feedforward autoencoding of sequential data, phone segments are clipped at 50 timesteps (500ms), providing complete coverage of over 99% of the phone segments in each corpus. Given F -dimensional input acoustic frames and a maximum input length of M timesteps, the weight matrix of each encoder layer is \u2208 R F M \u00d7F M except the final layer (\u2208 R F M \u00d78 ). Given R-dimensional reconstructed acoustic frames and a maximum output length of N timesteps, the weight matrix of each decoder layer is \u2208 R RN \u00d7RN except the first layer (\u2208 R 8\u00d7RN ). Both the encoder and decoder contain initial and final dense transformation layers, with three residual layers in between. Each residual layer contains two dense layers. All internal layers use tanh activations and are batchnormalized with a decay rate of 0.9 (Ioffe and Szegedy, 2015) .", |
| "cite_spans": [ |
| { |
| "start": 190, |
| "end": 207, |
| "text": "(He et al., 2016)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 210, |
| "end": 211, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 1021, |
| "end": 1046, |
| "text": "(Ioffe and Szegedy, 2015)", |
| "ref_id": "BIBREF55" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Given that the capacity for speaker adaptation -short-term accommodation of idiosyncrasies in individuals' productions -has been shown for 1 Feedforward networks are used both for computational reasons and because they dramatically outperformed recurrent networks in initial experiments, especially when RNN's were used for decoding. We hypothesize that this is due to the lack of direct access to the encoder timesteps, such as that permitted by sequence to sequence models with attention (Bahdanau et al., 2015). Attention is not viable for our goals because it defeats the purposes of an autoencoder by allowing the decoder to bypass the encoder's latent representation. both adults (Clarke and Garrett, 2004; Maye et al., 2008a) and children (Kuhl, 1979; van Heugten and Johnson, 2014) , we equip the models with a 16-dimensional speaker embedding, which is concatenated both to the acoustic input frames and to the latent bit vector.", |
| "cite_spans": [ |
| { |
| "start": 686, |
| "end": 712, |
| "text": "(Clarke and Garrett, 2004;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 713, |
| "end": 732, |
| "text": "Maye et al., 2008a)", |
| "ref_id": "BIBREF80" |
| }, |
| { |
| "start": 746, |
| "end": 758, |
| "text": "(Kuhl, 1979;", |
| "ref_id": "BIBREF66" |
| }, |
| { |
| "start": 759, |
| "end": 789, |
| "text": "van Heugten and Johnson, 2014)", |
| "ref_id": "BIBREF49" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Each BSN of the latent encoding is associated with a firing probability \u2208 [0, 1] parameterized by the encoder network. The neural activation can be discretized either deterministically or by sampling. The use of BSNs to encode segments is a problem for gradient-based optimization since it introduces a non-differentiable discrete decision into the network's latent structure. We overcome this problem by approximating missing gradients using the straight-through estimator (Hinton, 2012; Bengio et al., 2013; Courbariaux et al., 2016) with slope-annealing (Chung et al., 2017) . Slope annealing multiplies the pre-activations a by a monotonically increasing function of the training iteration t, incrementally decreasing the bias of the straight-through estimator. We use the following annealing function:", |
| "cite_spans": [ |
| { |
| "start": 474, |
| "end": 488, |
| "text": "(Hinton, 2012;", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 489, |
| "end": 509, |
| "text": "Bengio et al., 2013;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 510, |
| "end": 535, |
| "text": "Courbariaux et al., 2016)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 557, |
| "end": 577, |
| "text": "(Chung et al., 2017)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "a \u2190 a(1 + 0.1t)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We discretize the latent dimensions using Bernoulli sampling during training and thresholding at 0.5 during evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The models are implemented in Tensorflow (Abadi et al., 2015) and optimized using Adam (Kingma and Ba, 2014) for 150 training epochs with a constant learning rate of 0.001. The source code is available at https://github.com/ coryshain/dnnseg.", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 61, |
| "text": "(Abadi et al., 2015)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We apply our model to the Xitsonga and English speech data from the Zerospeech 2015 shared task. The Xitsonga data are drawn from the NCHLT corpus (De Vries et al., 2014) and contain 2h29m07s of read speech from 24 speakers. The English data are drawn from the Buckeye Corpus (Pitt et al., 2005) and contain 4h59m05s of conversational speech from 12 speakers. While neither of these corpora represent child-directed speech, they both consist of fluently produced word tokens in context, rather than isolated productions as in many previous laboratory studies with infants (Eimas et al., 1971; Werker and Tees, 1984; Kuhl et al., 1992, inter alia) . We pre-segment the audio files using time-aligned phone transcriptions pro- vided in the challenge repository. The gold segment labels are used in clustering evaluation metrics, but the unsupervised learner never has access to them. Data selection criteria and annotation procedures are are described in more detail in Versteegh et al. (2015).", |
| "cite_spans": [ |
| { |
| "start": 276, |
| "end": 295, |
| "text": "(Pitt et al., 2005)", |
| "ref_id": "BIBREF96" |
| }, |
| { |
| "start": 572, |
| "end": 592, |
| "text": "(Eimas et al., 1971;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 593, |
| "end": 615, |
| "text": "Werker and Tees, 1984;", |
| "ref_id": "BIBREF120" |
| }, |
| { |
| "start": 616, |
| "end": 646, |
| "text": "Kuhl et al., 1992, inter alia)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Prior to fitting, we apply a standard spectral preprocessing pipeline from automatic speech recognition: raw acoustic signals are converted into 13dimensional vectors of Mel frequency cepstral coefficients (MFCCs) (Mermelstein, 1976) with first and second order deltas, yielding 39-dimensional frames sequenced in time. Each frame covers 25ms of speech, and frames are spaced 10ms apart. The deltas are used by the encoder but stripped from the reconstruction targets. Following preceding work showing improved unsupervised clustering when segments are given fixed-dimensional acoustic representations, thus abstracting away from the variable temporal dilation in natural speech (Kamper et al., 2017a,b) , we resample all reconstruction targets to a length of 25 frames.", |
| "cite_spans": [ |
| { |
| "start": 214, |
| "end": 233, |
| "text": "(Mermelstein, 1976)", |
| "ref_id": "BIBREF83" |
| }, |
| { |
| "start": 679, |
| "end": 703, |
| "text": "(Kamper et al., 2017a,b)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "This pipeline instantiates some standard assumptions about the perceptual representations underlying human speech processing. Alternative representations -for instance, articulatory representations (Liu, 1996; Frankel and King, 2001; Kirchhoff et al., 2002; Livescu and Glass, 2004) or other spectral transforms (Zwicker, 1961; Makhoul, 1975; Hermansky, 1990; Hermansky et al., 1991; Coifman and Wickerhauser, 1992; Shao et al., 2009) -have been proposed as alternatives to MFCCs. Our results concerning perceptual availability are of course tied to our input representation, since phenomena that are poorly distinguished by MFCCs have less effect on our autoencoder loss function. Nonetheless, MFCCs are known to produce high-quality supervised speech recognizers (Zheng et al., 2001; , and we therefore leave optimization of the representation of speech features to future work. The first research question posed in the introduction was to what extent theory-driven phoneme categories emerge from a drive to model auditory percepts. We explore this question by evaluating the degree of correspondence between the autoencoder hidden states and the gold phone labels. Table 1 reports learning outcomes using the information theoretic measures homogeneity (H), completeness (C), and V-measure (V) for unsupervised cluster evaluation (Rosenberg and Hirschberg, 2007) . All three metrics range over the interval [0, 1] , with 1 indexing perfect performance. As shown in the table, our model yields dramatically better clustering performance than a random baseline that uniformly draws cluster IDs from a pool of 256 categories: we obtain 2118% and 4500% relative V-measure improvements in Xitsonga and English, respectively. At the same time, clustering performance is far from perfect. 
This result indicates that perceptual modelingan immediately-available learning signal in infant language acquisition -both (1) drives the learner a long way toward phoneme acquisition, and (2) is insufficient to fully identify phone categories in our learners. One likely explanation for the latter is evidence from cognitive science that phonotactic and lexical information (to which our learners do not have access) supplement perception as the acquisition process unfolds (Feldman et al., 2013a; Pater and Moreton, 2014) .", |
| "cite_spans": [ |
| { |
| "start": 198, |
| "end": 209, |
| "text": "(Liu, 1996;", |
| "ref_id": "BIBREF74" |
| }, |
| { |
| "start": 210, |
| "end": 233, |
| "text": "Frankel and King, 2001;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 234, |
| "end": 257, |
| "text": "Kirchhoff et al., 2002;", |
| "ref_id": "BIBREF63" |
| }, |
| { |
| "start": 258, |
| "end": 282, |
| "text": "Livescu and Glass, 2004)", |
| "ref_id": "BIBREF75" |
| }, |
| { |
| "start": 312, |
| "end": 327, |
| "text": "(Zwicker, 1961;", |
| "ref_id": "BIBREF128" |
| }, |
| { |
| "start": 328, |
| "end": 342, |
| "text": "Makhoul, 1975;", |
| "ref_id": "BIBREF77" |
| }, |
| { |
| "start": 343, |
| "end": 359, |
| "text": "Hermansky, 1990;", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 360, |
| "end": 383, |
| "text": "Hermansky et al., 1991;", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 384, |
| "end": 415, |
| "text": "Coifman and Wickerhauser, 1992;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 416, |
| "end": 434, |
| "text": "Shao et al., 2009)", |
| "ref_id": "BIBREF107" |
| }, |
| { |
| "start": 765, |
| "end": 785, |
| "text": "(Zheng et al., 2001;", |
| "ref_id": "BIBREF127" |
| }, |
| { |
| "start": 1332, |
| "end": 1364, |
| "text": "(Rosenberg and Hirschberg, 2007)", |
| "ref_id": "BIBREF105" |
| }, |
| { |
| "start": 1409, |
| "end": 1412, |
| "text": "[0,", |
| "ref_id": null |
| }, |
| { |
| "start": 1413, |
| "end": 1415, |
| "text": "1]", |
| "ref_id": null |
| }, |
| { |
| "start": 2260, |
| "end": 2283, |
| "text": "(Feldman et al., 2013a;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 2284, |
| "end": 2308, |
| "text": "Pater and Moreton, 2014)", |
| "ref_id": "BIBREF93" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1168, |
| "end": 1175, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The middle rows of Table 1 show ablation results from using non-discrete sigmoid neurons rather than BSNs in the encoding layer (Sigmoid vs. BSN) 2 and/or removing the speaker adaptation feature (i.e. removing speaker embeddings). As shown, the classification performance of our model benefits substantially from the use of BSN encodings with speaker adaptation, especially on Xitsonga. Note that the reconstruction losses of the sigmoid encoders are better than those of the BSN encoders despite their degraded classification performance. This is to be expected: sigmoid neurons have greater representational capacity than binary neurons, since they can encode information through continuous gradations. They are therefore more capable of memorizing idiosyncratic properties of the input and are less incentivized to discover generalizable latent classes. The ablation results thus suggest that speaker adaptation and categorical perception support the discovery of linguistically relevant abstractions.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 19, |
| "end": 26, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The second research question posed in the introduction was to what extent distinctive features differ in perceptual availability. We explore this question in two ways. First, we qualitatively assess the linguistic plausibility of the natural clustering in the latent bits. Figure 2 visualizes this clustering based on correlations between the average of the bit patterns across all instances of each gold phone type for both datasets. If the unsupervised classifier ignored phonological structure altogether, the plots would be roughly uniform in color, and if the unsupervised classifier perfectly identified phonemes, the plots would consist entirely of fully light or fully dark cells, with unique bit patterns associated with each phone type. As shown, the reality falls in between: while the visualized classifications are far from perfect, they nonetheless contain a great deal of structure and suggest the presence of rough natural classes in both languages, especially of affricates, nasals, sibilants, and approximants. Our learners also replicate infants' difficulty in discriminating some nasal and fricative place features (Polka et al., 2001; Nittrouer, 2001; Narayan et al., 2010) , assigning highly similar representations to many subtypes of nasals and fricatives across places of articulation (see e.g. similar mean bit patterns of /n/ vs. /n/ and /s/ vs. /s/ in both languages).", |
| "cite_spans": [ |
| { |
| "start": 1135, |
| "end": 1155, |
| "text": "(Polka et al., 2001;", |
| "ref_id": "BIBREF97" |
| }, |
| { |
| "start": 1156, |
| "end": 1172, |
| "text": "Nittrouer, 2001;", |
| "ref_id": "BIBREF92" |
| }, |
| { |
| "start": 1173, |
| "end": 1194, |
| "text": "Narayan et al., 2010)", |
| "ref_id": "BIBREF90" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 273, |
| "end": 281, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Distinctive Features Differ in Perceptual Availability", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Second, we quantitatively evaluate the degree to which theory-driven features like [\u00b1voice] are recoverable from the network's latent representations. To do so, we map gold phone labels into binary distinctive feature clusters from Hayes (2011) using Phonological CorpusTools (Hall et al., 2016) . One possible form of analysis would be to search for individual correspondences between distinctive features and the model's latent dimensions. However, this is likely to underestimate the degree of feature learning because the deep decoder can learn arbitrary logics on the latent bit patterns, a necessary property for fitting complex non-linear mappings from latent features to acoustics. We instead evaluate distinctive feature discovery by fitting random forest classifiers that predict theory-driven features using the latent bit patterns as inputs. We can then use classifier performance to assess the degree to which a given distinctive feature can be recovered by a logical statement on the network's latent bits. The classifiers were fitted using 5-fold cross-validation in Scikit-learn (Pedregosa et al., 2011) with 100 estimators, balanced class weighting, and an entropybased split criterion. Tables 2 and 3 . As shown, (1) there are large differences in perceptual availability between features, and (2) relative avail- 3 suggesting that these features are more difficult to discover bottom-up and may therefore be more dependent on phonotactic and lexical constraints for acquisition. 4 This finding aligns with the acquisition literature in suggesting that there may be substantial differences in perceptual availability between different place and manner features (see Section 2.2).", |
| "cite_spans": [ |
| { |
| "start": 276, |
| "end": 295, |
| "text": "(Hall et al., 2016)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 1095, |
| "end": 1119, |
| "text": "(Pedregosa et al., 2011)", |
| "ref_id": "BIBREF94" |
| }, |
| { |
| "start": 1332, |
| "end": 1333, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 1498, |
| "end": 1499, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1204, |
| "end": 1218, |
| "text": "Tables 2 and 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Distinctive Features Differ in Perceptual Availability", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In addition to these cross-linguistic similarities, the models also reveal important differences between Xitsonga and English. For example, the two languages differ in the relative availability of features that distinguish vowels vs. features that distinguish consonants. In English, vowel features like [\u00b1front] , [\u00b1high] , and [\u00b1back] are substantially less well learned than consonant features like [\u00b1coronal] , [\u00b1anterior] , and [\u00b1delayed release], while the opposite holds in Xitsonga. We hypothesize that this is due to the fact that there are more vowels and fewer consonants in English than in Xitsonga: having fewer distinctions might reduce the degree of \"crowding\" in the articulatory space, increasing perceptual contrast between phone types (Liljencrants and Lindblom, 1972) .", |
| "cite_spans": [ |
| { |
| "start": 304, |
| "end": 312, |
| "text": "[\u00b1front]", |
| "ref_id": null |
| }, |
| { |
| "start": 315, |
| "end": 322, |
| "text": "[\u00b1high]", |
| "ref_id": null |
| }, |
| { |
| "start": 402, |
| "end": 412, |
| "text": "[\u00b1coronal]", |
| "ref_id": null |
| }, |
| { |
| "start": 415, |
| "end": 426, |
| "text": "[\u00b1anterior]", |
| "ref_id": null |
| }, |
| { |
| "start": 754, |
| "end": 787, |
| "text": "(Liljencrants and Lindblom, 1972)", |
| "ref_id": "BIBREF73" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results are given in", |
| "sec_num": null |
| }, |
| { |
| "text": "Finally, note that the cluster maps in Figure 2 and the feature recovery data in Tables 2 and 3 provide complementary perspectives on the learned representations. For example, it may at first seem surprising that the feature [\u00b1nasal] is recovered relatively poorly in both languages, given that nasals are well clustered in Figure 2 . This discrepancy indicates that nasal segments are represented similarly to each other but also similarly enough to other segments that they are not reliably differentiated as a class. Conversely, the voicing feature is well recovered in both languages despite the lack of a visible cluster of voiced segments. This indicates that voicing is reliably encoded in the latent bits, even if the representation as a whole is dominated by other kinds of information.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 39, |
| "end": 47, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 81, |
| "end": 145, |
| "text": "Tables 2 and 3 provide complementary perspectives on the learned", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 324, |
| "end": 332, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results are given in", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, we used binary stochastic neural autoencoders to explore the perceptual availability of (1) theory-driven phonemic categories and (2) theory-driven phonological features, based only on the acoustic properties of segments. We found that phonemic categories exert substantial influence on a learner driven to model its auditory percepts, but that additional information -especially phonotactic and lexical (Feldman et al., 2013a ) -is likely necessary for full adult-like phone discrimination. We also found asymmetries in the perceptual availability of phonological features like [\u00b1voice] and [\u00b1nasal] and showed that these asymmetries reflect attested patterns of infant phone discrimination. Our model both replicates broad trends in the child acquisition literature (successful consonant-vowel and voicing discrimination, relatively less successful discrimination of various place and manner features) and sheds new light on potential relationships between auditory perception and language acquisition: the overall cline of perceptual availability revealed by the model in Tables 2 and 3 suggests a range of testable hypotheses about the role of perception in infant speech processing. ", |
| "cite_spans": [ |
| { |
| "start": 419, |
| "end": 441, |
| "text": "(Feldman et al., 2013a", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "To the best of our knowledge, the gold Xitsonga phone transcriptions provided by the Zerospeech 2015 dataset use a non-standard pronunciation alphabet that is undocumented but isomorphic to the NCHLT transcription convention. In order to extract distinctive features for the Xitsonga phone labels, we hand-mapped the Zerospeech labels onto NCHLT labels by cross-referencing the Zerospeech phone sequences, the Zerospeech orthographic word sequences, and the NCHLT pronunciation dictionary, searching for systematic correspondences between Zerospeech and NCHLT transcription practices. Once the Zerospeech-to-NCHLT mapping was obtained, we used the International Phonetic Alphabet (IPA) phone labels provided by NCHLT to look up distinctive features in the Phonological CorpusTools (PCT) feature maps (Hall et al., 2016) . Some IPA labels from NCHLT were not found in the PCT database, and for those we used the following featurization rules:", |
| "cite_spans": [ |
| { |
| "start": 800, |
| "end": 819, |
| "text": "(Hall et al., 2016)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Xitsonga Phoneme Featurization", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Consonants with palatal offglides: We used the features associated with the non-offglide consonant and switched on the approximant, dorsal, high, front, and tense features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Xitsonga Phoneme Featurization", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Aspirated consonants: We used the features associated with the non-aspirated consonant and switched on the spread glottis feature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Xitsonga Phoneme Featurization", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Ejective consonants: We used the features associated with the non-ejective consonant and switched on the constricted glottis feature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Xitsonga Phoneme Featurization", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Voiceless alveolar lateral stops: We used the features associated with voiceless alveolar lateral affricates and switched off the de-layed release feature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Xitsonga Phoneme Featurization", |
| "sec_num": null |
| }, |
| { |
| "text": "Our hand-made symbol correspondences and featurizations are distributed with this project's code repository.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Xitsonga Phoneme Featurization", |
| "sec_num": null |
| }, |
| { |
| "text": "For reference, counts of phonemes and features by corpus are plotted in Figures 3 and 4 . Note that the feature counts are generally larger because multiple features can be true of any one segment.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 72, |
| "end": 87, |
| "text": "Figures 3 and 4", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "C Phoneme and feature distributions", |
| "sec_num": null |
| }, |
| { |
| "text": "To obtain class labels from the sigmoid encoder, we rounded the activations. Rounding was only used for evaluation and had no impact on the fitting procedure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Delayed release: affricates, constricted glottis: ejectives; spread glottis: glottal frication (e.g. aspirated stops).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that we are not suggesting that e.g. [\u00b1spread glottis] cannot be detected in speech. Our claim is rather that acoustic cues to [\u00b1spread glottis] are less pronounced and/or less reliable than cues to e.g.[\u00b1voice] and therefore perhaps more difficult to exploit in early infancy, since our autoencoder model does not find them particularly useful for perceptual reconstruction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The authors would like to thank the anonymous reviewers for their helpful comments. This work was supported by National Science Foundation grant #1422987 to ME. All views expressed are those of the authors and do not necessarily reflect the views of the National Science Foundation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| }, |
| { |
| "text": "We adopt the phonological feature definitions presented in Hayes (2011) . For full exposition of the features and their motivations, we refer readers to the source. However, for convenience, we provide the following brief (and in some cases oversimplified) definitions based on Hayes (2011): \u2022 approximant:Vowels, liquids, and glides are [+approximant] , others are [-approximant] ", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 71, |
| "text": "Hayes (2011)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 338, |
| "end": 352, |
| "text": "[+approximant]", |
| "ref_id": null |
| }, |
| { |
| "start": 366, |
| "end": 380, |
| "text": "[-approximant]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Phonological feature definitions", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems", |
| "authors": [ |
| { |
| "first": "Mart\u00edn", |
| "middle": [], |
| "last": "Abadi", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Agarwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Barham", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Brevdo", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhifeng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Craig", |
| "middle": [], |
| "last": "Citro", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Andy", |
| "middle": [], |
| "last": "Davis", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthieu", |
| "middle": [], |
| "last": "Devin", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanjay", |
| "middle": [], |
| "last": "Ghemawat", |
| "suffix": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Goodfellow", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Harp", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Irving", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Isard", |
| "suffix": "" |
| }, |
| { |
| "first": "Yangqing", |
| "middle": [], |
| "last": "Jia", |
| "suffix": "" |
| }, |
| { |
| "first": "Rafal", |
| "middle": [], |
| "last": "Jozefowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Lukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Manjunath", |
| "middle": [], |
| "last": "Kudlur", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefow- icz, Lukasz Kaiser, Manjunath Kudlur, Josh Lev- enberg, Dan Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Tal- war, Paul Tucker, Vincent Vanhoucke, Vijay Vasude- van, Fernanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Automatic segmentation and clustering of speech using sparse coding and metaheuristic search", |
| "authors": [ |
| { |
| "first": "Wiehan", |
| "middle": [], |
| "last": "Agenbag", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Niesler", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Sixteenth Annual Conference of the International Speech Communication Association", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wiehan Agenbag and Thomas Niesler. 2015. Au- tomatic segmentation and clustering of speech us- ing sparse coding and metaheuristic search. In Sixteenth Annual Conference of the International Speech Communication Association.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Deep learning methods for unsupervised acoustic modeling: Leap submission to ZeroSpeech challenge 2017", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [ |
| "K" |
| ], |
| "last": "Ansari", |
| "suffix": "" |
| }, |
| { |
| "first": "Rajath", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| }, |
| { |
| "first": "Sonali", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Sriram", |
| "middle": [], |
| "last": "Ganapathy", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Automatic Speech Recognition and Understanding Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "754--761", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T K Ansari, Rajath Kumar, Sonali Singh, and Sri- ram Ganapathy. 2017a. Deep learning methods for unsupervised acoustic modelingLeap submission to ZeroSpeech challenge 2017. In Automatic Speech Recognition and Understanding Workshop (ASRU), 2017 IEEE, pages 754-761. IEEE.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Unsupervised HMM posteriograms for language independent acoustic modeling in zero resource conditions", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [ |
| "K" |
| ], |
| "last": "Ansari", |
| "suffix": "" |
| }, |
| { |
| "first": "Rajath", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| }, |
| { |
| "first": "Sonali", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Sriram", |
| "middle": [], |
| "last": "Ganapathy", |
| "suffix": "" |
| }, |
| { |
| "first": "Susheela", |
| "middle": [], |
| "last": "Devi", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Automatic Speech Recognition and Understanding Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "762--768", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T K Ansari, Rajath Kumar, Sonali Singh, Sriram Ganapathy, and Susheela Devi. 2017b. Unsuper- vised HMM posteriograms for language indepen- dent acoustic modeling in zero resource conditions. In Automatic Speech Recognition and Understand- ing Workshop (ASRU), 2017 IEEE, pages 762-768. IEEE.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Modeling phonetic category learning from natural acoustic data", |
| "authors": [ |
| { |
| "first": "Stephanie", |
| "middle": [], |
| "last": "Antetomaso", |
| "suffix": "" |
| }, |
| { |
| "first": "Kouki", |
| "middle": [], |
| "last": "Miyazawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Naomi", |
| "middle": [], |
| "last": "Feldman", |
| "suffix": "" |
| }, |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Elsner", |
| "suffix": "" |
| }, |
| { |
| "first": "Kasia", |
| "middle": [], |
| "last": "Hitczenko", |
| "suffix": "" |
| }, |
| { |
| "first": "Reiko", |
| "middle": [], |
| "last": "Mazuka", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the annual Boston University Conference on Language Development", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephanie Antetomaso, Kouki Miyazawa, Naomi Feld- man, Micha Elsner, Kasia Hitczenko, and Reiko Mazuka. 2017. Modeling phonetic category learn- ing from natural acoustic data. In Proceedings of the annual Boston University Conference on Language Development.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Discrimination of voice onset time by human infants: New findings and implications for the effects of early experience", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [ |
| "N" |
| ], |
| "last": "Aslin", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "B" |
| ], |
| "last": "Pisoni", |
| "suffix": "" |
| }, |
| { |
| "first": "Beth", |
| "middle": [ |
| "L" |
| ], |
| "last": "Hennessy", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [ |
| "J" |
| ], |
| "last": "Perey", |
| "suffix": "" |
| } |
| ], |
| "year": 1981, |
| "venue": "Child development", |
| "volume": "52", |
| "issue": "4", |
| "pages": "1135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard N Aslin, David B Pisoni, Beth L Hennessy, and Alan J Perey. 1981. Discrimination of voice on- set time by human infants: New findings and im- plications for the effects of early experience. Child development, 52(4):1135.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The Phonological Loop as a Language Learning Device", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Baddeley", |
| "suffix": "" |
| }, |
| { |
| "first": "Susan", |
| "middle": [], |
| "last": "Gathercole", |
| "suffix": "" |
| }, |
| { |
| "first": "Costanza", |
| "middle": [], |
| "last": "Papagno", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Psychological Review", |
| "volume": "105", |
| "issue": "1", |
| "pages": "158--173", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan Baddeley, Susan Gathercole, and Costanza Pa- pagno. 1998. The Phonological Loop as a Lan- guage Learning Device. Psychological Review, 105(1):158-173.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Working Memory", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [ |
| "D" |
| ], |
| "last": "Baddeley", |
| "suffix": "" |
| }, |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Hitch", |
| "suffix": "" |
| } |
| ], |
| "year": 1974, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan D Baddeley and Graham Hitch. 1974. Working Memory. University of Stirling, Stirling, Scotland.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Discovering discrete subword units with binarized autoencoders and hidden-markovmodel encoders", |
| "authors": [ |
| { |
| "first": "Leonardo", |
| "middle": [], |
| "last": "Badino", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessio", |
| "middle": [], |
| "last": "Mereta", |
| "suffix": "" |
| }, |
| { |
| "first": "Lorenzo", |
| "middle": [], |
| "last": "Rosasco", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Sixteenth Annual Conference of the International Speech Communication Association", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leonardo Badino, Alessio Mereta, and Lorenzo Rosasco. 2015. Discovering discrete subword units with binarized autoencoders and hidden-markov- model encoders. In Sixteenth Annual Conference of the International Speech Communication Associ- ation.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", |
| "authors": [ |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicholas", |
| "middle": [], |
| "last": "L\u00e9onard", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1308.3432" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoshua Bengio, Nicholas L\u00e9onard, and Aaron Courville. 2013. Estimating or propagating gradi- ents through stochastic neurons for conditional com- putation. arXiv preprint arXiv:1308.3432.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Knowledge-based parameters for HMM speech recognition", |
| "authors": [ |
| { |
| "first": "Nabil", |
| "middle": [ |
| "N" |
| ], |
| "last": "Bitar", |
| "suffix": "" |
| }, |
| { |
| "first": "Carol", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Espy-Wilson", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Acoustics, Speech, and Signal Processing, 1996. ICASSP-96. Conference Proceedings", |
| "volume": "1", |
| "issue": "", |
| "pages": "29--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nabil N Bitar and Carol Y Espy-Wilson. 1996. Knowledge-based parameters for HMM speech recognition. In Acoustics, Speech, and Signal Pro- cessing, 1996. ICASSP-96. Conference Proceed- ings., 1996 IEEE International Conference on, vol- ume 1, pages 29-32. IEEE.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Twitter evolution: converging mechanisms in birdsong and human speech", |
| "authors": [ |
| { |
| "first": "Johan", |
| "middle": [ |
| "J" |
| ], |
| "last": "Bolhuis", |
| "suffix": "" |
| }, |
| { |
| "first": "Kazuo", |
| "middle": [], |
| "last": "Okanoya", |
| "suffix": "" |
| }, |
| { |
| "first": "Constance", |
| "middle": [], |
| "last": "Scharff", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Nature Reviews Neuroscience", |
| "volume": "11", |
| "issue": "11", |
| "pages": "747", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johan J Bolhuis, Kazuo Okanoya, and Constance Scharff. 2010. Twitter evolution: converging mech- anisms in birdsong and human speech. Nature Re- views Neuroscience, 11(11):747.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Parallel inference of Dirichlet process Gaussian mixture models for unsupervised acoustic modeling: A feasibility study", |
| "authors": [ |
| { |
| "first": "Hongjie", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Cheung-Chi", |
| "middle": [], |
| "last": "Leung", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Bin", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Haizhou", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Sixteenth Annual Conference of the International Speech Communication Association", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hongjie Chen, Cheung-Chi Leung, Lei Xie, Bin Ma, and Haizhou Li. 2015. Parallel inference of Dirich- let process Gaussian mixture models for unsuper- vised acoustic modeling: A feasibility study. In Sixteenth Annual Conference of the International Speech Communication Association.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Multilingual bottle-neck feature learning from untranscribed speech", |
| "authors": [ |
| { |
| "first": "Hongjie", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Cheung-Chi", |
| "middle": [], |
| "last": "Leung", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Bin", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Haizhou", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Automatic Speech Recognition and Understanding Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "727--733", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hongjie Chen, Cheung-Chi Leung, Lei Xie, Bin Ma, and Haizhou Li. 2017. Multilingual bottle-neck fea- ture learning from untranscribed speech. In Auto- matic Speech Recognition and Understanding Work- shop (ASRU), 2017 IEEE, pages 727-733. IEEE.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Toward the logical description of languages in their phonemic aspect. Language", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [ |
| "Colin" |
| ], |
| "last": "Cherry", |
| "suffix": "" |
| }, |
| { |
| "first": "Morris", |
| "middle": [], |
| "last": "Halle", |
| "suffix": "" |
| }, |
| { |
| "first": "Roman", |
| "middle": [], |
| "last": "Jakobson", |
| "suffix": "" |
| } |
| ], |
| "year": 1953, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "34--46", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E Colin Cherry, Morris Halle, and Roman Jakobson. 1953. Toward the logical description of languages in their phonemic aspect. Language, pages 34-46.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "The perceptual basis of the feature vowel height", |
| "authors": [ |
| { |
| "first": "Katerina", |
| "middle": [], |
| "last": "Chl\u00e1dkov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Boersma", |
| "suffix": "" |
| }, |
| { |
| "first": "Titia", |
| "middle": [], |
| "last": "Benders", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "ICPhS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katerina Chl\u00e1dkov\u00e1, Paul Boersma, Titia Benders, and others. 2015. The perceptual basis of the feature vowel height. In ICPhS.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "The Sound Pattern of English", |
| "authors": [ |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Chomsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Morris", |
| "middle": [], |
| "last": "Halle", |
| "suffix": "" |
| } |
| ], |
| "year": 1968, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noam Chomsky and Morris Halle. 1968. The Sound Pattern of English. Harper \\& Row.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Hierarchical Multiscale Recurrent Neural Networks", |
| "authors": [ |
| { |
| "first": "Junyoung", |
| "middle": [], |
| "last": "Chung", |
| "suffix": "" |
| }, |
| { |
| "first": "Sungjin", |
| "middle": [], |
| "last": "Ahn", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2017. Hierarchical Multiscale Recurrent Neural Networks. In International Conference on Learning Representations 2017.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Rapid adaptation to foreign-accented English", |
| "authors": [ |
| { |
| "first": "Constance", |
| "middle": [ |
| "M" |
| ], |
| "last": "Clarke", |
| "suffix": "" |
| }, |
| { |
| "first": "Merrill", |
| "middle": [ |
| "F" |
| ], |
| "last": "Garrett", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "The Journal of the Acoustical Society of America", |
| "volume": "116", |
| "issue": "", |
| "pages": "3647--3658", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Constance M Clarke and Merrill F Garrett. 2004. Rapid adaptation to foreign-accented English. The Journal of the Acoustical Society of America, 116(6):3647-3658.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "The geometry of phonological features", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "N" |
| ], |
| "last": "Clements", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "Phonology", |
| "volume": "2", |
| "issue": "1", |
| "pages": "225--252", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George N Clements. 1985. The geometry of phonolog- ical features. Phonology, 2(1):225-252.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Entropy-based algorithms for best basis selection", |
| "authors": [ |
| { |
| "first": "Ronald", |
| "middle": [ |
| "R" |
| ], |
| "last": "Coifman", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "Victor" |
| ], |
| "last": "Wickerhauser", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "IEEE Transactions on information theory", |
| "volume": "38", |
| "issue": "2", |
| "pages": "713--718", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronald R Coifman and M Victor Wickerhauser. 1992. Entropy-based algorithms for best basis se- lection. IEEE Transactions on information theory, 38(2):713-718.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1", |
| "authors": [ |
| { |
| "first": "Matthieu", |
| "middle": [], |
| "last": "Courbariaux", |
| "suffix": "" |
| }, |
| { |
| "first": "Itay", |
| "middle": [], |
| "last": "Hubara", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Soudry", |
| "suffix": "" |
| }, |
| { |
| "first": "Ran", |
| "middle": [], |
| "last": "El-Yaniv", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1602.02830" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2016. Bina- rized neural networks: Training deep neural net- works with weights and activations constrained to+ 1 or-1. arXiv preprint arXiv:1602.02830.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Effects of the distribution of acoustic cues on infants' perception of sibilants", |
| "authors": [ |
| { |
| "first": "Alejandrina", |
| "middle": [], |
| "last": "Cristi\u00e0", |
| "suffix": "" |
| }, |
| { |
| "first": "Grant", |
| "middle": [ |
| "L" |
| ], |
| "last": "McGuire", |
| "suffix": "" |
| }, |
| { |
| "first": "Amanda", |
| "middle": [], |
| "last": "Seidl", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "L" |
| ], |
| "last": "Francis", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of phonetics", |
| "volume": "39", |
| "issue": "3", |
| "pages": "388--402", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alejandrina Cristi\u00e0, Grant L McGuire, Amanda Seidl, and Alexander L Francis. 2011. Effects of the dis- tribution of acoustic cues on infants' perception of sibilants. Journal of phonetics, 39(3):388-402.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "A smartphone-based ASR data collection tool for under-resourced languages", |
| "authors": [ |
| { |
| "first": "Nic", |
| "middle": [ |
| "J" |
| ], |
| "last": "De Vries", |
| "suffix": "" |
| }, |
| { |
| "first": "Marelie", |
| "middle": [ |
| "H" |
| ], |
| "last": "Davel", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaco", |
| "middle": [], |
| "last": "Badenhorst", |
| "suffix": "" |
| }, |
| { |
| "first": "Willem", |
| "middle": [ |
| "D" |
| ], |
| "last": "Basson", |
| "suffix": "" |
| }, |
| { |
| "first": "Febe", |
| "middle": [], |
| "last": "De Wet", |
| "suffix": "" |
| }, |
| { |
| "first": "Etienne", |
| "middle": [], |
| "last": "Barnard", |
| "suffix": "" |
| }, |
| { |
| "first": "Alta", |
| "middle": [], |
| "last": "De Waal", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Speech Communication", |
| "volume": "56", |
| "issue": "", |
| "pages": "119--131", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nic J De Vries, Marelie H Davel, Jaco Badenhorst, Willem D Basson, Febe De Wet, Etienne Barnard, and Alta De Waal. 2014. A smartphone-based ASR data collection tool for under-resourced languages. Speech communication, 56:119-131.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Speed and cerebral correlates of syllable discrimination in infants", |
| "authors": [ |
| { |
| "first": "Ghislaine", |
| "middle": [], |
| "last": "Dehaene-Lambertz", |
| "suffix": "" |
| }, |
| { |
| "first": "Stanislas", |
| "middle": [], |
| "last": "Dehaene", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Nature", |
| "volume": "370", |
| "issue": "6487", |
| "pages": "292", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ghislaine Dehaene-Lambertz and Stanislas Dehaene. 1994. Speed and cerebral correlates of syllable dis- crimination in infants. Nature, 370(6487):292.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Nonparametric learning of phonological constraints in optimality theory", |
| "authors": [ |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Doyle", |
| "suffix": "" |
| }, |
| { |
| "first": "Klinton", |
| "middle": [], |
| "last": "Bicknell", |
| "suffix": "" |
| }, |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1094--1103", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gabriel Doyle, Klinton Bicknell, and Roger Levy. 2014. Nonparametric learning of phonological con- straints in optimality theory. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 1094-1103.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Data-driven learning of symbolic constraints for a log-linear model in a phonological setting", |
| "authors": [ |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Doyle", |
| "suffix": "" |
| }, |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "2217--2226", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gabriel Doyle and Roger Levy. 2016. Data-driven learning of symbolic constraints for a log-linear model in a phonological setting. In Proceedings of COLING 2016, the 26th International Confer- ence on Computational Linguistics: Technical Pa- pers, pages 2217-2226.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "The zero resource speech challenge 2017", |
| "authors": [ |
| { |
| "first": "Ewan", |
| "middle": [], |
| "last": "Dunbar", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuan", |
| "middle": [ |
| "Nga" |
| ], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": "Juan", |
| "middle": [], |
| "last": "Benjumea", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Karadayi", |
| "suffix": "" |
| }, |
| { |
| "first": "Mathieu", |
| "middle": [], |
| "last": "Bernard", |
| "suffix": "" |
| }, |
| { |
| "first": "Laurent", |
| "middle": [], |
| "last": "Besacier", |
| "suffix": "" |
| }, |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Anguera", |
| "suffix": "" |
| }, |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Automatic Speech Recognition and Understanding Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "323--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ewan Dunbar, Xuan Nga Cao, Juan Benjumea, Julien Karadayi, Mathieu Bernard, Laurent Be- sacier, Xavier Anguera, and Emmanuel Dupoux. 2017. The zero resource speech challenge 2017. In Automatic Speech Recognition and Understand- ing Workshop (ASRU), 2017 IEEE, pages 323-330. IEEE.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "On the relation between maximum spectral transition positions and phone boundaries", |
| "authors": [ |
| { |
| "first": "Sorin", |
| "middle": [], |
| "last": "Dusan", |
| "suffix": "" |
| }, |
| { |
| "first": "Lawrence", |
| "middle": [], |
| "last": "Rabiner", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Ninth International Conference on Spoken Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sorin Dusan and Lawrence Rabiner. 2006. On the rela- tion between maximum spectral transition positions and phone boundaries. In Ninth International Con- ference on Spoken Language Processing.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "On infant speech perception and the acquisition of language", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [ |
| "D" |
| ], |
| "last": "Eimas", |
| "suffix": "" |
| }, |
| { |
| "first": "Joanne", |
| "middle": [ |
| "L" |
| ], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [ |
| "W" |
| ], |
| "last": "Jusczyk", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "161--195", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter D. Eimas, Joanne L. Miller, and Peter W. Jusczyk. 1987. On infant speech perception and the acquisi- tion of language. In Stevan Harnad, editor, Categor- ical perception: The groundwork of cognition, pages 161-195. Cambridge University Press, New York.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Speech perception in infants", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [ |
| "D" |
| ], |
| "last": "Eimas", |
| "suffix": "" |
| }, |
| { |
| "first": "Einar", |
| "middle": [ |
| "R" |
| ], |
| "last": "Siqueland", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Jusczyk", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Vigorito", |
| "suffix": "" |
| } |
| ], |
| "year": 1971, |
| "venue": "Science", |
| "volume": "171", |
| "issue": "3968", |
| "pages": "303--306", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter D Eimas, Einar R Siqueland, Peter Jusczyk, and James Vigorito. 1971. Speech perception in infants. Science, 171(3968):303-306.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Speech segmentation with a neural encoder model of working memory", |
| "authors": [ |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Elsner", |
| "suffix": "" |
| }, |
| { |
| "first": "Cory", |
| "middle": [], |
| "last": "Shain", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1070--1080", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Micha Elsner and Cory Shain. 2017. Speech segmen- tation with a neural encoder model of working mem- ory. In Proceedings of the 2017 Conference on Em- pirical Methods in Natural Language Processing, pages 1070-1080.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Speech listening specifically modulates the excitability of tongue muscles: a TMS study", |
| "authors": [ |
| { |
| "first": "Luciano", |
| "middle": [], |
| "last": "Fadiga", |
| "suffix": "" |
| }, |
| { |
| "first": "Laila", |
| "middle": [], |
| "last": "Craighero", |
| "suffix": "" |
| }, |
| { |
| "first": "Giovanni", |
| "middle": [], |
| "last": "Buccino", |
| "suffix": "" |
| }, |
| { |
| "first": "Giacomo", |
| "middle": [], |
| "last": "Rizzolatti", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "European Journal of Neuroscience", |
| "volume": "15", |
| "issue": "2", |
| "pages": "399--402", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luciano Fadiga, Laila Craighero, Giovanni Buccino, and Giacomo Rizzolatti. 2002. Speech listening specifically modulates the excitability of tongue muscles: a TMS study. European Journal of Neuroscience, 15(2):399-402.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Symbolic representation of probabilistic worlds", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Feldman", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Cognition", |
| "volume": "123", |
| "issue": "1", |
| "pages": "61--83", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Feldman. 2012. Symbolic representation of probabilistic worlds. Cognition, 123(1):61-83.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Learning phonetic categories by learning a lexicon", |
| "authors": [ |
| { |
| "first": "Naomi", |
| "middle": [], |
| "last": "Feldman", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Griffiths", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Morgan", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Annual Meeting of the Cognitive Science Society", |
| "volume": "31", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Naomi Feldman, Thomas Griffiths, and James Morgan. 2009a. Learning phonetic categories by learning a lexicon. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 31.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "A role for the developing lexicon in phonetic category acquisition", |
| "authors": [ |
| { |
| "first": "Naomi", |
| "middle": [ |
| "H" |
| ], |
| "last": "Feldman", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [ |
| "L" |
| ], |
| "last": "Griffiths", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "L" |
| ], |
| "last": "Morgan", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Psychological Review", |
| "volume": "120", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Naomi H Feldman, Thomas L Griffiths, Sharon Goldwater, and James L Morgan. 2013a. A role for the developing lexicon in phonetic category acquisition. Psychological Review, 120(4):751.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "The influence of categories on perception: Explaining the perceptual magnet effect as optimal statistical inference", |
| "authors": [ |
| { |
| "first": "Naomi", |
| "middle": [ |
| "H" |
| ], |
| "last": "Feldman", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [ |
| "L" |
| ], |
| "last": "Griffiths", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "L" |
| ], |
| "last": "Morgan", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Psychological Review", |
| "volume": "116", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Naomi H Feldman, Thomas L Griffiths, and James L Morgan. 2009b. The influence of categories on perception: Explaining the perceptual magnet effect as optimal statistical inference. Psychological Review, 116(4):752.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Word-level information influences phonetic learning in adults and infants", |
| "authors": [ |
| { |
| "first": "Naomi", |
| "middle": [ |
| "H" |
| ], |
| "last": "Feldman", |
| "suffix": "" |
| }, |
| { |
| "first": "Emily", |
| "middle": [ |
| "B" |
| ], |
| "last": "Myers", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [ |
| "S" |
| ], |
| "last": "White", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [ |
| "L" |
| ], |
| "last": "Griffiths", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "L" |
| ], |
| "last": "Morgan", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Cognition", |
| "volume": "127", |
| "issue": "3", |
| "pages": "427--438", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Naomi H Feldman, Emily B Myers, Katherine S White, Thomas L Griffiths, and James L Morgan. 2013b. Word-level information influences phonetic learning in adults and infants. Cognition, 127(3):427-438.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Weak semantic context helps phonetic learning in a model of infant language acquisition", |
| "authors": [ |
| { |
| "first": "Stella", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "Naomi", |
| "middle": [ |
| "H" |
| ], |
| "last": "Feldman", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1073--1083", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stella Frank, Naomi H Feldman, and Sharon Goldwater. 2014. Weak semantic context helps phonetic learning in a model of infant language acquisition. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1073-1083.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "ASR-articulatory speech recognition", |
| "authors": [ |
| { |
| "first": "Joe", |
| "middle": [], |
| "last": "Frankel", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "King", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Seventh European Conference on Speech Communication and Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joe Frankel and Simon King. 2001. ASR-articulatory speech recognition. In Seventh European Conference on Speech Communication and Technology.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Phonological CorpusTools: A free, open-source tool for phonological analysis", |
| "authors": [ |
| { |
| "first": "Kathleen Currie", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Blake", |
| "middle": [], |
| "last": "Allen", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Fry", |
| "suffix": "" |
| }, |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Mackie", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Mcauliffe", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "14th Conference for Laboratory Phonology", |
| "volume": "543", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kathleen Currie Hall, Blake Allen, Michael Fry, Scott Mackie, and Michael McAuliffe. 2016. Phonological CorpusTools: A free, open-source tool for phonological analysis. In 14th Conference for Laboratory Phonology, volume 543.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Categorical Perception", |
| "authors": [ |
| { |
| "first": "Stevan", |
| "middle": [], |
| "last": "Harnad", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Encyclopedia of Cognitive Science", |
| "volume": "67", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stevan Harnad. 2003. Categorical Perception. In Encyclopedia of Cognitive Science, volume 67. MacMillan: Nature Publishing Group.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Introductory phonology", |
| "authors": [ |
| { |
| "first": "Bruce", |
| "middle": [], |
| "last": "Hayes", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "32", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bruce Hayes. 2011. Introductory phonology, volume 32. John Wiley & Sons, Hoboken.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "The development of phonemic categorization in children aged 6-12", |
| "authors": [ |
| { |
| "first": "Valerie", |
| "middle": [], |
| "last": "Hazan", |
| "suffix": "" |
| }, |
| { |
| "first": "Sarah", |
| "middle": [], |
| "last": "Barrett", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Journal of Phonetics", |
| "volume": "28", |
| "issue": "4", |
| "pages": "377--396", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Valerie Hazan and Sarah Barrett. 2000. The development of phonemic categorization in children aged 6-12. Journal of Phonetics, 28(4):377-396.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Deep residual learning for image recognition", |
| "authors": [ |
| { |
| "first": "Kaiming", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiangyu", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaoqing", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "770--778", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Unsupervised linear discriminant analysis for supporting DPGMM clustering in the zero resource scenario", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Heck", |
| "suffix": "" |
| }, |
| { |
| "first": "Sakriani", |
| "middle": [], |
| "last": "Sakti", |
| "suffix": "" |
| }, |
| { |
| "first": "Satoshi", |
| "middle": [], |
| "last": "Nakamura", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Procedia Computer Science", |
| "volume": "81", |
| "issue": "", |
| "pages": "73--79", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Heck, Sakriani Sakti, and Satoshi Nakamura. 2016. Unsupervised linear discriminant analysis for supporting DPGMM clustering in the zero resource scenario. Procedia Computer Science, 81:73-79.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Feature optimized DPGMM clustering for unsupervised subword modeling: A contribution to ZeroSpeech 2017", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Heck", |
| "suffix": "" |
| }, |
| { |
| "first": "Sakriani", |
| "middle": [], |
| "last": "Sakti", |
| "suffix": "" |
| }, |
| { |
| "first": "Satoshi", |
| "middle": [], |
| "last": "Nakamura", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Automatic Speech Recognition and Understanding Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "740--746", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Heck, Sakriani Sakti, and Satoshi Nakamura. 2017. Feature optimized DPGMM clustering for unsupervised subword modeling: A contribution to ZeroSpeech 2017. In Automatic Speech Recognition and Understanding Workshop (ASRU), 2017 IEEE, pages 740-746. IEEE.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "Perceptual linear predictive (PLP) analysis of speech", |
| "authors": [ |
| { |
| "first": "Hynek", |
| "middle": [], |
| "last": "Hermansky", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "The Journal of the Acoustical Society of America", |
| "volume": "87", |
| "issue": "4", |
| "pages": "1738--1752", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hynek Hermansky. 1990. Perceptual linear predictive (PLP) analysis of speech. The Journal of the Acoustical Society of America, 87(4):1738-1752.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Compensation for the effect of the communication channel in auditory-like analysis of speech (RASTA-PLP)", |
| "authors": [ |
| { |
| "first": "Hynek", |
| "middle": [], |
| "last": "Hermansky", |
| "suffix": "" |
| }, |
| { |
| "first": "Nelson", |
| "middle": [], |
| "last": "Morgan", |
| "suffix": "" |
| }, |
| { |
| "first": "Aruna", |
| "middle": [], |
| "last": "Bayya", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Kohn", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Second European Conference on Speech Communication and Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hynek Hermansky, Nelson Morgan, Aruna Bayya, and Phil Kohn. 1991. Compensation for the effect of the communication channel in auditory-like analysis of speech (RASTA-PLP). In Second European Conference on Speech Communication and Technology.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Learning to contend with accents in infancy: Benefits of brief speaker exposure", |
| "authors": [ |
| { |
| "first": "Marieke", |
| "middle": [], |
| "last": "van Heugten", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [ |
| "K" |
| ], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Journal of Experimental Psychology: General", |
| "volume": "143", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marieke van Heugten and Elizabeth K Johnson. 2014. Learning to contend with accents in infancy: Benefits of brief speaker exposure. Journal of Experimental Psychology: General, 143(1):340.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Perception of feature similarities by infants", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Hillenbrand", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "Journal of Speech & Hearing Research", |
| "volume": "28", |
| "issue": "2", |
| "pages": "317--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Hillenbrand. 1985. Perception of feature similarities by infants. Journal of Speech & Hearing Research, 28(2):317-318.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "Neural Networks for Machine Learning. Coursera, video lectures", |
| "authors": [ |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Geoffrey Hinton. 2012. Neural Networks for Machine Learning. Coursera, video lectures.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "Deep neural networks for acoustic modeling in speech recognition", |
| "authors": [ |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "Dong", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Dahl", |
| "suffix": "" |
| }, |
| { |
| "first": "Abdel-Rahman", |
| "middle": [], |
| "last": "Mohamed", |
| "suffix": "" |
| }, |
| { |
| "first": "Navdeep", |
| "middle": [], |
| "last": "Jaitly", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Senior", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Vanhoucke", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Kingsbury", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "IEEE Signal Processing Magazine", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-Rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Brian Kingsbury, and others. 2012. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 29.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "Consonants and vowels: different roles in early language acquisition", |
| "authors": [ |
| { |
| "first": "Jean-R\u00e9my", |
| "middle": [], |
| "last": "Hochmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvia", |
| "middle": [], |
| "last": "Benavides-Varela", |
| "suffix": "" |
| }, |
| { |
| "first": "Marina", |
| "middle": [], |
| "last": "Nespor", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacques", |
| "middle": [], |
| "last": "Mehler", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Developmental Science", |
| "volume": "14", |
| "issue": "6", |
| "pages": "1445--1458", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jean-R\u00e9my Hochmann, Silvia Benavides-Varela, Marina Nespor, and Jacques Mehler. 2011. Consonants and vowels: different roles in early language acquisition. Developmental Science, 14(6):1445-1458.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "Sensorimotor adaptation in speech production", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "F" |
| ], |
| "last": "Houde", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "I" |
| ], |
| "last": "Jordan", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Science", |
| "volume": "279", |
| "issue": "5354", |
| "pages": "1213--1216", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John F Houde and Michael I Jordan. 1998. Sensorimotor adaptation in speech production. Science, 279(5354):1213-1216.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", |
| "authors": [ |
| { |
| "first": "Sergey", |
| "middle": [], |
| "last": "Ioffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Szegedy", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "448--456", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sergey Ioffe and Christian Szegedy. 2015. Batch Nor- malization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In International Conference on Machine Learning, pages 448-456.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "Preliminaries to speech analysis: The distinctive features and their correlates", |
| "authors": [ |
| { |
| "first": "Roman", |
| "middle": [], |
| "last": "Jakobson", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "Gunnar" |
| ], |
| "last": "Fant", |
| "suffix": "" |
| }, |
| { |
| "first": "Morris", |
| "middle": [], |
| "last": "Halle", |
| "suffix": "" |
| } |
| ], |
| "year": 1951, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roman Jakobson, C Gunnar Fant, and Morris Halle. 1951. Preliminaries to speech analysis: The distinctive features and their correlates. MIT Press.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "Towards unsupervised training of speaker independent acoustic models", |
| "authors": [ |
| { |
| "first": "Aren", |
| "middle": [], |
| "last": "Jansen", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenneth", |
| "middle": [], |
| "last": "Church", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Twelfth Annual Conference of the International Speech Communication Association", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aren Jansen and Kenneth Church. 2011. Towards unsupervised training of speaker independent acoustic models. In Twelfth Annual Conference of the International Speech Communication Association.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "Representation of speech sounds by young infants", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [ |
| "W" |
| ], |
| "last": "Jusczyk", |
| "suffix": "" |
| }, |
| { |
| "first": "Carolyn", |
| "middle": [], |
| "last": "Derrah", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Developmental Psychology", |
| "volume": "23", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter W Jusczyk and Carolyn Derrah. 1987. Representation of speech sounds by young infants. Developmental Psychology, 23(5):648.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "Unsupervised neural network based feature extraction using weak top-down constraints", |
| "authors": [ |
| { |
| "first": "Herman", |
| "middle": [], |
| "last": "Kamper", |
| "suffix": "" |
| }, |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Elsner", |
| "suffix": "" |
| }, |
| { |
| "first": "Aren", |
| "middle": [], |
| "last": "Jansen", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on", |
| "volume": "", |
| "issue": "", |
| "pages": "5818--5822", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Herman Kamper, Micha Elsner, Aren Jansen, and Sharon Goldwater. 2015. Unsupervised neural network based feature extraction using weak top-down constraints. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 5818-5822. IEEE.", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "A segmental framework for fully-unsupervised large-vocabulary speech recognition", |
| "authors": [ |
| { |
| "first": "Herman", |
| "middle": [], |
| "last": "Kamper", |
| "suffix": "" |
| }, |
| { |
| "first": "Aren", |
| "middle": [], |
| "last": "Jansen", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Computer Speech & Language", |
| "volume": "46", |
| "issue": "", |
| "pages": "154--174", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Herman Kamper, Aren Jansen, and Sharon Goldwater. 2017a. A segmental framework for fully-unsupervised large-vocabulary speech recognition. Computer Speech & Language, 46:154-174.", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "An embedded segmental k-means model for unsupervised segmentation and clustering of speech", |
| "authors": [ |
| { |
| "first": "Herman", |
| "middle": [], |
| "last": "Kamper", |
| "suffix": "" |
| }, |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "Livescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Automatic Speech Recognition and Understanding Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "719--726", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Herman Kamper, Karen Livescu, and Sharon Goldwater. 2017b. An embedded segmental k-means model for unsupervised segmentation and clustering of speech. In Automatic Speech Recognition and Understanding Workshop (ASRU), 2017 IEEE, pages 719-726. IEEE.", |
| "links": null |
| }, |
| "BIBREF62": { |
| "ref_id": "b62", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [ |
| "P" |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6.", |
| "links": null |
| }, |
| "BIBREF63": { |
| "ref_id": "b63", |
| "title": "Combining acoustic and articulatory feature information for robust speech recognition", |
| "authors": [ |
| { |
| "first": "Katrin", |
| "middle": [], |
| "last": "Kirchhoff", |
| "suffix": "" |
| }, |
| { |
| "first": "Gernot", |
| "middle": [ |
| "A" |
| ], |
| "last": "Fink", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerhard", |
| "middle": [], |
| "last": "Sagerer", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Speech Communication", |
| "volume": "37", |
| "issue": "3-4", |
| "pages": "303--319", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katrin Kirchhoff, Gernot A Fink, and Gerhard Sagerer. 2002. Combining acoustic and articulatory feature information for robust speech recognition. Speech Communication, 37(3-4):303-319.", |
| "links": null |
| }, |
| "BIBREF64": { |
| "ref_id": "b64", |
| "title": "The emergence of phonetic-phonological features in a biologically inspired model of speech processing", |
| "authors": [ |
| { |
| "first": "Bernd", |
| "middle": [ |
| "J" |
| ], |
| "last": "Kr\u00f6ger", |
| "suffix": "" |
| }, |
| { |
| "first": "Mengxue", |
| "middle": [], |
| "last": "Cao", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Journal of Phonetics", |
| "volume": "53", |
| "issue": "", |
| "pages": "88--100", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernd J Kr\u00f6ger and Mengxue Cao. 2015. The emergence of phonetic-phonological features in a biologically inspired model of speech processing. Journal of Phonetics, 53:88-100.", |
| "links": null |
| }, |
| "BIBREF65": { |
| "ref_id": "b65", |
| "title": "Towards a neurocomputational model of speech production and perception", |
| "authors": [ |
| { |
| "first": "Bernd", |
| "middle": [ |
| "J" |
| ], |
| "last": "Kr\u00f6ger", |
| "suffix": "" |
| }, |
| { |
| "first": "Jim", |
| "middle": [], |
| "last": "Kannampuzha", |
| "suffix": "" |
| }, |
| { |
| "first": "Christiane", |
| "middle": [], |
| "last": "Neuschaefer-Rube", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Speech Communication", |
| "volume": "51", |
| "issue": "9", |
| "pages": "793--809", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernd J Kr\u00f6ger, Jim Kannampuzha, and Christiane Neuschaefer-Rube. 2009. Towards a neurocomputational model of speech production and perception. Speech Communication, 51(9):793-809.", |
| "links": null |
| }, |
| "BIBREF66": { |
| "ref_id": "b66", |
| "title": "Speech perception in early infancy: Perceptual constancy for spectrally dissimilar vowel categories", |
| "authors": [ |
| { |
| "first": "Patricia", |
| "middle": [ |
| "K" |
| ], |
| "last": "Kuhl", |
| "suffix": "" |
| } |
| ], |
| "year": 1979, |
| "venue": "The Journal of the Acoustical Society of America", |
| "volume": "66", |
| "issue": "6", |
| "pages": "1668--1679", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patricia K Kuhl. 1979. Speech perception in early infancy: Perceptual constancy for spectrally dissimilar vowel categories. The Journal of the Acoustical Society of America, 66(6):1668-1679.", |
| "links": null |
| }, |
| "BIBREF67": { |
| "ref_id": "b67", |
| "title": "Perceptual constancy for speech-sound categories in early infancy", |
| "authors": [ |
| { |
| "first": "Patricia", |
| "middle": [ |
| "K" |
| ], |
| "last": "Kuhl", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "Child phonology", |
| "volume": "2", |
| "issue": "", |
| "pages": "41--66", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patricia K Kuhl. 1980. Perceptual constancy for speech-sound categories in early infancy. Child phonology, 2:41-66.", |
| "links": null |
| }, |
| "BIBREF68": { |
| "ref_id": "b68", |
| "title": "Linguistic experience alters phonetic perception in infants by 6 months of age", |
| "authors": [ |
| { |
| "first": "Patricia", |
| "middle": [ |
| "K" |
| ], |
| "last": "Kuhl", |
| "suffix": "" |
| }, |
| { |
| "first": "Karen", |
| "middle": [ |
| "A" |
| ], |
| "last": "Williams", |
| "suffix": "" |
| }, |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Lacerda", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenneth", |
| "middle": [ |
| "N" |
| ], |
| "last": "Stevens", |
| "suffix": "" |
| }, |
| { |
| "first": "Bj\u00f6rn", |
| "middle": [], |
| "last": "Lindblom", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Science", |
| "volume": "255", |
| "issue": "5044", |
| "pages": "606--608", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patricia K Kuhl, Karen A Williams, Francisco Lacerda, Kenneth N Stevens, and Bj\u00f6rn Lindblom. 1992. Linguistic experience alters phonetic perception in infants by 6 months of age. Science, 255(5044):606-608.", |
| "links": null |
| }, |
| "BIBREF69": { |
| "ref_id": "b69", |
| "title": "VOT discrimination by four to six and a half month old infants from Spanish environments", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [ |
| "E" |
| ], |
| "last": "Lasky", |
| "suffix": "" |
| }, |
| { |
| "first": "Ann", |
| "middle": [], |
| "last": "Syrdal-Lasky", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "E" |
| ], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 1975, |
| "venue": "Journal of Experimental Child Psychology", |
| "volume": "20", |
| "issue": "2", |
| "pages": "215--225", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert E Lasky, Ann Syrdal-Lasky, and Robert E Klein. 1975. VOT discrimination by four to six and a half month old infants from Spanish environments. Journal of Experimental Child Psychology, 20(2):215-225.", |
| "links": null |
| }, |
| "BIBREF70": { |
| "ref_id": "b70", |
| "title": "Deep learning", |
| "authors": [ |
| { |
| "first": "Yann", |
| "middle": [], |
| "last": "Lecun", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Nature", |
| "volume": "521", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, 521(7553):436.", |
| "links": null |
| }, |
| "BIBREF71": { |
| "ref_id": "b71", |
| "title": "A Nonparametric Bayesian Approach to Acoustic Model Discovery", |
| "authors": [ |
| { |
| "first": "Chia-Ying", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Glass", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "40--49", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chia-ying Lee and James Glass. 2012. A Nonparamet- ric Bayesian Approach to Acoustic Model Dis- covery. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics, pages 40-49.", |
| "links": null |
| }, |
| "BIBREF72": { |
| "ref_id": "b72", |
| "title": "An effect of learning on speech perception: The discrimination of durations of silence with and without phonemic significance", |
| "authors": [ |
| { |
| "first": "Alvin", |
| "middle": [], |
| "last": "Liberman", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [ |
| "Safford" |
| ], |
| "last": "Harris", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Eimas", |
| "suffix": "" |
| }, |
| { |
| "first": "Leigh", |
| "middle": [], |
| "last": "Lisker", |
| "suffix": "" |
| }, |
| { |
| "first": "Jarvis", |
| "middle": [], |
| "last": "Bastian", |
| "suffix": "" |
| } |
| ], |
| "year": 1961, |
| "venue": "Language and Speech", |
| "volume": "4", |
| "issue": "4", |
| "pages": "175--195", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alvin Liberman, Katherine Safford Harris, Peter Eimas, Leigh Lisker, and Jarvis Bastian. 1961. An effect of learning on speech perception: The dis- crimination of durations of silence with and with- out phonemic significance. Language and Speech, 4(4):175-195.", |
| "links": null |
| }, |
| "BIBREF73": { |
| "ref_id": "b73", |
| "title": "Numerical Simulation of Vowel Quality Systems: The Role of Perceptual Contrast", |
| "authors": [ |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Liljencrants", |
| "suffix": "" |
| }, |
| { |
| "first": "Bj\u00f6rn", |
| "middle": [], |
| "last": "Lindblom", |
| "suffix": "" |
| } |
| ], |
| "year": 1972, |
| "venue": "Language", |
| "volume": "48", |
| "issue": "4", |
| "pages": "839--862", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johan Liljencrants and Bj\u00f6rn Lindblom. 1972. Numer- ical Simulation of Vowel Quality Systems: The Role of Perceptual Contrast. Language, 48(4):839-862.", |
| "links": null |
| }, |
| "BIBREF74": { |
| "ref_id": "b74", |
| "title": "Landmark detection for distinctive feature-based speech recognition", |
| "authors": [ |
| { |
| "first": "Sharlene", |
| "middle": [ |
| "A" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "The Journal of the Acoustical Society of America", |
| "volume": "100", |
| "issue": "5", |
| "pages": "3417--3430", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sharlene A Liu. 1996. Landmark detection for distinc- tive feature-based speech recognition. The Journal of the Acoustical Society of America, 100(5):3417- 3430.", |
| "links": null |
| }, |
| "BIBREF75": { |
| "ref_id": "b75", |
| "title": "Feature-based pronunciation modeling for speech recognition", |
| "authors": [ |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "Livescu", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Glass", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of HLT-NAACL 2004: Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "81--84", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karen Livescu and James Glass. 2004. Feature-based pronunciation modeling for speech recognition. In Proceedings of HLT-NAACL 2004: Short Papers, pages 81-84. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF76": { |
| "ref_id": "b76", |
| "title": "An evaluation of graph clustering methods for unsupervised term discovery", |
| "authors": [ |
| { |
| "first": "Vince", |
| "middle": [], |
| "last": "Lyzinski", |
| "suffix": "" |
| }, |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Sell", |
| "suffix": "" |
| }, |
| { |
| "first": "Aren", |
| "middle": [], |
| "last": "Jansen", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Sixteenth Annual Conference of the International Speech Communication Association", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vince Lyzinski, Gregory Sell, and Aren Jansen. 2015. An evaluation of graph clustering methods for unsu- pervised term discovery. In Sixteenth Annual Con- ference of the International Speech Communication Association.", |
| "links": null |
| }, |
| "BIBREF77": { |
| "ref_id": "b77", |
| "title": "Linear prediction: A tutorial review", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Makhoul", |
| "suffix": "" |
| } |
| ], |
| "year": 1975, |
| "venue": "Proceedings of the IEEE", |
| "volume": "63", |
| "issue": "4", |
| "pages": "561--580", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Makhoul. 1975. Linear prediction: A tutorial re- view. Proceedings of the IEEE, 63(4):561-580.", |
| "links": null |
| }, |
| "BIBREF78": { |
| "ref_id": "b78", |
| "title": "Bayesian modelling of visual perception", |
| "authors": [ |
| { |
| "first": "Pascal", |
| "middle": [], |
| "last": "Mamassian", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Landy", |
| "suffix": "" |
| }, |
| { |
| "first": "Laurence", |
| "middle": [ |
| "T" |
| ], |
| "last": "Maloney", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Probabilistic models of the brain: Perception and neural function", |
| "volume": "", |
| "issue": "", |
| "pages": "13--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pascal Mamassian, Michael Landy, and Laurence T Maloney. 2002. Bayesian modelling of visual perception. In Rajesh P N Rao, Bruno A Ol- shausen, Michael S Lewicki, Michael I Jordan, and Thomas G Dietterich, editors, Probabilistic models of the brain: Perception and neural function, pages 13-36. The MIT Press, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF79": { |
| "ref_id": "b79", |
| "title": "Learning phonemes with a protolexicon", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Martin", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Peperkamp", |
| "suffix": "" |
| }, |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Cognitive Science", |
| "volume": "37", |
| "issue": "1", |
| "pages": "103--124", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Martin, Sharon Peperkamp, and Emmanuel Dupoux. 2013. Learning phonemes with a proto- lexicon. Cognitive Science, 37(1):103-124.", |
| "links": null |
| }, |
| "BIBREF80": { |
| "ref_id": "b80", |
| "title": "The weckud wetch of the wast: Lexical adaptation to a novel accent", |
| "authors": [ |
| { |
| "first": "Jessica", |
| "middle": [], |
| "last": "Maye", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [ |
| "N" |
| ], |
| "last": "Aslin", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "K" |
| ], |
| "last": "Tanenhaus", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Cognitive Science", |
| "volume": "32", |
| "issue": "3", |
| "pages": "543--562", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jessica Maye, Richard N Aslin, and Michael K Tanen- haus. 2008a. The weckud wetch of the wast: Lexi- cal adaptation to a novel accent. Cognitive Science, 32(3):543-562.", |
| "links": null |
| }, |
| "BIBREF81": { |
| "ref_id": "b81", |
| "title": "Statistical phonetic learning in infants: Facilitation and feature generalization", |
| "authors": [ |
| { |
| "first": "Jessica", |
| "middle": [], |
| "last": "Maye", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "J" |
| ], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [ |
| "N" |
| ], |
| "last": "Aslin", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Developmental science", |
| "volume": "11", |
| "issue": "1", |
| "pages": "122--134", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jessica Maye, Daniel J Weiss, and Richard N Aslin. 2008b. Statistical phonetic learning in infants: Fa- cilitation and feature generalization. Developmental science, 11(1):122-134.", |
| "links": null |
| }, |
| "BIBREF82": { |
| "ref_id": "b82", |
| "title": "A crossmodal account for synchronic and diachronic patterns of /f/ and /\u03b8/ in English", |
| "authors": [ |
| { |
| "first": "Grant", |
| "middle": [], |
| "last": "Mcguire", |
| "suffix": "" |
| }, |
| { |
| "first": "Molly", |
| "middle": [], |
| "last": "Babel", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Laboratory Phonology", |
| "volume": "3", |
| "issue": "2", |
| "pages": "251--272", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Grant McGuire and Molly Babel. 2012. A cross- modal account for synchronic and diachronic pat- terns of /f/ and /\u03b8/ in English. Laboratory Phonol- ogy, 3(2):251-272.", |
| "links": null |
| }, |
| "BIBREF83": { |
| "ref_id": "b83", |
| "title": "Distance measures for speech recognition, psychological and instrumental. Pattern recognition and artificial intelligence", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Mermelstein", |
| "suffix": "" |
| } |
| ], |
| "year": 1976, |
| "venue": "", |
| "volume": "116", |
| "issue": "", |
| "pages": "374--388", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Mermelstein. 1976. Distance measures for speech recognition, psychological and instrumental. Pat- tern recognition and artificial intelligence, 116:374- 388.", |
| "links": null |
| }, |
| "BIBREF84": { |
| "ref_id": "b84", |
| "title": "Segment inventories", |
| "authors": [ |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Mielke", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Language and linguistics compass", |
| "volume": "3", |
| "issue": "2", |
| "pages": "700--718", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeff Mielke. 2009. Segment inventories. Language and linguistics compass, 3(2):700-718.", |
| "links": null |
| }, |
| "BIBREF85": { |
| "ref_id": "b85", |
| "title": "Robust speech recognition using articulatory gestures in a Dynamic Bayesian Network framework", |
| "authors": [ |
| { |
| "first": "Vikramjit", |
| "middle": [], |
| "last": "Mitra", |
| "suffix": "" |
| }, |
| { |
| "first": "Hosung", |
| "middle": [], |
| "last": "Nam", |
| "suffix": "" |
| }, |
| { |
| "first": "Carol", |
| "middle": [], |
| "last": "Espy-Wilson", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "IEEE Workshop on Automatic Speech Recognition and Understanding", |
| "volume": "2011", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vikramjit Mitra, Hosung Nam, and Carol Espy-Wilson. 2011. Robust speech recognition using articulatory gestures in a Dynamic Bayesian Network frame- work. 2011 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2011, Pro- ceedings.", |
| "links": null |
| }, |
| "BIBREF86": { |
| "ref_id": "b86", |
| "title": "Consonant cue perception by twenty-to twenty-four-week-old infants", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [ |
| "R" |
| ], |
| "last": "Moffitt", |
| "suffix": "" |
| } |
| ], |
| "year": 1971, |
| "venue": "Child Development", |
| "volume": "", |
| "issue": "", |
| "pages": "717--731", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan R Moffitt. 1971. Consonant cue perception by twenty-to twenty-four-week-old infants. Child de- velopment, pages 717-731.", |
| "links": null |
| }, |
| "BIBREF87": { |
| "ref_id": "b87", |
| "title": "Language experienced in utero affects vowel perception after birth: A two-country study", |
| "authors": [ |
| { |
| "first": "Christine", |
| "middle": [], |
| "last": "Moon", |
| "suffix": "" |
| }, |
| { |
| "first": "Hugo", |
| "middle": [], |
| "last": "Lagercrantz", |
| "suffix": "" |
| }, |
| { |
| "first": "Patricia", |
| "middle": [ |
| "K" |
| ], |
| "last": "Kuhl", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Acta Paediatrica", |
| "volume": "102", |
| "issue": "2", |
| "pages": "156--160", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christine Moon, Hugo Lagercrantz, and Patricia K Kuhl. 2013. Language experienced in utero affects vowel perception after birth: A two-country study. Acta Paediatrica, 102(2):156-160.", |
| "links": null |
| }, |
| "BIBREF88": { |
| "ref_id": "b88", |
| "title": "Structure and substance in artificial-phonology learning, part I: Structure", |
| "authors": [ |
| { |
| "first": "Elliott", |
| "middle": [], |
| "last": "Moreton", |
| "suffix": "" |
| }, |
| { |
| "first": "Joe", |
| "middle": [], |
| "last": "Pater", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Language and linguistics compass", |
| "volume": "6", |
| "issue": "11", |
| "pages": "686--701", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elliott Moreton and Joe Pater. 2012a. Structure and substance in artificial-phonology learning, part I: Structure. Language and linguistics compass, 6(11):686-701.", |
| "links": null |
| }, |
| "BIBREF89": { |
| "ref_id": "b89", |
| "title": "Structure and substance in artificial-phonology learning, part II: Substance", |
| "authors": [ |
| { |
| "first": "Elliott", |
| "middle": [], |
| "last": "Moreton", |
| "suffix": "" |
| }, |
| { |
| "first": "Joe", |
| "middle": [], |
| "last": "Pater", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Language and linguistics compass", |
| "volume": "6", |
| "issue": "11", |
| "pages": "702--718", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elliott Moreton and Joe Pater. 2012b. Structure and substance in artificial-phonology learning, part II: Substance. Language and linguistics compass, 6(11):702-718.", |
| "links": null |
| }, |
| "BIBREF90": { |
| "ref_id": "b90", |
| "title": "The interaction between acoustic salience and language experience in developmental speech perception: Evidence from nasal place discrimination", |
| "authors": [ |
| { |
| "first": "Chandan", |
| "middle": [ |
| "R" |
| ], |
| "last": "Narayan", |
| "suffix": "" |
| }, |
| { |
| "first": "Janet", |
| "middle": [ |
| "F" |
| ], |
| "last": "Werker", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrice", |
| "middle": [ |
| "Speeter" |
| ], |
| "last": "Beddor", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Developmental Science", |
| "volume": "13", |
| "issue": "3", |
| "pages": "407--420", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chandan R Narayan, Janet F Werker, and Patrice Speeter Beddor. 2010. The interaction between acoustic salience and language experience in developmental speech perception: Evidence from nasal place discrimination. Developmental Science, 13(3):407-420.", |
| "links": null |
| }, |
| "BIBREF91": { |
| "ref_id": "b91", |
| "title": "Use of phonetic specificity during the acquisition of new words: Differences between consonants and vowels", |
| "authors": [ |
| { |
| "first": "Thierry", |
| "middle": [], |
| "last": "Nazzi", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Cognition", |
| "volume": "98", |
| "issue": "1", |
| "pages": "13--30", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thierry Nazzi. 2005. Use of phonetic specificity during the acquisition of new words: Differences between consonants and vowels. Cognition, 98(1):13-30.", |
| "links": null |
| }, |
| "BIBREF92": { |
| "ref_id": "b92", |
| "title": "Challenging the notion of innate phonetic boundaries", |
| "authors": [ |
| { |
| "first": "Susan", |
| "middle": [], |
| "last": "Nittrouer", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "The Journal of the Acoustical Society of America", |
| "volume": "110", |
| "issue": "3", |
| "pages": "1598--1605", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Susan Nittrouer. 2001. Challenging the notion of in- nate phonetic boundaries. The Journal of the Acous- tical Society of America, 110(3):1598-1605.", |
| "links": null |
| }, |
| "BIBREF93": { |
| "ref_id": "b93", |
| "title": "Structurally biased phonology: complexity in learning and typology", |
| "authors": [ |
| { |
| "first": "Joe", |
| "middle": [], |
| "last": "Pater", |
| "suffix": "" |
| }, |
| { |
| "first": "Elliott", |
| "middle": [], |
| "last": "Moreton", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "The EFL Journal", |
| "volume": "3", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joe Pater and Elliott Moreton. 2014. Structurally bi- ased phonology: complexity in learning and typol- ogy. The EFL Journal, 3(2).", |
| "links": null |
| }, |
| "BIBREF94": { |
| "ref_id": "b94", |
| "title": "Scikit-learn: Machine learning in Python", |
| "authors": [ |
| { |
| "first": "Fabian", |
| "middle": [], |
| "last": "Pedregosa", |
| "suffix": "" |
| }, |
| { |
| "first": "Ga\u00ebl", |
| "middle": [], |
| "last": "Varoquaux", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Gramfort", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Michel", |
| "suffix": "" |
| }, |
| { |
| "first": "Bertrand", |
| "middle": [], |
| "last": "Thirion", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Grisel", |
| "suffix": "" |
| }, |
| { |
| "first": "Mathieu", |
| "middle": [], |
| "last": "Blondel", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Prettenhofer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ron", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of machine learning research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2825--2830", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, and others. 2011. Scikit- learn: Machine learning in Python. Journal of ma- chine learning research, 12(Oct):2825-2830.", |
| "links": null |
| }, |
| "BIBREF95": { |
| "ref_id": "b95", |
| "title": "The acquisition of allophonic rules: Statistical learning with linguistic constraints", |
| "authors": [ |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Peperkamp", |
| "suffix": "" |
| }, |
| { |
| "first": "Rozenn", |
| "middle": [], |
| "last": "Le Calvez", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean-Pierre", |
| "middle": [], |
| "last": "Nadal", |
| "suffix": "" |
| }, |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Cognition", |
| "volume": "101", |
| "issue": "3", |
| "pages": "B31--B41", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sharon Peperkamp, Rozenn Le Calvez, Jean-Pierre Nadal, and Emmanuel Dupoux. 2006. The acqui- sition of allophonic rules: Statistical learning with linguistic constraints. Cognition, 101(3):B31-B41.", |
| "links": null |
| }, |
| "BIBREF96": { |
| "ref_id": "b96", |
| "title": "The Buckeye corpus of conversational speech: labeling conventions and a test of transcriber reliability", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [ |
| "A" |
| ], |
| "last": "Pitt", |
| "suffix": "" |
| }, |
| { |
| "first": "Keith", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [], |
| "last": "Hume", |
| "suffix": "" |
| }, |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Kiesling", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Raymond", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Speech Communication", |
| "volume": "45", |
| "issue": "1", |
| "pages": "89--95", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark A Pitt, Keith Johnson, Elizabeth Hume, Scott Kiesling, and William Raymond. 2005. The Buck- eye corpus of conversational speech: labeling con- ventions and a test of transcriber reliability. Speech Communication, 45(1):89-95.", |
| "links": null |
| }, |
| "BIBREF97": { |
| "ref_id": "b97", |
| "title": "A cross-language comparison of /d/-/\u00f0/ perception: evidence for a new developmental pattern", |
| "authors": [ |
| { |
| "first": "Linda", |
| "middle": [], |
| "last": "Polka", |
| "suffix": "" |
| }, |
| { |
| "first": "Connie", |
| "middle": [], |
| "last": "Colantonio", |
| "suffix": "" |
| }, |
| { |
| "first": "Megha", |
| "middle": [], |
| "last": "Sundara", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "The Journal of the Acoustical Society of America", |
| "volume": "109", |
| "issue": "5", |
| "pages": "2190--2201", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Linda Polka, Connie Colantonio, and Megha Sun- dara. 2001. A cross-language comparison of /d/-/\u00f0/ perception: evidence for a new developmen- tal pattern. The Journal of the Acoustical Society of America, 109(5):2190-2201.", |
| "links": null |
| }, |
| "BIBREF98": { |
| "ref_id": "b98", |
| "title": "Structural generalizations over consonants and vowels in 11-monthold infants", |
| "authors": [ |
| { |
| "first": "Ferran", |
| "middle": [], |
| "last": "Pons", |
| "suffix": "" |
| }, |
| { |
| "first": "Juan", |
| "middle": [ |
| "M" |
| ], |
| "last": "Toro", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Cognition", |
| "volume": "116", |
| "issue": "3", |
| "pages": "361--367", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ferran Pons and Juan M Toro. 2010. Structural gener- alizations over consonants and vowels in 11-month- old infants. Cognition, 116(3):361-367.", |
| "links": null |
| }, |
| "BIBREF99": { |
| "ref_id": "b99", |
| "title": "Motor cortex maps articulatory features of speech sounds", |
| "authors": [ |
| { |
| "first": "Friedemann", |
| "middle": [], |
| "last": "Pulverm\u00fcller", |
| "suffix": "" |
| }, |
| { |
| "first": "Martina", |
| "middle": [], |
| "last": "Huss", |
| "suffix": "" |
| }, |
| { |
| "first": "Ferath", |
| "middle": [], |
| "last": "Kherif", |
| "suffix": "" |
| }, |
| { |
| "first": "Fermin", |
| "middle": [], |
| "last": "Moscoso Del Prado Martin", |
| "suffix": "" |
| }, |
| { |
| "first": "Olaf", |
| "middle": [], |
| "last": "Hauk", |
| "suffix": "" |
| }, |
| { |
| "first": "Yury", |
| "middle": [], |
| "last": "Shtyrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the National Academy of Sciences", |
| "volume": "103", |
| "issue": "20", |
| "pages": "7865--7870", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Friedemann Pulverm\u00fcller, Martina Huss, Ferath Kherif, Fermin Moscoso del Prado Martin, Olaf Hauk, and Yury Shtyrov. 2006. Motor cortex maps articulatory features of speech sounds. Proceedings of the National Academy of Sciences, 103(20):7865- 7870.", |
| "links": null |
| }, |
| "BIBREF100": { |
| "ref_id": "b100", |
| "title": "Unsupervised optimal phoneme segmentation: Objectives, algorithm and comparisons", |
| "authors": [ |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Qiao", |
| "suffix": "" |
| }, |
| { |
| "first": "Naoya", |
| "middle": [], |
| "last": "Shimomura", |
| "suffix": "" |
| }, |
| { |
| "first": "Nobuaki", |
| "middle": [], |
| "last": "Minematsu", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Acoustics, Speech and Signal Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "3989--3992", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yu Qiao, Naoya Shimomura, and Nobuaki Minematsu. 2008. Unsupervised optimal phoneme segmenta- tion: Objectives, algorithm and comparisons. In Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International Conference on, pages 3989-3992. IEEE.", |
| "links": null |
| }, |
| "BIBREF101": { |
| "ref_id": "b101", |
| "title": "Unsupervised word discovery from speech using automatic segmentation into syllable-like units", |
| "authors": [ |
| { |
| "first": "Okko", |
| "middle": [], |
| "last": "R\u00e4s\u00e4nen", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Doyle", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "C" |
| ], |
| "last": "Frank", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Sixteenth Annual Conference of the International Speech Communication Association", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Okko R\u00e4s\u00e4nen, Gabriel Doyle, and Michael C Frank. 2015. Unsupervised word discovery from speech using automatic segmentation into syllable-like units. In Sixteenth Annual Conference of the Inter- national Speech Communication Association.", |
| "links": null |
| }, |
| "BIBREF102": { |
| "ref_id": "b102", |
| "title": "Pre-linguistic segmentation of speech into syllable-like units", |
| "authors": [ |
| { |
| "first": "Okko", |
| "middle": [ |
| "Johannes" |
| ], |
| "last": "R\u00e4s\u00e4nen", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Doyle", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "C" |
| ], |
| "last": "Frank", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Cognition", |
| "volume": "171", |
| "issue": "", |
| "pages": "130--150", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Okko Johannes R\u00e4s\u00e4nen, Gabriel Doyle, and Michael C Frank. 2018. Pre-linguistic segmentation of speech into syllable-like units. Cognition, 171:130-150.", |
| "links": null |
| }, |
| "BIBREF103": { |
| "ref_id": "b103", |
| "title": "A comparison of neural network methods for unsupervised representation learning on the zero resource speech challenge", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Renshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Herman", |
| "middle": [], |
| "last": "Kamper", |
| "suffix": "" |
| }, |
| { |
| "first": "Aren", |
| "middle": [], |
| "last": "Jansen", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Sixteenth Annual Conference of the International Speech Communication Association", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Renshaw, Herman Kamper, Aren Jansen, and Sharon Goldwater. 2015. A comparison of neu- ral network methods for unsupervised representa- tion learning on the zero resource speech challenge. In Sixteenth Annual Conference of the International Speech Communication Association.", |
| "links": null |
| }, |
| "BIBREF104": { |
| "ref_id": "b104", |
| "title": "Neural networks and brain function", |
| "authors": [ |
| { |
| "first": "Edmund", |
| "middle": [ |
| "T" |
| ], |
| "last": "Rolls", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Treves", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "572", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edmund T Rolls and Alessandro Treves. 1998. Neural networks and brain function, volume 572. Oxford University Press, Oxford.", |
| "links": null |
| }, |
| "BIBREF105": { |
| "ref_id": "b105", |
| "title": "V-measure: A conditional entropy-based external cluster evaluation measure", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Rosenberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Hirschberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Rosenberg and Julia Hirschberg. 2007. V- measure: A conditional entropy-based external clus- ter evaluation measure. In Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural lan- guage learning (EMNLP-CoNLL).", |
| "links": null |
| }, |
| "BIBREF106": { |
| "ref_id": "b106", |
| "title": "The perceptron: a probabilistic model for information storage and organization in the brain", |
| "authors": [ |
| { |
| "first": "Frank", |
| "middle": [], |
| "last": "Rosenblatt", |
| "suffix": "" |
| } |
| ], |
| "year": 1958, |
| "venue": "Psychological review", |
| "volume": "65", |
| "issue": "6", |
| "pages": "386", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frank Rosenblatt. 1958. The perceptron: a probabilis- tic model for information storage and organization in the brain. Psychological review, 65(6):386.", |
| "links": null |
| }, |
| "BIBREF107": { |
| "ref_id": "b107", |
| "title": "An auditory-based feature for robust speech recognition", |
| "authors": [ |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Shao", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhaozhang", |
| "middle": [], |
| "last": "Jin", |
| "suffix": "" |
| }, |
| { |
| "first": "Deliang", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Soundararajan", |
| "middle": [], |
| "last": "Srinivasan", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "2009 IEEE International Conference on Acoustics, Speech and Signal Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "4625--4628", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yang Shao, Zhaozhang Jin, DeLiang Wang, and Soundararajan Srinivasan. 2009. An auditory-based feature for robust speech recognition. In 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 4625-4628. IEEE.", |
| "links": null |
| }, |
| "BIBREF108": { |
| "ref_id": "b108", |
| "title": "Composite embedding systems for ZeroSpeech2017 Track1", |
| "authors": [ |
| { |
| "first": "Hayato", |
| "middle": [], |
| "last": "Shibata", |
| "suffix": "" |
| }, |
| { |
| "first": "Taku", |
| "middle": [], |
| "last": "Kato", |
| "suffix": "" |
| }, |
| { |
| "first": "Takahiro", |
| "middle": [], |
| "last": "Shinozaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Shinji", |
| "middle": [], |
| "last": "Watanabet", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Automatic Speech Recognition and Understanding Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "747--753", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hayato Shibata, Taku Kato, Takahiro Shinozaki, and Shinji Watanabet. 2017. Composite embedding sys- tems for ZeroSpeech2017 Track1. In Automatic Speech Recognition and Understanding Workshop (ASRU), 2017 IEEE, pages 747-753. IEEE.", |
| "links": null |
| }, |
| "BIBREF109": { |
| "ref_id": "b109", |
| "title": "Sensory cortex is optimized for prediction of future input", |
| "authors": [ |
| { |
| "first": "Yosef", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| }, |
| { |
| "first": "Yayoi", |
| "middle": [], |
| "last": "Teramoto", |
| "suffix": "" |
| }, |
| { |
| "first": "D B", |
| "middle": [], |
| "last": "Ben", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan W H", |
| "middle": [], |
| "last": "Willmore", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "J" |
| ], |
| "last": "Schnupp", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicol S", |
| "middle": [], |
| "last": "King", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Harper", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "7", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yosef Singer, Yayoi Teramoto, Ben D B Willmore, Jan W H Schnupp, Andrew J King, and Nicol S Harper. 2018. Sensory cortex is optimized for prediction of future input. eLife, 7:e31557.", |
| "links": null |
| }, |
| "BIBREF110": { |
| "ref_id": "b110", |
| "title": "Articulatory gesture rich representation learning of phonological units in low resource settings", |
| "authors": [ |
| { |
| "first": "Lal", |
| "middle": [], |
| "last": "Brij Mohan", |
| "suffix": "" |
| }, |
| { |
| "first": "Manish", |
| "middle": [], |
| "last": "Srivastava", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Shrivastava", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "International Conference on Statistical Language and Speech Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "80--95", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brij Mohan Lal Srivastava and Manish Shrivastava. 2016. Articulatory gesture rich representation learn- ing of phonological units in low resource settings. In International Conference on Statistical Language and Speech Processing, pages 80-95. Springer.", |
| "links": null |
| }, |
| "BIBREF111": { |
| "ref_id": "b111", |
| "title": "Contributions of infant word learning to language development", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Swingley", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "Philosophical Transactions of the Royal Society of London B: Biological Sciences", |
| "volume": "364", |
| "issue": "", |
| "pages": "3617--3632", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Swingley. 2009. Contributions of infant word learning to language development. Philosophical Transactions of the Royal Society of London B: Bio- logical Sciences, 364(1536):3617-3632.", |
| "links": null |
| }, |
| "BIBREF112": { |
| "ref_id": "b112", |
| "title": "Infants' sensitivity to vowel and tonal contrasts", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Sandra", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Trehub", |
| "suffix": "" |
| } |
| ], |
| "year": 1973, |
| "venue": "Developmental Psychology", |
| "volume": "9", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sandra E Trehub. 1973. Infants' sensitivity to vowel and tonal contrasts. Developmental Psychology, 9(1):91.", |
| "links": null |
| }, |
| "BIBREF113": { |
| "ref_id": "b113", |
| "title": "What determines the capacity of autoassociative memories in the brain? Network: Computation in Neural Systems", |
| "authors": [ |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Treves", |
| "suffix": "" |
| }, |
| { |
| "first": "Edmund", |
| "middle": [ |
| "T" |
| ], |
| "last": "Rolls", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "371--397", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alessandro Treves and Edmund T Rolls. 1991. What determines the capacity of autoassociative memories in the brain? Network: Computation in Neural Sys- tems, 2(4):371-397.", |
| "links": null |
| }, |
| "BIBREF114": { |
| "ref_id": "b114", |
| "title": "Grundz\u00fcge der phonologie", |
| "authors": [ |
| { |
| "first": "Trubetskoy", |
| "middle": [], |
| "last": "Nikola\u00ef Sergeyevich", |
| "suffix": "" |
| } |
| ], |
| "year": 1939, |
| "venue": "Travaux du Cercle Linguistique de Prague", |
| "volume": "7", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nikola\u00ef Sergeyevich Trubetskoy. 1939. Grundz\u00fcge der phonologie. In Travaux du Cercle Linguistique de Prague, volume 7. Van den Hoeck & Ruprecht.", |
| "links": null |
| }, |
| "BIBREF115": { |
| "ref_id": "b115", |
| "title": "Unsupervised learning of vowel categories from infant-directed speech", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Gautam", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "L" |
| ], |
| "last": "Vallabha", |
| "suffix": "" |
| }, |
| { |
| "first": "Ferran", |
| "middle": [], |
| "last": "Mcclelland", |
| "suffix": "" |
| }, |
| { |
| "first": "Janet", |
| "middle": [ |
| "F" |
| ], |
| "last": "Pons", |
| "suffix": "" |
| }, |
| { |
| "first": "Shigeaki", |
| "middle": [], |
| "last": "Werker", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Amano", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the National Academy of Sciences", |
| "volume": "104", |
| "issue": "33", |
| "pages": "13273--13278", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gautam K Vallabha, James L McClelland, Ferran Pons, Janet F Werker, and Shigeaki Amano. 2007. Unsupervised learning of vowel categories from infant-directed speech. Proceedings of the National Academy of Sciences, 104(33):13273-13278.", |
| "links": null |
| }, |
| "BIBREF116": { |
| "ref_id": "b116", |
| "title": "Unsupervised learning of acoustic sub-word units", |
| "authors": [ |
| { |
| "first": "Sanjeev", |
| "middle": [], |
| "last": "Balakrishnan Varadarajan", |
| "suffix": "" |
| }, |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Khudanpur", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "165--168", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Balakrishnan Varadarajan, Sanjeev Khudanpur, and Emmanuel Dupoux. 2008. Unsupervised learning of acoustic sub-word units. In Proceedings of the 46th Annual Meeting of the Association for Compu- tational Linguistics on Human Language Technolo- gies: Short Papers, pages 165-168. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF117": { |
| "ref_id": "b117", |
| "title": "The zero resource speech challenge 2015", |
| "authors": [ |
| { |
| "first": "Maarten", |
| "middle": [], |
| "last": "Versteegh", |
| "suffix": "" |
| }, |
| { |
| "first": "Roland", |
| "middle": [], |
| "last": "Thiolli\u00e8re", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Schatz", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuan", |
| "middle": [ |
| "Nga" |
| ], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Anguera", |
| "suffix": "" |
| }, |
| { |
| "first": "Aren", |
| "middle": [], |
| "last": "Jansen", |
| "suffix": "" |
| }, |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Sixteenth Annual Conference of the International Speech Communication Association", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maarten Versteegh, Roland Thiolli\u00e8re, Thomas Schatz, Xuan Nga Cao, Xavier Anguera, Aren Jansen, and Emmanuel Dupoux. 2015. The zero resource speech challenge 2015. In Sixteenth Annual Conference of the International Speech Communication Associa- tion.", |
| "links": null |
| }, |
| "BIBREF118": { |
| "ref_id": "b118", |
| "title": "Perceptual restoration of missing speech sounds", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Richard", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Warren", |
| "suffix": "" |
| } |
| ], |
| "year": 1970, |
| "venue": "Science", |
| "volume": "167", |
| "issue": "3917", |
| "pages": "392--393", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard M Warren. 1970. Perceptual restoration of missing speech sounds. Science, 167(3917):392- 393.", |
| "links": null |
| }, |
| "BIBREF119": { |
| "ref_id": "b119", |
| "title": "Seeing and hearing speech excites the motor system involved in speech production", |
| "authors": [ |
| { |
| "first": "Kate", |
| "middle": [ |
| "E" |
| ], |
| "last": "Watkins", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonio", |
| "middle": [ |
| "P" |
| ], |
| "last": "Strafella", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom\u00e1\u0161", |
| "middle": [], |
| "last": "Paus", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Neuropsychologia", |
| "volume": "41", |
| "issue": "8", |
| "pages": "989--994", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kate E Watkins, Antonio P Strafella, and Tom\u00e1\u0161 Paus. 2003. Seeing and hearing speech excites the motor system involved in speech production. Neuropsy- chologia, 41(8):989-994.", |
| "links": null |
| }, |
| "BIBREF120": { |
| "ref_id": "b120", |
| "title": "Crosslanguage speech perception: Evidence for perceptual reorganization during the first year of life", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Janet", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard C", |
| "middle": [], |
| "last": "Werker", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Tees", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "fant behavior and development", |
| "volume": "7", |
| "issue": "", |
| "pages": "49--63", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Janet F Werker and Richard C Tees. 1984. Cross- language speech perception: Evidence for percep- tual reorganization during the first year of life. In- fant behavior and development, 7(1):49-63.", |
| "links": null |
| }, |
| "BIBREF121": { |
| "ref_id": "b121", |
| "title": "Subsegmental detail in early lexical representations", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Katherine", |
| "suffix": "" |
| }, |
| { |
| "first": "James L", |
| "middle": [], |
| "last": "White", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Morgan", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Journal of Memory and Language", |
| "volume": "59", |
| "issue": "1", |
| "pages": "114--132", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katherine S White and James L Morgan. 2008. Sub- segmental detail in early lexical representations. Journal of Memory and Language, 59(1):114-132.", |
| "links": null |
| }, |
| "BIBREF122": { |
| "ref_id": "b122", |
| "title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Ronald", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Williams", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Machine learning", |
| "volume": "8", |
| "issue": "3-4", |
| "pages": "229--256", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. Machine learning, 8(3-4):229-256.", |
| "links": null |
| }, |
| "BIBREF123": { |
| "ref_id": "b123", |
| "title": "Listening to speech activates motor areas involved in speech production", |
| "authors": [ |
| { |
| "first": "Ay\u015fe", |
| "middle": [], |
| "last": "Stephen M Wilson", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [ |
| "I" |
| ], |
| "last": "Pinar Saygin", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Sereno", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Iacoboni", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Nature Neuroscience", |
| "volume": "7", |
| "issue": "7", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen M Wilson, Ay\u015fe Pinar Saygin, Martin I Sereno, and Marco Iacoboni. 2004. Listening to speech activates motor areas involved in speech pro- duction. Nature Neuroscience, 7(7):701.", |
| "links": null |
| }, |
| "BIBREF124": { |
| "ref_id": "b124", |
| "title": "A context constructivist account of contextual diversity", |
| "authors": [ |
| { |
| "first": "Shaorong", |
| "middle": [], |
| "last": "Yan", |
| "suffix": "" |
| }, |
| { |
| "first": "Francis", |
| "middle": [], |
| "last": "Mollica", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "K" |
| ], |
| "last": "Tanenhaus", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 40th Annual Cognitive Science Society Meeting", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shaorong Yan, Francis Mollica, and Michael K Tanen- haus. 2018. A context constructivist account of con- textual diversity. In Proceedings of the 40th Annual Cognitive Science Society Meeting.", |
| "links": null |
| }, |
| "BIBREF125": { |
| "ref_id": "b125", |
| "title": "Extracting bottleneck features and word-like pairs from untranscribed speech for feature representation", |
| "authors": [ |
| { |
| "first": "Yougen", |
| "middle": [], |
| "last": "Yuan", |
| "suffix": "" |
| }, |
| { |
| "first": "Cheung-Chi", |
| "middle": [], |
| "last": "Leung", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Hongjie", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Bin", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Haizhou", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Automatic Speech Recognition and Understanding Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "734--739", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yougen Yuan, Cheung-Chi Leung, Lei Xie, Hongjie Chen, Bin Ma, and Haizhou Li. 2017. Extracting bottleneck features and word-like pairs from untran- scribed speech for feature representation. In Auto- matic Speech Recognition and Understanding Work- shop (ASRU), 2017 IEEE, pages 734-739. IEEE.", |
| "links": null |
| }, |
| "BIBREF126": { |
| "ref_id": "b126", |
| "title": "A deep scattering spectrumdeep siamese network pipeline for unsupervised acoustic modeling", |
| "authors": [ |
| { |
| "first": "Neil", |
| "middle": [], |
| "last": "Zeghidour", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Synnaeve", |
| "suffix": "" |
| }, |
| { |
| "first": "Maarten", |
| "middle": [], |
| "last": "Versteegh", |
| "suffix": "" |
| }, |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on", |
| "volume": "", |
| "issue": "", |
| "pages": "4965--4969", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Neil Zeghidour, Gabriel Synnaeve, Maarten Versteegh, and Emmanuel Dupoux. 2016. A deep scattering spectrumdeep siamese network pipeline for unsu- pervised acoustic modeling. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE Inter- national Conference on, pages 4965-4969. IEEE.", |
| "links": null |
| }, |
| "BIBREF127": { |
| "ref_id": "b127", |
| "title": "Comparison of different implementations of MFCC", |
| "authors": [ |
| { |
| "first": "Fang", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Guoliang", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhanjiang", |
| "middle": [], |
| "last": "Song", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Journal of Computer science and Technology", |
| "volume": "16", |
| "issue": "6", |
| "pages": "582--589", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fang Zheng, Guoliang Zhang, and Zhanjiang Song. 2001. Comparison of different implementations of MFCC. Journal of Computer science and Technol- ogy, 16(6):582-589.", |
| "links": null |
| }, |
| "BIBREF128": { |
| "ref_id": "b128", |
| "title": "Subdivision of the audible frequency range into critical bands (Frequenzgruppen)", |
| "authors": [ |
| { |
| "first": "Eberhard", |
| "middle": [], |
| "last": "Zwicker", |
| "suffix": "" |
| } |
| ], |
| "year": 1961, |
| "venue": "The Journal of the Acoustical Society of America", |
| "volume": "33", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eberhard Zwicker. 1961. Subdivision of the audible frequency range into critical bands (Frequenzgrup- pen). The Journal of the Acoustical Society of Amer- ica, 33(2):248.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "The binary stochastic neural autoencoder architecture with encoder layers E 1,...,e and decoder layers D 1,...,d . For expository purposes, acoustics are represented as pressure waves. In reality, the system uses frames of Mel frequency cepstral coefficients." |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Mean activation pattern by gold segment label from the BSN model with speaker embeddings, with darker color indexing higher average activation." |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Xitsonga phoneme and feature distributions." |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "English phoneme and feature distributions." |
| }, |
| "TABREF0": { |
| "num": null, |
| "text": ".023 0.013 0.016 0.006 0.004 0.005 Sigmoid 0.281 0.191 0.227 0.246 0.166 0.198 Sigmoid+Speaker 0.302 0.185 0.230 0.205 0.180 0.192 BSN 0.360 0.206 0.262 0.240 0.161 0.193 Our model (BSN+Speaker) 0.462 0.268 0.339 0.270 0.180 0.216", |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td>Xitsonga</td><td/><td/><td>English</td><td/></tr><tr><td>Model</td><td>H</td><td>C</td><td>V</td><td>H</td><td>C</td><td>V</td></tr><tr><td>Baseline 0</td><td/><td/><td/><td/><td/><td/></tr></table>" |
| }, |
| "TABREF1": { |
| "num": null, |
| "text": "Phone clustering scores. Homogeneity (H), completeness (C) and V-measure (V) across the Zerospeech 2015 Xitsonga and English challenge datasets.", |
| "html": null, |
| "type_str": "table", |
| "content": "<table/>" |
| }, |
| "TABREF3": { |
| "num": null, |
| "text": "Perceptual availability by feature in Xitsonga", |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Feature</td><td>P</td><td>R</td><td>F</td></tr><tr><td colspan=\"4\">voice 0.9244 0.8567 0.8893</td></tr><tr><td colspan=\"4\">sonorant 0.8544 0.8862 0.8700</td></tr><tr><td colspan=\"4\">approximant 0.8005 0.8370 0.8183</td></tr><tr><td colspan=\"4\">continuant 0.8577 0.7669 0.8098</td></tr><tr><td colspan=\"4\">consonantal 0.8249 0.7357 0.7777</td></tr><tr><td colspan=\"4\">syllabic 0.6624 0.8426 0.7417</td></tr><tr><td colspan=\"4\">dorsal 0.7046 0.7114 0.7080</td></tr><tr><td colspan=\"4\">strident 0.5505 0.9027 0.6839</td></tr><tr><td colspan=\"4\">coronal 0.5758 0.7066 0.6345</td></tr><tr><td colspan=\"4\">anterior 0.5251 0.7280 0.6101</td></tr><tr><td colspan=\"4\">delayed release 0.4413 0.7374 0.5521</td></tr><tr><td colspan=\"4\">front 0.4322 0.7407 0.5459</td></tr><tr><td colspan=\"4\">high 0.3841 0.6931 0.4943</td></tr><tr><td colspan=\"4\">tense 0.3275 0.7101 0.4483</td></tr><tr><td colspan=\"4\">back 0.3128 0.7504 0.4416</td></tr><tr><td colspan=\"4\">nasal 0.2796 0.7544 0.4080</td></tr><tr><td colspan=\"4\">labial 0.2541 0.7077 0.3739</td></tr><tr><td colspan=\"4\">low 0.2410 0.7787 0.3680</td></tr><tr><td colspan=\"4\">distributed 0.2203 0.6881 0.3337</td></tr><tr><td colspan=\"4\">diphthong 0.2039 0.8051 0.3254</td></tr><tr><td colspan=\"4\">round 0.1665 0.7012 0.2692</td></tr><tr><td colspan=\"4\">lateral 0.1484 0.8333 0.2519</td></tr><tr><td colspan=\"4\">labiodental 0.0787 0.6756 0.1410</td></tr><tr><td colspan=\"4\">spread glottis 0.0377 0.6683 0.0714</td></tr></table>" |
| }, |
| "TABREF4": { |
| "num": null, |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td>: Perceptual availability by feature in English</td></tr></table>" |
| }, |
| "TABREF5": { |
| "num": null, |
| "text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations 2015.", |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Pallavi Baljekar, Sunayana Sitaram, Prasanna Kumar</td></tr><tr><td>Muthukumar, and Alan W Black. 2015. Using artic-</td></tr><tr><td>ulatory features and inferred phonological segments</td></tr><tr><td>in zero resource speech processing. In Sixteenth An-</td></tr><tr><td>nual Conference of the International Speech Com-</td></tr><tr><td>munication Association.</td></tr><tr><td>Mary E Beckman and Jan Edwards. 2000. The on-</td></tr><tr><td>togeny of phonological categories and the primacy</td></tr><tr><td>of lexical learning in linguistic development. Child</td></tr><tr><td>development, 71(1):240-249.</td></tr><tr><td>Trevor Bekolay. 2016. Biologically inspired methods</td></tr><tr><td>in speech recognition and synthesis: Closing the</td></tr><tr><td>loop. Ph.D. thesis, University of Waterloo.</td></tr></table>" |
| } |
| } |
| } |
| } |