| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:38:54.199912Z" |
| }, |
| "title": "Modeling unsupervised phonetic and phonological learning in Generative Adversarial Phonology", |
| "authors": [ |
| { |
| "first": "Ga\u0161per", |
| "middle": [], |
| "last": "Begu\u0161", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Washington", |
| "location": {} |
| }, |
| "email": "begus@uw.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper models phonetic and phonological learning as a dependency between random space and generated speech data in the Generative Adversarial Neural network architecture and proposes a methodology to uncover the network's internal representation that corresponds to phonetic and phonological features. A Generative Adversarial Network (Goodfellow et al. 2014; implemented as WaveGAN for acoustic data by Donahue et al. 2019) was trained on an allophonic distribution in English, where voiceless stops surface as aspirated word-initially before stressed vowels except if preceded by a sibilant [s]. The network successfully learns the allophonic alternation: the network's generated speech signal contains the conditional distribution of aspiration duration. Additionally, the network generates innovative outputs for which no evidence is available in the training data, suggesting that the network segments continuous speech signal into units that can be productively recombined. The paper also proposes a technique for establishing the network's internal representations. We identify latent variables that directly correspond to presence of [s] in the output. By manipulating these variables, we actively control the presence of [s], its frication amplitude, and spectral shape of the frication noise in the generated outputs.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper models phonetic and phonological learning as a dependency between random space and generated speech data in the Generative Adversarial Neural network architecture and proposes a methodology to uncover the network's internal representation that corresponds to phonetic and phonological features. A Generative Adversarial Network (Goodfellow et al. 2014; implemented as WaveGAN for acoustic data by Donahue et al. 2019) was trained on an allophonic distribution in English, where voiceless stops surface as aspirated word-initially before stressed vowels except if preceded by a sibilant [s]. The network successfully learns the allophonic alternation: the network's generated speech signal contains the conditional distribution of aspiration duration. Additionally, the network generates innovative outputs for which no evidence is available in the training data, suggesting that the network segments continuous speech signal into units that can be productively recombined. The paper also proposes a technique for establishing the network's internal representations. We identify latent variables that directly correspond to presence of [s] in the output. By manipulating these variables, we actively control the presence of [s], its frication amplitude, and spectral shape of the frication noise in the generated outputs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Modeling phonetic and phonological data with neural networks has seen a rapid increase in the past few years (Alderete et al. 2013; Avcu et al. 2017; Alderete and Tupper 2018; Mahalunkar and Kelleher 2018; Weber et al. 2018; Dupoux 2018; Prickett et al. 2019; Pater 2019; for cautionary notes, see Rawski and Heinz 2019). The majority of existing computational models in phonology, however, model learning as symbol manipulation and operate with discrete units: either with completely abstract made-up units or with discrete units that feature some phonetic properties and can be approximated as phonemes. This means that either phonetic and phonological learning are modeled separately or one is assumed to have already been completed with a pre-assumed level of abstraction (Martin et al., 2013; Dupoux, 2018). This is true both for proposals that model phonological distributions or derivations (Alderete et al., 2013; Prickett et al., 2019) and for models of featural organization (Faruqui et al., 2016; Silfverberg et al., 2018).", |
| "cite_spans": [ |
| { |
| "start": 109, |
| "end": 131, |
| "text": "(Alderete et al. 2013;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 132, |
| "end": 149, |
| "text": "Avcu et al. 2017;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 150, |
| "end": 175, |
| "text": "Alderete and Tupper 2018;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 176, |
| "end": 205, |
| "text": "Mahalunkar and Kelleher 2018;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 206, |
| "end": 224, |
| "text": "Weber et al. 2018;", |
| "ref_id": "BIBREF49" |
| }, |
| { |
| "start": 225, |
| "end": 237, |
| "text": "Dupoux 2018;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 238, |
| "end": 259, |
| "text": "Prickett et al. 2019;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 260, |
| "end": 270, |
| "text": "Pater 2019", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 299, |
| "end": 321, |
| "text": "Rawski and Heinz 2019)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 782, |
| "end": 803, |
| "text": "(Martin et al., 2013;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 804, |
| "end": 817, |
| "text": "Dupoux, 2018)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 905, |
| "end": 928, |
| "text": "(Alderete et al., 2013;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 929, |
| "end": 951, |
| "text": "Prickett et al., 2019)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 979, |
| "end": 1001, |
| "text": "(Faruqui et al., 2016;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1002, |
| "end": 1027, |
| "text": "Silfverberg et al., 2018)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Most models in the subset of the proposals that operate with continuous phonetic data assume at least some level of abstraction and operate with already extracted features (e.g. formant values) on limited \"toy\" data (e.g. Pierrehumbert 2001; Kirby and Sonderegger 2015; for a discussion, see Dupoux 2018). Guenther and Vladusich (2012), Guenther (2016), and Oudeyer (2001, 2002, 2005, 2006), for example, propose models that use simple neural maps that are based on actual correlates of neurons involved in speech production in the human brain (based on various brain imaging techniques). Their models, however, do not operate with raw acoustic data (or require extraction of features in a highly abstract model of articulators; Oudeyer 2005, 2006), require a level of abstraction in the input to the model, and do not model phonological processes, i.e. allophonic distributions. Phonological learning in most of these proposals is thus modeled as if phonetic learning (or at least a subset of phonetic learning) had already taken place: the initial state already includes phonemic inventories, phonemes as discrete units, feature matrices that have already been learned, or extracted phonetic values.", |
| "cite_spans": [ |
| { |
| "start": 222, |
| "end": 240, |
| "text": "Pierrehumbert 2001", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 292, |
| "end": 304, |
| "text": "Dupoux 2018)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 307, |
| "end": 336, |
| "text": "Guenther and Vladusich (2012)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 339, |
| "end": 354, |
| "text": "Guenther (2016)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 359, |
| "end": 372, |
| "text": "Oudeyer (2001", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 373, |
| "end": 389, |
| "text": "Oudeyer ( , 2002", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 390, |
| "end": 406, |
| "text": "Oudeyer ( , 2005", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 407, |
| "end": 423, |
| "text": "Oudeyer ( , 2006", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 763, |
| "end": 775, |
| "text": "Oudeyer 2005", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 776, |
| "end": 790, |
| "text": "Oudeyer , 2006", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Prominent among the few models that operate with raw phonetic data are Gaussian mixture models for category-learning or phoneme extraction (Schatz et al., 2019; Lee and Glass, 2012). Schatz et al. (2019) propose a Dirichlet process Gaussian mixture model that learns categories from raw acoustic input in an unsupervised learning task. The primary purpose of the proposal in Schatz et al. (2019) is modeling perception and categorization: they model how a learner is able to categorize raw acoustic data into sets of discrete categorical units that have phonetic values (i.e. phonemes). No phonological processes are modeled in the proposal.", |
| "cite_spans": [ |
| { |
| "start": 139, |
| "end": 160, |
| "text": "(Schatz et al., 2019;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 161, |
| "end": 181, |
| "text": "Lee and Glass, 2012)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 184, |
| "end": 204, |
| "text": "Schatz et al. (2019)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 376, |
| "end": 396, |
| "text": "Schatz et al. (2019)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 571, |
| "end": 586, |
| "text": "(i.e. phonemes)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Recently, neural network models for unsupervised feature extraction have seen success in modeling acquisition of phonetic features from raw acoustic data (Kamper et al., 2015). The model in Shain and Elsner (2019), for example, is an autoencoder neural network that is trained on pre-segmented acoustic data. The model takes as input segmented acoustic data and outputs values that can be correlated with phonological features. Learning is, however, not completely unsupervised, as the network is trained on pre-segmented phones. Thiolli\u00e8re et al. (2015) similarly propose an architecture that extracts units from speech data in an unsupervised setting. These proposals, however, do not model learning of phonological distributions, but only of feature representations, and crucially are not generative, meaning that the models do not output innovative data, but try to replicate the input as closely as possible (e.g. in the autoencoder architecture).", |
| "cite_spans": [ |
| { |
| "start": 154, |
| "end": 175, |
| "text": "(Kamper et al., 2015)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 191, |
| "end": 214, |
| "text": "Shain and Elsner (2019)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 531, |
| "end": 555, |
| "text": "Thiolli\u00e8re et al. (2015)", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As argued below, the model based on a Generative Adversarial network learns not only to generate innovative data that closely resemble human speech, but also learns internal representations that resemble phonological features simultaneously with unsupervised phonetic learning from raw acoustic data. Additionally, the model is generative and outputs both the conditional allophonic distributions in the data and innovative data that can be compared to productive outputs in human speech acquisition.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The advantage of the GAN architecture (Goodfellow et al., 2014; Radford et al., 2015; Donahue et al., 2019) is that learning is completely unsupervised and that phonetic learning is simultaneous with phonological learning in its broadest sense. A network that models learning of phonetics from raw data and shows signs of learning discrete phonological units at the same time is likely one step closer to reality than models that operate with symbolic computation and assume phonetic learning had already taken place and is independent of phonology and vice versa. The Generator's outputs can be approximated as the basis for articulatory targets in human speech that are sent to articulators for execution. The latent variables in the input of the Generator can be modeled as featural representations that the Generator learns to output into a speech signal by attempting to maximize the error rate of a Discriminator network that distinguishes between real data and generated outputs. The Discriminator network thus has a parallel in human speech perception, production, and acquisition: the imitation principle (Nguyen and Delvaux, 2015). The Discriminator's function is to enforce that the Generator's outputs resemble (but do not replicate) the inputs as closely as possible. The GAN network thus incorporates both the pre-articulatory production elements (the Generator) and the perceptual element (the Discriminator) in speech acquisition. While other neural network architectures might be appropriate for modeling phonetic and phonological learning, GAN is unique in that it is a generative model with a parallel to the production-perception loop and that, unlike for example autoencoders, it generates innovative data rather than data that resembles the input as closely as possible. To our knowledge, this is the first proposal that tests whether neural networks are able to learn an allophonic distribution based on raw acoustic data.", |
| "cite_spans": [ |
| { |
| "start": 38, |
| "end": 63, |
| "text": "(Goodfellow et al., 2014;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 64, |
| "end": 85, |
| "text": "Radford et al., 2015;", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 86, |
| "end": 107, |
| "text": "Donahue et al., 2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1114, |
| "end": 1140, |
| "text": "(Nguyen and Delvaux, 2015)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Generative Adversarial model of phonology", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "We train a Generative Adversarial Network architecture implemented for audio files in Donahue et al. (2019) (WaveGAN; based on DCGAN, Radford et al. 2015) on continuous raw speech data that contains information for an allophonic distribution: word-initial pre-vocalic aspiration of voiceless stops ([\u02c8p\u02b0\u026at] \u223c [\u02c8sp\u026at]). The data are curated in order to control for non-desired effects, which is why only sequences of the shape #TV and #sTV (T = stop, V = vowel) are fed to the model. This allophonic distribution is uniquely appropriate for testing learnability in a GAN setting, because the dependency between the presence of [s] and the duration of VOT is not strictly local. To be sure, the dependency is local in phonological terms, as [s] and T are two segments and immediate neighbors, but in phonetic terms, a period of closure intervenes between the aspiration and the period (or absence thereof) of frication noise of [s].", |
| "cite_spans": [ |
| { |
| "start": 934, |
| "end": 937, |
| "text": "[s]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Generative Adversarial model of phonology", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "The hypothesis of the computational experiment presented in Section 3 is the following: if VOT duration is conditioned on the presence of [s] in output data generated from noise by the Generator network, the Generator network has successfully learned a phonetically non-local allophonic distribution. Because the allophonic distribution is not strictly local and not automatic, but has to be learned and actively controlled by speakers, evidence for this type of learning is considered phonological learning in the broadest sense. Conditioning the presence of a phonetic feature on the presence or absence of a phoneme is, when not automatic, in most models considered part of phonology and is derived with phonological computation. That the tested distribution is non-automatic and has to be actively controlled by the speakers is evident from L1 acquisition: failure to learn the distribution results in longer VOT durations in the sT condition, as documented in L1 acquisition (McLeod et al., 1996; Bond, 1981). Additional evidence that the GAN's learning resembles phonemic representations (such as presence of [s]) is obtained from recovering the network's internal representations (see below and Section 3.2).", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 141, |
| "text": "[s]", |
| "ref_id": null |
| }, |
| { |
| "start": 999, |
| "end": 1020, |
| "text": "(McLeod et al., 1996;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 1021, |
| "end": 1032, |
| "text": "Bond, 1981)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Generative Adversarial model of phonology", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "This paper also proposes a technique for establishing the Generator's internal representations. What neural networks actually learn is a challenging question with no easy solutions. The inability to uncover networks' representations has been used as an argument against neural network approaches to linguistic data (Rawski and Heinz, 2019). We argue that the internal representation of a network can be, at least partially, uncovered. By regressing annotated dependencies between the Generator's latent space and output data, we identify values in the latent space that correspond to linguistically meaningful features in generated outputs. This paper demonstrates that manipulating the chosen values in the latent space has phonetic and phonological effects in the generated outputs, such as the presence of [s] and the amplitude of its frication. In other words, the GAN network learns to use random noise as an approximation of phonetic and phonological features. This paper proposes that dependencies, learned during training in a latent space that is limited by some interval, extend beyond that interval. This crucial step allows for the discovery of several phonetic properties.", |
| "cite_spans": [ |
| { |
| "start": 315, |
| "end": 339, |
| "text": "(Rawski and Heinz, 2019)", |
| "ref_id": "BIBREF41" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Generative Adversarial model of phonology", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "2.1 The model: Donahue et al. (2019) based on Radford et al. (2015) Generative Adversarial Networks, proposed by Goodfellow et al. (2014), have seen a rapid expansion in a variety of tasks, including but not limited to computer vision and image generation (Radford et al., 2015). The main characteristic of GANs is an architecture that involves two networks: the Generator network and the Discriminator network (Goodfellow et al., 2014). The Generator network is trained to generate data from random noise, while the Discriminator is trained to distinguish real data from the outputs of the Generator network (Figure 1). The Generator is trained to generate data that maximizes the error rate of the Discriminator network. The training results in a Generator (G) network that takes random noise as its input (e.g. multiple variables with uniform distributions) and outputs data such that the Discriminator is inaccurate in distinguishing the generated from the real data. Applying the GAN architecture to time-series data such as a continuous speech stream faces several challenges. Recently, Donahue et al. (2019) proposed an implementation of the Deep Convolutional Generative Adversarial Network of Radford et al. (2015) for audio data (WaveGAN); the model along with the code in Donahue et al. (2019) was used for training in this paper. The model takes one-second long raw audio files as inputs, sampled at 16 kHz with 16-bit quantization. The audio files are converted into a vector and fed to the Discriminator network as real data. Instead of the two-dimensional 5 \u00d7 5 filters, the WaveGAN model uses one-dimensional 1 \u00d7 25 filters and larger upsampling (Donahue et al., 2019). The main architecture is preserved as in DCGAN, except that an additional layer is introduced in order to generate longer samples. The Generator network takes as input z, a vector of one hundred uniformly distributed variables (z \u223c U(\u22121, 1)), and outputs 16,384 data points, which constitute the output audio signal. The network has five 1D convolutional layers (Donahue et al., 2019). The Discriminator network takes 16,384 data points (raw audio files) as its input and outputs a single logit. The initial GAN design as proposed by Goodfellow et al. (2014) trained the Discriminator network to distinguish real from generated data. Training such models, however, faced substantial challenges (Donahue et al., 2019). Donahue et al. (2019) implement the WGAN-GP strategy (Gulrajani et al., 2017), which means that the Discriminator is trained \"as a function that assists in computing the Wasserstein distance\" (Donahue et al., 2019). The WaveGAN model (Donahue et al., 2019) uses ReLU activation in all but the last layer of the Generator network, and Leaky ReLU in all layers of the Discriminator network (as recommended for DCGAN in Radford et al. 2015). For exact dimensions of each layer and other details of the model, see Donahue et al. (2019).", |
| "cite_spans": [ |
| { |
| "start": 45, |
| "end": 66, |
| "text": "Radford et al. (2015)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 112, |
| "end": 136, |
| "text": "Goodfellow et al. (2014)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 256, |
| "end": 278, |
| "text": "(Radford et al., 2015)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 413, |
| "end": 438, |
| "text": "(Goodfellow et al., 2014)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1098, |
| "end": 1119, |
| "text": "Donahue et al. (2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1214, |
| "end": 1235, |
| "text": "Radford et al. (2015)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 1675, |
| "end": 1697, |
| "text": "(Donahue et al., 2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 2063, |
| "end": 2085, |
| "text": "(Donahue et al., 2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 2238, |
| "end": 2262, |
| "text": "Goodfellow et al. (2014)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 2398, |
| "end": 2420, |
| "text": "(Donahue et al., 2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 2473, |
| "end": 2496, |
| "text": "Gulrajani et al., 2017)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 2612, |
| "end": 2634, |
| "text": "(Donahue et al., 2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 2656, |
| "end": 2678, |
| "text": "(Donahue et al., 2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 2840, |
| "end": 2860, |
| "text": "Radford et al. 2015)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 614, |
| "end": 622, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Materials", |
| "sec_num": "2" |
| }, |
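The Generator/Discriminator interface described above can be sketched as follows. This is a minimal stand-in, not the WaveGAN implementation: the random linear maps are hypothetical placeholders for the five 1D convolutional layers, and only the shapes follow the paper (100 latent variables with z ∼ U(−1, 1), 16,384 output samples, one logit).

```python
# Minimal sketch of the WaveGAN-style interface described above (not the
# actual model): the Generator maps a 100-dim uniform latent vector to a
# one-second waveform (16,384 samples at 16 kHz), and the Discriminator
# maps a waveform to a single logit. The linear maps are hypothetical
# stand-ins for the learned convolutional layers.
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 100   # z ~ U(-1, 1), as in Donahue et al. (2019)
N_SAMPLES = 16384  # one second of audio at 16 kHz

# Hypothetical stand-in weights (the real model learns these adversarially).
W_g = rng.normal(0, 0.01, size=(N_SAMPLES, LATENT_DIM))
W_d = rng.normal(0, 0.01, size=(1, N_SAMPLES))

def generator(z: np.ndarray) -> np.ndarray:
    """Map a latent vector z (shape (100,)) to a waveform (shape (16384,))."""
    return np.tanh(W_g @ z)  # tanh keeps samples in [-1, 1]

def discriminator(x: np.ndarray) -> float:
    """Map a waveform to a single real-valued logit."""
    return float(W_d @ x)

z = rng.uniform(-1.0, 1.0, size=LATENT_DIM)
audio = generator(z)
logit = discriminator(audio)
```

In the adversarial setup, the Discriminator's logit would feed a WGAN-GP loss and both sets of weights would be updated in alternation; here only the data flow is shown.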
| { |
| "text": "[Figure 1: input variables z (n = 100, z \u223c U(\u22121, 1)) fed to the Generator network G(z).]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Materials", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The model was trained on the allophonic distribution of voiceless stops in English. Voiceless stops /p, t, k/ surface as aspirated [p\u02b0, t\u02b0, k\u02b0] in English in word-initial position when immediately followed by a stressed vowel (Lisker, 1984; Iverson and Salmons, 1995; Vaux, 2002; Vaux and Samuels, 2005; Davis and Cho, 2006). If an alveolar sibilant [s] precedes the stop, however, the aspiration is blocked and the stop surfaces as unaspirated [p, t, k] (Lisker, 1984). A minimal pair illustrating this allophonic distribution is [\u02c8p\u02b0\u026at] 'pit' vs.", |
| "cite_spans": [ |
| { |
| "start": 232, |
| "end": 246, |
| "text": "(Lisker, 1984;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 247, |
| "end": 273, |
| "text": "Iverson and Salmons, 1995;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 274, |
| "end": 285, |
| "text": "Vaux, 2002;", |
| "ref_id": null |
| }, |
| { |
| "start": 286, |
| "end": 309, |
| "text": "Vaux and Samuels, 2005;", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 310, |
| "end": 330, |
| "text": "Davis and Cho, 2006)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 462, |
| "end": 476, |
| "text": "(Lisker, 1984)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training data", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "[\u02c8sp\u026at] 'spit'. The most prominent phonetic correlate of this allophonic distribution is the difference in Voice Onset Time (VOT) duration (Abramson and Whalen, 2017) between the aspirated and unaspirated voiceless stops. The model was trained on data from the TIMIT database (Garofolo et al., 1993). The data consist of 16-bit .wav files with a 16 kHz sampling rate of word-initial sequences of voiceless stops /p, t, k/ (= T) that were followed by a vowel (#TV) and word-initial sequences of /s/ + /p, t, k/, followed by a vowel (#sTV). The training data include 4,930 sequences with the structure #TV and 533 sequences with the structure #sTV (5,463 total). Both stressed and unstressed vowels are included in the training data, as this condition crucially complicates learning and makes the task for the neural network more challenging.", |
| "cite_spans": [ |
| { |
| "start": 139, |
| "end": 166, |
| "text": "(Abramson and Whalen, 2017)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 276, |
| "end": 298, |
| "text": "(Garofolo et al., 1993", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training data", |
| "sec_num": "2.2" |
| }, |
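The curation step described above can be sketched as a filter over word-initial phone sequences. The phone labels and example words below are simplified, hypothetical stand-ins for TIMIT annotations; only the #TV / #sTV selection criterion follows the paper.

```python
# Sketch of the training-data curation described above: from hypothetical
# word-initial phone sequences, keep only #TV (voiceless stop + vowel) and
# #sTV (s + voiceless stop + vowel) items. Labels are simplified stand-ins
# for TIMIT phone annotations.
STOPS = {"p", "t", "k"}
VOWELS = {"iy", "ih", "eh", "ae", "aa", "ah", "uw", "uh", "ao", "er"}

def classify(phones):
    """Return '#TV', '#sTV', or None for a word-initial phone sequence."""
    if len(phones) >= 2 and phones[0] in STOPS and phones[1] in VOWELS:
        return "#TV"
    if (len(phones) >= 3 and phones[0] == "s"
            and phones[1] in STOPS and phones[2] in VOWELS):
        return "#sTV"
    return None

# Hypothetical examples: 'pit', 'spit', 'see', 'bat'.
words = [["p", "ih", "t"], ["s", "p", "ih", "t"], ["s", "iy"], ["b", "ae", "t"]]
labels = [classify(w) for w in words]  # only the first two would be kept
```

Applied over the corpus, this kind of filter yields the 4,930 #TV and 533 #sTV training items reported above.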
| { |
| "text": "3 Experiment", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training data", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The Generator network after 12,255 steps (\u2248 716 epochs) generates an acoustic signal that appears close to actual speech data. The number of training steps was chosen manually as a compromise between output interpretability and the number of epochs, where we try to approximately maximize the former and minimize the latter. Figure 2 illustrates a typical generated sample of #TV (left) and #sTV (right) structures with a substantial difference in VOT durations. To test whether the Generator learns the conditional distribution of VOT duration, the generated samples were annotated for VOT duration. VOT duration was measured from the release of closure to the onset of periodic vibration with a clear formant structure. Altogether 96 generated samples were annotated: 62 in which no period of frication of [s] preceded and 34 in which [s] precedes the TV sequence. The generated data were fit with a linear model with only one predictor: presence of [s] (STRUCTURE). Place of articulation and following vowel were not added to the model, because they are often difficult to recover. [Footnote fragment: \"... phonological learning, because the model is trained on a continuous speech stream and the generated sample fails to produce analyzable results for phonological purposes.\"] STRUCTURE is a significant predictor of VOT duration: while VOT duration is significantly shorter if [s] precedes the #TV sequence in the generated data, the model shows clear traces that learning is incomplete and that the Generator network fails to learn the distribution categorically at 12,255 steps. The three longest VOT durations in the #sTV condition in the generated data are 68.3 ms, 75.7 ms, and 76.2 ms. In all three cases the VOT is longer than the longest VOT duration of any #sTV sequence in the training data (65 ms). This generalization holds even in proportional terms (i.e. while controlling for \"speech rate\"): the generated data contain the highest ratio between the VOT duration and the frication duration of [s].", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 334, |
| "end": 340, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model: 12,255 steps", |
| "sec_num": "3.1" |
| }, |
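The statistical test described above can be sketched as an ordinary least-squares fit of VOT duration on the single binary predictor STRUCTURE. The durations below are synthetic stand-ins for the 96 annotated samples (62 #TV, 34 #sTV); only the method, not the numbers, mirrors the paper's F(1) test.

```python
# Sketch of the linear model described above: VOT duration regressed on a
# single binary predictor STRUCTURE (1 = [s] present). Data are synthetic
# placeholders for the 96 annotated generated samples.
import numpy as np

rng = np.random.default_rng(1)
vot_tv = rng.normal(70, 15, size=62)   # hypothetical #TV VOTs (ms)
vot_stv = rng.normal(30, 12, size=34)  # hypothetical #sTV VOTs (ms)

y = np.concatenate([vot_tv, vot_stv])
structure = np.concatenate([np.zeros(62), np.ones(34)])
X = np.column_stack([np.ones_like(y), structure])  # intercept + STRUCTURE

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# F statistic for the single predictor: drop in residual sum of squares
# relative to the intercept-only model, scaled by the full-model variance.
ss_full = resid @ resid
ss_reduced = ((y - y.mean()) ** 2).sum()
f_stat = (ss_reduced - ss_full) / (ss_full / (len(y) - 2))
```

A negative coefficient on STRUCTURE (`beta[1]`) corresponds to the paper's finding that VOT is shorter when [s] precedes.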
| { |
| "text": "F(1) = 53.1, p < 0.0001.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model: 12,255 steps", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Longer VOT duration in the #sTV condition in the generated data compared to the training data is not the only violation of the training data that the Generator outputs and that resembles linguistic behavior in humans. Occasionally, the Generator outputs a linguistically valid #sV sequence for which no evidence was available in the training data. The minimal duration of closure in #sTV sequences in the training data is 9.2 ms; the minimal duration of VOT is 9.4 ms. All sequences containing an [s] in the training data were manually inspected, and none of them contain a #sV sequence without a period of closure and VOT. Homorganic sequences of [s] followed by an alveolar stop [t] (#stV) are occasionally acoustically similar to the sequence without the stop (#sV), because frication noise from [s] carries onto the homorganic alveolar closure, which can be very short. However, there is a clear fall and a second rise of noise amplitude after the release of the stop in #stV sequences. Figure 3 shows one case of the Generator network outputting a #sV sequence without any stop-like fall of the amplitude. In other words, the Generator network outputs a linguistically valid sequence #sV without any evidence for the existence of this sequence in the training data. Similarly, the Generator occasionally outputs a se- [Figure 3: Waveforms and spectrograms (0-8000 Hz) of two innovative generated outputs of the shape #sV and #TTV. The sample on the left was generated after 16,715 steps.]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 986, |
| "end": 994, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model: 12,255 steps", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "quence with two stops (two periods of aspiration noise with an intervening short period of closure) and a vowel (#TTV) (Figure 3). Measuring overfitting is a substantial problem for Generative Adversarial Networks, with no consensus on the most appropriate quantitative approach to the problem (Goodfellow et al., 2014; Radford et al., 2015). The danger with overfitting in a GAN architecture is that the Generator network would learn to fully replicate the input. Donahue et al. (2019) test overfitting on models trained with a substantially higher number of steps (200,000) compared to our model (12,255) and present evidence that GAN models trained on audio data do not overfit even with a substantially higher number of training steps. The best evidence against overfitting is precisely the fact that the Generator network outputs samples that substantially violate output distributions.", |
| "cite_spans": [ |
| { |
| "start": 292, |
| "end": 317, |
| "text": "(Goodfellow et al., 2014;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 318, |
| "end": 339, |
| "text": "Radford et al., 2015)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 464, |
| "end": 485, |
| "text": "Donahue et al. (2019)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 118, |
| "end": 127, |
| "text": "Figure 3)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model: 12,255 steps", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Establishing internal representations of a neural network is a challenging task (Lillicrap and Kording, 2019) . Below, we propose a technique for uncovering dependencies between the network's latent space and generated data based on logistic regression. This method has the potential to shed light on the network's internal representations: using the proposed technique, we can estimate how the network learns to map latent space into phonetically and phonologically meaningful units in the generated data.", |
| "cite_spans": [ |
| { |
| "start": 80, |
| "end": 109, |
| "text": "(Lillicrap and Kording, 2019)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Establishing internal representations", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "To identify dependencies between the latent space and generated data, we correlate annotations of the output data with the variables in the latent space. As a starting point, we choose to identify correlates of the most prominent feature in the training data: presence or absence of [s] . Any number of other phonetic features can be correlated with this approach; applying this technique to other features and other alternations should yield a better understanding of the network's learning mechanisms. Focusing on more than the chosen feature, however, is beyond the scope of this paper.", |
| "cite_spans": [ |
| { |
| "start": 283, |
| "end": 286, |
| "text": "[s]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Establishing internal representations", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We propose a method based on logistic regression. First, 3,800 outputs from the Generator network trained after 12,255 steps were generated and manually annotated for presence or absence of The annotated data together with values of latent variables for each generated sample (z) were fit to a logistic regression generalized additive model (using the mgcv package; Wood 2011 in R Core Team 2018) with the presence or absence of [s] as the dependent variable (binomial distribution of successes and failures) and smooth terms of latent variables (z) as predictors of interest (estimated as penalized thin plate regression splines; Wood 2011). Generalized additive models were chosen in order to avoid assumptions of linearity: it is possible that latent variables are not linearly correlated with features of interest in the output of the Generator network. The initial full model (FULL) includes smooths for all 100 variables in the latent space that are uniformly distributed within the interval ( 1, 1) as predictors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Establishing internal representations", |
| "sec_num": "3.2" |
| }, |
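The regression step above can be sketched in simplified form. The paper fits a generalized additive model with mgcv in R; the sketch below substitutes a plain linear logistic regression trained by gradient descent (a reasonable simplification here, since the estimated smooths are reported below to be mostly linear). The annotations are simulated rather than taken from real generated audio, so the "active" dimensions and their weights are assumptions of the toy setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's setup: 3,800 latent vectors z with
# 100 uniform(-1, 1) components, annotated for presence of [s]. Here the
# annotation is simulated so that only a few (assumed) dimensions matter.
n, d = 3800, 100
z = rng.uniform(-1, 1, size=(n, d))
true_w = np.zeros(d)
true_w[[5, 11, 49]] = [2.0, -2.5, 1.5]   # assumed "active" dimensions
p = 1 / (1 + np.exp(-(z @ true_w - 1.0)))
y = rng.random(n) < p                    # binary [s] annotation

# Plain logistic regression by gradient descent on the cross-entropy loss.
w, b = np.zeros(d), 0.0
for _ in range(2000):
    q = 1 / (1 + np.exp(-(z @ w + b)))
    w -= 0.5 * (z.T @ (q - y)) / n
    b -= 0.5 * np.mean(q - y)

# Rank latent dimensions by the magnitude of their fitted slopes.
top = np.argsort(-np.abs(w))[:3]
print(sorted(top.tolist()))
```

The fitted slopes recover the simulated active dimensions; in the paper, the analogous per-term estimates from the GAM and linear models are what get ranked.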
| { |
| "text": "To reduce the number of variables, models with different shrinkage techniques are refit and compared: the latent variables for further analysis are then chosen based on combined results of different extratory models. We refit the model with various modifications: with modified smoothing penalty (MODIFIED); with original smoothing penalty, but with an additional penalty for each term if all smoothing parameters tend to infinity (SELECT; Wood 2011); and with manual removal of non-significant terms by Wald test for each term (EXCLUDED).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Establishing internal representations", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The estimated smooths appear mostly linear. We also fit the data to a linear logistic regression model (LINEAR) with all 100 predictors. To reduce the number of predictors, another model is fit (LINEAR EXCLUDED) with those predictors re-moved that do not improve fit.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Establishing internal representations", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "To identify latent variables with highest correlation with [s] in the output, we extract estimates for each term from the generalized additive models and estimates of slopes from the linear model. Figure 4 plots those values. The plot points to a substantial difference between the highest seven predictors and the rest of the latent space. Seven latent variables are thus identified (z 5 , z 11 , z 49 , z 29 , z 74 , z 26 , z 14 ) as potentially having the largest effect on presence or absence of [s] in output. Lasso regression (Simon et al., 2011) and Random Forest models (Liaw and Wiener, 2002) give almost identical results.", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 62, |
| "text": "[s]", |
| "ref_id": null |
| }, |
| { |
| "start": 532, |
| "end": 552, |
| "text": "(Simon et al., 2011)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 578, |
| "end": 601, |
| "text": "(Liaw and Wiener, 2002)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 197, |
| "end": 205, |
| "text": "Figure 4", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Establishing internal representations", |
| "sec_num": "3.2" |
| }, |
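The selection of the seven variables rests on the visible drop in estimates after the top predictors (Figure 4). A minimal sketch of one such cutoff heuristic, ranking estimates and cutting at the largest gap; the magnitudes below are invented for illustration, not the paper's fitted estimates (only the seven indices match the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical absolute effect estimates for 100 latent variables, e.g.
# |slopes| from the linear logistic model: a weak background plus seven
# clearly stronger variables (the indices identified in the paper).
est = rng.uniform(0.0, 0.3, size=100)
strong = [5, 11, 49, 29, 74, 26, 14]
est[strong] = np.linspace(1.4, 0.8, num=7)

order = np.argsort(-est)            # ranks, strongest first
gaps = -np.diff(est[order])         # drop between consecutive ranks
cut = int(np.argmax(gaps[:20])) + 1 # cut at the largest drop among top ranks
selected = sorted(order[:cut].tolist())
print(selected)                     # → [5, 11, 14, 26, 29, 49, 74]
```

The same "largest drop" logic is what the visual inspection of Figure 4 amounts to; as the paper notes below, the exact cutoff remains somewhat arbitrary.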
| { |
| "text": "To conduct an independent generative test of whether the chosen values correlate with [s] in the output data of the Generator network, we set values of the seven identified predictors (z 5 , z 11 , z 49 , z 29 , z 74 , z 26 , z 14 ) to the marginal value of 1 or 1 (depending on whether the correlation is positive or negative) and generated 100 outputs. Altogether seven values in the latent space were thus manipulated, which represents only 7% of the entire latent space. Of the 100 outputs with manipulated values, 73 outputs included a [s] or [s]-like element, either with the stop closure and vowel or without them. The rate of outputs that contain [s] is thus significantly higher when the seven values are manipulated to the marginal levels compared to randomly chosen latent space. In the output data without manipulated values, only 271 out of 3800 generated outputs (or 7.13%) contained an [s] . The difference is significant (c 2 (1) = 559.0, p < 0.00001).", |
| "cite_spans": [ |
| { |
| "start": 901, |
| "end": 904, |
| "text": "[s]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Establishing internal representations", |
| "sec_num": "3.2" |
| }, |
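The significance test above can be approximated from the reported counts alone. The sketch below computes a plain Pearson chi-squared statistic for the 2x2 table (73/100 manipulated vs. 271/3,800 random); without continuity correction it lands near 526 rather than the reported 559.0, so the paper's exact test settings presumably differ, but the conclusion (p far below 0.00001 at 1 df) is unaffected.

```python
# Counts reported in the paper: 73/100 outputs contain [s] when the seven
# latent variables are set to their marginal values, vs. 271/3,800 under
# random sampling of the latent space.
def chi2_2x2(a_hit, a_n, b_hit, b_n):
    """Pearson chi-squared statistic for a 2x2 table (1 df, no correction)."""
    table = [[a_hit, a_n - a_hit], [b_hit, b_n - b_hit]]
    total = a_n + b_n
    chi2 = 0.0
    for i in range(2):
        row = sum(table[i])
        for j in range(2):
            col = table[0][j] + table[1][j]
            expected = row * col / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

stat = chi2_2x2(73, 100, 271, 3800)
print(round(stat, 1))  # ≈ 525.6 for this uncorrected version
```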
| { |
| "text": "High proportions of [s] in the output can be achieved with manipulation of single latent variables, but the values need to be highly marginal, i.e. extend well beyond the training space. Setting the z 11 value outside the training interval to 15, for example, causes the Generator to output [s] in 87 out of 100 generated (87%) sequences, which is again significantly more than with random input (c 2 (1) = 792.7, p < 0.0001). When z 11 is 25, the rate goes up to 96 out of 100, also significantly different from random inputs (c 2 (1) = 959.8, p < 0.0001).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Establishing internal representations", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "While there is a consistent drop in estimates of the regression models after the seven identified variables (Figure 4 ) and while several independent generation tests confirm that the seven variables correspond the to presence of [s] in the output, the cutoff point between the seven variables and the rest of the latent space is still somewhat arbitrary. It is likely that other latent variables directly or indirectly influence the presence of [s] as well: the learning at this point is not yet categorical and several dependencies not discovered here likely affect the results. Nevertheless, further explorations of the latent space suggest the variables identified with the logistic regression (and other) models (Figure 4 ) are indeed the main variables involved with the presence or absence of [s] in the output.", |
| "cite_spans": [ |
| { |
| "start": 230, |
| "end": 233, |
| "text": "[s]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 108, |
| "end": 117, |
| "text": "(Figure 4", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 717, |
| "end": 726, |
| "text": "(Figure 4", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Establishing internal representations", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We further explore whether the mapping between the uniformly distributed input (z) variables can be associated with specific phonetic or phonological features in that output. The crucial step in this direction is to explore values of the latent space beyond the training interval, i.e. beyond ( 1, 1) . Crucially, we observe that the Generator network, while being trained on latent space limited to the interval ( 1, 1), learns representations that extend this interval. Even if the input latent variables (z) exceed the training interval, the Generator network outputs samples that closely resemble human speech. Furthermore, the dependencies learned during training extend outside of the ( 1, 1) interval. Exploring phonetic properties at these marginal values might reveal the actual underlying function of each latent variable.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 293, |
| "end": 300, |
| "text": "( 1, 1)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Interpolation and phonetic features", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "To explore phonetic correlates of the seven latent variables, we set each of the seven variables separately to the marginal value 4.5 and interpolate to its opposite marginal value 4.5 in 0.5 increments, while keeping randomly-sampled values of the other 99 latent variables z constant. The \u00b14.5 value was chosen based on manual inspection of generated samples: amplitude rises of [s] gradually weaken when variables have a value greater than \u00b13.5. Seven sets of generated samples are thus created, one for each of the seven z values (with the other 99 z-values randomly sampled, but kept constant for all seven manipulated variables). Each set contains a subset of 19 generated outputs that correspond to the interpolated variables from 4.5 to 4.5 in 0.5 increments. Twenty-nine such sets containing an [s] in at least one set are extracted for analysis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Interpolation and phonetic features", |
| "sec_num": "3.3" |
| }, |
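The interpolation design above amounts to building a small grid of latent vectors: one manipulated dimension swept from -4.5 to 4.5 in 0.5 steps, all others held at a single random draw. A sketch, where index 11 is assumed to stand for one of the identified variables:

```python
import numpy as np

rng = np.random.default_rng(2)

# One random draw for the 100-dimensional latent vector; 99 of its values
# stay fixed while one identified variable (index 11 here, by assumption)
# is swept across the interpolation range.
base = rng.uniform(-1, 1, size=100)
sweep = np.arange(-4.5, 4.5 + 0.5, 0.5)   # 19 points: -4.5, -4.0, ..., 4.5

grid = np.tile(base, (len(sweep), 1))
grid[:, 11] = sweep                       # manipulate only z_11

print(grid.shape)                         # (19, 100): one latent vector per step
```

Each row of `grid` would then be fed to the trained Generator to produce one audio sample per interpolation step; repeating this for each of the seven variables yields the seven sets described above.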
| { |
| "text": "A clear pattern emerges in the generated data: the latent variables identified as corresponding to the presence of [s] via regression (Figure 4 ) have direct phonetic correlates and cause changes in amplitude and presence/absence of frication noise of [s] when each of the seven values in the latent space are manipulated to the chosen values, including values that exceed the training interval. In other words, by manipulating the identified latent variables, we control the presence/absence of [s] in the output as well as the amplitude of its frication noise. [s] gradually decreases by increasing the value of z 11 until it completely disappears. The exact value of z 11 for which the [s] disappears differs across examples and likely interacts with other features. It is possible that frication noise in the training has a higher amplitude in some conditions, which is why such cases require a higher magnitude of manipulation of z 11 . The figure also shows that as the frication noise of [s] disappears, aspiration of a stop in what appears to be a #TV sequences starts surfacing and replaces the frication noise of [s] . Occasionally, frication noise of [s] gradually transforms into aspiration noise. The exact transformation is likely dependent on the 99 other z-variables held constant and their underlying phonetic effect. Regardless of the underlying phonetic effect of the other variables in the latent space, we can force [s] in the output when generating data and manipulating the chosen variables.", |
| "cite_spans": [ |
| { |
| "start": 496, |
| "end": 499, |
| "text": "[s]", |
| "ref_id": null |
| }, |
| { |
| "start": 1123, |
| "end": 1126, |
| "text": "[s]", |
| "ref_id": null |
| }, |
| { |
| "start": 1437, |
| "end": 1440, |
| "text": "[s]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 134, |
| "end": 143, |
| "text": "(Figure 4", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Interpolation and phonetic features", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "To test the significance of the effects of the seven identified features on the presence of [s] and the amplitude of its frication noise, the 29 generated sets of 19 outputs (with z-value from 4.5 to 4.5) for each of the seven variables were analyzed. The outputs were manually annotated for [s] and the following vowel. Outputs gradually change from #sTV to #TV. Only sequences containing an [s] were analyzed; as soon as [s] stops in the output, annotations were stopped and the outputs were not further analyzed. For each data point, maximum intensity of the fricative and the vowel was extracted in Praat (Boersma and Weenink, 2015; Lennes, 2003) with a 13.3 ms window length. To test whether the decreased frication noise is not part of a general effect of decreased amplitude, we perform significance tests on the ratio of maximum intensity between the frication noise of [s] and the following vowel in the #sTV sequences. Figure 6 plots the ratio of maximum intensity of the fricative divided by the sum of two maximum intensities: of the fricative ([s]) and of the vowel (V). The manipulated z-values are additionally normalized to the interval [0,1], where 0 represents the most marginal value with [s] (usually \u00b14.5; referred to as STRONG henceforth) and 1 represents the last value before [s] disappears (WEAK). Note that the point at which [s] is not present in the output anymore, but the vowel still surfaces (which would yield the ratio at 0) is not included in the model. The data were fit to a beta regression generalized additive mixed model (Wood 2011) with random smooths for (i) trajectory and for (ii) value of other variables in the latent space of the Generator network, see Figure 6 . All smooths (except for z 74 ) are significantly different from 0 and the plots show a clear negative trajectory.", |
| "cite_spans": [ |
| { |
| "start": 609, |
| "end": 636, |
| "text": "(Boersma and Weenink, 2015;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 637, |
| "end": 650, |
| "text": "Lennes, 2003)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 929, |
| "end": 937, |
| "text": "Figure 6", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 1699, |
| "end": 1707, |
| "text": "Figure 6", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Interpolation and phonetic features", |
| "sec_num": "3.3" |
| }, |
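The dependent variable in this analysis is an intensity ratio rather than a raw intensity, so that a global amplitude change affecting fricative and vowel alike cancels out. A sketch with invented intensity values (the numbers below are illustrative, not measurements from the generated outputs):

```python
import numpy as np

# Invented per-step maximum intensities (dB) for one interpolation set,
# standing in for Praat measurements of the generated #sTV outputs.
i_s = np.array([62.0, 60.5, 57.0, 51.0])   # max intensity of [s]
i_v = np.array([68.0, 68.5, 68.0, 67.5])   # max intensity of the vowel

# Ratio of fricative intensity to fricative + vowel intensity: a general
# drop in amplitude moves both terms and largely cancels in the ratio.
ratio = i_s / (i_s + i_v)

# Normalize the manipulated z-values to [0, 1]: 0 = most marginal value
# still yielding [s] (STRONG), 1 = last value before [s] disappears (WEAK).
z_vals = np.array([-4.5, -3.0, -1.5, 0.0])  # hypothetical sweep for one set
norm = (z_vals - z_vals[0]) / (z_vals[-1] - z_vals[0])

print(np.round(ratio, 3).tolist(), norm.tolist())
```

In the paper, the resulting (norm, ratio) points across all sets are what the beta-regression mixed model is fit to; the negative smooths correspond to `ratio` falling as `norm` goes from STRONG to WEAK.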
| { |
| "text": "The seven variables thus strongly correspond to the presence or absence of [s] in the output; by manipulating the chosen variables to the identified values we can attenuate frication noise of [s] and cause its presence or complete disappearance in the generated data. Again, the discovery of these features is possible because we extend the initial training interval and test predictions on marginal values.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Interpolation and phonetic features", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Interpolation of latent variables reveals that the presence of [s] is not controlled by a single latent variable, but by at least seven of them. The different latent variables that correspond to the presence of [s], however, are not phonetically vacuous: individually, they have distinct phonetic correspondences. The generated samples reveal that the variables' secondary effect (besides outputting [s] and controlling its intensity) is likely reflected in spectral properties of the frication noise. The seven variables are thus similar in the sense that manipulation of their values results in the presence of [s] by controlling its frication noise. They crucially differ, however, in the effects on the spectral properties of the outputs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Interpolation and phonetic features", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "To test this prediction, spectral properties of the output fricatives are analyzed in the same 29 sets of generated samples. Spectral properties of the generated fricatives are generally not significantly different at the value of z right before [s] disappears from the outputs. As values of z increase toward the marginal levels (in most cases, \u00b14.5), however, clear differentiation in spectral properties emerge between the seven z-variables. The trajectory for center of gravity, for example, significantly differs between z 11 and most of the other six variables. Overall kurtosis is significantly different when z 11 is manipulated, compared to, for example, z 26 and z 29 . Similarly, while z 74 does not significantly attenuate amplitude of [s], it significantly differs in skew trajectory of [s] . The main function of z 74 is thus likely in its control of spectral properties of frication of [s] (e.g. skew).", |
| "cite_spans": [ |
| { |
| "start": 800, |
| "end": 803, |
| "text": "[s]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Interpolation and phonetic features", |
| "sec_num": "3.3" |
| }, |
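The spectral measures used here (center of gravity, skew, kurtosis) are the standard moments of the power spectrum treated as a probability distribution over frequency. A sketch on synthetic white noise standing in for a generated [s]; the sampling rate and duration are assumptions of the example:

```python
import numpy as np

rng = np.random.default_rng(3)
sr = 16000
x = rng.standard_normal(sr // 4)       # 250 ms of white noise

# Power spectrum, normalized to a probability distribution over frequency.
spec = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(len(x), d=1 / sr)
p = spec / spec.sum()

cog = np.sum(freqs * p)                          # center of gravity (1st moment)
var = np.sum((freqs - cog) ** 2 * p)             # 2nd central moment
skew = np.sum((freqs - cog) ** 3 * p) / var ** 1.5
kurt = np.sum((freqs - cog) ** 4 * p) / var ** 2 - 3   # excess kurtosis

print(round(float(cog)), round(float(skew), 2), round(float(kurt), 2))
```

For white noise the spectrum is roughly flat, so COG sits near half the Nyquist frequency with near-zero skew; a genuine [s] concentrates energy at high frequencies, which is what shifts these moments in the analyses above.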
| { |
| "text": "In sum, manipulating the latent variables that correspond to [s] in the output not only attenuates frication noise (when vocalic amplitude is controlled for) and causes [s] to surface or disappear from the output, but the different z-variables likely correspond to different phonetic features of the frication noise. By setting the values to the marginal levels well beyond the training interval, however, significant differences emerge both in overall levels as well as in trajectories of COG, kurtosis, and skew. It is thus likely that the variables collectively control the presence or absence of [s] , but that individually, they control various phonetic features -spectral properties of the frication noise.", |
| "cite_spans": [ |
| { |
| "start": 600, |
| "end": 603, |
| "text": "[s]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Interpolation and phonetic features", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The results of this paper suggest that we can model phonology not only with rules (Chomsky and Halle, 1968) , finite-state automata (Heinz, 2010; Chandlee, 2014) , input-output optimization (Prince and Smolensky, 1993/2004) , or with neural network architecture that already assumes some level of abstraction (see Section 1), but as the dependency between the latent space and generated data in Generative Adversarial Networks that are trained in an unsupervised manner from raw acoustic data. We train a Generative Adversarial Network (as implemented in Donahue et al. 2019 based on DCGAN architecture; Radford et al. 2015) ; the results of the computational experiment suggest that the network learns the conditional allophonic distribution of VOT duration. To the author's knowledge, this is the first paper testing learning of allophonic distributions in an unsupervised manner from raw acoustic data using neural networks. This paper also proposes a technique that identifies variables that correspond to the presence of [s] in the output and shows that by manipulating these values, we can generate data with or without [s] in the output as well as control its intensity and spectral properties of its frication noise. While at least seven latent variables control the presence of [s], each of them has a phonetic function that controls spectral properties of the frication noise. The proposed technique thus suggests that the Generator network learns to encode phonetic and phonological information in its latent space.", |
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 107, |
| "text": "(Chomsky and Halle, 1968)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 132, |
| "end": 145, |
| "text": "(Heinz, 2010;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 146, |
| "end": 161, |
| "text": "Chandlee, 2014)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 190, |
| "end": 201, |
| "text": "(Prince and", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 202, |
| "end": 223, |
| "text": "Smolensky, 1993/2004)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 604, |
| "end": 624, |
| "text": "Radford et al. 2015)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 1026, |
| "end": 1029, |
| "text": "[s]", |
| "ref_id": null |
| }, |
| { |
| "start": 1126, |
| "end": 1129, |
| "text": "[s]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Training GAN networks on further processes and on languages other than English should yield more information about learning representations of phonetic and phonological processes. This paper outlines methodology for establishing internal representations and testing predictions against generated data, but represents just a first step in a broader task of establishing learning representation of phonetic and phonological data in a Generative Adversarial framework of phonology.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "4" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This research was funded by a grant to new faculty at the University of Washington. I would like to thank Sameer Arshad for slicing data from the TIMIT database and Heather Morrison for annotating data. All mistakes are my own.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Voice onset time (vot) at 50: Theoretical and practical issues in measuring voicing distinctions", |
| "authors": [ |
| { |
| "first": "Arthur", |
| "middle": [ |
| "S" |
| ], |
| "last": "Abramson", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "H" |
| ], |
| "last": "Whalen", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Journal of Phonetics", |
| "volume": "63", |
| "issue": "", |
| "pages": "75--86", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.wocn.2017.05.002" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arthur S. Abramson and D.H. Whalen. 2017. Voice onset time (vot) at 50: Theoretical and practical is- sues in measuring voicing distinctions. Journal of Phonetics, 63:75 -86.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Connectionist approaches to generative phonology", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Alderete", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Tupper", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "The Routledge Handbook of Phonological Theory", |
| "volume": "", |
| "issue": "", |
| "pages": "360--390", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Alderete and Paul Tupper. 2018. Connectionist approaches to generative phonology. In Anna Bosch and S. J. Hannahs, editors, The Routledge Handbook of Phonological Theory, pages 360-390. Routledge, New York.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Phonological constraint induction in a connectionist network: learning ocp-place constraints from data", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Alderete", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Tupper", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [ |
| "A" |
| ], |
| "last": "Frisch", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Language Sciences", |
| "volume": "37", |
| "issue": "", |
| "pages": "52--69", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.langsci.2012.10.002" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Alderete, Paul Tupper, and Stefan A. Frisch. 2013. Phonological constraint induction in a connectionist network: learning ocp-place constraints from data. Language Sciences, 37:52 -69.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Wasserstein generative adversarial networks", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Arjovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Soumith", |
| "middle": [], |
| "last": "Chintala", |
| "suffix": "" |
| }, |
| { |
| "first": "L\u00e9on", |
| "middle": [], |
| "last": "Bottou", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "214--223", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martin Arjovsky, Soumith Chintala, and L\u00e9on Bottou. 2017. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pages 214-223.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Subregular complexity and deep learning", |
| "authors": [ |
| { |
| "first": "Enes", |
| "middle": [], |
| "last": "Avcu", |
| "suffix": "" |
| }, |
| { |
| "first": "Chihiro", |
| "middle": [], |
| "last": "Shibata", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Heinz", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Conference on Logic and Machine Learning in Natural Language (LaML)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Enes Avcu, Chihiro Shibata, and Jeffrey Heinz. 2017. Subregular complexity and deep learning. In Pro- ceedings of the Conference on Logic and Machine Learning in Natural Language (LaML).", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Praat: doing phonetics by computer", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Boersma", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Weenink", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Boersma and David Weenink. 2015. Praat: do- ing phonetics by computer [computer program]. ver- sion 5.4.06. Retrieved 21 February 2015 from http://www.praat.org/.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A note concerning /s/ plus stop clusters in the speech of language-delayed children", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [ |
| "S" |
| ], |
| "last": "Bond", |
| "suffix": "" |
| } |
| ], |
| "year": 1981, |
| "venue": "Applied Psycholinguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "55--63", |
| "other_ids": { |
| "DOI": [ |
| "10.1017/S0142716400000655" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Z. S. Bond. 1981. A note concerning /s/ plus stop clus- ters in the speech of language-delayed children. Ap- plied Psycholinguistics, 2(1):55-63.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Strictly local phonological processes", |
| "authors": [ |
| { |
| "first": "Jane", |
| "middle": [], |
| "last": "Chandlee", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jane Chandlee. 2014. Strictly local phonological pro- cesses. Ph.D. thesis, University of Delaware.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "The Sound Pattern of English", |
| "authors": [ |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Chomsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Morris", |
| "middle": [], |
| "last": "Halle", |
| "suffix": "" |
| } |
| ], |
| "year": 1968, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noam Chomsky and Morris Halle. 1968. The Sound Pattern of English. Harper & Row, New York.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The distribution of aspirated stops and /h/ in American English and Korean: an alignment approach with typological implications", |
| "authors": [ |
| { |
| "first": "Stuart", |
| "middle": [], |
| "last": "Davis", |
| "suffix": "" |
| }, |
| { |
| "first": "Mi-Hui", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Linguistic", |
| "volume": "41", |
| "issue": "4", |
| "pages": "607--652", |
| "other_ids": { |
| "DOI": [ |
| "10.1515/ling.2003.020" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stuart Davis and Mi-Hui Cho. 2006. The distribution of aspirated stops and /h/ in American English and Korean: an alignment approach with typological im- plications. Linguistic, 41(4):607-652.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Adversarial audio synthesis", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Donahue", |
| "suffix": "" |
| }, |
| { |
| "first": "Julian", |
| "middle": [], |
| "last": "Mcauley", |
| "suffix": "" |
| }, |
| { |
| "first": "Miller", |
| "middle": [], |
| "last": "Puckette", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Donahue, Julian McAuley, and Miller Puck- ette. 2019. Adversarial audio synthesis. In ICLR. github.com/chrisdonahue/wavegan.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Cognitive science in the era of artificial intelligence: A roadmap for reverseengineering the infant language-learner", |
| "authors": [ |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Cognition", |
| "volume": "173", |
| "issue": "", |
| "pages": "43--59", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.cognition.2017.11.008" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emmanuel Dupoux. 2018. Cognitive science in the era of artificial intelligence: A roadmap for reverse- engineering the infant language-learner. Cognition, 173:43 -59.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Morphological inflection generation using character sequence to sequence learning", |
| "authors": [ |
| { |
| "first": "Manaal", |
| "middle": [], |
| "last": "Faruqui", |
| "suffix": "" |
| }, |
| { |
| "first": "Yulia", |
| "middle": [], |
| "last": "Tsvetkov", |
| "suffix": "" |
| }, |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Neubig", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "634--643", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N16-1077" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection genera- tion using character sequence to sequence learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 634-643, San Diego, California. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Timit acoustic-phonetic continuous speech corpus", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "S" |
| ], |
| "last": "Garofolo", |
| "suffix": "" |
| }, |
| { |
| "first": "Lori", |
| "middle": [], |
| "last": "Lamel", |
| "suffix": "" |
| }, |
| { |
| "first": "W M", |
| "middle": [], |
| "last": "Fisher", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Fiscus", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "S" |
| ], |
| "last": "Pallett", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "L" |
| ], |
| "last": "Dahlgren", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Zue", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. S. Garofolo, Lori Lamel, W M Fisher, Jonathan Fis- cus, D S. Pallett, N L. Dahlgren, and V Zue. 1993. Timit acoustic-phonetic continuous speech corpus. Linguistic Data Consortium.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Generative adversarial nets", |
| "authors": [ |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Goodfellow", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Pouget-Abadie", |
| "suffix": "" |
| }, |
| { |
| "first": "Mehdi", |
| "middle": [], |
| "last": "Mirza", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Warde-Farley", |
| "suffix": "" |
| }, |
| { |
| "first": "Sherjil", |
| "middle": [], |
| "last": "Ozair", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "27", |
| "issue": "", |
| "pages": "2672--2680", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672-2680. Curran Associates, Inc.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Neural control of speech", |
| "authors": [ |
| { |
| "first": "Frank", |
| "middle": [ |
| "H" |
| ], |
| "last": "Guenther", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frank H Guenther. 2016. Neural control of speech. MIT Press.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "A neural theory of speech acquisition and production", |
| "authors": [ |
| { |
| "first": "Frank", |
| "middle": [ |
| "H" |
| ], |
| "last": "Guenther", |
| "suffix": "" |
| }, |
| { |
| "first": "Tony", |
| "middle": [], |
| "last": "Vladusich", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Is a neural theory of language possible? Issues from an interdisciplinary perspective", |
| "volume": "25", |
| "issue": "", |
| "pages": "408--422", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.jneuroling.2009.08.006" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frank H. Guenther and Tony Vladusich. 2012. A neural theory of speech acquisition and production. Journal of Neurolinguistics, 25(5):408 -422. Is a neural theory of language possible? Issues from an interdisciplinary perspective.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Improved training of wasserstein gans", |
| "authors": [ |
| { |
| "first": "Ishaan", |
| "middle": [], |
| "last": "Gulrajani", |
| "suffix": "" |
| }, |
| { |
| "first": "Faruk", |
| "middle": [], |
| "last": "Ahmed", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Arjovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Dumoulin", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron C", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "30", |
| "issue": "", |
| "pages": "5767--5777", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vin- cent Dumoulin, and Aaron C Courville. 2017. Im- proved training of wasserstein gans. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5767-5777. Curran Associates, Inc.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Learning long-distance phonotactics", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Heinz", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Linguistic Inquiry", |
| "volume": "41", |
| "issue": "4", |
| "pages": "623--661", |
| "other_ids": { |
| "DOI": [ |
| "10.1162/LING/_a_00015" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Heinz. 2010. Learning long-distance phonotac- tics. Linguistic Inquiry, 41(4):623-661.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Aspiration and laryngeal representation in germanic", |
| "authors": [ |
| { |
| "first": "Gregory", |
| "middle": [ |
| "K" |
| ], |
| "last": "Iverson", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [ |
| "C" |
| ], |
| "last": "Salmons", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Phonology", |
| "volume": "12", |
| "issue": "3", |
| "pages": "369--396", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gregory K. Iverson and Joseph C. Salmons. 1995. As- piration and laryngeal representation in germanic. Phonology, 12(3):369-396.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Unsupervised neural network based feature extraction using weak top-down constraints", |
| "authors": [ |
| { |
| "first": "Herman", |
| "middle": [], |
| "last": "Kamper", |
| "suffix": "" |
| }, |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Elsner", |
| "suffix": "" |
| }, |
| { |
| "first": "Aren", |
| "middle": [], |
| "last": "Jansen", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
| "volume": "", |
| "issue": "", |
| "pages": "5818--5822", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Herman Kamper, Micha Elsner, Aren Jansen, and Sharon Goldwater. 2015. Unsupervised neural net- work based feature extraction using weak top-down constraints. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5818-5822.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Bias and population structure in the actuation of sound change. arXiv e-prints", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Kirby", |
| "suffix": "" |
| }, |
| { |
| "first": "Morgan", |
| "middle": [], |
| "last": "Sonderegger", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1507.04420" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Kirby and Morgan Sonderegger. 2015. Bias and population structure in the actuation of sound change. arXiv e-prints, page arXiv:1507.04420.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "A nonparametric Bayesian approach to acoustic model discovery", |
| "authors": [ |
| { |
| "first": "Chia-Ying", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Glass", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "40--49", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chia-ying Lee and James Glass. 2012. A nonparamet- ric Bayesian approach to acoustic model discovery. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 40-49, Jeju Island, Korea. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "f0-f1-f2-intensity praat script. praat script", |
| "authors": [ |
| { |
| "first": "Mietta", |
| "middle": [], |
| "last": "Lennes", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mietta Lennes. 2003. f0-f1-f2-intensity praat script. praat script. Modified by Dan McCloy, Esther Le Gr\u00e9sauze, and Ga\u0161per Begu\u0161.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Classification and regression by randomforest", |
| "authors": [ |
| { |
| "first": "Andy", |
| "middle": [], |
| "last": "Liaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Wiener", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "R News", |
| "volume": "2", |
| "issue": "3", |
| "pages": "18--22", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andy Liaw and Matthew Wiener. 2002. Classification and regression by randomforest. R News, 2(3):18- 22.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "What does it mean to understand a neural network? arXiv e-prints", |
| "authors": [ |
| { |
| "first": "Timothy", |
| "middle": [ |
| "P" |
| ], |
| "last": "Lillicrap", |
| "suffix": "" |
| }, |
| { |
| "first": "Konrad", |
| "middle": [ |
| "P" |
| ], |
| "last": "Kording", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1907.06374" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Timothy P. Lillicrap and Konrad P. Kording. 2019. What does it mean to understand a neural network? arXiv e-prints, page arXiv:1907.06374.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "How is the aspiration of english", |
| "authors": [ |
| { |
| "first": "Leigh", |
| "middle": [], |
| "last": "Lisker", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "Language and Speech", |
| "volume": "27", |
| "issue": "4", |
| "pages": "391--394", |
| "other_ids": { |
| "DOI": [ |
| "10.1177/002383098402700409" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leigh Lisker. 1984. How is the aspiration of english /p, t, k/ \"predictable\"? Language and Speech, 27(4):391-394.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Using regular languages to explore the representational capacity of recurrent neural architectures", |
| "authors": [ |
| { |
| "first": "Abhijit", |
| "middle": [], |
| "last": "Mahalunkar", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "D" |
| ], |
| "last": "Kelleher", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Artificial Neural Networks and Machine Learning -ICANN 2018", |
| "volume": "", |
| "issue": "", |
| "pages": "189--198", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abhijit Mahalunkar and John D. Kelleher. 2018. Using regular languages to explore the representational ca- pacity of recurrent neural architectures. In Artificial Neural Networks and Machine Learning -ICANN 2018, pages 189-198, Cham. Springer International Publishing.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Learning phonemes with a protolexicon", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Martin", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Peperkamp", |
| "suffix": "" |
| }, |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Cognitive Science", |
| "volume": "37", |
| "issue": "1", |
| "pages": "103--124", |
| "other_ids": { |
| "DOI": [ |
| "10.1111/j.1551-6709.2012.01267.x" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Martin, Sharon Peperkamp, and Emmanuel Dupoux. 2013. Learning phonemes with a proto- lexicon. Cognitive Science, 37(1):103-124.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Homonyms and cluster reduction in the normal development of children's speech", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "McLeod", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "van Doorn", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Reed", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the Sixth Australian International Conference on Speech Science & Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "331--336", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S McLeod, J van Doorn, and V Reed. 1996. Homonyms and cluster reduction in the normal de- velopment of children's speech. In Proceedings of the Sixth Australian International Conference on Speech Science & Technology, pages 331-336.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Role of imitation in the emergence of phonological systems", |
| "authors": [ |
| { |
| "first": "No\u00ebl", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "V\u00e9ronique", |
| "middle": [], |
| "last": "Delvaux", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Journal of Phonetics", |
| "volume": "53", |
| "issue": "", |
| "pages": "46--54", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.wocn.2015.08.004" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "No\u00ebl Nguyen and V\u00e9ronique Delvaux. 2015. Role of imitation in the emergence of phonological systems. Journal of Phonetics, 53:46 -54. On the cognitive nature of speech sound systems.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Coupled neural maps for the origins of vowel systems", |
| "authors": [ |
| { |
| "first": "Pierre-Yves", |
| "middle": [], |
| "last": "Oudeyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the International conference on artificial neural networks. Lecture notes in computer science", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/3-540-44668-0_163" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pierre-Yves Oudeyer. 2001. Coupled neural maps for the origins of vowel systems. In Proceedings of the International conference on artificial neural net- works. Lecture notes in computer science, pages 1171-1176. Springer. Volume: 2130.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Phonemic coding might result from sensory-motor coupling dynamics", |
| "authors": [ |
| { |
| "first": "Pierre-Yves", |
| "middle": [], |
| "last": "Oudeyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "From animals to animats 7: Proceedings of the Seventh International Conference on Simulation of Adaptive Behavior", |
| "volume": "", |
| "issue": "", |
| "pages": "406--416", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pierre-Yves Oudeyer. 2002. Phonemic coding might result from sensory-motor coupling dynamics. In From animals to animats 7: Proceedings of the Seventh International Conference on Simulation of Adaptive Behavior, pages 406-416. MIT Press.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "The self-organization of speech sounds", |
| "authors": [ |
| { |
| "first": "Pierre-Yves", |
| "middle": [], |
| "last": "Oudeyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Journal of Theoretical Biology", |
| "volume": "233", |
| "issue": "3", |
| "pages": "435--449", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.jtbi.2004.10.025" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pierre-Yves Oudeyer. 2005. The self-organization of speech sounds. Journal of Theoretical Biology, 233(3):435 -449.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Self-organization in the evolution of speech. Studies in the evolution of language", |
| "authors": [ |
| { |
| "first": "Pierre-Yves", |
| "middle": [], |
| "last": "Oudeyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "6", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pierre-Yves Oudeyer. 2006. Self-organization in the evolution of speech. Studies in the evolution of lan- guage ; 6. Oxford University Press, Oxford.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Generative linguistics and neural networks at 60: Foundation, friction, and fusion", |
| "authors": [ |
| { |
| "first": "Joe", |
| "middle": [], |
| "last": "Pater", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joe Pater. 2019. Generative linguistics and neural net- works at 60: Foundation, friction, and fusion. Lan- guage.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Exemplar dynamics: Word frequency, lenition, and contrast", |
| "authors": [ |
| { |
| "first": "Janet", |
| "middle": [], |
| "last": "Pierrehumbert", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Frequency Effects and the Emergence of Lexical Structure", |
| "volume": "", |
| "issue": "", |
| "pages": "137--157", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Janet Pierrehumbert. 2001. Exemplar dynamics: Word frequency, lenition, and contrast. In Joan L. Bybee and Paul J. Hopper, editors, Frequency Effects and the Emergence of Lexical Structure, pages 137-157. John Benjamins, Amsterdam.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Learning reduplication with a variable-free neural network", |
| "authors": [ |
| { |
| "first": "Brandon", |
| "middle": [], |
| "last": "Prickett", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Traylor", |
| "suffix": "" |
| }, |
| { |
| "first": "Joe", |
| "middle": [], |
| "last": "Pater", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brandon Prickett, Aaron Traylor, and Joe Pa- ter. 2019. Learning reduplication with a variable-free neural network. Ms., Uni- versity of Massachusetts, Amherst. http: //works.bepress.com/joe_pater/38/ (accessed 23 May 2019).", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Optimality Theory: Constraint Interaction in Generative Grammar", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Prince", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Smolensky", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Tech. Rep", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan Prince and Paul Smolensky. 1993/2004. Opti- mality Theory: Constraint Interaction in Generative Grammar. Blackwell, Malden, MA. First published in Tech. Rep. 2, Rutgers University Center for Cog- nitive Science.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "R Core Team", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R Core Team. 2018. R: A Language and Environment for Statistical Computing. R Foundation for Statis- tical Computing, Vienna, Austria.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", |
| "authors": [ |
| { |
| "first": "Alec", |
| "middle": [], |
| "last": "Radford", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Metz", |
| "suffix": "" |
| }, |
| { |
| "first": "Soumith", |
| "middle": [], |
| "last": "Chintala", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1511.06434" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alec Radford, Luke Metz, and Soumith Chintala. 2015. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "No free lunch in linguistics or machine learning: Response to pater. Language", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Rawski", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Heinz", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathan Rawski and Jeffrey Heinz. 2019. No free lunch in linguistics or machine learning: Response to pater. Language.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Early phonetic learning without phonetic categories -insights from machine learning", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Schatz", |
| "suffix": "" |
| }, |
| { |
| "first": "Naomi", |
| "middle": [], |
| "last": "Feldman", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuan Nga", |
| "middle": [], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.31234/osf.io/fc4wh" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Schatz, Naomi Feldman, Sharon Goldwa- ter, Xuan Nga Cao, and Emmanuel Dupoux. 2019. Early phonetic learning without phonetic categories -insights from machine learning.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Measuring the perceptual availability of phonological features during language acquisition using unsupervised binary stochastic autoencoders", |
| "authors": [ |
| { |
| "first": "Cory", |
| "middle": [], |
| "last": "Shain", |
| "suffix": "" |
| }, |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Elsner", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "69--85", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cory Shain and Micha Elsner. 2019. Measuring the perceptual availability of phonological features dur- ing language acquisition using unsupervised binary stochastic autoencoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 69-85, Minneapolis, Minnesota. As- sociation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Sound analogies with phoneme embeddings", |
| "authors": [ |
| { |
| "first": "Miikka", |
| "middle": [ |
| "P" |
| ], |
| "last": "Silfverberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Lingshuang", |
| "middle": [], |
| "last": "Mao", |
| "suffix": "" |
| }, |
| { |
| "first": "Mans", |
| "middle": [], |
| "last": "Hulden", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Society for Computation in Linguistics (SCiL", |
| "volume": "", |
| "issue": "", |
| "pages": "136--144", |
| "other_ids": { |
| "DOI": [ |
| "10.7275/R5NZ85VD" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miikka P. Silfverberg, Lingshuang Mao, and Mans Hulden. 2018. Sound analogies with phoneme em- beddings. In Proceedings of the Society for Compu- tation in Linguistics (SCiL) 2018, pages 136-144.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Regularization paths for cox's proportional hazards model via coordinate descent", |
| "authors": [ |
| { |
| "first": "Noah", |
| "middle": [], |
| "last": "Simon", |
| "suffix": "" |
| }, |
| { |
| "first": "Jerome", |
| "middle": [], |
| "last": "Friedman", |
| "suffix": "" |
| }, |
| { |
| "first": "Trevor", |
| "middle": [], |
| "last": "Hastie", |
| "suffix": "" |
| }, |
| { |
| "first": "Rob", |
| "middle": [], |
| "last": "Tibshirani", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Statistical Software", |
| "volume": "39", |
| "issue": "5", |
| "pages": "1--13", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noah Simon, Jerome Friedman, Trevor Hastie, and Rob Tibshirani. 2011. Regularization paths for cox's proportional hazards model via coordinate de- scent. Journal of Statistical Software, 39(5):1-13.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "A hybrid dynamic time warping-deep neural network architecture for unsupervised acoustic modeling", |
| "authors": [ |
| { |
| "first": "Roland", |
| "middle": [], |
| "last": "Thiolli\u00e8re", |
| "suffix": "" |
| }, |
| { |
| "first": "Ewan", |
| "middle": [], |
| "last": "Dunbar", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Synnaeve", |
| "suffix": "" |
| }, |
| { |
| "first": "Maarten", |
| "middle": [], |
| "last": "Versteegh", |
| "suffix": "" |
| }, |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of Interspeech", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roland Thiolli\u00e8re, Ewan Dunbar, Gabriel Synnaeve, Maarten Versteegh, and Emmanuel Dupoux. 2015. A hybrid dynamic time warping-deep neural net- work architecture for unsupervised acoustic model- ing. In Proceedings of Interspeech.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Laryngeal markedness and aspiration", |
| "authors": [ |
| { |
| "first": "Bert", |
| "middle": [], |
| "last": "Vaux", |
| "suffix": "" |
| }, |
| { |
| "first": "Bridget", |
| "middle": [], |
| "last": "Samuels", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Phonology", |
| "volume": "22", |
| "issue": "3", |
| "pages": "395--436", |
| "other_ids": { |
| "DOI": [ |
| "10.1017/S0952675705000667" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bert Vaux and Bridget Samuels. 2005. Laryn- geal markedness and aspiration. Phonology, 22(3):395-436.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "The fine line between linguistic generalization and failure in Seq2Seq-attention models", |
| "authors": [ |
| { |
| "first": "Noah", |
| "middle": [], |
| "last": "Weber", |
| "suffix": "" |
| }, |
| { |
| "first": "Leena", |
| "middle": [], |
| "last": "Shekhar", |
| "suffix": "" |
| }, |
| { |
| "first": "Niranjan", |
| "middle": [], |
| "last": "Balasubramanian", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Workshop on Generalization in the Age of Deep Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "24--27", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W18-1004" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noah Weber, Leena Shekhar, and Niranjan Balasubra- manian. 2018. The fine line between linguistic gen- eralization and failure in Seq2Seq-attention models. In Proceedings of the Workshop on Generalization in the Age of Deep Learning, pages 24-27, New Orleans, Louisiana. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "N" |
| ], |
| "last": "Wood", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of the Royal Statistical Society (B)", |
| "volume": "73", |
| "issue": "1", |
| "pages": "3--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. N. Wood. 2011. Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society (B), 73(1):3-36.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "text": "A diagram showing the Generative Adversarial architecture as proposed inGoodfellow et al. (2014);Donahue et al. (2019) and trained on data from the TIMIT database in this paper.", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "text": "[s]. 271 outputs (7.13%) were annotated as involving a segment [s]. Frication that resembled [s]-like aspiration noise after the alveolar stop and before high vowels was not annotated as including [s]. Innovative outputs such as an #[s] without the following vowel or #sV sequences were annotated as including an [s].", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "text": "Plot of c 2 values (left scale) for the 100 predictors across the four generalized additive models. For the two linear models (LINEAR and LINEAR EXCLUDED), estimates of slopes in absolute values (|b |) are plotted (right scale). The blue vertical line indicates the division between the seven chosen predictors and the rest of the predictor space with a clear drop in estimates between the first seven values (z 5 , z 11 , z 49 , z 29 , z 74 , z 26 , z 14 ) and the rest of the space.", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF4": { |
| "type_str": "figure", |
| "text": "illustrates this effect. Frication noise of Waveforms and two spectrograms (both 0 8, 000 Hz) of generated data with z 11 variable manipulated and interpolated. The values on the left of waveforms indicate the value of z 11 . The two spectrograms represent the highest and the lowest value of z 11 . A clear attenuation of the frication noise is visible until complete disappearance.", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF5": { |
| "type_str": "figure", |
| "text": "Ratio of max. intensity ([s]/([s]+V)) b (a) Plots of ratios of maximum intensity between the frication of [s] and phonation of the vowel in #sTV sequences across the seven variables and (b) predicted values with 95% CIs of the ratio based on beta regression generalized additive model.", |
| "num": null, |
| "uris": null |
| }, |
| "TABREF0": { |
| "html": null, |
| "type_str": "table", |
| "text": "). 1 The training 1Donahue et al. (2019) trained the model on the SC09 and TIMIT databases, but the results are not useful for model-", |
| "num": null, |
| "content": "<table><tr><td>8000</td><td/><td>8000</td><td/></tr><tr><td>Frequency (Hz)</td><td>Frequency (Hz)</td><td/><td/></tr><tr><td>0</td><td/><td>0</td><td/></tr><tr><td>15.39</td><td>15.59</td><td>87.06</td><td>87.4</td></tr><tr><td>Time (s)</td><td/><td>Time (s)</td><td/></tr><tr><td colspan=\"4\">Figure 2: Waveforms and spectrograms (0-8,000 Hz)</td></tr><tr><td colspan=\"4\">of a typical generated samples of #TV (left) and #sTV</td></tr><tr><td colspan=\"4\">(right) sequences from a Generator trained after 12,255</td></tr><tr><td>steps.</td><td/><td/><td/></tr></table>" |
| }, |
| "TABREF1": { |
| "html": null, |
| "type_str": "table", |
| "text": "The estimates for Intercept (duration of VOT when no [s] precedes) are b = 56.2 ms,t = 25.74, p < 0.0001. VOT is on average 26.8 ms shorter if [s] precedes the TV sequence and this difference is significant (b = 26.8 ms,t = 7.29, p < 0.0001).", |
| "num": null, |
| "content": "<table/>" |
| } |
| } |
| } |
| } |