{
"paper_id": "K17-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:07:02.502483Z"
},
"title": "Encoding of phonology in a recurrent neural model of grounded speech",
"authors": [
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": "",
"affiliation": {},
"email": "a.alishahi@uvt.nl"
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": "",
"affiliation": {},
"email": "g.chrupala@uvt.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates the encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to those proposed in linguistics.",
"pdf_parse": {
"paper_id": "K17-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates the encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to those proposed in linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Spoken language is a universal human means of communication. As such, its acquisition and representation in the brain is an essential topic in the study of the cognition of our species. In the field of neuroscience there has been a long-standing interest in the understanding of neural representations of linguistic input in human brains, most commonly via the analysis of neuro-imaging data of participants exposed to simplified, highly controlled inputs. More recently, naturalistic data has been used and patterns in the brain have been correlated with patterns in the input (e.g. Wehbe et al., 2014; Khalighinejad et al., 2017) .",
"cite_spans": [
{
"start": 584,
"end": 603,
"text": "Wehbe et al., 2014;",
"ref_id": "BIBREF40"
},
{
"start": 604,
"end": 631,
"text": "Khalighinejad et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This type of approach is also relevant when the goal is to understand the dynamics of complex neural network models of speech understanding: firstly because similar techniques are often applicable, but more importantly because knowledge of how the workings of artificial and biological neural networks are similar or different is valuable for the general enterprise of cognitive science.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent studies have implemented models which learn to understand speech in a weakly and indirectly supervised fashion from correlated audio and visual signal: Harwath et al. (2016) ; Harwath and Glass (2017); Chrupa\u0142a et al. (2017a) . This is a departure from typical Automatic Speech Recognition (ASR) systems which rely on large amounts of transcribed speech, and these recent models come closer to the way humans acquire language in a grounded setting. It is thus especially interesting to investigate to what extent the traditional levels of linguistic analysis such as phonology, morphology, syntax and semantics are encoded in the activations of the hidden layers of these models. There are a small number of studies which focus on the syntax and/or semantics in the context of neural models of written language (e.g. Elman, 1991; Frank et al., 2013; K\u00e1d\u00e1r et al., 2016; Li et al., 2016a; Adi et al., 2016; Li et al., 2016b; Linzen et al., 2016) . Taking it a step further, Gelderloos and Chrupa\u0142a (2016) and Chrupa\u0142a et al. (2017a) investigate the levels of representations in models which learn language from phonetic transcriptions and from the speech signal, respectively. Neither of these tackles the representation of phonology in any great depth. Instead they work with relatively coarse-grained distinctions between form and meaning.",
"cite_spans": [
{
"start": 159,
"end": 180,
"text": "Harwath et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 209,
"end": 232,
"text": "Chrupa\u0142a et al. (2017a)",
"ref_id": "BIBREF5"
},
{
"start": 824,
"end": 836,
"text": "Elman, 1991;",
"ref_id": "BIBREF9"
},
{
"start": 837,
"end": 856,
"text": "Frank et al., 2013;",
"ref_id": "BIBREF10"
},
{
"start": 857,
"end": 876,
"text": "K\u00e1d\u00e1r et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 877,
"end": 894,
"text": "Li et al., 2016a;",
"ref_id": "BIBREF21"
},
{
"start": 895,
"end": 912,
"text": "Adi et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 913,
"end": 930,
"text": "Li et al., 2016b;",
"ref_id": "BIBREF22"
},
{
"start": 931,
"end": 951,
"text": "Linzen et al., 2016)",
"ref_id": "BIBREF25"
},
{
"start": 980,
"end": 1010,
"text": "Gelderloos and Chrupa\u0142a (2016)",
"ref_id": "BIBREF13"
},
{
"start": 1015,
"end": 1038,
"text": "Chrupa\u0142a et al. (2017a)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the current work we use controlled synthetic stimuli, as well as alignment between the audio signal and the phonetic transcription of spoken utterances, to extract phoneme representation vectors based on the activations of the hidden layers of a model of grounded speech perception. We use these representations to carry out analyses of the representation of phonemes at a fine-grained level. In a series of experiments, we show that the lower layers of the model encode accurate representations of the phonemes which can be used in phoneme identification and classification with high accuracy. We further investigate how the phoneme inventory is organised in the activation space of the model. Finally, we tackle the general issue of the representation of phonological form versus meaning with a controlled task of synonym discrimination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our results show that the bottom layers in the multi-layer recurrent neural network learn invariances which enable it to encode phonemes independently of co-articulatory context, and that they represent phonemic categories closely matching usual classifications from linguistics. Phonological form becomes harder to detect in higher layers of the network, which increasingly focus on representing meaning over form, but encoding of phonology persists to a significant degree up to the top recurrent layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We make the data and open-source code to reproduce our results publicly available at github.com/gchrupala/encoding-of-phonology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Research on the encoding of phonology has been carried out from psycholinguistic as well as computational modeling perspectives. Below we review both types of work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Co-articulation and interspeaker variability make it impossible to define unique acoustic patterns for each phoneme. In an early experiment, Liberman et al. (1967) analyzed the acoustic properties of the /d/ sound in the two syllables /di/ and /du/. They found that while humans easily noticed differences between the two instances when /d/ was played in isolation, they perceived the /d/ as being the same when listening to the complete syllables. This phenomenon is often referred to as categorical perception: acoustically different stimuli are perceived as the same. In another experiment Lisker and Abramson (1967) used the two syllables /ba/ and /pa/ which only differ in their voice onset time (VOT), and created a continuum moving from syllables with short VOT to syllables with increasingly longer VOT. Participants identified all consonants with VOT below 25 msec as being /b/ and all consonants with VOT above 25 msec as being /p/. There was no grey area in which both interpretations of the sound were equally likely, which suggests that the phonemes were perceived categorically. Supporting findings also come from discrimination experiments: when one consonant has a VOT below 25 msec and the other above, people perceive the two syllables as being different (/ba/ and /pa/ respectively), but they do not notice any differences in the acoustic signal when both syllables have a VOT below or above 25 msec (even when these sounds are physically further away from each other than two sounds that cross the 25 msec dividing line).",
"cite_spans": [
{
"start": 141,
"end": 163,
"text": "Liberman et al. (1967)",
"ref_id": "BIBREF23"
},
{
"start": 594,
"end": 620,
"text": "Lisker and Abramson (1967)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme perception",
"sec_num": "2.1"
},
{
"text": "Evidence from infant speech perception studies suggests that infants also perceive phonemes categorically (Eimas et al., 1971): one- and four-month-old infants were presented with multiple syllables from the continuum of /ba/ to /pa/ sounds described above. As long as the syllables all came from above or below the 25 msec line, the infants showed no change in behavior (measured by their amount of sucking), but when presented with a syllable crossing that line, the infants reacted differently. This suggests that infants, just like adults, perceive speech sounds as belonging to discrete categories. Dehaene-Lambertz and Gliga (2004) also showed that the same neural systems are activated for both infants and adults when performing this task. Importantly, languages differ in their phoneme inventories; for example English distinguishes /r/ from /l/ while Japanese does not, and children have to learn which categories to use. Experimental evidence suggests that infants can discriminate both native and nonnative speech sound differences up to 8 months of age, but have difficulty discriminating acoustically similar nonnative contrasts by 10-12 months of age (Werker and Hensch, 2015). These findings suggest that by their first birthday, infants have learned to focus only on those contrasts that are relevant for their native language and to neglect those which are not. Psycholinguistic theories assume that children learn the categories of their native language by keeping track of the frequency distribution of acoustic sounds in their input. The forms around peaks in this distribution are then perceived as being a distinct category. Recent computational models showed that infant-directed speech contains sufficiently clear peaks for such a distributional learning mechanism to succeed and also that top-down processes like semantic knowledge and visual information play a role in phonetic category learning (ter Schure et al., 2016).",
"cite_spans": [
{
"start": 106,
"end": 126,
"text": "(Eimas et al., 1971)",
"ref_id": "BIBREF8"
},
{
"start": 612,
"end": 637,
"text": "Lambertz and Gliga (2004)",
"ref_id": "BIBREF7"
},
{
"start": 1920,
"end": 1945,
"text": "(ter Schure et al., 2016)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme perception",
"sec_num": "2.1"
},
{
"text": "From the machine learning perspective, categorical perception corresponds to the notion of learning invariances to certain properties of the input. With the experiments in Section 4 we attempt to gain some insight into this issue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme perception",
"sec_num": "2.1"
},
{
"text": "There is a sizeable body of work on using recurrent neural (and other) networks to detect phonemes or phonetic features as a subcomponent of an ASR system. King and Taylor (2000) train recurrent neural networks to extract phonological features from framewise cepstral representation of speech in the TIMIT speaker-independent database. Frankel et al. (2007) introduce a dynamic Bayesian network for articulatory (phonetic) feature recognition as a component of an ASR system. Siniscalchi et al. (2013) show that a multilayer perceptron can successfully classify phonological features and contribute to the accuracy of a downstream ASR system.",
"cite_spans": [
{
"start": 156,
"end": 178,
"text": "King and Taylor (2000)",
"ref_id": "BIBREF20"
},
{
"start": 336,
"end": 357,
"text": "Frankel et al. (2007)",
"ref_id": "BIBREF11"
},
{
"start": 464,
"end": 501,
"text": "ASR system. Siniscalchi et al. (2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computational models",
"sec_num": "2.2"
},
{
"text": "Mohamed et al. (2012) use a Deep Belief Network (DBN) for acoustic modeling and phone recognition on human speech. They analyze the impact of the number of layers on phone recognition error rate, and visualize the MFCC vectors as well as the learned activation vectors of the hidden layers of the model. They show that the representations learned by the model are more speaker-invariant than the MFCC features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational models",
"sec_num": "2.2"
},
{
"text": "These works directly supervise the networks to recognize phonological information. Another supervised but multimodal approach is taken by Sun (2016), which uses grounded speech for improving a supervised model of transcribing utterances from spoken descriptions of images. We on the other hand are more interested in understanding how the phonological level of representation emerges from weak supervision via correlated signal from the visual modality.",
"cite_spans": [
{
"start": 138,
"end": 148,
"text": "Sun (2016)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computational models",
"sec_num": "2.2"
},
{
"text": "There are some existing models which learn language representations from sensory input in such a weakly supervised fashion. For example Roy and Pentland (2002) use spoken utterances paired with images of objects, and search for segments of speech that reliably co-occur with visual shapes. Yu and Ballard (2004) use a similar approach but also include non-verbal cues such as gaze and gesture into the input for unsupervised learning of words and their visual meaning. These language learning models use rich input signals, but are very limited in scale and variation.",
"cite_spans": [
{
"start": 136,
"end": 159,
"text": "Roy and Pentland (2002)",
"ref_id": "BIBREF30"
},
{
"start": 290,
"end": 311,
"text": "Yu and Ballard (2004)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computational models",
"sec_num": "2.2"
},
{
"text": "A separate line of research has used neural networks for modeling phonology from a (neuro)cognitive perspective. Burgess and Hitch (1999) implement a connectionist model of the so-called phonological loop, i.e. the posited working memory which makes phonological forms available for recall (Baddeley and Hitch, 1974) . Gasser and Lee (1989) show that Simple Recurrent Networks are capable of acquiring phonological constraints such as vowel harmony or phonological alterations at morpheme boundaries. Touretzky and Wheeler (1989) present a connectionist architecture which performs multiple simultaneous insertion, deletion, and mutation operations on sequences of phonemes. In this body of work the input to the network is at the level of phonemes or phonetic features, not acoustic features, and it is thus more concerned with the rules governing phonology and does not address how representations of phonemes arise from exposure to speech in the first place. Moreover, the early connectionist work deals with constrained, toy datasets. Current neural network architectures and hardware enable us to use much more realistic inputs with the potential to lead to qualitatively different results.",
"cite_spans": [
{
"start": 113,
"end": 137,
"text": "Burgess and Hitch (1999)",
"ref_id": "BIBREF4"
},
{
"start": 290,
"end": 316,
"text": "(Baddeley and Hitch, 1974)",
"ref_id": "BIBREF1"
},
{
"start": 319,
"end": 340,
"text": "Gasser and Lee (1989)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computational models",
"sec_num": "2.2"
},
{
"text": "As our model of language acquisition from grounded speech signal we adopt the Recurrent Highway Network-based model of Chrupa\u0142a et al. (2017a). This model has two desirable properties: firstly, thanks to the analyses carried out in that work, we understand roughly how the hidden layers differ in terms of the level of linguistic representation they encode. Secondly, the model is trained on clean synthetic speech which makes it appropriate to use for the controlled experiments in Section 5.2. We refer the reader to Chrupa\u0142a et al. (2017a) for a detailed description of the model architecture. Here we give a brief overview.",
"cite_spans": [
{
"start": 119,
"end": 142,
"text": "Chrupa\u0142a et al. (2017a)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "The model exploits correlations between two modalities, i.e. speech and vision, as a source of weak supervision for learning to understand speech; in other words it implements language acquisition from the speech signal grounded in visual perception. The architecture is a bi-modal network whose learning objective is to project spoken utterances and images to a joint semantic space, such that corresponding pairs (u, i) (i.e. an utterance and the image it describes) are close in this space, while unrelated pairs are far away, by a margin \u03b1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "(1) \u2211_{u,i} [ \u2211_{u'} max[0, \u03b1 + d(u, i) \u2212 d(u', i)] + \u2211_{i'} max[0, \u03b1 + d(u, i) \u2212 d(u, i')] ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "where d(u, i) is the cosine distance between the encoded utterance u and encoded image i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "The image encoder part of the model uses image vectors from a pretrained object classification model, VGG-16 (Simonyan and Zisserman, 2014) , and uses a linear transform to directly project these to the joint space. The utterance encoder takes Mel-frequency Cepstral Coefficients (MFCC) as input, and transforms it successively according to:",
"cite_spans": [
{
"start": 109,
"end": 139,
"text": "(Simonyan and Zisserman, 2014)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "enc_u(u) = unit(Attn(RHN_{k,L}(Conv_{s,d,z}(u))))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "(2) The first layer Conv_{s,d,z} is a one-dimensional convolution of size s which subsamples the input with stride z, and projects it to d dimensions. It is followed by RHN_{k,L} which consists of k residualized recurrent layers. Specifically, these are Recurrent Highway Network layers (Zilly et al., 2016), which are closely related to GRU networks, with the crucial difference that they increase the depth of the transform between timesteps; this is the recurrence depth L. The output of the final recurrent layer is passed through an attention-like lookback operator Attn which takes a weighted average of the activations across time steps. Finally, both utterance and image projections are L2-normalized. See Section 4.1 for details of the model configuration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Table 1. Phoneme inventory. Vowels: i I U u e E @ \u00c4 OI O o aI ae 2 A aU. Approximants: j \u00f4 l w. Nasals: m n N. Plosives: p b t d k g. Fricatives: f v T D s z S Z h. Affricates: \u00d9 \u00c3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vowels",
"sec_num": null
},
{
"text": "The phoneme representations in each layer are calculated as the activations averaged over the duration of the phoneme occurrence in the input. The average input vectors are similarly calculated as the MFCC vectors averaged over the time course of the articulation of the phoneme occurrence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental data and setup",
"sec_num": "4"
},
{
"text": "When we need to represent a phoneme type we do so by averaging the vectors of all its occurrences in the validation set. Table 1 shows the phoneme inventory we work with; this is also the inventory used by Gentle/Kaldi (see Section 4.3).",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 128,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experimental data and setup",
"sec_num": "4"
},
{
"text": "We use the pre-trained version of the COCO Speech model, implemented in Theano (Bastien et al., 2012), provided by Chrupa\u0142a et al. (2017a). 1 The details of the model configuration are as follows: a convolutional layer with length 6, size 64, and stride 3; 5 Recurrent Highway Network layers with 512 dimensions and 2 microsteps; an attention Multi-Layer Perceptron with 512 hidden units; the Adam optimizer with initial learning rate 0.0002. The 4096-dimensional image feature vectors come from the final fully connected layer of VGG-16 (Simonyan and Zisserman, 2014) pretrained on Imagenet (Russakovsky et al., 2014), and are averages of feature vectors for ten crops of each image. The total number of learnable parameters is 9,784,193. Table 2 sketches the architecture of the utterance encoder part of the model.",
"cite_spans": [
{
"start": 79,
"end": 101,
"text": "(Bastien et al., 2012)",
"ref_id": null
},
{
"start": 116,
"end": 139,
"text": "Chrupa\u0142a et al. (2017a)",
"ref_id": "BIBREF5"
},
{
"start": 142,
"end": 143,
"text": "1",
"ref_id": null
},
{
"start": 575,
"end": 601,
"text": "(Russakovsky et al., 2014)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 724,
"end": 731,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Model settings",
"sec_num": "4.1"
},
{
"text": "The Speech COCO model was trained on the Synthetically Spoken COCO dataset (Chrupa\u0142a et al., 2017b) , which is a version of the MS COCO dataset (Lin et al., 2014) where speech was synthesized for the original image descriptions, using high-quality speech synthesis provided by gTTS. 2",
"cite_spans": [
{
"start": 75,
"end": 99,
"text": "(Chrupa\u0142a et al., 2017b)",
"ref_id": "BIBREF6"
},
{
"start": 144,
"end": 162,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetically Spoken COCO",
"sec_num": "4.2"
},
{
"text": "We aligned the speech signal to the corresponding phonemic transcription with the Gentle toolkit, 3 which in turn is based on Kaldi (Povey et al., 2011). It uses a speech recognition model for English to transcribe the input audio signal, and then finds the optimal alignment of the transcription to the signal. This fails for a small number of utterances, which we remove from the data. In the next step we extract MFCC features from the audio signal and pass them through the COCO Speech utterance encoder, and record the activations for the convolutional layer as well as all the recurrent layers. For each utterance the representations (i.e. MFCC features and activations) are stored in a t_r \u00d7 D_r matrix, where t_r and D_r are the number of time steps and the dimensionality, respectively, for each representation r. Given the alignment of each phoneme token to the underlying audio, we then infer the slice of the representation matrix corresponding to it.",
"cite_spans": [
{
"start": 132,
"end": 152,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Forced alignment",
"sec_num": "4.3"
},
{
"text": "In this section we report on four experiments which we designed to elucidate to what extent information about phonology is represented in the activations of the layers of the COCO Speech model. In Section 5.1 we quantify how easy it is to decode phoneme identity from activations. In Section 5.2 we determine phoneme discriminability in a controlled task with minimal pair stimuli. Section 5.3 shows how the phoneme inventory is organized in the activation space of the model. Finally, in Section 5.4 we tackle the general issue of the representation of phonological form versus meaning with the controlled task of synonym discrimination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "In this section we quantify to what extent phoneme identity can be decoded from the input MFCC features as compared to the representations extracted from the COCO Speech model. As explained in Section 4.3, we use phonemic transcriptions aligned to the corresponding audio in order to segment the signal into chunks corresponding to individual phonemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme decoding",
"sec_num": "5.1"
},
{
"text": "We take a sample of 5000 utterances from the validation set of Synthetically Spoken COCO, and extract the force-aligned representations from the Speech COCO model. We split this data into 2/3 training and 1/3 heldout portions, and use supervised classification in order to quantify the recoverability of phoneme identities from the representations. Each phoneme slice is averaged over time, so that it becomes a D_r-dimensional vector. For each representation we then train L2-penalized logistic regression (with the fixed penalty weight 1.0) on the training data and measure classification error rate on the heldout portion. Figure 1 shows the results. As can be seen from this plot, phoneme recoverability is poor for the representations based on MFCC and the convolutional layer activations, but improves markedly for the recurrent layers. Phonemes are most easily recovered from the activations at recurrent layers 1 and 2, and the accuracy decreases thereafter. This suggests that the bottom recurrent layers of the model specialize in recognizing this type of low-level phonological information. It is notable however that even the last recurrent layer encodes phoneme identity to a substantial degree.",
"cite_spans": [],
"ref_spans": [
{
"start": 627,
"end": 635,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Phoneme decoding",
"sec_num": "5.1"
},
{
"text": "The MFCC features do much better than the majority baseline (89% error rate) but poorly relative to the recurrent layers. Averaging across phoneme durations may be hurting performance, but interestingly, the network can overcome this and form more robust phoneme representations in the activation patterns. Schatz et al. (2013) propose a set of tasks called Minimal-Pair ABX tasks that allow making linguistically precise comparisons between syllable pairs that only differ by one phoneme. They use variants of this task to study phoneme discrimination across talkers and phonetic contexts as well as talker discrimination across phonemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme decoding",
"sec_num": "5.1"
},
{
"text": "Here we evaluate the COCO Speech model on the Phoneme across Context (PaC) task of Schatz et al. (2013) . This task consists of presenting a series of equal-length tuples (A, B, X) to the model, where A and B differ by one phoneme (either a vowel or a consonant), as do B and X, but A and X are not minimal pairs. For example, in the tuple (be /bi/, me /mi/, my /maI/), the task is to identify which of the two syllables /bi/ or /mi/ is closer to /maI/. The goal is to measure context invariance in phoneme discrimination by evaluating how often the model recognizes X as the syllable closer to B than to A.",
"cite_spans": [
{
"start": 83,
"end": 103,
"text": "Schatz et al. (2013)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme discrimination",
"sec_num": "5.2"
},
{
"text": "We used a list of all attested consonant-vowel (CV) syllables of American English according to the syllabification method described in Gorman (2013). We excluded the ones which could not be unambiguously represented using English spelling for input to the TTS system (e.g. /baU/). We then compiled a list of all possible (A, B, X) tuples from this list where (A, B) and (B, X) are minimal pairs, but (A, X) are not. This resulted in 34,288 tuples in total. For each tuple, we measure sign(dist(A, X) \u2212 dist(B, X)), where dist(i, j) is the Euclidean distance between the vector representations of syllables i and j. (Figure 2: Accuracies for the ABX CV task for the cases where the target and the distractor belong to the same phoneme class. Shaded area extends \u00b11 standard error from the mean.)",
"cite_spans": [
{
"start": 135,
"end": 148,
"text": "Gorman (2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 583,
"end": 591,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Phoneme discrimination",
"sec_num": "5.2"
},
{
"text": "These representations are either the audio feature vectors or the layer activation vectors. A positive value for a tuple means that the model has correctly discriminated the phonemes that are shared or different across the syllables. Table 3 shows the discrimination accuracy in this task using various representations. The pattern is similar to what we observed in the phoneme identification task: best accuracy is achieved using representation vectors from recurrent layers 1 and 2, and it drops as we move further up in the model. The accuracy is lowest when final embedding features are used for this task. However, the PaC task is most meaningful and challenging where the target and the distractor phonemes belong to the same phoneme class. Figure 2 shows the accuracies for this subset of cases, broken down by class. As can be seen, the model can discriminate between phonemes with high accuracy across all the layers, and the layer activations are more informative for this task than the MFCC features. Again, most phoneme classes seem to be represented more accurately in the lower layers (1-3), and the performance of the model in this task drops as we move towards higher hidden layers. There are also clear differences in the pattern of discriminability for the phoneme classes. The vowels are especially easy to tell apart, but accuracy on vowels drops most acutely in the higher layers. Meanwhile the accuracy on fricatives and approximants starts low, but improves rapidly and peaks around recurrent layer 2. The somewhat erratic pattern for nasals and affricates is most likely due to small sample size for these classes, as evident from the wide standard error.",
"cite_spans": [],
"ref_spans": [
{
"start": 269,
"end": 276,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 782,
"end": 790,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Phoneme discrimination",
"sec_num": "5.2"
},
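The scoring rule above — a tuple counts as correctly discriminated when sign(dist(A, X) − dist(B, X)) is positive — can be sketched as follows, assuming the representations (audio features or layer activations) are NumPy vectors; the function names are our own choosing.

```python
import numpy as np

def abx_correct(a, b, x):
    """One ABX trial: (A, B) and (B, X) are minimal pairs and (A, X)
    is not, so X should lie closer to B than to A. The trial counts
    as correct iff sign(dist(A, X) - dist(B, X)) > 0."""
    return np.linalg.norm(a - x) - np.linalg.norm(b - x) > 0

def abx_accuracy(triples):
    """Fraction of (A, B, X) triples discriminated correctly."""
    return float(np.mean([abx_correct(a, b, x) for a, b, x in triples]))
```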
{
"text": "In this section we take a closer look at the underlying organization of phonemes in the model. Our experiment is inspired by Khalighinejad et al. (2017) who study how the speech signal is represented in the brain at different stages of the auditory pathway by collecting and analyzing electroencephalography responses from participants listening to continuous speech, and show that brain responses to different phoneme categories turn out to be organized by phonetic features. We carry out an analogous experiment by analyzing the hidden layer activations of our model in response to each phoneme in the input. First, we generated a distance matrix for every pair of phonemes by calculating the Euclidean distance between the phoneme pair's activation vectors for each layer separately, as well as a distance matrix for all phoneme pairs based on their MFCC features. Similar to what Khalighinejad et al. (2017) report, we observe that the phoneme activations on all layers significantly correlate with the phoneme representations in the speech signal, and these correlations are strongest for the lower layers of the model. Figure 3 shows the results.",
"cite_spans": [
{
"start": 125,
"end": 152,
"text": "Khalighinejad et al. (2017)",
"ref_id": "BIBREF19"
},
{
"start": 884,
"end": 911,
"text": "Khalighinejad et al. (2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1125,
"end": 1133,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Organization of phonemes",
"sec_num": "5.3"
},
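The comparison of per-layer and MFCC distance matrices described above is a simple form of representational similarity analysis. A minimal sketch, under the assumption that per-phoneme mean vectors are available as rows of an array (the helper names are ours, not the authors'):

```python
import numpy as np

def distance_profile(vectors):
    """Upper-triangle pairwise Euclidean distances between phoneme
    mean vectors (one row per phoneme type)."""
    diffs = vectors[:, None, :] - vectors[None, :, :]
    dmat = np.sqrt((diffs ** 2).sum(-1))
    iu = np.triu_indices(len(vectors), k=1)
    return dmat[iu]

def layer_mfcc_correlation(layer_vectors, mfcc_vectors):
    """Pearson correlation between a layer's phoneme distance profile
    and the MFCC-based one."""
    return float(np.corrcoef(distance_profile(layer_vectors),
                             distance_profile(mfcc_vectors))[0, 1])
```

Computing this correlation for each layer against the MFCC baseline gives the curve summarized in Figure 3.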
{
"text": "We then performed agglomerative hierarchical clustering on phoneme type MFCC and activation vectors, using Euclidean distance as the distance metric and the Ward linkage criterion (Ward Jr, 1963) . Figure 5 shows the clustering results for the activation vectors on the first hidden layer. The leaf nodes are color-coded according to phoneme classes as specified in Table 1 . There is a substantial degree of matching between the classes and the structure of the hierarchy, but also some mixing between rounded back vowels and the voiced plosives /b/ and /g/, which share articulatory features such as lip movement or tongue position. We measured the adjusted Rand Index for the match between the hierarchy induced from each representation and the phoneme classes; flat clusters were obtained by cutting each tree into the same number of clusters as there are phoneme classes. There is a notable drop in the match from the MFCC features to the activations of the convolutional layer. We suspect this may be explained by the loss of information caused by averaging over phoneme instances combined with the lower temporal resolution of the activations compared to MFCC. The match improves markedly at recurrent layer 1.",
"cite_spans": [
{
"start": 180,
"end": 195,
"text": "(Ward Jr, 1963)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 198,
"end": 206,
"text": "Figure 5",
"ref_id": null
},
{
"start": 366,
"end": 373,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Organization of phonemes",
"sec_num": "5.3"
},
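The clustering-and-scoring step above can be sketched as follows. This is a hedged reconstruction, not the authors' code: it uses SciPy's Ward linkage, cuts the tree into as many flat clusters as there are phoneme classes, and scores the match with a hand-rolled adjusted Rand index (so the only dependencies are NumPy and SciPy).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.special import comb

def adjusted_rand(labels_a, labels_b):
    """Adjusted Rand index between two flat labelings (standard formula
    over the contingency table)."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    _, ia = np.unique(a, return_inverse=True)
    _, ib = np.unique(b, return_inverse=True)
    table = np.zeros((ia.max() + 1, ib.max() + 1), dtype=int)
    np.add.at(table, (ia, ib), 1)
    sum_comb = comb(table, 2).sum()
    comb_a = comb(table.sum(1), 2).sum()
    comb_b = comb(table.sum(0), 2).sum()
    expected = comb_a * comb_b / comb(len(a), 2)
    max_index = (comb_a + comb_b) / 2
    return (sum_comb - expected) / (max_index - expected)

def cluster_match(vectors, phoneme_classes):
    """Ward-linkage clustering of phoneme vectors, cut into as many
    flat clusters as there are phoneme classes, scored with ARI."""
    tree = linkage(vectors, method="ward")
    flat = fcluster(tree, t=len(set(phoneme_classes)), criterion="maxclust")
    return adjusted_rand(phoneme_classes, flat)
```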
{
"text": "Next we simulate the task of distinguishing between pairs of synonyms, i.e. words with different acoustic forms but the same meaning. With a representation encoding phonological form, our expectation is that the task would be easy; in contrast, with a representation which is invariant to phonological form in order to encode meaning, the task would be hard. Figure 5: Hierarchical clustering of phoneme activation vectors on the first hidden layer.",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 196,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synonym discrimination",
"sec_num": "5.4"
},
{
"text": "We generate a list of synonyms for each noun, verb and adjective in the validation data using Wordnet (Miller, 1995) synset membership as a criterion. Out of these generated word pairs, we select synonyms for the experiment based on the following criteria:",
"cite_spans": [
{
"start": 102,
"end": 116,
"text": "(Miller, 1995)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym discrimination",
"sec_num": "5.4"
},
{
"text": "\u2022 both forms clearly are synonyms in the sense that one word can be replaced by the other without changing the meaning of a sentence, \u2022 both forms appear more than 20 times in the validation data, \u2022 the words differ clearly in form (i.e. they are not simply variant spellings like donut/doughnut, grey/gray), \u2022 the more frequent form constitutes less than 95% of the occurrences. This gives us 2 verb, 2 adjective and 21 noun pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym discrimination",
"sec_num": "5.4"
},
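The two frequency criteria above can be expressed directly over corpus counts. The following is our own small helper for illustration (the synonymy and spelling-variant checks are assumed to be done by hand, as described); the interpretation that the 95% threshold applies to the pair's combined occurrences is ours.

```python
def select_synonym_pairs(pairs, counts, min_count=20, max_ratio=0.95):
    """Keep candidate (word_a, word_b) pairs where both forms occur
    more than `min_count` times and the more frequent form makes up
    less than `max_ratio` of the pair's combined occurrences."""
    kept = []
    for a, b in pairs:
        ca, cb = counts.get(a, 0), counts.get(b, 0)
        if min(ca, cb) > min_count and max(ca, cb) / (ca + cb) < max_ratio:
            kept.append((a, b))
    return kept
```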
{
"text": "For each synonym pair, we select the sentences in the validation set in which one of the two forms appears. We use the POS-tagging feature of NLTK (Bird, 2006) to ensure that only those sentences are selected in which the word appears in the correct word category (e.g. play and show are synonyms when used as nouns, but not when used as verbs). We then generate spoken utterances in which the original word is replaced by its synonym, resulting in the same number of utterances for both words of each synonym pair.",
"cite_spans": [
{
"start": 147,
"end": 159,
"text": "(Bird, 2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym discrimination",
"sec_num": "5.4"
},
{
"text": "For each pair we generate a binary classification task using the MFCC features, the average activations in the convolutional layer, the average unit activations per recurrent layer, and the sentence embeddings as input features. For every type of input, we run 10-fold cross-validation using logistic regression to predict which of the two words the utterance contains. We used an average of 672 (minimum 96; maximum 2282) utterances for training the classifiers. Figure 6 shows the error rate in this classification task for each layer and each synonym pair. Recurrent layer activations are more informative for this task than MFCC features or activations of the convolutional layer. Across all the recurrent layers the error rate is small, showing that some form of phonological information is present throughout this part of the model. However, sentence embeddings give relatively high error rates, suggesting that the attention layer acts to focus on semantic information and to filter out much of the phonological form.",
"cite_spans": [],
"ref_spans": [
{
"start": 464,
"end": 472,
"text": "Figure 6",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Synonym discrimination",
"sec_num": "5.4"
},
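The per-pair classification experiment above can be sketched with scikit-learn; this is a minimal reconstruction under our assumptions (default logistic regression, accuracy averaged over stratified folds), not the authors' exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def synonym_error_rate(features, labels, folds=10):
    """Mean cross-validated error for predicting which of the two
    synonyms an utterance contains, from one type of input features
    (MFCCs, averaged layer activations, or sentence embeddings)."""
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          features, labels, cv=folds)
    return 1.0 - float(acc.mean())
```

A low error rate for a given feature type indicates that it still encodes the phonological difference between the two forms; a high error rate indicates invariance to it.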
{
"text": "Understanding distributed representations learned by neural networks is important but has the reputation of being hard or even impossible. In this work we focus on making progress on this problem for a particular domain: representations of phonology in a multilayer recurrent neural network trained on grounded speech signal. We believe it is important to carry out multiple analyses using diverse methodology: any single experiment may be misleading as it depends on analytical choices such as the type of supervised model used for decoding, the algorithm used for clustering, or the similarity metric for representational similarity analysis. To the extent that more than one experiment points to the same conclusion, our confidence in the reliability of the insights gained increases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Earlier work (Chrupa\u0142a et al., 2017a) shows that encoding of semantics in our RNN model of grounded speech becomes stronger in higher layers, while encoding of form becomes weaker. The main high-level results of our study confirm this pattern by showing that the representation of phonological knowledge is most accurate in the lower layers of the model. This general pattern is to be expected as the objective of the utterance encoder is to transform the input acoustic features in such a way that it can be matched to its counterpart in a completely separate modality. Many of the details of how this happens, however, are far from obvious: perhaps most surprisingly we found that a large amount of phonological information is still available up to the top recurrent layer. Evidence for this pattern emerges from the phoneme decoding task, the ABX task and the synonym discrimination task. The last of these also shows that the attention layer significantly attenuates the encoding of phonology and makes the utterance embeddings much more invariant to synonymy.",
"cite_spans": [
{
"start": 13,
"end": 37,
"text": "(Chrupa\u0142a et al., 2017a)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Our model is trained on synthetic speech, which is easier to process than natural human-generated speech. While small-scale databases of natural speech and image are available (e.g. the Flickr8k Audio Caption Corpus, Harwath and Glass, 2015), they are not large enough to reliably train models such as ours. In future work we would like to collect more data, apply our methodology to grounded human speech, and investigate whether context- and speaker-invariant phoneme representations can be learned from natural, noisy input. We would also like to make comparisons to the results that emerge from similar analyses applied to neuroimaging data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Code, data and pretrained models available from https://github.com/gchrupala/visually-grounded-speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at https://github.com/pndurette/gTTS. Available at https://github.com/lowerquality/gentle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fine-grained analysis of sentence embeddings using auxiliary prediction tasks",
"authors": [
{
"first": "Yossi",
"middle": [],
"last": "Adi",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Kermany",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Lavi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1608.04207"
]
},
"num": null,
"urls": [],
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained anal- ysis of sentence embeddings using auxiliary predic- tion tasks. arXiv preprint arXiv:1608.04207 .",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Working memory",
"authors": [
{
"first": "Alan",
"middle": [
"D"
],
"last": "Baddeley",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Hitch",
"suffix": ""
}
],
"year": 1974,
"venue": "Psychology of learning and motivation",
"volume": "8",
"issue": "",
"pages": "47--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan D Baddeley and Graham Hitch. 1974. Work- ing memory. Psychology of learning and motivation 8:47-89.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Theano: new features and speed improvements",
"authors": [
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Bastien",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Lamblin",
"suffix": ""
},
{
"first": "Razvan",
"middle": [],
"last": "Pascanu",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bergstra",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"J"
],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Arnaud",
"middle": [],
"last": "Bergeron",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Bouchard",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2012,
"venue": "Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fr\u00e9d\u00e9ric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Berg- eron, Nicolas Bouchard, and Yoshua Bengio. 2012. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "NLTK: the natural language toolkit",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL on Interactive presentation sessions. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "69--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird. 2006. NLTK: the natural language toolkit. In Proceedings of the COLING/ACL on Interac- tive presentation sessions. Association for Compu- tational Linguistics, pages 69-72.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Memory for serial order: a network model of the phonological loop and its timing",
"authors": [
{
"first": "Neil",
"middle": [],
"last": "Burgess",
"suffix": ""
},
{
"first": "Graham",
"middle": [
"J"
],
"last": "Hitch",
"suffix": ""
}
],
"year": 1999,
"venue": "Psychological Review",
"volume": "106",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neil Burgess and Graham J Hitch. 1999. Memory for serial order: a network model of the phono- logical loop and its timing. Psychological Review 106(3):551.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Representations of language in a model of visually grounded speech signal",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Lieke",
"middle": [],
"last": "Gelderloos",
"suffix": ""
},
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Chrupa\u0142a, Lieke Gelderloos, and Afra Al- ishahi. 2017a. Representations of language in a model of visually grounded speech signal. In Pro- ceedings of the 55th Annual Meeting of the Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Synthetically spoken COCO",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Lieke",
"middle": [],
"last": "Gelderloos",
"suffix": ""
},
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.400926"
]
},
"num": null,
"urls": [],
"raw_text": "Grzegorz Chrupa\u0142a, Lieke Gelderloos, and Afra Al- ishahi. 2017b. Synthetically spoken COCO. https://doi.org/10.5281/zenodo.400926.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Common neural basis for phoneme processing in infants and adults",
"authors": [
{
"first": "Ghislaine",
"middle": [],
"last": "Dehaene-Lambertz",
"suffix": ""
},
{
"first": "Teodora",
"middle": [],
"last": "Gliga",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Cognitive Neuroscience",
"volume": "16",
"issue": "8",
"pages": "1375--1387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ghislaine Dehaene-Lambertz and Teodora Gliga. 2004. Common neural basis for phoneme processing in in- fants and adults. Journal of Cognitive Neuroscience 16(8):1375-1387.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Speech perception in infants",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Eimas",
"suffix": ""
},
{
"first": "Einar",
"middle": [
"R"
],
"last": "Siqueland",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Juscyk",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Vigorito",
"suffix": ""
}
],
"year": 1971,
"venue": "Science",
"volume": "171",
"issue": "3968",
"pages": "303--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D Eimas, Einar R Siqueland, Peter Juscyk, and James Vigorito. 1971. Speech perception in infants. Science 171(3968):303-306.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Distributed representations, simple recurrent networks, and grammatical structure",
"authors": [
{
"first": "Jeffrey",
"middle": [
"L"
],
"last": "Elman",
"suffix": ""
}
],
"year": 1991,
"venue": "Machine learning",
"volume": "7",
"issue": "2-3",
"pages": "195--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey L Elman. 1991. Distributed representations, simple recurrent networks, and grammatical struc- ture. Machine learning 7(2-3):195-225.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The acquisition of anaphora by simple recurrent networks",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Mathis",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Badecker",
"suffix": ""
}
],
"year": 2013,
"venue": "Language Acquisition",
"volume": "20",
"issue": "3",
"pages": "181--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Frank, Donald Mathis, and William Badecker. 2013. The acquisition of anaphora by simple re- current networks. Language Acquisition 20(3):181- 227.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Articulatory feature recognition using dynamic Bayesian networks",
"authors": [
{
"first": "Joe",
"middle": [],
"last": "Frankel",
"suffix": ""
},
{
"first": "Mirjam",
"middle": [],
"last": "Wester",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 2007,
"venue": "Computer Speech & Language",
"volume": "21",
"issue": "4",
"pages": "620--640",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joe Frankel, Mirjam Wester, and Simon King. 2007. Articulatory feature recognition using dynamic Bayesian networks. Computer Speech & Language 21(4):620-640.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Networks that learn phonology",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Gasser",
"suffix": ""
},
{
"first": "Ch",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Gasser and Ch Lee. 1989. Networks that learn phonology. Technical report, Computer Science De- partment, Indiana University.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning",
"authors": [
{
"first": "Lieke",
"middle": [],
"last": "Gelderloos",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lieke Gelderloos and Grzegorz Chrupa\u0142a. 2016. From phonemes to images: levels of representation in a recurrent neural model of visually-grounded lan- guage learning. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generative phonotactics",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Gorman",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyle Gorman. 2013. Generative phonotactics. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep multimodal semantic embeddings for speech and images",
"authors": [
{
"first": "David",
"middle": [],
"last": "Harwath",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE Automatic Speech Recognition and Understanding Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Harwath and James Glass. 2015. Deep multi- modal semantic embeddings for speech and images. In IEEE Automatic Speech Recognition and Under- standing Workshop.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning word-like units from joint audio-visual analysis",
"authors": [
{
"first": "David",
"middle": [],
"last": "Harwath",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.07481"
]
},
"num": null,
"urls": [],
"raw_text": "David Harwath and James R Glass. 2017. Learn- ing word-like units from joint audio-visual analysis. arXiv preprint arXiv:1701.07481 .",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Unsupervised learning of spoken language with visual context",
"authors": [
{
"first": "David",
"middle": [],
"last": "Harwath",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1858--1866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Harwath, Antonio Torralba, and James Glass. 2016. Unsupervised learning of spoken language with visual context. In Advances in Neural Infor- mation Processing Systems. pages 1858-1866.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Representation of linguistic form and function in recurrent neural networks",
"authors": [
{
"first": "Akos",
"middle": [],
"last": "K\u00e1d\u00e1r",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akos K\u00e1d\u00e1r, Grzegorz Chrupa\u0142a, and Afra Alishahi. 2016. Representation of linguistic form and function in recurrent neural networks. CoRR abs/1602.08952.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dynamic encoding of acoustic features in neural responses to continuous speech",
"authors": [
{
"first": "Bahar",
"middle": [],
"last": "Khalighinejad",
"suffix": ""
},
{
"first": "Guilherme",
"middle": [],
"last": "Cruzatto da Silva",
"suffix": ""
},
{
"first": "Nima",
"middle": [],
"last": "Mesgarani",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Neuroscience",
"volume": "37",
"issue": "8",
"pages": "2176--2185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bahar Khalighinejad, Guilherme Cruzatto da Silva, and Nima Mesgarani. 2017. Dynamic encoding of acoustic features in neural responses to continuous speech. Journal of Neuroscience 37(8):2176-2185.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Detection of phonological features in continuous speech using neural networks",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2000,
"venue": "Computer Speech & Language",
"volume": "14",
"issue": "4",
"pages": "333--353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon King and Paul Taylor. 2000. Detection of phonological features in continuous speech using neural networks. Computer Speech & Language 14(4):333 -353.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Visualizing and understanding neural models in NLP",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "681--691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016a. Visualizing and understanding neural mod- els in NLP. In Proceedings of NAACL-HLT. pages 681-691.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Understanding neural networks through representation erasure",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. Un- derstanding neural networks through representation erasure. CoRR abs/1612.08220.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Perception of the speech code",
"authors": [
{
"first": "Alvin",
"middle": [
"M"
],
"last": "Liberman",
"suffix": ""
},
{
"first": "Franklin",
"middle": [
"S"
],
"last": "Cooper",
"suffix": ""
},
{
"first": "Donald",
"middle": [
"P"
],
"last": "Shankweiler",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Studdert-Kennedy",
"suffix": ""
}
],
"year": 1967,
"venue": "Psychological review",
"volume": "74",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alvin M Liberman, Franklin S Cooper, Donald P Shankweiler, and Michael Studdert-Kennedy. 1967. Perception of the speech code. Psychological review 74(6):431.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Microsoft COCO: Common objects in context",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Maire",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hays",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2014,
"venue": "Computer Vision-ECCV 2014",
"volume": "",
"issue": "",
"pages": "740--755",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In Computer Vision- ECCV 2014, Springer, pages 740-755.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "521--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics 4:521- 535.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The voicing dimension: some experiments in comparative phonetics",
"authors": [
{
"first": "L",
"middle": [],
"last": "Lisker",
"suffix": ""
},
{
"first": "A",
"middle": [
"S"
],
"last": "Abramson",
"suffix": ""
}
],
"year": 1967,
"venue": "Proceedings of the 6th International Congress of Phonetic Sciences",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Lisker and A.S. Abramson. 1967. The voicing di- mension: some experiments in comparative pho- netics. In Proceedings of the 6th International Congress of Phonetic Sciences.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "WordNet: a lexical database for English",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. WordNet: a lexical database for english. Communications of the ACM 38(11):39-41.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Understanding how deep belief networks perform acoustic modelling",
"authors": [
{
"first": "Abdel-rahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Penn",
"suffix": ""
}
],
"year": 2012,
"venue": "Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on. IEEE",
"volume": "",
"issue": "",
"pages": "4273--4276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdel-rahman Mohamed, Geoffrey Hinton, and Ger- ald Penn. 2012. Understanding how deep belief net- works perform acoustic modelling. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on. IEEE, pages 4273- 4276.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The Kaldi speech recognition toolkit",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Arnab",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "Nagendra",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Mirko",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Yanmin",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Silovsky",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Stemmer",
"suffix": ""
},
{
"first": "Karel",
"middle": [],
"last": "Vesely",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE 2011 Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Pro- cessing Society. IEEE Catalog No.: CFP11SRW- USB.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Learning words from sights and sounds: a computational model",
"authors": [
{
"first": "Deb",
"middle": [
"K"
],
"last": "Roy",
"suffix": ""
},
{
"first": "Alex",
"middle": [
"P"
],
"last": "Pentland",
"suffix": ""
}
],
"year": 2002,
"venue": "Cognitive Science",
"volume": "26",
"issue": "1",
"pages": "113--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deb K Roy and Alex P Pentland. 2002. Learning words from sights and sounds: a computational model. Cognitive Science 26(1):113 -146.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Evaluating speech features with the minimal-pair ABX task: Analysis of the classical MFC/PLP pipeline",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Schatz",
"suffix": ""
},
{
"first": "Vijayaditya",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Aren",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Hynek",
"middle": [],
"last": "Hermansky",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
}
],
"year": 2013,
"venue": "INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Schatz, Vijayaditya Peddinti, Francis Bach, Aren Jansen, Hynek Hermansky, and Emmanuel Dupoux. 2013. Evaluating speech features with the minimal-pair ABX task: Analysis of the clas- sical MFC/PLP pipeline. In INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association. pages 1-5.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Very deep convolutional networks for large-scale image recognition",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Exploiting deep neural networks for detection-based speech recognition",
"authors": [
{
"first": "Sabato",
"middle": [
"Marco"
],
"last": "Siniscalchi",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Chin-Hui",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2013,
"venue": "Neurocomputing",
"volume": "106",
"issue": "",
"pages": "148--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabato Marco Siniscalchi, Dong Yu, Li Deng, and Chin-Hui Lee. 2013. Exploiting deep neural net- works for detection-based speech recognition. Neu- rocomputing 106:148 -157.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Speech representation models for speech synthesis and multimodal speech recognition",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Sun. 2016. Speech representation models for speech synthesis and multimodal speech recogni- tion. Ph.D. thesis, Massachusetts Institute of Tech- nology.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Semantics guide infants' vowel learning: computational and experimental evidence",
"authors": [
{
"first": "Smm",
"middle": [],
"last": "Ter Schure",
"suffix": ""
},
{
"first": "Cmm",
"middle": [],
"last": "Junge",
"suffix": ""
},
{
"first": "Ppg",
"middle": [],
"last": "Boersma",
"suffix": ""
}
],
"year": 2016,
"venue": "Infant Behavior and Development",
"volume": "43",
"issue": "",
"pages": "44--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "SMM ter Schure, CMM Junge, and PPG Boersma. 2016. Semantics guide infants' vowel learning: computational and experimental evidence. Infant Behavior and Development 43:44-57.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A computational basis for phonology",
"authors": [
{
"first": "David",
"middle": [
"S"
],
"last": "Touretzky",
"suffix": ""
},
{
"first": "Deirdre",
"middle": [
"W"
],
"last": "Wheeler",
"suffix": ""
}
],
"year": 1989,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "372--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David S Touretzky and Deirdre W Wheeler. 1989. A computational basis for phonology. In NIPS. pages 372-379.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Hierarchical grouping to optimize an objective function",
"authors": [
{
"first": "Joe",
"middle": [
"H"
],
"last": "Ward",
"suffix": "Jr"
}
],
"year": 1963,
"venue": "Journal of the American statistical association",
"volume": "58",
"issue": "301",
"pages": "236--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joe H Ward Jr. 1963. Hierarchical grouping to opti- mize an objective function. Journal of the American statistical association 58(301):236-244.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses",
"authors": [
{
"first": "Leila",
"middle": [],
"last": "Wehbe",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Alona",
"middle": [],
"last": "Fyshe",
"suffix": ""
},
{
"first": "Aaditya",
"middle": [],
"last": "Ramdas",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2014,
"venue": "PloS one",
"volume": "9",
"issue": "11",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leila Wehbe, Brian Murphy, Partha Talukdar, Alona Fyshe, Aaditya Ramdas, and Tom Mitchell. 2014. Simultaneously uncovering the patterns of brain re- gions involved in different story reading subpro- cesses. PloS one 9(11):e112575.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Critical periods in speech perception: new directions",
"authors": [
{
"first": "Janet",
"middle": [
"F"
],
"last": "Werker",
"suffix": ""
},
{
"first": "Takao",
"middle": [
"K"
],
"last": "Hensch",
"suffix": ""
}
],
"year": 2015,
"venue": "Annual review of psychology",
"volume": "66",
"issue": "",
"pages": "173--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janet F Werker and Takao K Hensch. 2015. Critical pe- riods in speech perception: new directions. Annual review of psychology 66:173-196.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A multimodal learning interface for grounding spoken language in sensory perceptions",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Dana",
"middle": [
"H"
],
"last": "Ballard",
"suffix": ""
}
],
"year": 2004,
"venue": "ACM Transactions on Applied Perception (TAP)",
"volume": "1",
"issue": "1",
"pages": "57--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Yu and Dana H Ballard. 2004. A multimodal learning interface for grounding spoken language in sensory perceptions. ACM Transactions on Applied Perception (TAP) 1(1):57-80.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Schatz et al. (2013) propose a framework for evaluating speech features learned in an unsupervised setup that does not depend on phonetically labeled Accuracy of phoneme decoding with input MFCC features and COCO Speech model activations. The boxplot shows error rates bootstrapped with 1000 resamples."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Pearson's correlation coefficients r between the distance matrix of MFCCs and distance matrices on activation vectors."
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Adjusted Rand Index for the comparison of the phoneme type hierarchy induced from representations against phoneme classes."
},
"FIGREF6": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Synonym discrimination error rates, per representation and synonym pair."
},
"TABREF0": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Phonemes of General American English.",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "COCO Speech utterance encoder architecture.",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"2\">Representation Accuracy</td></tr><tr><td>MFCC</td><td>0.72</td></tr><tr><td>Convolutional</td><td>0.73</td></tr><tr><td>Recurrent 1</td><td>0.83</td></tr><tr><td>Recurrent 2</td><td>0.84</td></tr><tr><td>Recurrent 3</td><td>0.80</td></tr><tr><td>Recurrent 4</td><td>0.77</td></tr><tr><td>Recurrent 5</td><td>0.75</td></tr><tr><td>Embeddings</td><td>0.67</td></tr></table>",
"text": "Accuracy of choosing the correct target in an ABX task using different representations.",
"html": null
}
}
}
}