{
"paper_id": "W18-0208",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:31:05.609402Z"
},
"title": "New Baseline in Automatic Speech Recognition for Northern S\u00e1mi",
"authors": [
{
"first": "Juho",
"middle": [],
"last": "Leinonen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aalto University",
"location": {}
},
"email": "juho.leinonen@aalto.fi"
},
{
"first": "Peter",
"middle": [],
"last": "Smit",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aalto University",
"location": {}
},
"email": "peter.smit@aalto.fi"
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aalto University",
"location": {}
},
"email": "sami.virpioja@aalto.fi"
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aalto University",
"location": {}
},
"email": "mikko.kurimo@aalto.fi"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic speech recognition has gone through many changes in recent years. Advances both in computer hardware and machine learning have made it possible to develop systems far more capable and complex than the previous state-of-theart. However, almost all of these improvements have been tested in major wellresourced languages. In this paper, we show that these techniques are capable of yielding improvements even in a small data scenario. We experiment with different deep neural network architectures for acoustic modeling for Northern S\u00e1mi and report up to 50% relative error rate reductions. We also run experiments to compare the performance of subwords as language modeling units in Northern S\u00e1mi. Tiivistelm\u00e4 Automaattinen puheentunnistus on kehittynyt viime vuosina merkitt\u00e4v\u00e4sti. Uudet innovaatiot sek\u00e4 laitteistossa ett\u00e4 koneoppimisessa ovat mahdollistaneet entist\u00e4 paljon tehokkaammat ja monimutkaisemmat j\u00e4rjestelm\u00e4t. Suurin osa n\u00e4ist\u00e4 parannuksista on kuitenkin testattu vain valtakielill\u00e4, joiden kehitt\u00e4miseen on tarjolla runsaasti aineistoja. T\u00e4ss\u00e4 paperissa n\u00e4yt\u00e4mme ett\u00e4 n\u00e4m\u00e4 tekniikat tuottavat parannuksia my\u00f6s kielill\u00e4, joista aineistoa on v\u00e4h\u00e4n. Kokeilemme ja vertailemme erilaisia syvi\u00e4 neuroverkkoja pohjoissaamen akustisina malleina ja onnistumme v\u00e4hent\u00e4m\u00e4\u00e4n tunnistusvirheit\u00e4 jopa 50%:lla. Tutkimme my\u00f6s tapoja pilkkoa sanoja pienempiin osiin pohjoissaamen kielimalleissa.",
"pdf_parse": {
"paper_id": "W18-0208",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic speech recognition has gone through many changes in recent years. Advances both in computer hardware and machine learning have made it possible to develop systems far more capable and complex than the previous state-of-theart. However, almost all of these improvements have been tested in major wellresourced languages. In this paper, we show that these techniques are capable of yielding improvements even in a small data scenario. We experiment with different deep neural network architectures for acoustic modeling for Northern S\u00e1mi and report up to 50% relative error rate reductions. We also run experiments to compare the performance of subwords as language modeling units in Northern S\u00e1mi. Tiivistelm\u00e4 Automaattinen puheentunnistus on kehittynyt viime vuosina merkitt\u00e4v\u00e4sti. Uudet innovaatiot sek\u00e4 laitteistossa ett\u00e4 koneoppimisessa ovat mahdollistaneet entist\u00e4 paljon tehokkaammat ja monimutkaisemmat j\u00e4rjestelm\u00e4t. Suurin osa n\u00e4ist\u00e4 parannuksista on kuitenkin testattu vain valtakielill\u00e4, joiden kehitt\u00e4miseen on tarjolla runsaasti aineistoja. T\u00e4ss\u00e4 paperissa n\u00e4yt\u00e4mme ett\u00e4 n\u00e4m\u00e4 tekniikat tuottavat parannuksia my\u00f6s kielill\u00e4, joista aineistoa on v\u00e4h\u00e4n. Kokeilemme ja vertailemme erilaisia syvi\u00e4 neuroverkkoja pohjoissaamen akustisina malleina ja onnistumme v\u00e4hent\u00e4m\u00e4\u00e4n tunnistusvirheit\u00e4 jopa 50%:lla. Tutkimme my\u00f6s tapoja pilkkoa sanoja pienempiin osiin pohjoissaamen kielimalleissa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The field of automatic speech recognition (ASR) has advanced rapidly in the last couple of years, in large part thanks to deep neural networks (DNNs). For decades there has been active research trying to replace Gaussian mixture models (GMM) with various neural network configurations. Yet, only after 2010 the full power of neural networks started to be noticed when multiple groups started reporting huge improvements in their implementations (Hinton et al., 2012) . At the same time, the computational power of modern graphics processing units (GPU) has made it feasible to utilize very large DNNs with very large training data sets. For speech recognition, this has meant that the decades-old best practices are quickly being replaced by new and more powerful methods.",
"cite_spans": [
{
"start": 445,
"end": 466,
"text": "(Hinton et al., 2012)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we have documented our work to build a new baseline for Northern S\u00e1mi. Using DNNs for acoustic modeling has provided large improvements for wellresourced Uralic languages, but for under-resourced languages, the applicability has yet to be tested. For broadcast news data sets, the latest improvements for applying neural networks instead of GMM-based acoustic models have been in the range of 14% smaller relative word error rate (WER) for Finnish and 6% for Estonian (Smit et al., 2017b) .",
"cite_spans": [
{
"start": 483,
"end": 503,
"text": "(Smit et al., 2017b)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In languages with a rich morphological structure it is difficult to build statistical language models using words. If using n-gram word models, the vocabulary size becomes computationally challenging, and even worse, the growing lexicon decreases out-of-vocabulary (OOV) rate rather slowly. Furthermore, the lack of data for underresourced languages makes building a large lexicon and n-gram difficult. For Finnish, Estonian, Arabic and Turkish it is common to use subword units such as morphs (Hirsim\u00e4ki et al., 2006) or syllables (Choueiter et al., 2006) instead of words. In this work we follow this tradition and apply statistical morphs as subword units for Northern S\u00e1mi.",
"cite_spans": [
{
"start": 494,
"end": 518,
"text": "(Hirsim\u00e4ki et al., 2006)",
"ref_id": "BIBREF5"
},
{
"start": 532,
"end": 556,
"text": "(Choueiter et al., 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Because the pronunciation in Northern S\u00e1mi can be rather well covered by rules, a simple grapheme-to-phoneme conversion can be applied for our lexicon. This gives Northern S\u00e1mi and other such languages a significant advantage in ASR, since building a proper lexicon is one of the most arduous data preparation tasks for speech recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We will use a popular open-source toolkit for speech recognition, Kaldi, and document the building of a speech recognizer. In addition to DNN-based acoustic modeling, we test new methods of subword modeling for morphologically rich languages, originally developed for Finnish. The main focus of the paper is to demonstrate these new techniques in building a new baseline for Northern S\u00e1mi for further research and comparison. We will compare our results to the previous Northern S\u00e1mi baseline results from Smit et al. (2016) .",
"cite_spans": [
{
"start": 506,
"end": 524,
"text": "Smit et al. (2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our baseline system builds on the Northern S\u00e1mi recognizer by Smit et al. (2016) , but with a few important changes. In acoustic modeling, we model triphones by hidden Markov models with Gaussian mixture model emission distributions (GMM-HMM) using mel frequency cepstral coefficients (MFCCs) as input features. The lexicon is based on subword units found by a data-driven method, and a long-context n-gram model is used for language modeling. However, while Smit et al. (2016) used the token-pass decoder of the AaltoASR toolkit (Pylkk\u00f6nen, 2005; Hirsim\u00e4ki et al., 2009) , our system is based on the Kaldi toolkit (Povey et al., 2011) that has a decoder based on weighted finite-state transducers (WFST). Kaldi has also implemented quite a few improvements to the standard GMM-HMM methodology. To further improve the speech recognition accuracy in Northern S\u00e1mi we test recent developments on creating subword lexicon for Kaldi and acoustic modeling based on DNNs.",
"cite_spans": [
{
"start": 62,
"end": 80,
"text": "Smit et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 459,
"end": 477,
"text": "Smit et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 530,
"end": 547,
"text": "(Pylkk\u00f6nen, 2005;",
"ref_id": "BIBREF13"
},
{
"start": 548,
"end": 571,
"text": "Hirsim\u00e4ki et al., 2009)",
"ref_id": "BIBREF6"
},
{
"start": 615,
"end": 635,
"text": "(Povey et al., 2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "Kaldi is an open source toolkit for speech recognition developed since the year 2009 by researchers from many different universities, lead by the John Hopkins University and Brno University of Technology (Povey et al., 2011) . It is based on the use of weighted finite-state transducers (WFST) complimenting the work by Mohri et al. (2008) . The advantage of WFST-based recognizers is that once the search network has been constructed and optimized effectively by the WFST methods, the decoding is very fast and accurate. Moreover, Kaldi's GMM-HMMs are improved by subspace Gaussians, word-position-dependent phones and advanced silence models.",
"cite_spans": [
{
"start": 204,
"end": 224,
"text": "(Povey et al., 2011)",
"ref_id": null
},
{
"start": 320,
"end": 339,
"text": "Mohri et al. (2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WFST-based speech recognition",
"sec_num": "2.1"
},
{
"text": "The small amount of training data and the morphological complexity of Northern S\u00e1mi make it problematic to build language models (LM) using words as the basic units. We applied the data-driven Morfessor Baseline method Lagus, 2002, 2007) to segment the words into subword units. Because all words in the language can be composed from these subword units, this approach provides an unlimited vocabulary for ASR (Hirsim\u00e4ki et al., 2006) . While Morfessor was developed to find units of language that resemble the surface forms of linguistic morphemes, the current implementation includes a parameter for adjusting the level of segmentation that the method produces (Virpioja et al., 2013) . The optimal level of segmentation for ASR varies between languages, but a wide range of lexicon seems to produce near-optimal results (Smit et al., 2017b) . We did not experiment with this parameter.",
"cite_spans": [
{
"start": 219,
"end": 237,
"text": "Lagus, 2002, 2007)",
"ref_id": null
},
{
"start": 410,
"end": 434,
"text": "(Hirsim\u00e4ki et al., 2006)",
"ref_id": "BIBREF5"
},
{
"start": 663,
"end": 686,
"text": "(Virpioja et al., 2013)",
"ref_id": "BIBREF20"
},
{
"start": 823,
"end": 843,
"text": "(Smit et al., 2017b)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subword lexicon FSTs and language models",
"sec_num": "2.2"
},
{
"text": "Recently, Smit et al. (2017b) implemented effective subword modeling in the WFSTbased ASR framework. It modifies the basic lexicon FST by introducing different models for all four different positions where a subword can appear (as prefix, infix, suffix, or complete word) and provides the appropriate word-position-dependent phones. In Figure 1 a normal word lexicon is shown where $words is replaced by a linear FST of all pronunciations in the lexicon. In Figure 2 the same basic structure is shown for a subword lexicon. When the ASR system uses a subword lexicon, the subword units in the output need to be joined back to construct complete word forms. This can be accomplished in different ways; popular approaches are using a separate word boundary units (e.g. Hirsim\u00e4ki et al., 2009) or using a special character to indicate that there is no word boundary directly preceding the subword (e.g. Arisoy et al., 2009; Tarj\u00e1n et al., 2014) . Smit et al. (2017b) experimented on different styles of subword markings and the conclusion was that the optimal boundary marking style might depend on the language. Other work by the same authors (Smit et al., 2017a ) supports this hypothesis. Therefore, in this work, we also experiment on different boundary marking styles to select the one that fits best for Northern S\u00e1mi. In Table 1 the four possible styles of marking 0 start 1 2 \u03f5:\u03f5 $words SIL:\u03f5 #a:\u03f5 Figure 1 : Prototype Lexicon FST for word-based lexicon. On each vertice in this graph is shown an input and output symbol. For example 'SIL:\u03f5' indicates a SIL phone as input and a skip-token (\u03f5) as output. The symbol #a is a disambiguation symbol which is required in Kaldi to make the FST determinizable. $words is a placeholder that is supposed to be replaced by a linear FST that maps all words to their appropriate wordposition dependent phoneneme sequences. are shown. 
Note that the actual realization of the boundary character (here a +-sign) does not matter, but the locations of these markers do.",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "Smit et al. (2017b)",
"ref_id": "BIBREF17"
},
{
"start": 767,
"end": 790,
"text": "Hirsim\u00e4ki et al., 2009)",
"ref_id": "BIBREF6"
},
{
"start": 900,
"end": 920,
"text": "Arisoy et al., 2009;",
"ref_id": "BIBREF0"
},
{
"start": 921,
"end": 941,
"text": "Tarj\u00e1n et al., 2014)",
"ref_id": "BIBREF19"
},
{
"start": 944,
"end": 963,
"text": "Smit et al. (2017b)",
"ref_id": "BIBREF17"
},
{
"start": 1141,
"end": 1160,
"text": "(Smit et al., 2017a",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 336,
"end": 344,
"text": "Figure 1",
"ref_id": null
},
{
"start": 458,
"end": 466,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 1325,
"end": 1371,
"text": "Table 1 the four possible styles of marking 0",
"ref_id": null
},
{
"start": 1404,
"end": 1412,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Subword lexicon FSTs and language models",
"sec_num": "2.2"
},
{
"text": "Boundary Tag(<w>) <w> dan <w>r\u00e1dje riikka t <w> left-marked (+m) dan r\u00e1dje +riikka +t right-marked (m+) dan r\u00e1dje+ riikka+ t left+right-marked (+m+) dan r\u00e1dje+ +riikka+ +t Table 1 : Four methods to mark the subword units in the sequence \"dan r\u00e1djeriikkat\"",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 179,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Style (abbreviation) Example",
"sec_num": null
},
{
"text": "As the n-gram language models are trained on the subword units, high-order n-grams are needed to provide a context of a reasonable length. We use the Kneser-Ney growing algorithm (Siivola et al., 2007) to train high-order Kneser-Ney smoothed varigram models.",
"cite_spans": [
{
"start": 179,
"end": 201,
"text": "(Siivola et al., 2007)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Style (abbreviation) Example",
"sec_num": null
},
{
"text": "We experiment with three different neural network architectures, all of which have demonstrated the ability to model speech well with large amounts of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep neural networks",
"sec_num": "2.3"
},
{
"text": "A time delay neural network (TDNN, Peddinti et al., 2015) is a type of a feedforward network. The main benefit for speech recognition is modeling the changes in duration and varying boundaries of phonemes in the speech signal. It is constructed by having also a time delayed copy of the signal as an input. This helps the network to disregard varying start and end points of the pattern in its classification. TDNN models can be improved by using different training criteria that match the task of speech recognition better. Regular TDNN models are trained on a frame-based cross-entropy criterion. This means that the recognizer optimizes for the recognition of phones in each separate frame. Although this sounds ideal and works well in practice, it can be further improved upon by using a criterion that actually looks to the power to predict a sequence of phones. In Povey et al. (2016) these models are introduced and named \"Lattice-free maximum mutual information\" or colloquially \"chain models\". During the training of the network, a window of frames is not only classified, but a simple forward-backward algorithm is run to estimate the sequence that will be predicted by the real speech recognizer.",
"cite_spans": [
{
"start": 35,
"end": 57,
"text": "Peddinti et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 871,
"end": 890,
"text": "Povey et al. (2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Deep neural networks",
"sec_num": "2.3"
},
{
"text": "Long short-term memory (LSTM) networks are a variant of recurrent neural networks (RNN). In basic RNNs the state of the hidden layer is fed back to the next step as one of the inputs, giving the network a memory of the previous inputs. However, having many hidden layers might lead to a vanishing gradient problem, where during training the gradient \"vanishes\" while it propagates back in the network. To correct for this, LSTMs use a so-called memory cell, to balance which information should be carried for multiple steps in the network in \"long-term memory\", and when to use this information in the calculations for the current state in \"short term\". For a bidirectional-LSTM (BLSTM), this is happening in both directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep neural networks",
"sec_num": "2.3"
},
{
"text": "We start by demonstrating the improvements obtained without DNNs by Kaldi and WFST-based decoding in relation to the AaltoASR and token-passing decoding. We continue by comparing different subword boundary markings and choose the overall best for the next experiments, where we compare different types of DNN architectures for acoustic modeling. Finally, we show the effects of increasing the size of the language model training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We use the same data sets as Smit et al. (2016) to provide a fair comparison. The data includes audio data from the UIT-SME-TTS corpus with one female and male speaker. For both speakers we train a speaker-dependent recognizer using 2.5 hours of audio. Rest of the data is divided into development and evaluation sets 3:2, roughly 1-1.5 hours total. Our initial language models are based on 10 000 randomly selected sentences from the Northern S\u00e1mi Wikipedia dump in addition to the acoustic model training sentences (TRAIN+WIKI). Further tests with a larger corpus are based on \"Den samiske textbanken\" (BIG). ",
"cite_spans": [
{
"start": 29,
"end": 47,
"text": "Smit et al. (2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "We started by first building a simple monophone-based model on MFCCs extracted from the training data and used this to better align our audio data to the transcript. After this step, we trained a traditional triphone GMM-HMM model on these improved alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.2"
},
{
"text": "For our TDNN we iterate the previous step by again aligning our data with the GMM-HMM model and used these alignments together with speed and volume perturbated training data for higher dimensional MFCC features. As a result, we get a five layers deep TDNN. A similar process was used to train the BLSTM and Chain model to generate networks with seven and six layers respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.2"
},
{
"text": "For a word-based system, we trained a Kneser-Ney smoothed 3-gram model with the SRILM toolkit (Stolcke, 2002) . For subword language modeling, we first trained a Morfessor model based on the TRAIN+WIKI corpus. We used Morfessor 2.0 implementation (Virpioja et al., 2013) with token-based training and the corpus weight parameter as 1.5. The words in the corpus were segmented to subword units with the aforementioned model using each of the different subword boundary markings. The subword n-gram models were then trained on the corpora using the VariKN toolkit (Siivola et al., 2007) with maximum n-gram length as 10.",
"cite_spans": [
{
"start": 94,
"end": 109,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF18"
},
{
"start": 247,
"end": 270,
"text": "(Virpioja et al., 2013)",
"ref_id": "BIBREF20"
},
{
"start": 562,
"end": 584,
"text": "(Siivola et al., 2007)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.2"
},
{
"text": "For the BIG corpus we trained both 3-gram and 10-gram models with the same tools. The smaller model was used for first pass scoring and 10-gram model used afterward to rescore the lattices. In TRAIN+WIKI all results are with a single-pass 10-gram model. Table 3 shows the size of the different language models (LM) and lexicons. The ASR lexicon size varies due to the different subword boundary markings even if the words are segmented with the same Morfessor model.",
"cite_spans": [],
"ref_spans": [
{
"start": 254,
"end": 261,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.2"
},
{
"text": "We report for all experiments both the word error rate (WER) as well as the letter error rate (LER). The former is more common in general speech recognition research, while the latter is more common in evaluating speech recognition for agglutinative languages, where minor mistakes such as selecting a wrong inflectional suffix or splitting a compound word have very strong effects on WER. Table 3 : Lexicon and language model sizes for word models and subword models with different boundary marking styles.",
"cite_spans": [],
"ref_spans": [
{
"start": 390,
"end": 397,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.2"
},
{
"text": "AaltoASR 37.5 8.5 39.5 9.4 Kaldi 32.3 6.9 34.9 7.4 Table 4 : Comparison between AaltoASR (Smit et al., 2016) and Kaldi with 10-gram LM based on TRAIN+WIKI and 2.5h of audio for both speakers.",
"cite_spans": [
{
"start": 89,
"end": 108,
"text": "(Smit et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 4",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "SF1 SM1 Toolkit WER LER WER LER",
"sec_num": null
},
{
"text": "toolkits, the decoders, and the GMM-HMMs implementations. Table 5 continues with the Kaldi system to compare the four subword boundary markings. The differences are small given the size of the test data, but the traditional word boundary tag <w> seems to be a good choice and was used in the further experiments. It has the smallest lexicon, but because the boundary tag consumes one position in each n-gram context longer n-grams are utilized than in the other models. However, because the subword LMs are trained with the VariKN toolkit, the increase in the LM size is minimal.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "SF1 SM1 Toolkit WER LER WER LER",
"sec_num": null
},
{
"text": "word 3-gram 43.9 9.2 49.7 10.4 subword 10-gram, <w> 32.3 6.9 34.9 7.4 subword 10-gram, +m+ 33.8 7.1 38.1 8.2 subword 10-gram, +m 32.5 6.9 36.2 7.5 subword 10-gram, m+ 36.5 7.0 38.9 7.4 Table 5 : Error Rates for different subword boundary markings. All models were trained with the TRAIN+WIKI corpus and 2.5h of audio. Table 6 : Error Rates between TRAIN+WIKI and the BIG language model. Same acoustic data was used in all models. AaltoASR results are from Smit et al. (2016) .",
"cite_spans": [
{
"start": 456,
"end": 474,
"text": "Smit et al. (2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 185,
"end": 192,
"text": "Table 5",
"ref_id": null
},
{
"start": 318,
"end": 325,
"text": "Table 6",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "SF1 SM1 Language Model WER LER WER LER",
"sec_num": null
},
{
"text": "structures in data that the previous frameworks could not take into account. In speech recognition, this has been taken to mean that DNNs require large amounts of training data. However, it is possible that in limited applications such as speaker-dependent systems, DNNs may be able to find useful structures even from small amounts of data. Table 6 shows clear improvements in every DNN architecture compared to the GMM-HMM method. At the point of writing, our simplest network TDNN is at least as good or better than the more complex Chain model and BLSTM, but given more time to study optimal hyperparameters for small data settings, we might be able to train models surpassing the now new baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 342,
"end": 349,
"text": "Table 6",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "SF1 SM1 Language Model WER LER WER LER",
"sec_num": null
},
{
"text": "Finally, Table 7 shows that the relative differences between different subword boundary markings do not change much even when the language models are trained using the larger corpus. As in Table 5 , the relative differences are small given the size of the test data, but the traditional word boundary tag <w> is still unbeaten and all subword models are better than the word-based model. ",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 7",
"ref_id": "TABREF5"
},
{
"start": 189,
"end": 196,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "SF1 SM1 Language Model WER LER WER LER",
"sec_num": null
},
{
"text": "In this paper, we applied the state-of-the-art ASR framework based on Kaldi and DNN acoustic models to get a new baseline for Northern S\u00e1mi. The results were quite im-pressive with up to 50% relative error rate reduction. The only drawback in WFSTbased speech recognition with large LMs is the size of the WFST search graph, which makes the memory consumption of the single pass decoding sometimes prohibitive. However, in most cases this can be compensated by a two-pass recognition where the second pass is used to rescore the existing search graph with the large LM. The single pass approach does also provide reasonable results already with a low order n-gram models. In addition, the modeling of position-dependent phones and other advanced acoustic modeling developments implemented in Kaldi was a clear benefit. Considering these it is recommended to apply Kaldi for the following research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "The results show clearly that at least in speaker-dependent systems, even with relatively small amounts of audio data, the DNNs were capable of finding structures in data that made them superior to the old state-of-the-art GMM-HMM models. DNNs are also very complex, and their techniques and methods are continuously advancing, so we expect to still achieve further significant improvements in near future. Also, even with the current techniques we should be able to improve the results further by more thoroughly optimizing the layer sizes and hyperparameters of the neural networks. For example, Mansikkaniemi et al. (2017) was able to improve the state-ofthe art results for Finnish broadcast news results by 3% relative with such optimizations.",
"cite_spans": [
{
"start": 598,
"end": 625,
"text": "Mansikkaniemi et al. (2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "For the different types of subword boundary markings, our experiments resulted only small differences for Northern S\u00e1mi. Although the traditional word boundary tags gave slightly better results than the other marking styles more studies should be performed on how much the results depends on the language, data, and the length of the subword units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "The next step for improving the LMs in Northern S\u00e1mi is to apply recurrent neural networks. For RNNLMs, the whole word units have further disadvantages in morphologically rich languages, because the large vocabulary increases the dimensions of the input and output layers. For Finnish, using RNN language models with subword units has lowered the WER by 11% with a large training corpus (Smit et al., 2017a) . Reducing the corpus size from 160 million tokens to 16 million tokens, which is close to our BIG data set for Northern S\u00e1mi, reduced the improvement only slightly to 9%. Smit et al. (2017a) show also promising results for Finnish and Arabic with purely character-based models.",
"cite_spans": [
{
"start": 387,
"end": 407,
"text": "(Smit et al., 2017a)",
"ref_id": "BIBREF15"
},
{
"start": 580,
"end": 599,
"text": "Smit et al. (2017a)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "For under-resourced languages specifically, an interesting future direction is to develop methods to better take advantage of a well-resourced related language. Even simple methods such as data pooling, acoustic model adaptation or bootstrapping with large amounts of unlabeled data have been popular. For Northern S\u00e1mi we could, for example, try to apply the data and expertise available in Finnish and Estonian. Regardless of the approach taken to improve the ASR, the system build in this paper provides a good baseline for further experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "We thank the University of Troms\u00f8 for the access to their Northern S\u00e1mi datasets and acknowledge the computational resources provided by the Aalto Science-IT project.This work was financially supported by the Tekes Challenge Finland project TELLme, Academy of Finland under the grant number 251170, and Kone foundation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Turkish broadcast news transcription and retrieval",
"authors": [
{
"first": "Ebru",
"middle": [],
"last": "Arisoy",
"suffix": ""
},
{
"first": "Dogan",
"middle": [],
"last": "Can",
"suffix": ""
},
{
"first": "Siddika",
"middle": [],
"last": "Parlak",
"suffix": ""
},
{
"first": "Hasim",
"middle": [],
"last": "Sak",
"suffix": ""
},
{
"first": "Murat",
"middle": [],
"last": "Saraclar",
"suffix": ""
}
],
"year": 2009,
"venue": "IEEE Transactions on Audio, Speech, and Language Processing",
"volume": "17",
"issue": "5",
"pages": "874--883",
"other_ids": {
"DOI": [
"10.1109/TASL.2008.2012313"
]
},
"num": null,
"urls": [],
"raw_text": "Ebru Arisoy, Dogan Can, Siddika Parlak, Hasim Sak, and Murat Saraclar. 2009. Turkish broadcast news transcription and retrieval. IEEE Transactions on Audio, Speech, and Language Processing 17(5):874-883. https://doi.org/10.1109/TASL.2008.2012313.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Morpheme-based language modeling for Arabic LVCSR",
"authors": [
{
"first": "Ghinwa",
"middle": [],
"last": "Choueiter",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Stanley",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2006,
"venue": "ICASSP 2006 -IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "1053--1056",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2006.1660205"
]
},
"num": null,
"urls": [],
"raw_text": "Ghinwa Choueiter, Daniel Povey, Stanley F. Chen, and Geoffrey Zweig. 2006. Morpheme-based language modeling for Arabic LVCSR. In ICASSP 2006 -IEEE In- ternational Conference on Acoustics, Speech and Signal Processing. pages 1053-1056. https://doi.org/10.1109/ICASSP.2006.1660205.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised discovery of morphemes",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL 2002 Workshop on Morphological and Phonological Learning",
"volume": "6",
"issue": "",
"pages": "21--30",
"other_ids": {
"DOI": [
"10.3115/1118647.1118650"
]
},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz and Krista Lagus. 2002. Unsupervised discovery of morphemes. In Proceedings of the ACL 2002 Workshop on Morphological and Phonological Learning. Association for Computational Linguistics, Stroudsburg, PA, USA, volume 6 of MPL '02, pages 21-30. https://doi.org/10.3115/1118647.1118650.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised models for morpheme segmentation and morphology learning",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2007,
"venue": "ACM Transactions on Speech and Language Processing",
"volume": "4",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme seg- mentation and morphology learning. ACM Transactions on Speech and Language Processing (TSLP) 4(1):3.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Goerge",
"middle": [
"E"
],
"last": "Dahl",
"suffix": ""
},
{
"first": "Abdel-Rahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Senior",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Tara",
"middle": [
"N"
],
"last": "Sainath",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2012,
"venue": "IEEE Signal Processing Magazine",
"volume": "29",
"issue": "6",
"pages": "82--97",
"other_ids": {
"DOI": [
"10.1109/MSP.2012.2205597"
]
},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Li Deng, Dong Yu, Goerge E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, and Brian Kingsbury. 2012. Deep neural networks for acoustic model- ing in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine 29(6):82-97. https://doi.org/10.1109/MSP.2012.2205597.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unlimited vocabulary speech recognition with morph language models applied to Finnish",
"authors": [
{
"first": "Teemu",
"middle": [],
"last": "Hirsim\u00e4ki",
"suffix": ""
},
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Vesa",
"middle": [],
"last": "Siivola",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Janne",
"middle": [],
"last": "Pylkk\u00f6nen",
"suffix": ""
}
],
"year": 2006,
"venue": "Computer Speech & Language",
"volume": "20",
"issue": "4",
"pages": "515--541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Teemu Hirsim\u00e4ki, Mathias Creutz, Vesa Siivola, Mikko Kurimo, Sami Virpioja, and Janne Pylkk\u00f6nen. 2006. Unlimited vocabulary speech recognition with morph lan- guage models applied to Finnish. Computer Speech & Language 20(4):515-541.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Importance of high-order n-gram models in morph-based speech recognition",
"authors": [
{
"first": "Teemu",
"middle": [],
"last": "Hirsim\u00e4ki",
"suffix": ""
},
{
"first": "Janne",
"middle": [],
"last": "Pylkk\u00f6nen",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2009,
"venue": "IEEE Transactions on Audio, Speech, and Language Processing",
"volume": "17",
"issue": "4",
"pages": "724--732",
"other_ids": {
"DOI": [
"10.1109/TASL.2008.2012323"
]
},
"num": null,
"urls": [],
"raw_text": "Teemu Hirsim\u00e4ki, Janne Pylkk\u00f6nen, and Mikko Kurimo. 2009. Importance of high-order n-gram models in morph-based speech recognition. IEEE Transactions on Audio, Speech, and Language Processing 17(4):724-732. https://doi.org/10.1109/TASL.2008.2012323.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic construction of the Finnish Parliament Speech Corpus",
"authors": [
{
"first": "Andr\u00e9",
"middle": [],
"last": "Mansikkaniemi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Smit",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2017,
"venue": "INTERSPEECH 2017 -18t\u02b0 Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andr\u00e9 Mansikkaniemi, Peter Smit, and Mikko Kurimo. 2017. Automatic construction of the Finnish Parliament Speech Corpus. In INTERSPEECH 2017 -18t\u02b0 Annual Con- ference of the International Speech Communication Association. Stockholm, Sweden.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Speech recognition with weighted finite-state transducers",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Riley",
"suffix": ""
}
],
"year": 2008,
"venue": "Springer Handbook of Speech Processing",
"volume": "",
"issue": "",
"pages": "559--584",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehryar Mohri, Fernando Pereira, and Michael Riley. 2008. Speech recognition with weighted finite-state transducers. In Springer Handbook of Speech Processing, Springer, pages 559-584.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A time delay neural network architecture for efficient modeling of long temporal contexts",
"authors": [
{
"first": "Vijayaditya",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "INTER-SPEECH 2015 -16t\u02b0 Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "3214--3218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vijayaditya Peddinti, Daniel Povey, and Sanjeev Khudanpur. 2015. A time delay neu- ral network architecture for efficient modeling of long temporal contexts. In INTER- SPEECH 2015 -16t\u02b0 Annual Conference of the International Speech Communication Association. Dresden, Germany, pages 3214-3218.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The Kaldi speech recognition toolkit",
"authors": [
{
"first": "",
"middle": [],
"last": "Silovsky",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Stemmer",
"suffix": ""
},
{
"first": "Karel",
"middle": [],
"last": "Vesely",
"suffix": ""
}
],
"year": 2011,
"venue": "ASRU 2011 -IEEE Workshop on Automatic Speech Recognition & Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silovsky, Georg Stemmer, and Karel Vesely. 2011. The Kaldi speech recognition toolkit. In ASRU 2011 -IEEE Workshop on Automatic Speech Recognition & Under- standing.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Purely sequence-trained neural networks for ASR based on lattice-free MMI",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Vijayaditya",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Galvez",
"suffix": ""
},
{
"first": "Pegah",
"middle": [],
"last": "Ghahremani",
"suffix": ""
},
{
"first": "Vimal",
"middle": [],
"last": "Manohar",
"suffix": ""
},
{
"first": "Xingyu",
"middle": [],
"last": "Na",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2016,
"venue": "INTERSPEECH 2016 -17t\u02b0 Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "2751--2755",
"other_ids": {
"DOI": [
"10.21437/Interspeech.2016-595"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahremani, Vi- mal Manohar, Xingyu Na, Yiming Wang, and Sanjeev Khudanpur. 2016. Purely sequence-trained neural networks for ASR based on lattice-free MMI. In INTERSPEECH 2016 -17t\u02b0 Annual Conference of the Interna- tional Speech Communication Association. San Francisco, pages 2751-2755. https://doi.org/10.21437/Interspeech.2016-595.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An efficient one-pass decoder for Finnish large vocabulary continuous speech recognition",
"authors": [
{
"first": "Janne",
"middle": [],
"last": "Pylkk\u00f6nen",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of The 2nd Baltic Conference on Human Language Technologies",
"volume": "",
"issue": "",
"pages": "167--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janne Pylkk\u00f6nen. 2005. An efficient one-pass decoder for Finnish large vocabulary continuous speech recognition. In Proceedings of The 2nd Baltic Conference on Hu- man Language Technologies. pages 167-172.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "On growing and pruning Kneser-Ney smoothed-gram models. Audio, Speech, and Language Processing",
"authors": [
{
"first": "Vesa",
"middle": [],
"last": "Siivola",
"suffix": ""
},
{
"first": "Teemu",
"middle": [],
"last": "Hirsimaki",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
}
],
"year": 2007,
"venue": "IEEE Transactions on",
"volume": "15",
"issue": "5",
"pages": "1617--1624",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vesa Siivola, Teemu Hirsimaki, and Sami Virpioja. 2007. On growing and pruning Kneser-Ney smoothed-gram models. Audio, Speech, and Language Processing, IEEE Transactions on 15(5):1617-1624.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Character-based units for unlimited vocabulary continuous speech recognition",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Smit",
"suffix": ""
},
{
"first": "Siva",
"middle": [
"Reddy"
],
"last": "Gangireddy",
"suffix": ""
},
{
"first": "Seppo",
"middle": [],
"last": "Enarvi",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2017,
"venue": "ASRU 2017 -IEEE Workshop on Automatic Speech Recognition & Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Smit, Siva Reddy Gangireddy, Seppo Enarvi, Sami Virpioja, and Mikko Kurimo. 2017a. Character-based units for unlimited vocabulary continuous speech recog- nition. In ASRU 2017 -IEEE Workshop on Automatic Speech Recognition & Under- standing.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic speech recognition for Northern S\u00e1mi with comparison to other Uralic languages",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Smit",
"suffix": ""
},
{
"first": "Juho",
"middle": [],
"last": "Leinonen",
"suffix": ""
},
{
"first": "Kristiina",
"middle": [],
"last": "Jokinen",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Second International Workshop on Computational Linguistics for Uralic Languages",
"volume": "",
"issue": "",
"pages": "80--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Smit, Juho Leinonen, Kristiina Jokinen, and Mikko Kurimo. 2016. Automatic speech recognition for Northern S\u00e1mi with comparison to other Uralic languages. In Proceedings of the Second International Workshop on Computational Linguistics for Uralic Languages. pages 80-91.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Improved subword modeling for WFST-based speech recognition",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Smit",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2017,
"venue": "INTERSPEECH 2017 -18t\u02b0 Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Smit, Sami Virpioja, and Mikko Kurimo. 2017b. Improved subword modeling for WFST-based speech recognition. In INTERSPEECH 2017 -18t\u02b0 Annual Conference of the International Speech Communication Association. Stockholm, Sweden.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "SRILM -an extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 7th International Conference on Spoken Language Processing (ICSLP-2002)",
"volume": "",
"issue": "",
"pages": "901--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM -an extensible language modeling toolkit. In Proceed- ings of the 7th International Conference on Spoken Language Processing (ICSLP-2002). pages 901-904.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A bilingual study on the prediction of morph-based improvement",
"authors": [
{
"first": "Bal\u00e1zs",
"middle": [],
"last": "Tarj\u00e1n",
"suffix": ""
},
{
"first": "Tibor",
"middle": [],
"last": "Fegy\u00f3",
"suffix": ""
},
{
"first": "P\u00e9ter",
"middle": [],
"last": "Mihajlik",
"suffix": ""
}
],
"year": 2014,
"venue": "Fourth International Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU-2014)",
"volume": "",
"issue": "",
"pages": "131--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bal\u00e1zs Tarj\u00e1n, Tibor Fegy\u00f3, and P\u00e9ter Mihajlik. 2014. A bilingual study on the pre- diction of morph-based improvement. In Fourth International Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU-2014). pages 131-138.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Morfessor 2.0: Python implementation and extensions for Morfessor Baseline",
"authors": [
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Smit",
"suffix": ""
},
{
"first": "Stig-Arne",
"middle": [],
"last": "Gr\u00f6nroos",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sami Virpioja, Peter Smit, Stig-Arne Gr\u00f6nroos, and Mikko Kurimo. 2013. Morfessor 2.0: Python implementation and extensions for Morfessor Baseline. Report 25/2013 in Aalto University publication series SCIENCE + TECHNOLOGY, Department of Signal Processing and Acoustics, Aalto University, Helsinki, Finland.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Prototype Lexicon FST for subword-based lexicon."
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": "Language and acoustic modeling data for the speech recognizer training.",
"html": null,
"content": "<table/>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"text": "compares the error rates of the GMM-HMM baselines from AaltoASR and Kaldi. Since the data and language models are the same the difference is due to the",
"html": null,
"content": "<table><tr><td>Data</td><td>Units</td><td colspan=\"3\">Lexicon (#types) LM (#n-grams)</td></tr><tr><td/><td/><td>SF1</td><td>SM1</td><td>SF1</td><td>SM1</td></tr><tr><td>TRAIN+WIKI</td><td colspan=\"2\">words subwords, &lt;w&gt; 14.3k 23.5k subwords, +m+ 19.1k subwords, +m 16.1k subwords, m+ 17.2k</td><td colspan=\"2\">23.1k 103.9k 102.4k 14.1k 751.8k 747.6k 18.7k 610.9k 600.0k 15.8k 608.7k 596.9k 17.0k 607.5k 596.4k</td></tr><tr><td/><td>words</td><td>474.9k</td><td/><td>5.9M</td></tr><tr><td>BIG</td><td>subwords, &lt;w&gt; subwords, +m+ subwords, +m</td><td>93.9k 172.4k 122.2k</td><td/><td>51.6M 64.6M 65.0M</td></tr><tr><td/><td>subwords, m+</td><td>137.8k</td><td/><td>64.4M</td></tr></table>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "presents the main result of this paper, which is the comparison of GMM-HMM to various DNN architectures when the training data resources are limited. The special advantage of DNNs is their remarkable effectiveness in modeling \"deep\"",
"html": null,
"content": "<table><tr><td>Speaker</td><td colspan=\"2\">Acoustic model</td><td colspan=\"2\">TRAIN+WIKI</td><td>BIG</td><td/></tr><tr><td/><td>Type</td><td colspan=\"2\">#params WER</td><td colspan=\"3\">LER WER LER</td></tr><tr><td/><td>AaltoASR</td><td>600k</td><td>37.5</td><td>8.5</td><td>23.7</td><td>5.5</td></tr><tr><td/><td>HMM-GMM</td><td>858k</td><td>32.3</td><td>6.9</td><td>19.9</td><td>3.8</td></tr><tr><td>SF1</td><td>TDNN</td><td>6.6M</td><td>24.8</td><td>4.9</td><td>14.7</td><td>2.5</td></tr><tr><td/><td>Chain Model</td><td>5.8M</td><td>25.6</td><td>6.0</td><td>17.0</td><td>3.5</td></tr><tr><td/><td>BLSTM</td><td>10.8M</td><td>25.6</td><td>5.3</td><td>13.9</td><td>2.7</td></tr><tr><td/><td>AaltoASR</td><td>600k</td><td>39.5</td><td>9.4</td><td>20.9</td><td>4.9</td></tr><tr><td/><td>HMM-GMM</td><td>858k</td><td>34.9</td><td>7.4</td><td>18.0</td><td>3.6</td></tr><tr><td>SM1</td><td>TDNN</td><td>6.6M</td><td>29.2</td><td>5.7</td><td>12.5</td><td>2.1</td></tr><tr><td/><td>Chain Model</td><td>5.8M</td><td>29.8</td><td>6.0</td><td>15.2</td><td>2.8</td></tr><tr><td/><td>BLSTM</td><td>10.8M</td><td>28.5</td><td>5.8</td><td>12.8</td><td>2.4</td></tr></table>"
},
"TABREF5": {
"num": null,
"type_str": "table",
"text": "Error Rates between different boundary marking styles using the BIG language model. TDNN was used in all recognizers.",
"html": null,
"content": "<table/>"
}
}
}
}