{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:21:18.327229Z"
},
"title": "Anlirika: an LSTM-CNN Flow Twister for Spoken Language Identification",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Shcherbakov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Melbourne Monash University \u2022 Bhim Rao Ambedkar University",
"location": {}
},
"email": ""
},
{
"first": "Liam",
"middle": [],
"last": "Whittle",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Melbourne Monash University \u2022 Bhim Rao Ambedkar University",
"location": {}
},
"email": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Melbourne Monash University \u2022 Bhim Rao Ambedkar University",
"location": {}
},
"email": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Singh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Melbourne Monash University \u2022 Bhim Rao Ambedkar University",
"location": {}
},
"email": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Coleman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Melbourne Monash University \u2022 Bhim Rao Ambedkar University",
"location": {}
},
"email": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Melbourne Monash University \u2022 Bhim Rao Ambedkar University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The paper presents Anlirika's submission to the SIGTYP 2021 Shared Task on Robust Spoken Language Identification. The task aims at building a robust system that generalizes well across different domains and speakers. The training data is limited to a single domain, with predominantly a single speaker per language, while the validation and test samples are drawn from diverse datasets and multiple speakers. We experiment with a neural system comprising a combination of dense, convolutional, and recurrent layers designed to achieve better generalization and obtain speaker-invariant representations. We demonstrate that the task in its constrained form (without making use of external data or augmenting the train set with samples from the validation set) is still challenging. Our best system, trained on data augmented with validation samples, achieves 29.9% accuracy on the test set.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The paper presents Anlirika's submission to the SIGTYP 2021 Shared Task on Robust Spoken Language Identification. The task aims at building a robust system that generalizes well across different domains and speakers. The training data is limited to a single domain, with predominantly a single speaker per language, while the validation and test samples are drawn from diverse datasets and multiple speakers. We experiment with a neural system comprising a combination of dense, convolutional, and recurrent layers designed to achieve better generalization and obtain speaker-invariant representations. We demonstrate that the task in its constrained form (without making use of external data or augmenting the train set with samples from the validation set) is still challenging. Our best system, trained on data augmented with validation samples, achieves 29.9% accuracy on the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Among the approximately 7,000 world languages, over 43% are oral only and do not have any writing system. Even in less exotic cases, language processing systems may have to rely solely on vocal representations. Spoken language identification (SLI) is an essential sub-task in many approaches to multilingual automated speech recognition and machine translation. It also has practical applications as a standalone task; automated assignment of a call center operator to a client is one possible use case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper provides a description of \"Anlirika\" system 1 that was submitted to SIGTYP 2021 Shared Task on Robust SLI (Salesky et al., 2021) . Under the task setup, systems are trained to predict a language class (id) from an audio signal. Importantly, the task aims at development of robust systems that can generalize well to new domains and speakers. Many languages are under-resourced, and the situation when the language data exist only for a very limited number of speakers or domains is common. For instance, the largest multilingual SLI dataset, namely CMU Wilderness (Black, 2019) , has been derived from the Bible in \u2248 700 languages and lacks speaker diversity. Therefore, it is essential for a system to be speaker-invariant and robust.",
"cite_spans": [
{
"start": 116,
"end": 138,
"text": "(Salesky et al., 2021)",
"ref_id": "BIBREF5"
},
{
"start": 574,
"end": 587,
"text": "(Black, 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most work on SLI focused on Indo-European languages such as English, German, Russian, French, Hindi. It is also common to transform raw audio signal into the log-Mel spectra or MFCC features. Recent approaches such as Bartz et al. (2017) , Revay and Teschke (2019), and Shukla et al. (2019) make use of various convolution-based neural architectures. For instance, Bartz et al. (2017) proposed a hybrid model that used convolutional layers to extract spatial features and recurrent units (bidirectional LSTMs) to capture temporal characteristics. Revay and Teschke (2019) explored the ResNet-50 (He et al., 2016) architecture, dynamically adapting the learning rate.",
"cite_spans": [
{
"start": 218,
"end": 237,
"text": "Bartz et al. (2017)",
"ref_id": "BIBREF1"
},
{
"start": 270,
"end": 290,
"text": "Shukla et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 365,
"end": 384,
"text": "Bartz et al. (2017)",
"ref_id": "BIBREF1"
},
{
"start": 595,
"end": 612,
"text": "(He et al., 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The dataset comprises 16 typologically diverse languages from Afro-Asiatic, Austronesian, Basque, Dravidian, Indo-European, Niger-Congo, and Tai-Kadai families. The training data is derived from the CMU Wilderness dataset (Black, 2019) which represents a single domain (speech utterances from the Bible) and has predominantly a single speaker per language. The validation and test sets were collected from multiple corpora such as Common Voice (Ardila et al., 2019) and present a variety of recording conditions with multiple speakers per language. The length of each speech utterance ranged",
"cite_spans": [
{
"start": 222,
"end": 235,
"text": "(Black, 2019)",
"ref_id": "BIBREF2"
},
{
"start": 444,
"end": 465,
"text": "(Ardila et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "(Figure 1: MFCC features (N m) \u2192 dense (N m) \u2192 1D CNN (3(N m - K + 1)) in parallel with N L \u00d7 LSTM (D L each) \u2192 concat \u2192 LSTM \u2192 dense \u2192 one-hot language prediction at the last sequential step.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "between 3 and 7 seconds. The training data contained 4,000 utterances per language, while the validation and test sets comprised 500 samples each. Importantly, the utterances were provided in the form of Mel-Frequency Cepstral Coefficient (MFCC) features rather than the raw audio signal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "As illustrated in Figure 1, we used a multi-layer neural network with two dense layers, one CNN layer, and 1-7 LSTM layers. The design of the neural layer stack is motivated by the following general vision of how a sample should be processed:",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Architecture",
"sec_num": "4"
},
{
"text": "\u2022 We suggest that a raw spectral pattern first needs to be multiplied by a square matrix in order to remove sound harmonics; that is why we use a dense layer as the front layer;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "4"
},
{
"text": "\u2022 Then we try to recognize features related to the spectral line shape; therefore, we use a one-dimensional CNN (convolving along the input feature vector index, i.e. frequency);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "4"
},
{
"text": "\u2022 Then we recognize \"local\" temporal constructs with a stack of LSTMs;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "4"
},
{
"text": "\u2022 We use yet another LSTM to reduce temporal patterns into a single-vector representation (only the final time step output goes to the next layer);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "4"
},
{
"text": "\u2022 Finally, we classify it into one of 16 languages with a dense layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "4"
},
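{
"text": "The layer flow above can be checked with a quick shape sketch. This is an illustrative sketch under our reading of the paper, not the authors' code; the function name stack_dims and the defaults (39 MFCC features, kernel size K = 4, N L = 2, D L = 200, 3 CNN channels) are assumptions inferred from the reported setup:\n\n```python\n# Hypothetical sketch: per-layer output sizes of the described stack\ndef stack_dims(n_mfcc=39, kernel=4, n_extra_lstm=2, d_lstm=200, n_langs=16):\n    d_dense = n_mfcc                          # front dense layer: square matrix, size preserved\n    d_cnn = 3 * (n_mfcc - kernel + 1)         # 1D CNN convolving along the frequency axis\n    d_concat = d_cnn + n_extra_lstm * d_lstm  # CNN output concatenated with N_L LSTM outputs\n    return [d_dense, d_cnn, d_concat, d_lstm, n_langs]\n\nprint(stack_dims())  # [39, 108, 508, 200, 16]\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "4"
},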
{
"text": "The layer stack we used is summarized in Table 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "4"
},
{
"text": "We employed a batched learning process with a fixed number of samples per batch (64) and a variable number of time steps. The mechanism works as follows: an initial batch is filled with randomly chosen samples, and the number of temporal steps in the batch is determined by the shortest sample currently present in it. Once a batch is fed forward through the neural network layers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batching mechanism",
"sec_num": "4.1"
},
{
"text": "1. The samples whose ends happen to be aligned with the end of the batch are done (within the given epoch). We replace them with the next randomly chosen training samples when forming the next batch. If a sequential layer contains hidden states (which is true for the LSTMs in our model), zero hidden states are supplied to the respective threads of the batch. Final prediction values for such threads are used to calculate the loss;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batching mechanism",
"sec_num": "4.1"
},
{
"text": "2. The samples which do not fit within the batch length, are passed to the next batch for further processing, having their already-processed prefixes removed. Start hidden states for the respective threads are initialized with the values of final hidden states computed in the preceding batch, as shown with blue arrows in Figure 2 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 323,
"end": 331,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Batching mechanism",
"sec_num": "4.1"
},
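{
"text": "The batching scheme above can be sketched in a few lines of code. A minimal sketch under stated assumptions: batch width 3 instead of 64 for readability, samples represented only by their remaining lengths, and hidden-state carry-over noted in comments rather than modeled:\n\n```python\n# Hypothetical sketch of the batching scheme described above\ndef make_batches(lengths, width=3):\n    remaining = list(lengths[:width])  # remaining steps per batch thread\n    queue = list(lengths[width:])      # samples not yet assigned to a thread\n    batches = []\n    while any(r > 0 for r in remaining):\n        # batch depth = shortest sample currently present in the batch\n        step = min(r for r in remaining if r > 0)\n        batches.append(step)\n        for i, r in enumerate(remaining):\n            if r > 0:\n                remaining[i] = r - step\n                # a finished sample is replaced by the next training sample;\n                # an unfinished one keeps its prefix-computed hidden state\n                if remaining[i] == 0 and queue:\n                    remaining[i] = queue.pop(0)\n    return batches\n\nprint(make_batches([5, 3, 4, 2]))  # [3, 1, 1]\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batching mechanism",
"sec_num": "4.1"
},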
{
"text": "(Table 1, layer stack: dense, output N m = 39, per-timestep; 1D CNN, output D CNN = 3(N m \u2212 K + 1), per-timestep, kernel size K = 4; N L \u00d7 LSTM, output D L each, per-timestep; concat, output D CNN + N L \u00b7 D L, per-timestep; LSTM, output D L, per-sample; dense, output = number of languages = 16, per-sample.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batching mechanism",
"sec_num": "4.1"
},
{
"text": "Figure 2 summarises the description above.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Batching mechanism",
"sec_num": "4.1"
},
{
"text": "A drawback of this batching technique is that it constrains the temporal depth of backpropagation through time in the LSTM layers. A batch is typically much shorter in time steps than a sample, so a single backpropagation operation (which cannot run across batches) may modify fewer weights than would be expected without batching. We regarded this effect as minor; however, its influence on the overall learning capability is yet to be investigated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batching mechanism",
"sec_num": "4.1"
},
{
"text": "We varied N L , the number of extra sequential LSTM layers (whose outputs were concatenated to the output of the CNN layer). We tried the following options: {0,2,4,6}. A number of units in each LSTM layer was chosen from {200,300}. We used equal numbers of units across all the LSTM layers present in the model. Using the original train set. A learning dynamic we observed in our experiment was generally slow. In most trials the model failed to learn with the learning rate value greater than 4 \u2022 10 \u22124 . With lower learning rates, it trained at an extremely slow pace gaining about 0.1% train set accuracy per epoch. At the time of this report writing, we achieved an overall accuracy value of about 12% on validation set. It is curious to note that accuracy figures for train and validation sets did not correlate as expected, the fact that may indicate significant difference in domains. Typically, a predicted distribution of languages was limited to 2-3 classes, the list of which was volatile. We also noticed that the model converged much faster on small subsets of training sets (50-500 samples). Augmenting training data with validation set samples. A quite different picture was observed when we combined training and validation sets, and randomly split them again into training and validation portions. A much superior accuracy of 74% on validation set was achieved. The confusion matrix is shown in Figure 3. Such a relatively high prediction accuracy is not surprising, as the validation holdout is likely to share speaker identities with the respective training subset, which significantly loosens the required generalization ability. However, the drastic improvement in convergence dynamics remains a noticeable and unexpected effect.",
"cite_spans": [],
"ref_spans": [
{
"start": 1412,
"end": 1420,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "A choice of N L = 2 was found to produce the highest accuracy. Increasing D L from 200 to 300 did not lead to any significant difference in performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning of hyperparameters.",
"sec_num": null
},
{
"text": "Shared task submission. The final submitted version was trained on an augmented set. The performance figures are shown in Table 2. To address the task of language classification in speech samples, we implemented and explored a neural network model. The model's architecture was inspired by the idea of phoneme sequence recognition. Our experiments are still in progress, but it is already clear that generalization across domains is the main challenge. Following the maxim of keeping the model as light as possible, we plan to explore architecture modifications that directly enforce some form of phonetic generalization, for instance by inserting \"bottlenecks\" (layers with a low output size).",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 129,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Tuning of hyperparameters.",
"sec_num": null
},
{
"text": "The code is available at https://github.com/andreas-softwareengineer-pro/speech-language-classifier",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Common voice: A massivelymultilingual speech corpus",
"authors": [
{
"first": "Rosana",
"middle": [],
"last": "Ardila",
"suffix": ""
},
{
"first": "Megan",
"middle": [],
"last": "Branson",
"suffix": ""
},
{
"first": "Kelly",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Henretty",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Kohler",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Meyer",
"suffix": ""
},
{
"first": "Reuben",
"middle": [],
"last": "Morais",
"suffix": ""
},
{
"first": "Lindsay",
"middle": [],
"last": "Saunders",
"suffix": ""
},
{
"first": "Francis",
"middle": [
"M"
],
"last": "Tyers",
"suffix": ""
},
{
"first": "Gregor",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.06670"
]
},
"num": null,
"urls": [],
"raw_text": "Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, Michael Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M Tyers, and Gregor Weber. 2019. Common voice: A massively- multilingual speech corpus. arXiv preprint arXiv:1912.06670.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Language identification using deep convolutional recurrent neural networks",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Bartz",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Herold",
"suffix": ""
},
{
"first": "Haojin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Meinel",
"suffix": ""
}
],
"year": 2017,
"venue": "International conference on neural information processing",
"volume": "",
"issue": "",
"pages": "880--889",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Bartz, Tom Herold, Haojin Yang, and Christoph Meinel. 2017. Language identification us- ing deep convolutional recurrent neural networks. In International conference on neural information pro- cessing, pages 880-889. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Cmu wilderness multilingual speech dataset",
"authors": [
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2019,
"venue": "ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5971--5975",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan W Black. 2019. Cmu wilderness multilingual speech dataset. In ICASSP 2019-2019 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5971-5975. IEEE.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multiclass language identification using deep learning on spectral images of audio signals",
"authors": [
{
"first": "Shauna",
"middle": [],
"last": "Revay",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Teschke",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.04348"
]
},
"num": null,
"urls": [],
"raw_text": "Shauna Revay and Matthew Teschke. 2019. Multi- class language identification using deep learning on spectral images of audio signals. arXiv preprint arXiv:1905.04348.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SIGTYP 2021 shared task: Robust spoken language identification",
"authors": [
{
"first": "Elizabeth",
"middle": [],
"last": "Salesky",
"suffix": ""
},
{
"first": "Badr",
"middle": [
"M"
],
"last": "Abdullah",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Klyachko",
"suffix": ""
},
{
"first": "Oleg",
"middle": [],
"last": "Serikov",
"suffix": ""
},
{
"first": "Edoardo",
"middle": [
"Maria"
],
"last": "Ponti",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Third Workshop on Computational Research in Linguistic Typology",
"volume": "",
"issue": "",
"pages": "136--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizabeth Salesky, Badr M Abdullah, Sabrina J Mielke, Elena Klyachko, Oleg Serikov, Edoardo Maria Ponti, Ritesh Kumar, Ryan Cotterell, and Ekaterina Vylo- mova. 2021. SIGTYP 2021 shared task: Robust spo- ken language identification. In Proceedings of the Third Workshop on Computational Research in Lin- guistic Typology, pages 136-142.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Spoken language identification using convnets",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Shukla",
"suffix": ""
},
{
"first": "Govind",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2019,
"venue": "European Conference on Ambient Intelligence",
"volume": "",
"issue": "",
"pages": "252--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikhar Shukla, Govind Mittal, et al. 2019. Spoken language identification using convnets. In European Conference on Ambient Intelligence, pages 252-265. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Architecture used in language classifier",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Confusion matrix for mixed set holdout validation (N L = 2, D L = 200)",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"num": null,
"text": "Layer stack summary. (Body text continuation: \u2026processed, i.e. a training epoch is done. Some trailing batches may be underpopulated with threads, in which case output values of unused threads are ignored.)",
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF2": {
"num": null,
"text": ".",
"content": "<table><tr><td>Set</td><td colspan=\"3\">Acc. F1, Micro Avg F1, Macro Avg</td></tr><tr><td>Test</td><td>29.9%</td><td>29.8%</td><td>28.2%</td></tr><tr><td colspan=\"2\">Valid. 43.6%</td><td>43.6%</td><td>42.1%</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF3": {
"num": null,
"text": "Aggregated performance metrics for the final model version",
"content": "<table/>",
"html": null,
"type_str": "table"
}
}
}
}