{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:58:35.585081Z"
},
"title": "German-Arabic Speech-to-Speech Translation for Psychiatric Diagnosis",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Hussain",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Mediani",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Behr",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "",
"middle": [],
"last": "St\u00fcker",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "M",
"middle": [],
"last": "Amin Cheragui",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Waibel",
"suffix": "",
"affiliation": {},
"email": "alexander.waibel@cmu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we present the Arabic related natural language processing components of our German-Arabic speech-to-speech translation system which is being deployed in the context of interpretation during psychiatric, diagnostic interviews. For this purpose we have built a pipelined speech-to-speech translation system consisting of automatic speech recognition, machine translation, text post-processing, and speech synthesis systems. We have implemented two pipelines, from German to Arabic and vice versa, to conduct interpreted two-way dialogues between psychiatrists and potential patients. All systems in our pipeline have been realized as all-neural end-to-end systems, using different architectures suitable for the different components. The speech recognition systems use an encoder/decoder + attention architecture, the machine translation system is based on the Transformer architecture, the post-processing for Arabic employs a sequence-tagger for diacritization, and for the speech synthesis systems we use Tacotron 2 for generating spectrograms and WaveGlow as a vocoder. The speech translation is deployed in a server-based speech translation application that implements a turn-based translation between a German-speaking psychiatrist administrating the Mini-International Neuropsychiatric Interview (M.I.N.I.) and an Arabic speaking person answering the interview. As this is a very specific domain, in addition to the linguistic challenges posed by translating between Arabic and German, we also focus in this paper on the methods we implemented for adapting our speech to speech translation system to the domain of this psychiatric interview.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we present the Arabic related natural language processing components of our German-Arabic speech-to-speech translation system which is being deployed in the context of interpretation during psychiatric, diagnostic interviews. For this purpose we have built a pipelined speech-to-speech translation system consisting of automatic speech recognition, machine translation, text post-processing, and speech synthesis systems. We have implemented two pipelines, from German to Arabic and vice versa, to conduct interpreted two-way dialogues between psychiatrists and potential patients. All systems in our pipeline have been realized as all-neural end-to-end systems, using different architectures suitable for the different components. The speech recognition systems use an encoder/decoder + attention architecture, the machine translation system is based on the Transformer architecture, the post-processing for Arabic employs a sequence-tagger for diacritization, and for the speech synthesis systems we use Tacotron 2 for generating spectrograms and WaveGlow as a vocoder. The speech translation is deployed in a server-based speech translation application that implements a turn-based translation between a German-speaking psychiatrist administrating the Mini-International Neuropsychiatric Interview (M.I.N.I.) and an Arabic speaking person answering the interview. As this is a very specific domain, in addition to the linguistic challenges posed by translating between Arabic and German, we also focus in this paper on the methods we implemented for adapting our speech to speech translation system to the domain of this psychiatric interview.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In psychiatry the Mini-International Neuropsychiatric Interview (M.I.N.I.) is a short structured diagnostic interview for psychiatric disorders (Sheehan et al., 1998) . In Germany it is, among others, used for diagnosing Arabic speaking refugees. Here, the language barrier is an obvious one, that is normally overcome with the help of human interpreters. However, human interpreters are scarce, expensive and very often not readily available when an urgent diagnosis is needed. In the project Removing language barriers in treating refugees-RELATER we are therefore building a speech-to-speech translation (S2ST) system for interpreting between a German speaking psychiatrist and an Arabic speaker taking the M.I.N.I. interview.",
"cite_spans": [
{
"start": 144,
"end": 166,
"text": "(Sheehan et al., 1998)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The natural language processing (NLP) technology for this scenario faces two challenges: a) the general linguistic challenges when translating between German and Arabic and b) the specific domain of the interview for which only very little adaptation data is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe the Arabic related NLP components with which we implemented the speech to speech translation between German and Arabic for the interview. These components are part of a server-based application, where the client application is capable of running on mobile platforms; whereas the components themselves run on remote powerful computation servers. The application realizes a turnbased interpretation system between the psychiatrist and the potential patient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While end-to-end S2ST translation systems are the latest trend in research, for our system we opted for pipe-lined systems, because a) pipe-lined systems still outperform end-to-end systems (Ansari et al., 2020) , and b) to the best of our knowledge no suitable corpus for training German-Arabic end-to-end S2ST systems exists, for any domain.",
"cite_spans": [
{
"start": 190,
"end": 211,
"text": "(Ansari et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "All systems in our pipeline have been realized as all-neural end-to-end systems, using different architectures suitable for the different components. The speech recognition system uses an encoder/decoder + attention architecture (Nguyen et al., 2020b) , the text segmentation component, and the machine translation are based on the Transformer architecture (Vaswani et al., 2017a) , and for the speech synthesis we use Tacotron 2 1 (Shen et al., 2018) for generating spectrograms and WaveGlow as vocoder (Prenger et al., 2019) .",
"cite_spans": [
{
"start": 229,
"end": 251,
"text": "(Nguyen et al., 2020b)",
"ref_id": "BIBREF18"
},
{
"start": 357,
"end": 380,
"text": "(Vaswani et al., 2017a)",
"ref_id": "BIBREF25"
},
{
"start": 432,
"end": 451,
"text": "(Shen et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 504,
"end": 526,
"text": "(Prenger et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is structured as follows: Section 2 describes the Arabic speech recognition system in the pipeline, while section 3 describes the machine translation component and section 4 the Arabic speech synthesis system. As the the topic of diacritization is very prominent for Arabic NLP, we discuss the specific issues we faced in our S2ST system in section 5. In section 6, we study the real-time aspect of our pipeline system which is essential for the deployment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For Automatic Speech Recognition (ASR), we employ an LSTM-based sequence to sequence (S2S) model with encoder/decoder + attention architecture. This model yields better performance compared to the self-attention S2S model with a comparable parameter size. Before the LSTM layers in the encoder, we place a two-layer Convolutional Neural Network (CNN) with 32 channels and a time stride of two to down-sample the input spectrogram by a factor of four. In the decoder, we adopt two layers of unidirectional LSTMs and the approach of Scaled Dot-Product (SDP) Attention to generate context vectors from the hidden states. More details about the models can be found in (Nguyen et al., 2020b) . In section 2.1, we list the data used for the training, testing, and domain adaptation. For the latter, we use primary methods described in section 2.2. We report some implementation details and experimental results in section 2.3.",
"cite_spans": [
{
"start": 664,
"end": 686,
"text": "(Nguyen et al., 2020b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Speech Recognition",
"sec_num": "2"
},
{
"text": "We use the following data sets. For each data set we give a short name and the duration in hour (h) or in minutes (m):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2.1"
},
{
"text": "\u2022 Alj.1200h: We use this set for the training or bootstrapping of our model. It consists of 1200 hours of broadcast videos recorded during 2005-2015 from the Aljazeera Arabic TV channel as described in (Ali et al., 2016) . As reported, 70% of this set is in Modern Standard Arabic (MSA) and the rest is Dialectal Arabic (DA), such as Egyptian (EGY), Gulf (GLF), Levantine (LEV), and North African (NOR). The categories of the speech range from conversation (63%), interview (19%), to report (18%).",
"cite_spans": [
{
"start": 202,
"end": 220,
"text": "(Ali et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2.1"
},
{
"text": "\u2022 Alj.MSA+dialect.10h: A test set of 10 hours described in (Ali et al., 2016) as well. It includes non-overlapped speech from Aljazeera, which was prepared according to (Ali et al., 2016) for an Arabic multi-dialect broadcast media recognition challenge. We use the set as it is without normalizing Alif, Hamza or any other characters.",
"cite_spans": [
{
"start": 59,
"end": 77,
"text": "(Ali et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 169,
"end": 187,
"text": "(Ali et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2.1"
},
{
"text": "\u2022 Alj.MSA.2h: This is a subset from Alj.MSA+dialect.10h where we cut only MSA utterances free from dialects from the beginning of the set until we reached the duration of 2 hours.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2.1"
},
{
"text": "\u2022 mini.que.ans.3.34h: This dataset consists of 915 utterances and 3.34 hours of reading M.I.N.I questions (Sheehan et al., 1998) and free answers from two speakers. We transcribed the answers with our ASR system and then corrected them manually with our desktop application DaC-ToR (Hussain et al., 2020) which we used to correct the automatic transcription. For the recording we employed our online application TEQST 2 which allows the user to read texts and record with their own mobile devices.",
"cite_spans": [
{
"start": 106,
"end": 128,
"text": "(Sheehan et al., 1998)",
"ref_id": "BIBREF23"
},
{
"start": 282,
"end": 304,
"text": "(Hussain et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2.1"
},
{
"text": "\u2022 mini-ans.42m: A test set that has been processed similarly to mini.que.ans.3.34h. It consists of 224 free answers on M.I.N.I questions with a duration of 42 minutes by one speaker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2.1"
},
{
"text": "\u2022 mini.ques.50m: 225 M.I.N.I questions by the same speaker as from mini-ans.42m with a duration of 50 minutes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2.1"
},
{
"text": "As we will see in section 2.3, we obtain extremely poor performance on our target domain (psychiatric interview). Therefore, we are investigating many approaches to adapt our speech recognition system to the target domain. In this paper, we report about two experiments: the mixed-fine-tuning and the finetuning experiment (Chu et al., 2017) . For mixed-fine-tuning, we begin with the pre-trained model on Alj.1200h data as a baseline and mixed-fine-tune it on the data resulting from mixing Alj.1200h with the domain data mini.que.ans.3.34h. For mixed-fine-tune or fine-tuning the decoder, we freeze the encoder, and train only the decoder. For fine-tuning, we don't mix the data. Instead we use only mini.que.ans.3.34h set. For mixed-fine-tuning and fine-tuning, We use the learning rates (0.0009) and (0.00001) respectively. We report the result in section 2.3.",
"cite_spans": [
{
"start": 323,
"end": 341,
"text": "(Chu et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation for ASR",
"sec_num": "2.2"
},
{
"text": "We experimented with the LSTM-based encoder-decoder + attention using 40 log-Mel features, twolayer Convolutional Neural Network (CNN) with 32 channels, 6 encoder-layers, 2 decoder-layers. For data augmentation, we use dynamic time stretching and specAugument as described in (Nguyen et al., 2020b) . For the output, we employed a sub-word tokenization method, byte-pair encoding (BPE) (Sennrich et al., 2015) , (Gage, 1994) . Empirical experiments indicated that 4k tokens yielded the best performance. The tokenizer is trained on MSA texts since we aim to have a well-performing system on MSA. For this reason, we obtain on the first test set MSA+dialect.10h, which contains dialect besides MSA, Word Error Rate (WER) of 18.8% (see Table 1 ). While, On the second test set MSA.2h with only MSA data, we reach a WER of 12.6%. We employ the beam-search algorithm for the output sequence prediction, where the beam-size of 4 yields the best WER. This low beam-size is considered very efficient for the real-time capability of the system.",
"cite_spans": [
{
"start": 276,
"end": 298,
"text": "(Nguyen et al., 2020b)",
"ref_id": "BIBREF18"
},
{
"start": 386,
"end": 409,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 412,
"end": 424,
"text": "(Gage, 1994)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 734,
"end": 741,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "2.3"
},
{
"text": "For Domain adaption, the results of the main model on both test sets mini-ans.42m and mini.ques.50m of the target domain have a very high WER: 40% and 30.4% respectively. Our speech recognition model is S2S without an additional language model scoring or a dictionary. The decoder learns the language model from the training data directly. The out-of-vocabulary (OOV) rate between the test sets and the training data Alj.1200h is 3%. Hence, The main reason of the high WER is not the OOV but the domain mismatch since the training data is from broadcast videos and the domain of both test sets is psychiatric interviews. The results in the next section gives evidence for this hypothesis, since tuning only the decoder yields a considerable improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "2.3"
},
{
"text": "By mixed-fine-tuning the whole model with the same architecture, we reach an improvement on both domains, where the WER is reduced by 0.7% on both Alj.MSA+dialect.10h and Alj.MSA.2h. Besides, on mini-ans.42m and mini.ques.50m we obtain a WER reduction of about 22%. By mixed-fine-tuning only the encoder we obtain comparable results. On the other hand, although finetuning the decoder only causes the forgetting of the out-of-domain (i.e. Alj.MSA+dialect.10h and Alj.MSA+dialect.10h) by increasing their WER by about 8%, it improves the in-domain (i.e. mini.ques.50m) to 6.2%. This is likely due to the questions being identical in the training and test set, however by different speakers. The reason of forgetting is that the fine-tuning does not use the whole training but only the in-domain data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "2.3"
},
{
"text": "It is a known fact that translating between structurally or morphologically different languages is a very difficult task. Two well-known examples of these hard language pairs are English-German and English-Arabic. In this work, we are associating the worst of these two worlds: Arabic and German. of the unmistakable differences between these two languages is that both of them are morphologically rich: Arabic is highly inflectional ( (Farghaly and Shaalan, 2009) ) and German possesses word compounding. At the syntactic level, word order is a substantial difference between the two languages. Arabic is much more flexible in this respect.",
"cite_spans": [
{
"start": 436,
"end": 464,
"text": "(Farghaly and Shaalan, 2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "3"
},
{
"text": "In this work, we face two major challenges: data scarcity caused by the understudied language pair and the specificity of the domain of application. Indeed, the language pair under consideration here has been out of the focus of the international machine translation campaigns. Such campaigns (such as WMT 3 and IWSLT 4 ) are organized on a yearly basis, and have boosted considerably both the performance and the available resources for the language pairs they consider. The data scarcity problem becomes even more severe when we know that the resulting system is to be involved in the communication between psychiatrists and their patients. The genre of the language used therein is quite different from that used in the news. This latter makes most of the available training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "3"
},
{
"text": "In the following, we explain the steps we followed to overcome these limitations and to produce a reasonable translation system. Then, we show the developed systems in action through empirical evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "3"
},
{
"text": "Although the language pair under consideration is not commonly studied, a reasonable training set could be gathered, thanks to the different data sources publicly exposed on the Internet. In Part A of Table 2 , we give a summary about the exploited data sets and some of their important attributes.",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 208,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "The TED corpus 6 is our first source of data. This corpus is collected by (Cettolo et al., 2012) from the translations generated by volunteers for the TED talks, over the course of several years. By looking at the data, we think that TED is cleaner and more suitable for speech translation. Therefore, we always start by a baseline trained on TED data only, and then we gradually introduce more data from other sources.",
"cite_spans": [
{
"start": 74,
"end": 96,
"text": "(Cettolo et al., 2012)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "Another extremely important source is the OPUS 7 repository. OPUS is a large depot where parallel data is collected from different sources and sometimes also preprocessed for a very large number of language pairs. It turned out that not all of the data available in this repository is in a good shape. Therefore, we manually examined those corresponding to our language pair, and discarded those of which we were convinced that they include very large amounts of noise. A good example of this noisy data is the OpenSubtitle corpus. After this manual filtering, the corpora used from OPUS are: Multi UN, News Commentary, Global Voices, and Tatoeba.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "The Wikipedia corpus is generated from German-English and Arabic-English corpora by pivoting over their English side. Although OPUS repository offers these two corpora for download, it does not offer the direct version Arabic-German. We use strict string matching of the English sides to find the Arabic-German translations (i.e. Finding sentence pairs from the two corpora where the English sentences are exactly equal in both pairs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "The QED corpus is a multilingual corpus for the educational domain produced by QCRI in Qatar (Abdelali et al., 2014) . Similar to TED talks, this corpus consists of the transcription and translation of educational lectures. In this work, we use the version 1.4 of this corpus. 8",
"cite_spans": [
{
"start": 93,
"end": 116,
"text": "(Abdelali et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "Finally, the JW300 corpus was crawled from the Jehovah's Witnesses website 9 by (Agi\u0107 and Vuli\u0107, 2019) . The contents are of religious nature and are available in a large number of language pairs. This corpus is also made available on the OPUS repository. However, unlike the other corpora, this one comes unaligned on the sentence level. Consequently, we perform sentence alignment on the raw downloaded data. The process was held by the hunalign 10 tool. However, this tool requires a dictionary for the language pair; we automatically created one from the other aligned corpora. We used the fast align 11 tool to align the corpora at the word level in two directions. Afterwards, We restricted our dictionary to word pairs appearing 5 times or more in the two directions.",
"cite_spans": [
{
"start": 80,
"end": 102,
"text": "(Agi\u0107 and Vuli\u0107, 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "The last row in Part A of Table 2 shows a part of our in-domain data. The other small part is used for testing (Last row, Q+A-TEST in Part B of the same Table) . This consists of the translations of the M.I.N.I questions. Added to them are manual translations of the transcribed answers by some patients. These latter correspond to the mini.que.ans.3.34h data set (mentioned in speech data stes in Section 2.1).",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 153,
"end": 159,
"text": "Table)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "In addition to the training data, we use three test sets to evaluate the performance of our systems on different kinds of data. Some details about these test sets are given in Part B of Table 2 (JW-TEST, TED-TEST, Q+A-TEST). The first two of these sets are considered as general domain and used to measure the model's performance on out-of-domain tasks. While these two sets are both out-of-domain for our purpose, they still represent different domains. The JW-TEST is a random subset drawn from the JW300, while ensuring that the test and the training sets remain disjoint. The TED-TEST set is the test set used in IWSLT evaluation in 2012 (IWSLT2012). The last test set (Q+A-TEST), as mentioned in the previous paragraph, consists of what was held out from the in-domain data to evaluate the system's performance on in-domain tasks. This latter test set, as well as its training counter-part (Q+A-TRAIN), is work in progress and will be extended in the future.",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 193,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "The similarity between training and test data is a common assumption in machine learning (See e.g. (Daum\u00e9 and Marcu, 2006) for an elaborate description of this issue). In most cases however, this assumption is violated in the field. This issue is usually resolved by performing in-domain adaptation. That is tweaking a model trained on large quantities of out-of-domain data using only the very few available examples from the domain under consideration. As a result, the similarity between probability distributions of the model and the domain is augmented.",
"cite_spans": [
{
"start": 99,
"end": 122,
"text": "(Daum\u00e9 and Marcu, 2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation for Machine Translation",
"sec_num": "3.2"
},
{
"text": "The tasks we perform in this work are no exception to the aforementioned problem. In order to adapt the general system to the domain under consideration, some few in-domain examples are mandatory. In our work, these examples are the Q+A sets shown in the last rows of Parts A and B of Table 2 . It is noteworthy however, that these sets are being actively extended, since the manual data creation is time-consuming. So far, we explored two approaches to the adaptation: fine-tuning and data selection. Fine-tuning is accomplished by resuming the training for very few additional epochs using only the in-domain data. Data selection provides us with more in-domain data. This is achieved by choosing a special subset of training examples from the general domain training set. This subset consists of general domain examples, which are the most similar to the in-domain examples. The selection process is carried out using (Moore and Lewis, 2010) with more careful out-of-domain language model selection as proposed by (Mediani et al., 2014) . The language models used in this procedure are 4-gram language models with Witten-Bell smoothing. We perform the selection for both languages (i.e. once for Arabic and once for German). Then, we take the intersection of the two selected subsets. 12",
"cite_spans": [
{
"start": 1017,
"end": 1039,
"text": "(Mediani et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 285,
"end": 292,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Domain Adaptation for Machine Translation",
"sec_num": "3.2"
},
{
"text": "The input data is preprocessed using the sentence piece 13 algorithm. For Arabic the vocabulary size is set to 4000 and for German to 16000. Lines longer than 80 words are discarded. The model's architecture is transformer encoder-decoder model (Vaswani et al., 2017b) . Both the encoder and decoder are 8-layers. Each layer is of size 512. The inner size of the feed-forward network inside each layer is 2048. The attention block consists of 8 heads. The dropout is set to 0.2. All trainings are run for 100 epochs. While the number of epochs used for fine-tuning is taken to be 10. We use the Adam scheduling with a learning rate initially set to 1 and with 2048 warming steps. The results are summarized in The scores in Table 3 are BLEU scores (Papineni et al., 2002) . The table is subdivided into two panels, one for each translation direction. We report all tested combinations for the German \u2192 Arabic direction. For the reverse direction we report results only for the most promising configurations. The configurations shown here are as follows:",
"cite_spans": [
{
"start": 245,
"end": 268,
"text": "(Vaswani et al., 2017b)",
"ref_id": "BIBREF26"
},
{
"start": 748,
"end": 771,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 724,
"end": 731,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.3"
},
{
"text": "\u2022 TED only The training is performed on TED corpus only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.3"
},
{
"text": "\u2022 TED+Extra data the system is trained on all data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.3"
},
{
"text": "\u2022 Fine-tune The fine-tuning is done with the very small in-domain Q+A-train only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.3"
},
{
"text": "\u2022 Fine-tune-select where the fine-tuning is accomplished using the set consisting of the merge between Q+A-train and the 20K more sentences selected from the training corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.3"
},
{
"text": "As the table shows, using all the available data is helpful for all test sets. However, the JW-TEST is the one which benefits the most. This is to be expected as an important part from the training data comes from the JW300 corpus. That corpus was originally in the same set with JW-TEST. The fine tuning on the tiny Q+A-train data set was not a good help to all test sets. While it introduces small improvement on the in-domain test, it was very harmful to the other sets. It causes the system to quickly overfit; most likely due to its small size and very restricted type of sentences. This is where the data selection comes in handy to fix these problems of the in-domain data set. This is demonstrated on the last row of the two panels of the table. The selection was able to bring a large improvement for the in-domain test set, without harming the other test sets from different domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.3"
},
{
"text": "For speech synthesis or Text-To-Speech (TTS), we use the state-of-the-art model, namely Tacotron 2 14 (Shen et al., 2018) which is a recurrent S2S network with attention that predicts a sequence of Mel spectrogram frames for a given text sequence. For generating time-domain waveform samples conditioned on the predicted Mel spectrogram frames, we chose WaveGlow (Prenger et al., 2019) where the authors claim that it has a faster inference than WaveNet (Oord et al., 2016) used with Tacotron 2 in (Shen et al., 2018) .",
"cite_spans": [
{
"start": 102,
"end": 121,
"text": "(Shen et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 363,
"end": 385,
"text": "(Prenger et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 498,
"end": 517,
"text": "(Shen et al., 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Synthesis",
"sec_num": "4"
},
{
"text": "We trained Tacotron 2 and used a pre-trained WaveGlow model found on the Github page of WaveGlow. We also kept most of the default parameters from the implementation except for the sampling rate where we use 16kHz for the audio. As input to the Tacotron 2, we used Arabic character sequences with diacritics in Buckwalter format 15 . The corpus used for the training (Halabi, 2016) contains 1813 utterances with a duration of 3.8 hours. We first experimented with Arabic characters without diacritics to synthesize the audio directly from the MT output. Unfortunately, the model did not converge since even the same word in a different context is diacritized differently. Employing BPE used for ASR (section 2) without diacritics did not work either even with pretraining with over 400 hours of the data set Alj.1200h. For this reason we developed the diacritization component (section 5) to our pipeline.",
"cite_spans": [
{
"start": 367,
"end": 381,
"text": "(Halabi, 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1"
},
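Since the TTS input is Buckwalter-encoded, a transliteration helper can be sketched as follows. The mapping below covers only a subset of the Buckwalter scheme (the common letters plus the short-vowel, sukun, shadda, and tanween marks relevant for diacritized input); it is an illustration, not the full standard table.

```python
# Partial Arabic -> Buckwalter mapping: consonants, long vowels, and the
# diacritic marks needed for diacritized TTS input (assumed subset).
AR2BW = {
    "\u0621": "'", "\u0622": "|", "\u0623": ">", "\u0624": "&", "\u0625": "<",
    "\u0626": "}", "\u0627": "A", "\u0628": "b", "\u0629": "p", "\u062a": "t",
    "\u062b": "v", "\u062c": "j", "\u062d": "H", "\u062e": "x", "\u062f": "d",
    "\u0630": "*", "\u0631": "r", "\u0632": "z", "\u0633": "s", "\u0634": "$",
    "\u0635": "S", "\u0636": "D", "\u0637": "T", "\u0638": "Z", "\u0639": "E",
    "\u063a": "g", "\u0641": "f", "\u0642": "q", "\u0643": "k", "\u0644": "l",
    "\u0645": "m", "\u0646": "n", "\u0647": "h", "\u0648": "w", "\u0649": "Y",
    "\u064a": "y",
    # diacritics: tanween (F, N, K), short vowels (a, u, i), shadda (~), sukun (o)
    "\u064b": "F", "\u064c": "N", "\u064d": "K", "\u064e": "a", "\u064f": "u",
    "\u0650": "i", "\u0651": "~", "\u0652": "o",
}

def to_buckwalter(text):
    """Transliterate Arabic text; characters outside the table pass through."""
    return "".join(AR2BW.get(c, c) for c in text)

# "kataba" (he wrote), fully diacritized with fatha on each letter
print(to_buckwalter("\u0643\u064e\u062a\u064e\u0628\u064e"))  # -> kataba
```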
{
"text": "The resulting speech sounds natural (see Section 4.2); however, when synthesizing only a single word, the model does not terminate and runs until the maximum number of decoding steps is reached. As a solution, we implemented data augmentation by both splitting and merging utterances. We apply splitting to 708 of the total 1,813 utterances, which consist of separately spoken words delimited by silence, for instance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1"
},
{
"text": "\"ta>aa\u02dcwSawa\u02dcra -wata>aa\u02dcwSara -watu&a\u02dcwSa -taSa>aa\u02dcw\", using an automatic approach: the algorithm simply finds positions where the signal amplitude stays under a threshold. These are easy to find since the corpus was recorded in a professional studio. In addition, we randomly merge 2 or 3 consecutive utterances with probabilities 0.2 and 0.1, respectively, provided the resulting duration stays under 30 seconds. This approach solved the single-word synthesis problem. However, another issue appeared: the synthesized speech contained short silences where none were supposed to be. These are due to the silence between the concatenated utterances, which we resolved by inserting the silence symbol \"-\", already used in the corpus, between the concatenated utterances. It is worth mentioning that this method also solved an issue with the German synthesis of long sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1"
},
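The splitting and merging augmentation described above can be sketched as follows. This is a minimal illustration under assumed parameters (amplitude threshold, minimum silence length); the function and parameter names are ours, not from the paper's implementation.

```python
import random
import numpy as np

def split_on_silence(signal, sr, threshold=0.01, min_silence=0.2):
    """Split a waveform at runs of low-amplitude samples longer than
    min_silence seconds (studio recordings make a fixed threshold reliable)."""
    quiet = np.abs(signal) < threshold
    segments, start, i, n = [], None, 0, len(signal)
    while i < n:
        if not quiet[i]:
            if start is None:
                start = i
            i += 1
        else:
            j = i
            while j < n and quiet[j]:
                j += 1
            if start is not None and (j - i) >= min_silence * sr:
                segments.append(signal[start:i])  # close the current segment
                start = None
            i = j
    if start is not None:
        segments.append(signal[start:n])
    return segments

def merge_utterances(texts, durations, max_dur=30.0, p2=0.2, p3=0.1, seed=0):
    """Randomly concatenate 2 (p=0.2) or 3 (p=0.1) consecutive utterances,
    joining their texts with the corpus silence symbol '-', provided the
    merged duration stays under max_dur seconds."""
    rng = random.Random(seed)
    merged, i = [], 0
    while i < len(texts):
        r = rng.random()
        n = 3 if r < p3 else 2 if r < p3 + p2 else 1
        n = min(n, len(texts) - i)
        while n > 1 and sum(durations[i:i + n]) >= max_dur:
            n -= 1  # shrink the group until it fits the duration budget
        merged.append(" - ".join(texts[i:i + n]))
        i += n
    return merged
```

Inserting the silence symbol between concatenated texts keeps the transcript consistent with the short pause the model hears at each join.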
{
"text": "To evaluate the naturalness of the resulting speech, we constructed an online form including 10 synthesized speech samples from the first 10 utterances of the test set Alj.MSA.2h (Section 2.1), after diacritizing them manually. In addition, we added 3 real speech samples from the training data in order to judge the seriousness of the evaluators and whether the task had been properly understood by the participants. The scale we use is: 1 the speaker is a robot, 2 close to a robot voice, 3 no difference, 4 close to a human voice, and 5 the voice is human. From the 18 submissions we received, we discarded the complete evaluations of participants who rated one or more of the 3 real audio samples with 2 or below. This left 11 valid submissions with an average of 4.05 as mean opinion score (MOS \u2191). Over all 18 submissions, without discarding any invalid votes, we obtain a MOS of 3.82. This means that the synthesized speech is of very good quality, in most cases judged as very close to human speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
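The vote filtering and averaging described above can be sketched as follows; the rating dictionary is illustrative toy data, not the actual study responses.

```python
def mean_opinion_score(ratings, control_ids, min_control=3):
    """ratings: {participant: {sample_id: score}}.  Discard every participant
    who rated a real-speech control sample below min_control (i.e. 2 or lower),
    then average the remaining scores on the synthesized samples."""
    valid_scores = []
    for scores in ratings.values():
        if any(scores[c] < min_control for c in control_ids):
            continue  # participant failed the seriousness check
        valid_scores.extend(v for k, v in scores.items() if k not in control_ids)
    return sum(valid_scores) / len(valid_scores)

ratings = {
    "p1": {"real1": 5, "syn1": 4, "syn2": 4},
    "p2": {"real1": 2, "syn1": 1, "syn2": 1},  # rated real speech as robotic -> dropped
    "p3": {"real1": 4, "syn1": 5, "syn2": 3},
}
print(mean_opinion_score(ratings, control_ids=["real1"]))  # -> 4.0
```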
{
"text": "As mentioned earlier (Section 4), diacritization was introduced to make the TTS output intelligible. We were indeed unable to synthesize good Arabic speech without these short vowels: the synthesis then consists of incomprehensible mumbling, as the system tends to insert the vowels arbitrarily.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diacritization",
"sec_num": "5"
},
{
"text": "Diacritics play a crucial role in specifying the phonetics of Arabic text (Abbad and Xiong, 2020). Moreover, they allow for better comprehension by reducing the level of ambiguity inherent in the Arabic transcription system. That said, most modern written Arabic resources omit these diacritics and rely on the ability of native speakers to infer them from context. In particular, all the parallel data used to train our translation systems carries no diacritics.",
"cite_spans": [
{
"start": 70,
"end": 93,
"text": "(Abbad and Xiong, 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diacritization",
"sec_num": "5"
},
{
"text": "We employ a sequence tagger trained using the Flair framework (Akbik et al., 2018), a BiLSTM-CRF as proposed by (Huang et al., 2015). In addition, (Akbik et al., 2018) introduce Contextual String Embeddings: sentences are passed as sequences of characters into a character-level language model (LM) to form word-level embeddings. This is done by concatenating the output hidden state after the word's last character from the forward LM with the output hidden state before the word's first character from the backward LM. These two language models are obtained from the hidden states of a forward-backward recurrent neural network (see (Akbik et al., 2018) for details).",
"cite_spans": [
{
"start": 62,
"end": 82,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 119,
"end": 139,
"text": "(Huang et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 150,
"end": 170,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 658,
"end": 678,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diacritization",
"sec_num": "5"
},
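The word-embedding construction just described (forward-LM state after a word's last character, concatenated with the backward-LM state before its first character) can be illustrated with random matrices standing in for trained character-LM states; the indexing convention below is one plausible reading of the scheme, not the Flair internals.

```python
import numpy as np

def word_embedding(h_fwd, h_bwd, start, end):
    """h_fwd[t]: forward-LM hidden state after reading character t (left to right).
    h_bwd[t]: backward-LM hidden state emitted at character t (read right to left).
    A word spanning characters [start, end] (start > 0 assumed here) is embedded
    by concatenating the forward state at its last character with the backward
    state at the character just before its first one."""
    fwd = h_fwd[end]        # forward LM has consumed the whole word
    bwd = h_bwd[start - 1]  # backward LM has consumed the whole word
    return np.concatenate([fwd, bwd])

T, d = 12, 512                        # sentence length in characters, LM hidden size
rng = np.random.default_rng(0)
h_fwd = rng.standard_normal((T, d))   # stand-ins for trained character-LM states
h_bwd = rng.standard_normal((T, d))
emb = word_embedding(h_fwd, h_bwd, start=3, end=7)
print(emb.shape)  # (1024,)
```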
{
"text": "Fortunately, some available Arabic resources are diacritized. The Al-Shamela corpus (Belinkov et al., 2016) is a large-scale, historical corpus of Arabic of about 1 billion words from diverse periods of time. The corpus in its initial state is not directly exploitable for our purposes; we therefore applied the following series of operations:",
"cite_spans": [
{
"start": 80,
"end": 103,
"text": "(Belinkov et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "\u2022 Delete empty lines and lines containing a single character, and keep only lines with a high proportion of diacritics (> 40% of characters are short vowels).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "\u2022 Split words to obtain pairs of the form (letter, diacritic mark).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
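The two preprocessing steps above can be sketched as follows; the threshold and the pairing of each letter with the marks that follow it are assumptions about the intended procedure.

```python
# Arabic diacritic marks: tanween, short vowels, shadda, sukun (U+064B-U+0652)
DIACRITICS = {chr(c) for c in range(0x064B, 0x0653)}

def diacritic_ratio(line):
    """Fraction of non-space characters that are diacritic marks,
    used to keep only densely diacritized lines (> 0.4)."""
    chars = [c for c in line if not c.isspace()]
    if not chars:
        return 0.0
    return sum(c in DIACRITICS for c in chars) / len(chars)

def to_pairs(word):
    """Split a diacritized word into (letter, diacritic) tag pairs;
    a letter with no following mark gets the empty tag ''."""
    pairs = []
    for c in word:
        if c in DIACRITICS and pairs:
            pairs[-1] = (pairs[-1][0], pairs[-1][1] + c)  # attach mark to previous letter
        else:
            pairs.append((c, ""))
    return pairs

# "kataba", fully diacritized: each of the 3 letters carries a fatha
word = "\u0643\u064e\u062a\u064e\u0628\u064e"
print(diacritic_ratio(word))  # -> 0.5, passes the > 40% filter
print(to_pairs(word))
```

The (letter, diacritic) pairs then serve directly as the input tokens and target tags of the sequence tagger.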
{
"text": "Applying these steps to the Al-Shamela corpus yields around 5 million lines of training data, of which we used only a part (\u2248 1.5 million lines). Other parts of the corpus were held out for validation and testing (1,300 lines for validation and 8,000 lines for testing).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We use the BiLSTM sequence tagger provided by the Flair framework and kept the configuration as simple as possible. The embeddings are obtained by stacking forward and backward embeddings of size 512 each, computed while training a one-layer language model. The network consists of a one-layer RNN encoder with 128 hidden units and a CRF decoder. With this configuration, an accuracy of 95.34% was achieved on the held-out test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "6 Real-Time Aspects All components of the pipeline reside on a single GPU of type TITAN RTX with 24 GB of memory. The pipeline comprises 7 components: 3 for each S2ST direction plus the diacritization component on the Arabic side. We measured the latency of the system components using 5 Arabic sentences from the test set Alj.MSA.2h, consisting of 16, 9, 8, 12, and 9 words, hence 54 words in total. For ASR, we use the synthesized versions of these sentences, which have a total duration of 47.85 seconds. The ratio of the total time needed to process the 5 sentences with the whole pipeline to the total input speech duration is 7.95 / 47.85 = 0.17, i.e., processing is about 6 times faster than the input speech duration. Hence, the system is real-time capable, and long sentences can be chunked into smaller parts for stream-like output (see (Nguyen et al., 2020a) for an example of using the LSTM-based ASR model to process a stream). Table 4 shows, for each component, the total time over the 5 sentences, the average time per sentence and per word, and the model size on the GPU: ASR 1.06 / 0.212 / 0.020 / 1.4 GB; MT 1.90 / 0.381 / 0.035 / 0.9 GB; Diacritization 0.68 / 0.136 / 0.014 / 0.8 GB; TTS 4.31 / 0.861 / 0.080 / 1.7 GB. Table 4 : Latency measurements for each component of the pipeline. Times are in seconds (\u2193). The measurements cover 5 sentences with (16, 9, 8, 12, 9) words, i.e., 54 words in total. The table also shows that TTS is the slowest component in the pipeline.",
"cite_spans": [
{
"start": 861,
"end": 883,
"text": "(Nguyen et al., 2020a)",
"ref_id": "BIBREF17"
},
{
"start": 1415,
"end": 1419,
"text": "(16,",
"ref_id": null
},
{
"start": 1420,
"end": 1422,
"text": "9,",
"ref_id": null
},
{
"start": 1423,
"end": 1425,
"text": "8,",
"ref_id": null
},
{
"start": 1426,
"end": 1429,
"text": "12,",
"ref_id": null
},
{
"start": 1430,
"end": 1432,
"text": "9)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 958,
"end": 965,
"text": "Table 4",
"ref_id": null
},
{
"start": 1271,
"end": 1278,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
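The real-time-factor computation above can be reproduced directly from the Table 4 figures; the variable names are ours.

```python
# Latency totals from Table 4 (seconds, summed over the 5 test sentences)
component_totals = {"ASR": 1.06, "MT": 1.90, "Diacritization": 0.68, "TTS": 4.31}
n_sentences, n_words = 5, 54
speech_duration = 47.85  # total duration of the 5 synthesized input sentences

total = sum(component_totals.values())   # 7.95 s for the whole pipeline
rtf = total / speech_duration            # real-time factor < 1 means faster than real time
print(f"total processing: {total:.2f} s, RTF: {rtf:.2f}")

for name, t in component_totals.items():
    print(f"{name}: {t / n_sentences:.3f} s/sentence, {t / n_words:.3f} s/word")
```

An RTF of 0.17 corresponds to the roughly 6x real-time speed stated in the text (1 / 0.17 ≈ 6).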
{
"text": "In this work, we presented the different components of our German-Arabic speech-to-speech translation system, which is being deployed for interpretation during psychiatric diagnostic interviews. For this purpose, we built a pipelined speech-to-speech translation system consisting of automatic speech recognition, machine translation, text post-processing, and speech synthesis systems. We also described the problems we faced while building these components and how we overcame them. The speech recognition system uses an LSTM-based encoder/decoder + attention architecture, the machine translation component is based on the Transformer architecture, the post-processing for Arabic employs a sequence tagger for diacritization, and for speech synthesis we use Tacotron 2 for generating spectrograms and WaveGlow as a vocoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "For domain adaptation, we used fine-tuning and mixed fine-tuning on hand-created in-domain data and thereby achieved considerable improvements despite the scarcity of the collected domain data. This result confirms the well-established importance of in-domain data; collecting more domain-related data is therefore at the top of our priorities for the near future. This data collection will not, however, keep us from exploring further ways to exploit the data we already have: more effort will go into other adaptation techniques, and we are investigating the exploitation of the large quantities of available monolingual data in the translation components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://github.com/NVIDIA/tacotron2 2 https://github.com/TEQST/TEQST",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.statmt.org/wmt20/ 4 http://iwslt.org/doku.php 5 In-domain data 6 https://wit3.fbk.eu/ 7 http://opus.nlpl.eu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://alt.qcri.org/resources/qedcorpus/ 9 https://www.jw.org/en/ 10 http://mokk.bme.hu/resources/hunalign/ 11 https://github.com/clab/fast_align",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Other ways of combining selections from the two sides of the parallel corpus were not explored in this work. Our choice (i.e. the intersection) is motivated by our aim for higher precision.13 https://github.com/google/sentencepiece",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The source code is available at https://github.com/NVIDIA/tacotron2 15 Buckwalter transliteration can be found here: http://www.qamus.org/transliteration.htm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work in this paper was funded by the German Ministry of Education and Science within the project Removing language barriers in treating refugees-RELATER, no. 01EF1803B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multi-components system for automatic arabic diacritization",
"authors": [
{
"first": "Hamza",
"middle": [],
"last": "Abbad",
"suffix": ""
},
{
"first": "Shengwu",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2020,
"venue": "European Conference on Information Retrieval",
"volume": "",
"issue": "",
"pages": "341--355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hamza Abbad and Shengwu Xiong. 2020. Multi-components system for automatic arabic diacritization. In European Conference on Information Retrieval, pages 341-355. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The amara corpus: Building parallel language resources for the educational domain",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Abdelali",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzman",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Khalid",
"middle": [],
"last": "Choukri",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Declerck",
"suffix": ""
},
{
"first": "Hrafn",
"middle": [],
"last": "Loftsson",
"suffix": ""
},
{
"first": "Bente",
"middle": [],
"last": "Maegaard",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed Abdelali, Francisco Guzman, Hassan Sajjad, and Stephan Vogel. 2014. The amara corpus: Building parallel language resources for the educational domain. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), Reykjavik, Iceland, may. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "JW300: A wide-coverage parallel corpus for low-resource languages",
"authors": [
{
"first": "Zeljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3204--3210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeljko Agi\u0107 and Ivan Vuli\u0107. 2019. JW300: A wide-coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Contextual string embeddings for sequence labeling",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2018,
"venue": "COLING 2018, 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1638--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In COLING 2018, 27th International Conference on Computational Linguistics, pages 1638-1649.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The mgb-2 challenge: Arabic multi-dialect broadcast media recognition",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Ali",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bell",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Messaoui",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Renals",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE Spoken Language Technology Workshop (SLT)",
"volume": "",
"issue": "",
"pages": "279--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed Ali, Peter Bell, James Glass, Yacine Messaoui, Hamdy Mubarak, Steve Renals, and Yifan Zhang. 2016. The mgb-2 challenge: Arabic multi-dialect broadcast media recognition. In 2016 IEEE Spoken Language Technology Workshop (SLT), pages 279-284. IEEE.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "2020. FINDINGS OF THE IWSLT 2020 EVALUATION CAMPAIGN",
"authors": [
{
"first": "Ebrahim",
"middle": [],
"last": "Ansari",
"suffix": ""
},
{
"first": "Amittai",
"middle": [],
"last": "Axelrod",
"suffix": ""
},
{
"first": "Nguyen",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Roldano",
"middle": [],
"last": "Cattoni",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Xutai",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Ajay",
"middle": [],
"last": "Nagesh",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Salesky",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Waibel",
"suffix": ""
},
{
"first": "Changhan",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 17th International Conference on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "1--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ebrahim Ansari, Amittai Axelrod, Nguyen Bach, Ond\u0159ej Bojar, Roldano Cattoni, Fahim Dalvi, Nadir Durrani, Marcello Federico, Christian Federmann, Jiatao Gu, Fei Huang, Kevin Knight, Xutai Ma, Ajay Nagesh, Matteo Negri, Jan Niehues, Juan Pino, Elizabeth Salesky, Xing Shi, Sebastian St\u00fcker, Marco Turchi, Alexander Waibel, and Changhan Wang. 2020. FINDINGS OF THE IWSLT 2020 EVALUATION CAMPAIGN. In Proceedings of the 17th International Conference on Spoken Language Translation, pages 1-34, Online, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Shamela: A large-scale historical arabic corpus",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Magidow",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Avi",
"middle": [],
"last": "Shmidman",
"suffix": ""
},
{
"first": "Moshe",
"middle": [],
"last": "Koppel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.08989"
]
},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov, Alexander Magidow, Maxim Romanov, Avi Shmidman, and Moshe Koppel. 2016. Shamela: A large-scale historical arabic corpus. arXiv preprint arXiv:1612.08989.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Wit 3 : Web inventory of transcribed and translated talks",
"authors": [
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Girardi",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 16 th Conference of the European Association for Machine Translation (EAMT)",
"volume": "",
"issue": "",
"pages": "261--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. Wit 3 : Web inventory of transcribed and translated talks. In Proceedings of the 16 th Conference of the European Association for Machine Translation (EAMT), pages 261-268, Trento, Italy, May.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An empirical comparison of domain adaptation methods for neural machine translation",
"authors": [
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "385--391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chenhui Chu, Raj Dabre, and Sadao Kurohashi. 2017. An empirical comparison of domain adaptation methods for neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 385-391.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Domain Adaptation for Statistical Classifiers",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Iii",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "J. Artif. Int. Res",
"volume": "26",
"issue": "1",
"pages": "101--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2006. Domain Adaptation for Statistical Classifiers. J. Artif. Int. Res., 26(1):101-126, May.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Arabic natural language processing: Challenges and solutions",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Farghaly",
"suffix": ""
},
{
"first": "Khaled",
"middle": [],
"last": "Shaalan",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM Transactions on Asian Language Information Processing",
"volume": "8",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Farghaly and Khaled Shaalan. 2009. Arabic natural language processing: Challenges and solutions. ACM Transactions on Asian Language Information Processing, 8(4), December.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A new algorithm for data compression",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Gage",
"suffix": ""
}
],
"year": 1994,
"venue": "C Users Journal",
"volume": "12",
"issue": "2",
"pages": "23--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Gage. 1994. A new algorithm for data compression. C Users Journal, 12(2):23-38.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Modern standard arabic phonetics for speech synthesis",
"authors": [
{
"first": "Nawar",
"middle": [],
"last": "Halabi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nawar Halabi. 2016. Modern standard arabic phonetics for speech synthesis. Ph.D. thesis, University of Southampton.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bidirectional lstm-crf models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.01991"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Dactor: A data collection tool for the relater project",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Hussain",
"suffix": ""
},
{
"first": "Oussama",
"middle": [],
"last": "Zenkri",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "6627--6632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Hussain, Oussama Zenkri, Sebastian St\u00fcker, and Alex Waibel. 2020. Dactor: A data collection tool for the relater project. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 6627-6632.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving In-Domain Data Selection for Small In-Domain Sets",
"authors": [
{
"first": "Mohammed",
"middle": [],
"last": "Mediani",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Winebarger",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammed Mediani, Joshua Winebarger, and Alexander Waibel. 2014. Improving In-Domain Data Selection for Small In-Domain Sets.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Intelligent Selection of Language Model Training Data",
"authors": [
{
"first": "C",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "William",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL (Short Papers)",
"volume": "",
"issue": "",
"pages": "220--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert C. Moore and William D. Lewis. 2010. Intelligent Selection of Language Model Training Data. In ACL (Short Papers), pages 220-224.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "High performance sequenceto-sequence model for streaming speech recognition",
"authors": [
{
"first": "Thai-Son",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Ngoc-Quan",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Stueker",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.10022"
]
},
"num": null,
"urls": [],
"raw_text": "Thai-Son Nguyen, Ngoc-Quan Pham, Sebastian Stueker, and Alex Waibel. 2020a. High performance sequence- to-sequence model for streaming speech recognition. arXiv preprint arXiv:2003.10022.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improving sequence-to-sequence speech recognition training with on-the-fly data augmentation",
"authors": [
{
"first": "Thai-Son",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
}
],
"year": 2020,
"venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "7689--7693",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thai-Son Nguyen, Sebastian St\u00fcker, Jan Niehues, and Alex Waibel. 2020b. Improving sequence-to-sequence speech recognition training with on-the-fly data augmentation. In ICASSP 2020-2020 IEEE International Con- ference on Acoustics, Speech and Signal Processing (ICASSP), pages 7689-7693. IEEE.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Andrew Senior, and Koray Kavukcuoglu. 2016. Wavenet: A generative model for raw audio",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Van Den Oord",
"suffix": ""
},
{
"first": "Sander",
"middle": [],
"last": "Dieleman",
"suffix": ""
},
{
"first": "Heiga",
"middle": [],
"last": "Zen",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.03499"
]
},
"num": null,
"urls": [],
"raw_text": "Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalch- brenner, Andrew Senior, and Koray Kavukcuoglu. 2016. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evalua- tion of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Waveglow: A flow-based generative network for speech synthesis",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Prenger",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Valle",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Catanzaro",
"suffix": ""
}
],
"year": 2019,
"venue": "ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "3617--3621",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Prenger, Rafael Valle, and Bryan Catanzaro. 2019. Waveglow: A flow-based generative network for speech synthesis. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3617-3621. IEEE.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.07909"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The mini-international neuropsychiatric interview (m.i.n.i.): The development and validation of a structured diagnostic psychiatric interview for dsm-iv and icd-10",
"authors": [
{
"first": "David",
"middle": [
"V"
],
"last": "Sheehan",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Lecrubier",
"suffix": ""
},
{
"first": "K",
"middle": [
"Harnett"
],
"last": "Sheehan",
"suffix": ""
},
{
"first": "Patricia",
"middle": [],
"last": "Amorim",
"suffix": ""
},
{
"first": "Juris",
"middle": [],
"last": "Janavs",
"suffix": ""
},
{
"first": "Emmanuelle",
"middle": [],
"last": "Weiller",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Hergueta",
"suffix": ""
},
{
"first": "Roxy",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"C"
],
"last": "Dunbar",
"suffix": ""
}
],
"year": 1998,
"venue": "The Journal of Clinical Psychiatry",
"volume": "59",
"issue": "20",
"pages": "22--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David V. Sheehan, Yves Lecrubier, K. Harnett Sheehan, Patricia Amorim, Juris Janavs, Emmanuelle Weiller, Thierry Hergueta, Roxy Baker, and Geoffrey C. Dunbar. 1998. The mini-international neuropsychiatric inter- view (m.i.n.i.): The development and validation of a structured diagnostic psychiatric interview for dsm-iv and icd-10. The Journal of Clinical Psychiatry, 59(20):22-33.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Natural tts synthesis by conditioning wavenet on mel spectrogram predictions",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Ruoming",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Ron",
"middle": [
"J"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Zongheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Rj",
"middle": [],
"last": "Skerrv-Ryan",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "4779--4783",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, Rj Skerrv-Ryan, et al. 2018. Natural tts synthesis by conditioning wavenet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4779-4783. IEEE.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017a. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017b. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"content": "<table><tr><td>test set</td><td colspan=\"4\">baseline mixed-FT Dec mixed-FT Dec FT</td></tr><tr><td colspan=\"2\">Alj.MSA+dialect.10h 18.8</td><td>18.1</td><td>18.6</td><td>25.7</td></tr><tr><td>Alj.MSA.2h</td><td>12.6</td><td>11.8</td><td>12.3</td><td>20.5</td></tr><tr><td>mini-ans.42m</td><td>40.0</td><td>16.4</td><td>18.4</td><td>14.8</td></tr><tr><td>mini.ques.50m</td><td>30.4</td><td>8.6</td><td>10.3</td><td>6.2</td></tr><tr><td/><td/><td/><td/><td>An example</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Table 1: ASR results, where FT stands for fine-tuning and dec for Decoder. The values are the Word Error Rate (WER) \u2193 in percent"
},
"TABREF2": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": ""
},
"TABREF3": {
"num": null,
"content": "<table><tr><td>System</td><td colspan=\"3\">JW-TEST TED-TEST Q+A-TEST</td></tr><tr><td colspan=\"2\">A: German \u2192 Arabic</td><td/><td/></tr><tr><td>TED only</td><td>5.06</td><td>12.44</td><td>7.99</td></tr><tr><td>TED+Extra data</td><td>23.90</td><td>14.62</td><td>8.19</td></tr><tr><td>Fine-tuned</td><td>5.34</td><td>12.94</td><td>12.18</td></tr><tr><td>Fine-tune-select</td><td>23.18</td><td>14.35</td><td>19.16</td></tr><tr><td colspan=\"2\">B: Arabic \u2192 German</td><td/><td/></tr><tr><td>TED+Extra data</td><td>17.17</td><td>9.67</td><td>6.18</td></tr><tr><td>Fine-tune-select</td><td>16.60</td><td>9.06</td><td>15.96</td></tr></table>",
"html": null,
"type_str": "table",
"text": ""
},
"TABREF4": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Summary of the translation experiments (results are expressed as BLEU (\u2191) scores)"
}
}
}
}