ACL-OCL / Base_JSON /prefixA /json /aacl /2020.aacl-demo.6.json
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:12:35.765721Z"
},
"title": "FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ",
"authors": [
{
"first": "Changhan",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Facebook AI",
"location": {}
},
"email": "changhan@fb.com"
},
{
"first": "Yun",
"middle": [],
"last": "Tang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Facebook AI",
"location": {}
},
"email": "yuntang@fb.com"
},
{
"first": "Xutai",
"middle": [],
"last": "Ma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": "ma@jhu.edu"
},
{
"first": "Anne",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Facebook AI",
"location": {}
},
"email": "annewu@fb.com"
},
{
"first": "Dmytro",
"middle": [],
"last": "Okhonko",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Facebook AI",
"location": {}
},
"email": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Facebook AI",
"location": {}
},
"email": ""
}
],
"year": "2020",
"venue": null,
"identifiers": {},
"abstract": "We introduce FAIRSEQ S2T, a FAIRSEQ (Ott et al., 2019) extension for speech-to-text (S2T) modeling tasks such as end-to-end speech recognition and speech-to-text translation. It follows FAIRSEQ's careful design for scalability and extensibility. We provide end-to-end workflows from data pre-processing and model training to offline (online) inference. We implement state-of-the-art RNN-based as well as Transformer-based models and open-source detailed training recipes. FAIRSEQ's machine translation models and language models can be seamlessly integrated into S2T workflows for multi-task learning or transfer learning. FAIRSEQ S2T documentation and examples are available at https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce FAIRSEQ S2T, a FAIRSEQ (Ott et al., 2019) extension for speech-to-text (S2T) modeling tasks such as end-to-end speech recognition and speech-to-text translation. It follows FAIRSEQ's careful design for scalability and extensibility. We provide end-to-end workflows from data pre-processing and model training to offline (online) inference. We implement state-of-the-art RNN-based as well as Transformer-based models and open-source detailed training recipes. FAIRSEQ's machine translation models and language models can be seamlessly integrated into S2T workflows for multi-task learning or transfer learning. FAIRSEQ S2T documentation and examples are available at https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "End-to-end sequence-to-sequence (S2S) modeling has witnessed rapidly increased applications in speech-to-text (S2T) tasks. It achieves state-of-the-art performance on automatic speech recognition (ASR) (Park et al., 2019; Synnaeve et al., 2019) and leads to the recent resurgence of speech-to-text translation (ST) research (Duong et al., 2016; B\u00e9rard et al., 2016) . ASR and ST are closely related. There are recent attempts to combine the two tasks under the same S2S model architecture via multi-task learning (Anastasopoulos and Chiang, 2018; . They also benefit from each other via transfer learning (Bansal et al., 2019; Wang et al., 2020b) and are able to leverage additional supervision from machine translation (MT) and language modeling (LM). When supervised data is not abundant, self-supervised pretraining (Schneider et al., 2019; and semi-supervised training (Kahn et al., 2020; lower the requirements on supervision and improve model performance.",
"cite_spans": [
{
"start": 202,
"end": 221,
"text": "(Park et al., 2019;",
"ref_id": "BIBREF36"
},
{
"start": 222,
"end": 244,
"text": "Synnaeve et al., 2019)",
"ref_id": "BIBREF46"
},
{
"start": 324,
"end": 344,
"text": "(Duong et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 345,
"end": 365,
"text": "B\u00e9rard et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 513,
"end": 546,
"text": "(Anastasopoulos and Chiang, 2018;",
"ref_id": "BIBREF0"
},
{
"start": 605,
"end": 626,
"text": "(Bansal et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 627,
"end": 646,
"text": "Wang et al., 2020b)",
"ref_id": "BIBREF51"
},
{
"start": 819,
"end": 843,
"text": "(Schneider et al., 2019;",
"ref_id": "BIBREF44"
},
{
"start": 873,
"end": 892,
"text": "(Kahn et al., 2020;",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The increased connections among ASR, ST, MT and LM have called for all-in-one S2S modeling toolkits, and the use of large-scale unlabeled speech data sets the scalability requirements. In this paper, we introduce FAIRSEQ S2T, a FAIRSEQ extension for S2T tasks such as end-to-end ASR and ST. It follows FAIRSEQ's careful design for scalability and extensibility. We provide end-to-end workflows from data pre-processing and model training to offline (online) inference. We implement state-of-the-art RNN-based (Chan et al., 2016; B\u00e9rard et al., 2018) and Transformer-based (Vaswani et al., 2017; Mohamed et al., 2019) models and open-source detailed training recipes. FAIRSEQ's MT models and LMs can be seamlessly integrated into S2T workflows for multi-task learning or transfer learning. To facilitate model evaluation, we add a collection of scorers as well as VizSeq integration for visualized error analysis. FAIRSEQ S2T documentation and examples are available at https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text.",
"cite_spans": [
{
"start": 509,
"end": 528,
"text": "(Chan et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 529,
"end": 549,
"text": "B\u00e9rard et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 572,
"end": 594,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF48"
},
{
"start": 595,
"end": 616,
"text": "Mohamed et al., 2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Compared with counterpart toolkits such as ESPNet (Inaguma et al., 2020) and Lingvo (Shen et al., 2019) , FAIRSEQ S2T pursues the best integration, scalability and reproducibility. A detailed comparison of FAIRSEQ S2T with its counterparts can be found in Table 1 .",
"cite_spans": [
{
"start": 50,
"end": 72,
"text": "(Inaguma et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 84,
"end": 103,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [
{
"start": 256,
"end": 263,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Fairseq Models FAIRSEQ provides a collection of MT models (Lewis et al., 2020) and LMs (Conneau et al., 2020) that demonstrate state-of-the-art performance on standard benchmarks. They are open-sourced with pre-trained models. FAIRSEQ also supports other tasks such as text summarization, story generation and self-supervised speech pre-training.",
"cite_spans": [
{
"start": 58,
"end": 78,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF23"
},
{
"start": 87,
"end": 109,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "2"
},
{
"text": "[Table 1: comparison of toolkits (ESPNet-ST, Lingvo, OpenSeq2seq, RETURNN, SLT.KIT, Tensor2Tensor, OpenNMT, Kaldi, Wav2letter++ and fairseq S2T) on ASR, LM, MT, non-autoregressive MT, offline ST, online ST, speech pre-training, multi-node training and pre-trained models.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ASR LM MT",
"sec_num": null
},
{
"text": "S2T extension FAIRSEQ S2T adds attention-based RNN models (Chan et al., 2016; B\u00e9rard et al., 2018) as well as the latest Transformer models (Vaswani et al., 2017; Mohamed et al., 2019) for ASR and ST. It also supports CTC criterion (Graves et al., 2006) for ASR. For the simultaneous ST setting, it includes online models with widely used policies: monotonic attention (Raffel et al., 2017) , wait-k (Ma et al., 2019) , monotonic infinite lookback attention (Arivazhagan et al., 2019b) , and monotonic multihead attention (Ma et al., 2020b) .",
"cite_spans": [
{
"start": 58,
"end": 77,
"text": "(Chan et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 78,
"end": 98,
"text": "B\u00e9rard et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 140,
"end": 162,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF48"
},
{
"start": 163,
"end": 184,
"text": "Mohamed et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 232,
"end": 253,
"text": "(Graves et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 369,
"end": 390,
"text": "(Raffel et al., 2017)",
"ref_id": "BIBREF43"
},
{
"start": 400,
"end": 417,
"text": "(Ma et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 458,
"end": 485,
"text": "(Arivazhagan et al., 2019b)",
"ref_id": "BIBREF3"
},
{
"start": 522,
"end": 540,
"text": "(Ma et al., 2020b)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ASR LM MT",
"sec_num": null
},
{
"text": "Data Pre-Processing FAIRSEQ S2T extracts Kaldi-compliant (Povey et al., 2011) speech features (e.g. log mel-filter banks) automatically from WAV/FLAC audio files via PyKaldi or torchaudio 1 . Speech features can also be pre-computed and stored in NumPy (Harris et al., 2020) format. Optionally, raw audio files or feature files can be packed into ZIP archives to improve I/O performance or facilitate file management. For further pre-processing, FAIRSEQ S2T provides online speech data transforms, including CMVN (cepstral mean and variance normalization), speed perturbation (Ko et al., 2017) and SpecAugment (Park et al., 2019) . It also has an open interface for user-defined transforms. For text data, FAIRSEQ S2T does online tokenization with a rich collection of tokenizers, including Moses 2 , SentencePiece (Kudo and Richardson, 2018) , subword-nmt 3 , byte-level BPE and bytes .",
"cite_spans": [
{
"start": 57,
"end": 77,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF41"
},
{
"start": 576,
"end": 593,
"text": "(Ko et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 610,
"end": 629,
"text": "(Park et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 815,
"end": 842,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ASR LM MT",
"sec_num": null
},
{
"text": "Data Configuration FAIRSEQ S2T gets raw audio (feature) paths and target texts from manifest files in TSV (tab-separated values) format, which is similar to Kaldi-style scp files. Online speech data transforms and other data-related settings (e.g. tokenizer type and vocabulary) are defined by a separate configuration file in YAML format.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ASR LM MT",
"sec_num": null
},
{
"text": "Computation FAIRSEQ is implemented in PyTorch (Paszke et al., 2019) and it provides efficient batching, mixed precision training (Micikevicius et al., 2018) , multi-GPU as well as multi-machine training for computational efficiency on large-scale experiments.",
"cite_spans": [
{
"start": 46,
"end": 67,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF37"
},
{
"start": 129,
"end": 156,
"text": "(Micikevicius et al., 2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ASR LM MT",
"sec_num": null
},
{
"text": "Evaluation Metrics FAIRSEQ S2T provides common automatic metrics for ASR, ST and MT, including WER (word error rate), BLEU (Papineni et al., 2002) and chrF (Popovi\u0107, 2015) . It also integrates SIMULEVAL (Ma et al., 2020a) for simultaneous ST/MT metrics such as AL (average lagging) (Ma et al., 2019) and DAL (differentiable average lagging) (Cherry and Foster, 2019) .",
"cite_spans": [
{
"start": 123,
"end": 146,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF35"
},
{
"start": 156,
"end": 171,
"text": "(Popovi\u0107, 2015)",
"ref_id": "BIBREF39"
},
{
"start": 203,
"end": 221,
"text": "(Ma et al., 2020a)",
"ref_id": "BIBREF28"
},
{
"start": 282,
"end": 299,
"text": "(Ma et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 341,
"end": 366,
"text": "(Cherry and Foster, 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ASR LM MT",
"sec_num": null
},
{
"text": "Visualization FAIRSEQ supports TensorBoard 4 for monitoring holistic metrics during model training. It also has VizSeq integration for sequence-level error analysis, where speech and target/predicted text data are visualized with alignments in a Jupyter Notebook interface.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ASR LM MT",
"sec_num": null
},
{
"text": "We evaluate FAIRSEQ S2T models on English ASR benchmark-LibriSpeech (Panayotov et al., 2015) , as well as multilingual ST benchmarks-MuST-C (Di Gangi et al., 2019a) and CoVoST 2 (Wang et al., 2020c). The model architectures used in benchmarking can be found in Table 3 .",
"cite_spans": [
{
"start": 68,
"end": 92,
"text": "(Panayotov et al., 2015)",
"ref_id": "BIBREF34"
},
{
"start": 144,
"end": 164,
"text": "Gangi et al., 2019a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 261,
"end": 268,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "For speech inputs, we extract 80-channel log mel-filter bank features (25ms window size and 10ms shift) with utterance-level CMVN applied. We remove training samples with more than 3,000 frames for GPU memory efficiency. To alleviate overfitting, we pre-train ST model encoders on English ASR and adopt SpecAugment (without time warping): LD policy on LibriSpeech models and LB policy on MuST-C and CoVoST 2 models. We average the last 10 checkpoints and use a beam size of 5 for decoding. For ASR, we use 10K unigram vocabulary (Kudo and Richardson, 2018) and report WER. For ST, we use character vocabulary for CoVoST 2 and 8K unigram vocabulary for MuST-C. We report case-sensitive detokenized BLEU using sacreBLEU (Post, 2018) , except for Japanese and Chinese translations (no word segmentation) where we report character-level BLEU.",
"cite_spans": [
{
"start": 529,
"end": 556,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF22"
},
{
"start": 718,
"end": 730,
"text": "(Post, 2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.1"
},
{
"text": "LibriSpeech is a de-facto standard ASR benchmark that contains 1,000 hours of English speech from audiobooks. Table 4 shows the dev and test WER of our models on LibriSpeech clean and noisy sets. Two popular architectures, an RNN-based model (\"B-Big\") and Transformer-based models (\"T-Sm\", \"T-Md\" and \"T-Lg\"), are evaluated. Both architectures achieve performance (WER) competitive with the state of the art (the upper section), while using only default model hyper-parameters and learning rate schedule without any task-specific tuning.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Speech Recognition (ASR)",
"sec_num": "3.2"
},
{
"text": "MuST-C contains up to around 500 hours of English speech from TED talks with translations in 8 European languages. Table 2 shows the test BLEU of our Transformer-based models (\"T-Sm\" and \"Multi. T-Md\") and RNN-based models (\"B-Base\") on all the MuST-C language directions. Compared with previous Transformer-based approaches (Di Gangi et al., 2019b; Inaguma et al., 2020) , our bilingual models achieve results comparable to the state of the art without applying additional techniques such as speed perturbation and a pre-trained decoder from MT. (\u2020 Trained jointly on all 21 X-En directions with temperature-based (T=2) resampling (Arivazhagan et al., 2019a).) Moreover, our multilingual model (trained on all 8 languages) outperforms all bilingual ones with large margins. Besides traditional offline models, we also provide simultaneous ST models: the lower section in Table 2 presents the online models with wait-k policy, which was the baseline system in the IWSLT 2020 shared task on simultaneous ST (Ansari et al., 2020) . The results represent the best systems in high (AL > 6), medium (6 \u2265 AL > 3) and low (AL \u2264 3) latency regimes, on which we can clearly see the trade-offs between model performance and prediction latency.",
"cite_spans": [
{
"start": 329,
"end": 349,
"text": "Gangi et al., 2019b;",
"ref_id": "BIBREF13"
},
{
"start": 350,
"end": 371,
"text": "Inaguma et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 632,
"end": 658,
"text": "(Arivazhagan et al., 2019a",
"ref_id": "BIBREF2"
},
{
"start": 1006,
"end": 1027,
"text": "(Ansari et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 115,
"end": 122,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "MuST-C",
"sec_num": "3.3.1"
},
{
"text": "CoVoST 2 contains a total of 2,880 hours of read speech in 22 languages from the open-source community, with 21 X-En directions and 15 En-X directions. We evaluate our models bidirectionally on 13 of these languages, including low-resource X-En directions: Zh, Tr, Ar, Sv, Lv, Sl, Ta, Ja, Id and Cy. We observe from Table 5 that our Transformer-based models (\"T-Sm\" and \"T-Md\") outperform RNN-based ones (\"B-Base\" and \"B-Big\") on all En-X and X-En directions. The performance gap tends to be larger when the training data is higher resource (En-X directions, Fr-En, De-En and Es-En). Our multilingual models perform reasonably well with a universal model for over 15 X-En or En-X directions. They even have significant improvements in some directions (e.g. at least 4 BLEU gain on Es-En). For low-resource directions, we also evaluate self-supervised speech features (Schneider et al., 2019; Wu et al., 2020) 5 as an alternative to the traditional log mel-filter bank features (\"+ SSL\"). We find that self-supervised features bring consistent gains and transfer well across different languages (self-supervised model trained on English and features extracted for non-English).",
"cite_spans": [
{
"start": 867,
"end": 891,
"text": "(Schneider et al., 2019;",
"ref_id": "BIBREF44"
},
{
"start": 892,
"end": 910,
"text": "Wu et al., 2020) 5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 316,
"end": 323,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "CoVoST 2",
"sec_num": "3.3.2"
},
{
"text": "We introduce FAIRSEQ S2T, a FAIRSEQ extension for speech-to-text (S2T) modeling tasks such as speech recognition and speech translation. It includes end-to-end workflows and state-of-the-art models designed for scalability and extensibility. It seamlessly integrates FAIRSEQ's machine translation models and language models to improve S2T model performance. FAIRSEQ S2T documentation and examples are available at https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "https://github.com/pytorch/audio 2 https://github.com/moses-smt/mosesdecoder 3 https://github.com/rsennrich/subword-nmt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/tensorflow/tensorboard",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "From a wav2vec model pre-trained on LibriSpeech: https://github.com/pytorch/fairseq/tree/ master/examples/wav2vec",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Myle Ott, Michael Auli, Alexei Baevski, Jiatao Gu, Abdelrahman Mohamed and Javad Dousti for helpful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Tied multitask learning for neural speech translation",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "82--91",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1008"
]
},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 82-91, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "FINDINGS OF THE IWSLT 2020 EVALUATION CAMPAIGN",
"authors": [
{
"first": "Ebrahim",
"middle": [],
"last": "Ansari",
"suffix": ""
},
{
"first": "Amittai",
"middle": [],
"last": "Axelrod",
"suffix": ""
},
{
"first": "Nguyen",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Roldano",
"middle": [],
"last": "Cattoni",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Xutai",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Ajay",
"middle": [],
"last": "Nagesh",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Salesky",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Waibel",
"suffix": ""
},
{
"first": "Changhan",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th International Conference on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "1--34",
"other_ids": {
"DOI": [
"10.18653/v1/2020.iwslt-1.1"
]
},
"num": null,
"urls": [],
"raw_text": "Ebrahim Ansari, Amittai Axelrod, Nguyen Bach, Ond\u0159ej Bojar, Roldano Cattoni, Fahim Dalvi, Nadir Durrani, Marcello Federico, Christian Federmann, Jiatao Gu, Fei Huang, Kevin Knight, Xutai Ma, Ajay Nagesh, Matteo Negri, Jan Niehues, Juan Pino, Eliz- abeth Salesky, Xing Shi, Sebastian St\u00fcker, Marco Turchi, Alexander Waibel, and Changhan Wang. 2020. FINDINGS OF THE IWSLT 2020 EVALU- ATION CAMPAIGN. In Proceedings of the 17th In- ternational Conference on Spoken Language Trans- lation, pages 1-34, Online. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Massively multilingual neural machine translation in the wild: Findings and challenges",
"authors": [
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Lepikhin",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Mia",
"middle": [
"Xu"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.05019"
]
},
"num": null,
"urls": [],
"raw_text": "Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019a. Massively multilingual neural machine translation in the wild: Findings and chal- lenges. arXiv preprint arXiv:1907.05019.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Monotonic infinite lookback attention for simultaneous machine translation",
"authors": [
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Chung-Cheng",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Semih",
"middle": [],
"last": "Yavuz",
"suffix": ""
},
{
"first": "Ruoming",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1313--1323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019b. Monotonic infinite lookback attention for simul- taneous machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1313-1323.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pretraining on high-resource speech recognition improves low-resource speech-to-text translation",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "58--68",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1006"
]
},
"num": null,
"urls": [],
"raw_text": "Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2019. Pre- training on high-resource speech recognition im- proves low-resource speech-to-text translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 58-68, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "End-to-end automatic speech translation of audiobooks",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "B\u00e9rard",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "Ali",
"middle": [
"Can"
],
"last": "Kocabiyikoglu",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Pietquin",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "6224--6228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre B\u00e9rard, Laurent Besacier, Ali Can Ko- cabiyikoglu, and Olivier Pietquin. 2018. End-to- end automatic speech translation of audiobooks. In 2018 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), pages 6224-6228. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Listen and translate: A proof of concept for end-to-end speech-to-text translation",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "B\u00e9rard",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Pietquin",
"suffix": ""
},
{
"first": "Christophe",
"middle": [],
"last": "Servan",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.01744"
]
},
"num": null,
"urls": [],
"raw_text": "Alexandre B\u00e9rard, Olivier Pietquin, Christophe Servan, and Laurent Besacier. 2016. Listen and translate: A proof of concept for end-to-end speech-to-text trans- lation. arXiv preprint arXiv:1612.01744.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Pykaldi: A python wrapper for kaldi",
"authors": [
{
"first": "Dogan",
"middle": [],
"last": "Can",
"suffix": ""
},
{
"first": "Victor",
"middle": [
"R"
],
"last": "Martinez",
"suffix": ""
},
{
"first": "Pavlos",
"middle": [],
"last": "Papadopoulos",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [
"S"
],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2018,
"venue": "Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dogan Can, Victor R. Martinez, Pavlos Papadopou- los, and Shrikanth S. Narayanan. 2018. Pykaldi: A python wrapper for kaldi. In Acoustics, Speech and Signal Processing (ICASSP), 2018 IEEE Inter- national Conference on. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition",
"authors": [
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "4960--4964",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 4960-4964. IEEE.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Thinking slow about latency evaluation for simultaneous machine translation",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.00048"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Cherry and George Foster. 2019. Thinking slow about latency evaluation for simultaneous machine translation. arXiv preprint arXiv:1906.00048.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Oneto-many multilingual end-to-end speech translation",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Di Gangi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)",
"volume": "",
"issue": "",
"pages": "585--592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. A. Di Gangi, M. Negri, and M. Turchi. 2019. One- to-many multilingual end-to-end speech translation. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 585-592.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Must-c: a multilingual speech translation corpus",
"authors": [
{
"first": "Mattia",
"middle": [
"A"
],
"last": "Di Gangi",
"suffix": ""
},
{
"first": "Roldano",
"middle": [],
"last": "Cattoni",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2012--2017",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mattia A Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019a. Must-c: a multilingual speech translation corpus. In 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 2012-2017. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Enhancing transformer for end-to-end speech-to-text translation",
"authors": [
{
"first": "Mattia Antonino Di",
"middle": [],
"last": "Gangi",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Roldano",
"middle": [],
"last": "Cattoni",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Dessi",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2019,
"venue": "Research Track",
"volume": "1",
"issue": "",
"pages": "21--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mattia Antonino Di Gangi, Matteo Negri, Roldano Cat- toni, Roberto Dessi, and Marco Turchi. 2019b. En- hancing transformer for end-to-end speech-to-text translation. In Proceedings of Machine Translation Summit XVII Volume 1: Research Track, pages 21- 31, Dublin, Ireland. European Association for Ma- chine Translation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An attentional model for speech translation without transcription",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "949--959",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Duong, Antonios Anastasopoulos, David Chiang, Steven Bird, and Trevor Cohn. 2016. An attentional model for speech translation without transcription. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 949-959.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Santiago",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "Faustino",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23rd international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "369--376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Santiago Fern\u00e1ndez, Faustino Gomez, and J\u00fcrgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented se- quence data with recurrent neural networks. In Pro- ceedings of the 23rd international conference on Ma- chine learning, pages 369-376.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Array programming with numpy",
"authors": [
{
"first": "Charles",
"middle": [
"R"
],
"last": "Harris",
"suffix": ""
},
{
"first": "K",
"middle": [
"Jarrod"
],
"last": "Millman",
"suffix": ""
},
{
"first": "St\u00e9fan",
"middle": [
"J"
],
"last": "van der Walt",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Gommers",
"suffix": ""
},
{
"first": "Pauli",
"middle": [],
"last": "Virtanen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Wieser",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Berg",
"suffix": ""
},
{
"first": "Nathaniel",
"middle": [
"J"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "Nature",
"volume": "585",
"issue": "7825",
"pages": "357--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles R Harris, K Jarrod Millman, St\u00e9fan J van der Walt, Ralf Gommers, Pauli Virtanen, David Cour- napeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J Smith, et al. 2020. Array programming with numpy. Nature, 585(7825):357-362.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "ESPnet-ST: All-in-one speech translation toolkit",
"authors": [
{
"first": "Hirofumi",
"middle": [],
"last": "Inaguma",
"suffix": ""
},
{
"first": "Shun",
"middle": [],
"last": "Kiyono",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Shigeki",
"middle": [],
"last": "Karita",
"suffix": ""
},
{
"first": "Nelson",
"middle": [],
"last": "Yalta",
"suffix": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Shinji",
"middle": [],
"last": "Watanabe",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "302--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hirofumi Inaguma, Shun Kiyono, Kevin Duh, Shigeki Karita, Nelson Yalta, Tomoki Hayashi, and Shinji Watanabe. 2020. ESPnet-ST: All-in-one speech translation toolkit. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 302- 311, Online. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Selftraining for end-to-end speech recognition",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Kahn",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Awni",
"middle": [],
"last": "Hannun",
"suffix": ""
}
],
"year": 2020,
"venue": "ICASSP 2020 -2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "7084--7088",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Kahn, Ann Lee, and Awni Hannun. 2020. Self- training for end-to-end speech recognition. ICASSP 2020 -2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7084-7088.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "OpenNMT: Opensource toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A study on data augmentation of reverberant speech for robust speech recognition",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Vijayaditya",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"L"
],
"last": "Seltzer",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5220--5224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Ko, Vijayaditya Peddinti, Daniel Povey, Michael L Seltzer, and Sanjeev Khudanpur. 2017. A study on data augmentation of reverberant speech for robust speech recognition. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5220-5224. IEEE.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Openseq2seq: extensible toolkit for distributed and mixed precision training of sequence-tosequence models",
"authors": [
{
"first": "Oleksii",
"middle": [],
"last": "Kuchaiev",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "Ginsburg",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Gitman",
"suffix": ""
},
{
"first": "Vitaly",
"middle": [],
"last": "Lavrukhin",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Case",
"suffix": ""
},
{
"first": "Paulius",
"middle": [],
"last": "Micikevicius",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Workshop for NLP Open Source Software (NLP-OSS)",
"volume": "",
"issue": "",
"pages": "41--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oleksii Kuchaiev, Boris Ginsburg, Igor Gitman, Vi- taly Lavrukhin, Carl Case, and Paulius Micikevi- cius. 2018. Openseq2seq: extensible toolkit for dis- tributed and mixed precision training of sequence-to- sequence models. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 41- 46.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal ; Abdelrahman Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Bytes are all you need: Endto-end multilingual speech recognition and synthesis with bytes",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tara",
"middle": [],
"last": "Sainath",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
}
],
"year": 2019,
"venue": "ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5621--5625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Li, Yu Zhang, Tara Sainath, Yonghui Wu, and William Chan. 2019. Bytes are all you need: End- to-end multilingual speech recognition and synthe- sis with bytes. In ICASSP 2019-2019 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5621-5625. IEEE.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Synchronous speech recognition and speech-to-text translation with interactive decoding",
"authors": [
{
"first": "Yuchen",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Long",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "8417--8424",
"other_ids": {
"DOI": [
"10.1609/aaai.v34i05.6360"
]
},
"num": null,
"urls": [],
"raw_text": "Yuchen Liu, Jiajun Zhang, Hao Xiong, Long Zhou, Zhongjun He, Hua Wu, Haifeng Wang, and Chengqing Zong. 2020. Synchronous speech recog- nition and speech-to-text translation with interactive decoding. Proceedings of the AAAI Conference on Artificial Intelligence, 34:8417-8424.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework",
"authors": [
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Renjie",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Kaibo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Baigong",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Chuanqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hairong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3025--3036",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1289"
]
},
"num": null,
"urls": [],
"raw_text": "Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019. STACL: Simultaneous trans- lation with implicit anticipation and controllable la- tency using prefix-to-prefix framework. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3025-3036, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Simuleval: An evaluation toolkit for simultaneous translation",
"authors": [
{
"first": "Xutai",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Javad"
],
"last": "Dousti",
"suffix": ""
},
{
"first": "Changhan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.16193"
]
},
"num": null,
"urls": [],
"raw_text": "Xutai Ma, Mohammad Javad Dousti, Changhan Wang, Jiatao Gu, and Juan Pino. 2020a. Simuleval: An evaluation toolkit for simultaneous translation. arXiv preprint arXiv:2007.16193.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Monotonic multihead attention. 8th International Conference on Learning Representations",
"authors": [
{
"first": "Xutai",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Cross",
"suffix": ""
},
{
"first": "Liezl",
"middle": [],
"last": "Puzon",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xutai Ma, Juan Pino, James Cross, Liezl Puzon, and Jiatao Gu. 2020b. Monotonic multihead attention. 8th International Conference on Learning Represen- tations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020, Conference Track Proceedings.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Mixed precision training",
"authors": [
{
"first": "Paulius",
"middle": [],
"last": "Micikevicius",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Jonah",
"middle": [],
"last": "Alben",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Diamos",
"suffix": ""
},
{
"first": "Erich",
"middle": [],
"last": "Elsen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Garcia",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "Ginsburg",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Houston",
"suffix": ""
},
{
"first": "Oleksii",
"middle": [],
"last": "Kuchaiev",
"suffix": ""
},
{
"first": "Ganesh",
"middle": [],
"last": "Venkatesh",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, et al. 2018. Mixed precision training. In International Conference on Learning Representations.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Transformers with convolutional context for asr",
"authors": [
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Dmytro",
"middle": [],
"last": "Okhonko",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.11660"
]
},
"num": null,
"urls": [],
"raw_text": "Abdelrahman Mohamed, Dmytro Okhonko, and Luke Zettlemoyer. 2019. Transformers with convolutional context for asr. arXiv preprint arXiv:1904.11660.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Facebook FAIR's WMT19 news translation task submission",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Kyra",
"middle": [],
"last": "Yee",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "314--319",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5333"
]
},
"num": null,
"urls": [],
"raw_text": "Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission. In Proceedings of the Fourth Conference on Ma- chine Translation (Volume 2: Shared Task Papers, Day 1), pages 314-319, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4009"
]
},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Librispeech: an asr corpus based on public domain audio books",
"authors": [
{
"first": "Vassil",
"middle": [],
"last": "Panayotov",
"suffix": ""
},
{
"first": "Guoguo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5206--5210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206-5210. IEEE.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Specaugment: A simple data augmentation method for automatic speech recognition",
"authors": [
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Park",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chung-Cheng",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Ekin",
"middle": [
"D"
],
"last": "Cubuk",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.21437/interspeech.2019-2680"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel S. Park, William Chan, Yu Zhang, Chung- Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le. 2019. Specaugment: A simple data aug- mentation method for automatic speech recognition. Interspeech 2019.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "8026--8037",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Ad- vances in neural information processing systems, pages 8026-8037.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Self-training for end-to-end speech translation",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Qiantong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xutai",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Javad"
],
"last": "Dousti",
"suffix": ""
},
{
"first": "Yun",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.02490"
]
},
"num": null,
"urls": [],
"raw_text": "Juan Pino, Qiantong Xu, Xutai Ma, Mohammad Javad Dousti, and Yun Tang. 2020. Self-training for end-to-end speech translation. arXiv preprint arXiv:2006.02490.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "chrf: character n-gram f-score for automatic mt evaluation",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "392--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107. 2015. chrf: character n-gram f-score for automatic mt evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Belgium, Brussels. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The kaldi speech recognition toolkit",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Arnab",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "Nagendra",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Mirko",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Yanmin",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Schwarz",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE 2011 workshop on automatic speech recognition and understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. 2011. The kaldi speech recogni- tion toolkit. In IEEE 2011 workshop on automatic speech recognition and understanding, CONF. IEEE Signal Processing Society.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "wav2letter++: The fastest open-source speech recognition system",
"authors": [
{
"first": "Vineel",
"middle": [],
"last": "Pratap",
"suffix": ""
},
{
"first": "Awni",
"middle": [],
"last": "Hannun",
"suffix": ""
},
{
"first": "Qiantong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Kahn",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Synnaeve",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vineel Pratap, Awni Hannun, Qiantong Xu, Jeff Cai, Jacob Kahn, Gabriel Synnaeve, Vitaliy Liptchin- sky, and Ronan Collobert. 2018. wav2letter++: The fastest open-source speech recognition system. CoRR, abs/1812.07625.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Online and lineartime attention by enforcing monotonic alignments",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Ron",
"middle": [
"J"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Eck",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2837--2846",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Minh-Thang Luong, Peter J Liu, Ron J Weiss, and Douglas Eck. 2017. Online and linear- time attention by enforcing monotonic alignments. In International Conference on Machine Learning, pages 2837-2846.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "wav2vec: Unsupervised Pre-Training for Speech Recognition",
"authors": [
{
"first": "Steffen",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. Interspeech 2019",
"volume": "",
"issue": "",
"pages": "3465--3469",
"other_ids": {
"DOI": [
"10.21437/Interspeech.2019-1873"
]
},
"num": null,
"urls": [],
"raw_text": "Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. 2019. wav2vec: Unsupervised Pre-Training for Speech Recognition. In Proc. Inter- speech 2019, pages 3465-3469.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Lingvo: a modular and scalable framework for sequence-to-sequence modeling",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mia",
"middle": [
"X"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Ye",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Anjuli",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Tara",
"middle": [],
"last": "Sainath",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Chung-Cheng",
"middle": [],
"last": "Chiu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.08295"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, et al. 2019. Lingvo: a modular and scalable framework for sequence-to-sequence modeling. arXiv preprint arXiv:1902.08295.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "End-to-end asr: from supervised to semi-supervised learning with modern architectures",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Synnaeve",
"suffix": ""
},
{
"first": "Qiantong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Kahn",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Tatiana",
"middle": [],
"last": "Likhomanenko",
"suffix": ""
},
{
"first": "Vineel",
"middle": [],
"last": "Pratap",
"suffix": ""
},
{
"first": "Anuroop",
"middle": [],
"last": "Sriram",
"suffix": ""
},
{
"first": "Vitaliy",
"middle": [],
"last": "Liptchinsky",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.08460"
]
},
"num": null,
"urls": [],
"raw_text": "Gabriel Synnaeve, Qiantong Xu, Jacob Kahn, Edouard Grave, Tatiana Likhomanenko, Vineel Pratap, Anuroop Sriram, Vitaliy Liptchinsky, and Ronan Collobert. 2019. End-to-end asr: from supervised to semi-supervised learning with modern architectures. arXiv preprint arXiv:1911.08460.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Tensor2Tensor for neural machine translation",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Brevdo",
"suffix": ""
},
{
"first": "Francois",
"middle": [],
"last": "Chollet",
"suffix": ""
},
{
"first": "Aidan",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Sepassi",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 13th Conference of the Association for Machine Translation in the Americas",
"volume": "1",
"issue": "",
"pages": "193--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Samy Bengio, Eugene Brevdo, Fran- cois Chollet, Aidan Gomez, Stephan Gouws, Llion Jones, \u0141ukasz Kaiser, Nal Kalchbrenner, Niki Par- mar, Ryan Sepassi, Noam Shazeer, and Jakob Uszko- reit. 2018. Tensor2Tensor for neural machine trans- lation. In Proceedings of the 13th Conference of the Association for Machine Translation in the Ameri- cas (Volume 1: Research Papers), pages 193-199, Boston, MA. Association for Machine Translation in the Americas.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Neural machine translation with byte-level subwords",
"authors": [
{
"first": "Changhan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "9154--9160",
"other_ids": {
"DOI": [
"10.1609/aaai.v34i05.6451"
]
},
"num": null,
"urls": [],
"raw_text": "Changhan Wang, Kyunghyun Cho, and Jiatao Gu. 2020a. Neural machine translation with byte-level subwords. Proceedings of the AAAI Conference on Artificial Intelligence, 34:9154-9160.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "VizSeq: a visual analysis toolkit for text generation tasks",
"authors": [
{
"first": "Changhan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Danlu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations",
"volume": "",
"issue": "",
"pages": "253--258",
"other_ids": {
"DOI": [
"10.18653/v1/D19-3043"
]
},
"num": null,
"urls": [],
"raw_text": "Changhan Wang, Anirudh Jain, Danlu Chen, and Ji- atao Gu. 2019. VizSeq: a visual analysis toolkit for text generation tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 253-258, Hong Kong, China. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Improving cross-lingual transfer learning for endto-end speech recognition with speech translation",
"authors": [
{
"first": "Changhan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.05474"
]
},
"num": null,
"urls": [],
"raw_text": "Changhan Wang, Juan Pino, and Jiatao Gu. 2020b. Improving cross-lingual transfer learning for end- to-end speech recognition with speech translation. arXiv preprint arXiv:2006.05474.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Covost 2 and massively multilingual speech-to-text translation",
"authors": [
{
"first": "Changhan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Changhan Wang, Anne Wu, and Juan Pino. 2020c. Covost 2 and massively multilingual speech-to-text translation. arXiv e-prints, pages arXiv-2007.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Self-supervised representations improve end-to-end speech translation",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Changhan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.12124"
]
},
"num": null,
"urls": [],
"raw_text": "Anne Wu, Changhan Wang, Juan Pino, and Jiatao Gu. 2020. Self-supervised representations im- prove end-to-end speech translation. arXiv preprint arXiv:2006.12124.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Open source toolkit for speech to text translation",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Zenkel",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Sperber",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Ngoc-Quan",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2018,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "111",
"issue": "1",
"pages": "125--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Zenkel, Matthias Sperber, Jan Niehues, Markus M\u00fcller, Ngoc-Quan Pham, Sebastian St\u00fcker, and Alex Waibel. 2018. Open source toolkit for speech to text translation. The Prague Bulletin of Mathematical Linguistics, 111(1):125-135.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "RETURNN as a generic flexible neural toolkit with application to translation and speech recognition",
"authors": [
{
"first": "Albert",
"middle": [],
"last": "Zeyer",
"suffix": ""
},
{
"first": "Tamer",
"middle": [],
"last": "Alkhouli",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL 2018, System Demonstrations",
"volume": "",
"issue": "",
"pages": "128--133",
"other_ids": {
"DOI": [
"10.18653/v1/P18-4022"
]
},
"num": null,
"urls": [],
"raw_text": "Albert Zeyer, Tamer Alkhouli, and Hermann Ney. 2018. RETURNN as a generic flexible neural toolkit with application to translation and speech recognition. In Proceedings of ACL 2018, System Demonstrations, pages 128-133, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Comparison of FAIRSEQ S2T with counterpart toolkits (as of July 2020). \u2020 Only available in version 2 (under development). \u2021 Not publicly available. 1 Kuchaiev et al. (2018). 2 Zeyer et al. (2018). 3 Zenkel et al. (2018). 4 Vaswani et al. (2018). 5 Klein et al. (2017). 6 Povey et al. (2011). 7 Pratap et al. (2018)."
},
"TABREF2": {
"num": null,
"content": "<table><tr><td/><td>Type</td><td>Config.</td><td>Params</td></tr><tr><td colspan=\"2\">B-Base RNN \u2020 B-Big</td><td>512d, 3L enc./2L dec. 512d, 5L enc./3L dec.</td><td>31M 52M</td></tr><tr><td colspan=\"2\">T-Sm Trans-</td><td>256d, 12L enc./6L dec.</td><td>31M</td></tr><tr><td colspan=\"2\">T-Md form-</td><td>512d, 12L enc./6L dec.</td><td>72M</td></tr><tr><td>T-Lg</td><td>er \u2021</td><td>1024d, 12L enc./6L dec.</td><td>263M</td></tr></table>",
"html": null,
"type_str": "table",
"text": "FAIRSEQ S2T models on MuST-C. Test BLEU reported (for online models, AL is shown in parentheses). DiGangi et al. (2019).2 Inaguma et al. (2020). \u2020 Applied additional techniques: speed perturbation, pre-trained decoder from MT and auxiliary CTC loss for ASR pre-training."
},
"TABREF3": {
"num": null,
"content": "<table><tr><td/><td>Dev</td><td/><td>Test</td><td/></tr><tr><td/><td colspan=\"4\">Clean Other Clean Other</td></tr><tr><td>LAS \u2020</td><td>-</td><td>-</td><td>2.8</td><td>6.8</td></tr><tr><td>Transformer \u2021</td><td>2.5</td><td>6.7</td><td>2.9</td><td>7.0</td></tr><tr><td>B-Big</td><td>3.7</td><td>11.4</td><td>3.9</td><td>11.5</td></tr><tr><td>T-Sm</td><td>4.1</td><td>9.3</td><td>4.4</td><td>9.2</td></tr><tr><td>T-Md</td><td>3.5</td><td>8.1</td><td>3.7</td><td>8.1</td></tr><tr><td>T-Lg</td><td>3.3</td><td>7.7</td><td>3.5</td><td>7.8</td></tr></table>",
"html": null,
"type_str": "table",
"text": ""
},
"TABREF4": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "FAIRSEQ S2T models on LibriSpeech (using default hyper-parameters and LR schedule). Dev and test WER reported."
},
"TABREF6": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "FAIRSEQ S2T models on CoVoST 2. Test BLEU reported (character-level BLEU for Zh and Ja targets). Replaced mel-filter bank features with wav2vec ones"
}
}
}
}