| { |
| "paper_id": "2019", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:54:22.009943Z" |
| }, |
| "title": "Investigating on Computer-Assisted Pronunciation Training Leveraging End-to-End Speech Recognition Techniques", |
| "authors": [ |
| { |
| "first": "Hsiu-Jui", |
| "middle": [], |
| "last": "\u5f35\u4fee\u745e", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Tien-Hong", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Tzu-En", |
| "middle": [], |
| "last": "Lo", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Berlin", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "berlin@csie.ntnu.edu.tw" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "One of the primary tasks of a computer-assisted the pronunciation techniques (CAPT) system is mispronunciation detection and diagnosis. Previous research on CAPT mostly relies on a forced-alignment procedure which is usually conducted with the acoustic models adopted from a traditional speech recognition system, in conjunction with a phoneme paragraph, to calculate the goodness of pronunciation (GOP) scores for the phonemes of spoken words with respect to a text prompt. However, the training process of the traditional speech recognition system is complicated. In recent years, the end-to-end speech recognition system has not only greatly simplified this problem, but also has the trend of catching up with", |
| "pdf_parse": { |
| "paper_id": "2019", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "One of the primary tasks of a computer-assisted the pronunciation techniques (CAPT) system is mispronunciation detection and diagnosis. Previous research on CAPT mostly relies on a forced-alignment procedure which is usually conducted with the acoustic models adopted from a traditional speech recognition system, in conjunction with a phoneme paragraph, to calculate the goodness of pronunciation (GOP) scores for the phonemes of spoken words with respect to a text prompt. However, the training process of the traditional speech recognition system is complicated. In recent years, the end-to-end speech recognition system has not only greatly simplified this problem, but also has the trend of catching up with", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "traditional speech recognition. In view of this, this thesis sets out to conduct mispronunciation detection and diagnosis on the strength of end-to-end speech recognition.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "To this end, we design and develop two mispronunciation detection methods: 1) method leveraging a recognition confidence measure; 2) method simply based speech recognition results; A series of experiments showed that leveraging end-to-end speech recognition architecture on mispronunciation detection and diagnosis not only reduced the training steps originally required for traditional speech recognition but also improve the performance of detection and diagnosis significantly. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "( * | )\uff0c\u5373\u8f38\u51fa\u8d8a\u63a5\u8fd1\u771f\u5be6\u6a19\u8a18\u8d8a\u597d\u3002\u5e38\u898b\u61c9\u7528\u65bc\u97f3\u7d20\u8fa8\u8b58\u4ee5 \u53ca\u624b\u5beb\u8fa8\u8b58\u3002\u5176\u6982\u5ff5\u70ba\u7d66\u5b9a\u4e00\u6bb5\u9577\u5ea6\u70ba T \u7684\u8072\u5b78\u7279\u5fb5\u5e8f\u5217 X \u53ca\u4e00\u6bb5\u9577\u5ea6\uff2c\u7684\u6a19\u7c64\u5e8f\u5217 C\uff0c\u5176\u4e2d = { \u2208 | = 1, \u2026 , }\uff0cU \u70ba\u5b58\u5728\u7684\u6a19\u7c64\u96c6\u5408\u3002\u4e26\u4e14 CTC \u5728\u8a13\u7df4\u6642\u5f15\u5165\u4e86\u984d \u5916\u7684\u7a7a\u767d\u6a19\u7c64\uff0c\u4f5c\u70ba\u6a19\u7c64\u9593\u7684\u5206\u754c\uff0c\u6bcf\u500b\u97f3\u6846\u7684\u6a19\u7c64\u5e8f\u5217\u53ef\u8868\u793a\u70ba = { \u222a {< >}| = 1, \u2026 }\uff0c\u5176\u640d\u5931\u51fd\u6578\u53ef\u8868\u793a\u70ba\uff1a P ctc ( | ) \u2248 \u2211 \u220f P( | \u22121 , )P( | ) =1 (1) \u5176\u4e2dP( | \u22121 , )\u4ee3\u8868\u72c0\u614b\u8f49\u79fb\u6a5f\u7387\uff0cP( | )\u5247\u70ba Softmax \u8f38\u51fa\u7684\u7d50\u679c\u3002 2.2 \u6ce8\u610f\u529b\u6a21\u578b(Attention model) \u6cbf\u7528\u4e0a\u4e00\u5c0f\u7bc0\u4e2d\u7b26\u865f\u8a2d\u5b9a\uff0c\u6ce8\u610f\u529b\u6a21\u578b\u76ee\u6a19\u51fd\u5f0f\u53ef\u5b9a\u7fa9\u70ba\uff1a P att ( | ) = \u220f P( | , 1: \u22121 ) =1 (2) \u540c\u6a23\u4e5f\u5e0c\u671b\u76f4\u63a5\u4f30\u6e2c\u8072\u5b78\u7279\u5fb5\u5c0d\u61c9\u5230\u6a19\u7c64\u7684\u4e8b\u5f8c\u6a5f\u7387\uff0c\u7136\u800c\u8207 CTC \u4e0d\u540c\u5728\u65bc\u6ce8\u610f\u529b\u6a21 \u578b\u4e26\u7121\u689d\u4ef6\u7368\u7acb\u7684\u5047\u8a2d\uff0c\u5982\u4e0a\u5f0f 2 \u6240\u793a\uff0c\u6bcf\u4e00\u7576\u524d\u8f38\u51fa\u7686\u8003\u616e\u904e\u53bb\u7684\u8f38\u51fa\u3002P(c l |X, c 1:l\u22121 )\u53ef \u4ee5\u7531\u4e0b\u5217\u5f0f\u5b50\u63a8\u5f97\uff1a = Encoder( ) (3) = Attention( l\u22121 , t , \u22121 ) (4) = exp (\u03b3 ) \u2211 exp (\u03b3 ) (5) = \u2211 t t=1 (6) p( | , 1: \u22121 ) = Decoder( , , \u22121 )", |
| "eq_num": "(7" |
| } |
| ], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", |
| "authors": [ |
| { |
| "first": "Lawrence", |
| "middle": [ |
| "R" |
| ], |
| "last": "Rabiner", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Proceedings of the IEEE", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lawrence R. Rabiner et al., \"A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition,\" Proceedings of the IEEE, 1989.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The Application of Hidden Markov Models in Speech Recognition", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Gales", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Foundations and Trends\u00ae in Signal Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Gales and Steve Yang, \"The Application of Hidden Markov Models in Speech Recognition,\" Foundations and Trends\u00ae in Signal Processing, 2008.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups", |
| "authors": [ |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "IEEE Signal", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Geoffrey Hinton et al., \"Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,\" IEEE Signal processing magazine, 2012.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Convolutional neural networks for speech recognition", |
| "authors": [ |
| { |
| "first": "Ossama", |
| "middle": [], |
| "last": "Abdel-Hamid", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "IEEE/ACM Transactions on audio", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ossama Abdel-Hamid et al., \"Convolutional neural networks for speech recognition,\" IEEE/ACM Transactions on audio, speech, and language processing, 2014.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Speech recognition with deep recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "ICASSP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Graves et al., \"Speech recognition with deep recurrent neural networks,\" ICASSP, 2013.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition", |
| "authors": [ |
| { |
| "first": "Ha\u015fim", |
| "middle": [], |
| "last": "Sak", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ha\u015fim Sak et al., \"Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition,\" arXiv, 2014.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A time delay neural network architecture for efficient modeling of long temporal contexts", |
| "authors": [ |
| { |
| "first": "Vijayaditya", |
| "middle": [], |
| "last": "Peddinti", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vijayaditya Peddinti et al., \"A time delay neural network architecture for efficient modeling of long temporal contexts,\" Interspeech,2015.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Graves et al., \"Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks,\" ICML, 2006.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Neural machine translation by jointly learning to align and translate", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau et al., \"Neural machine translation by jointly learning to align and translate,\" ICLR, 2015.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Show, attend and tell: Neural image caption generation with visual attention", |
| "authors": [ |
| { |
| "first": "Kelvin", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kelvin Xu et al., \"Show, attend and tell: Neural image caption generation with visual attention,\" ICML, 2015.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Attention-Based Models for Speech Recognition", |
| "authors": [ |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Chorowski", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jan Chorowski et al., \"Attention-Based Models for Speech Recognition,\" NIPS, 2015.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Joint CTC-Attention based end-to-end speech recognition using multi-task learning", |
| "authors": [ |
| { |
| "first": "Suyoun", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "ICASSP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Suyoun Kim et al., \"Joint CTC-Attention based end-to-end speech recognition using multi-task learning,\" ICASSP, 2017.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Hybrid CTC/attention architecture for end-to-end speech recognition", |
| "authors": [ |
| { |
| "first": "Shinji", |
| "middle": [], |
| "last": "Watanabe", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "IEEE Journal of Selected Topics in Signal Processing", |
| "volume": "11", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shinji Watanabe et al., \"Hybrid CTC/attention architecture for end-to-end speech recognition,\" IEEE Journal of Selected Topics in Signal Processing 11, 2017.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Automatic pronunciation scoring of specific phone segments for language instruction", |
| "authors": [ |
| { |
| "first": "Yoon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proc. Eurospeech-1997. ISCA", |
| "volume": "", |
| "issue": "", |
| "pages": "645--648", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoon Kim et al., \"Automatic pronunciation scoring of specific phone segments for language instruction,\" in Proc. Eurospeech-1997. ISCA, pp. 645-648, 1997.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Language Learning Based On Non-Native Speech Recognition", |
| "authors": [ |
| { |
| "first": "Silke", |
| "middle": [], |
| "last": "Witt", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Silke Witt and Steve Young, \"Language Learning Based On Non-Native Speech Recognition,\" European Conference on Speech Communication and Technology, 1997.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Phone-level pronunciation scoring and assessment for interactive language learning", |
| "authors": [ |
| { |
| "first": "Silke", |
| "middle": [], |
| "last": "Witt", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Speech Communiciation", |
| "volume": "30", |
| "issue": "2-3", |
| "pages": "95--108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Silke Witt and Steve Young, \"Phone-level pronunciation scoring and assessment for interactive language learning,\" Speech Communiciation, Vol. 30, No. 2-3, pp. 95-108, 2000.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Mispronunciation detection and diagnosis in L2 english speech using multidistribution deep neural networks", |
| "authors": [ |
| { |
| "first": "Kun", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Speech, and Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kun Li et al., \"Mispronunciation detection and diagnosis in L2 english speech using multidistribution deep neural networks,\" IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2016.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Applying Multitask Learning to Acoustic-Phonemic Model for Mispronunciation Detection and Diagnosis in L2 English Speech", |
| "authors": [ |
| { |
| "first": "Shaoguang", |
| "middle": [], |
| "last": "Mao", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "ICASSP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shaoguang Mao et al., \"Applying Multitask Learning to Acoustic-Phonemic Model for Mispronunciation Detection and Diagnosis in L2 English Speech,\" ICASSP, 2018.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "CNN-RNN-CTC Based End-to-end Mispronunciation Detection and Diagnosis", |
| "authors": [ |
| { |
| "first": "Wai-Kim", |
| "middle": [], |
| "last": "Leung", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ICASSP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wai-Kim Leung et al., \"CNN-RNN-CTC Based End-to-end Mispronunciation Detection and Diagnosis,\" ICASSP, 2019.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Graves et al., \"Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks,\" ICML, 2006.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "The string-to-string correction problem", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Robert", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "J" |
| ], |
| "last": "Wagner", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Fischer", |
| "suffix": "" |
| } |
| ], |
| "year": 1974, |
| "venue": "Journal of the ACM (JACM)", |
| "volume": "21", |
| "issue": "1", |
| "pages": "168--173", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert A. Wagner and Michael J. Fischer. \"The string-to-string correction problem,\" Journal of the ACM (JACM) , Vol. 21.1, pp. 168-173, 1974.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "The Kaldi Speech Recognition Toolkit", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Povey", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "ASRU", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Povey et.al, \"The Kaldi Speech Recognition Toolkit,\" ASRU, 2011.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "ESPnet: End-to-End Speech Processing Toolkit", |
| "authors": [ |
| { |
| "first": "Shinji", |
| "middle": [], |
| "last": "Watanabe", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Interspeech", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shinji Watanabe et al., \"ESPnet: End-to-End Speech Processing Toolkit,\" Interspeech, 2018.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Povey", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Povey, Daniel et.al, \"Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks,\" Interspeech, 2018.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Advances in Joint CTC-Attention based End-to-End speech recognition with a Deep CNN Encoder and RNN-LM", |
| "authors": [ |
| { |
| "first": "Takaaki", |
| "middle": [], |
| "last": "Hori", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Takaaki Hori et al., \"Advances in Joint CTC-Attention based End-to-End speech recognition with a Deep CNN Encoder and RNN-LM,\" Interspeeh, 2017.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Long short-term memory recurrent neural network architectures for large scale acoustic modeling", |
| "authors": [ |
| { |
| "first": "Ha\u015fim", |
| "middle": [], |
| "last": "Sak", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ha\u015fim Sak et al., \"Long short-term memory recurrent neural network architectures for large scale acoustic modeling,\" Interspeech, 2014.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Improving Mispronunciation Detection for Non-Native Learners with Multisource Information and LSTM-Based Deep Models", |
| "authors": [ |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wei Li et al., \"Improving Mispronunciation Detection for Non-Native Learners with Multisource Information and LSTM-Based Deep Models,\" Interspeech, 2017", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Monotonic chunkwise attention", |
| "authors": [ |
| { |
| "first": "Chung-Cheng", |
| "middle": [], |
| "last": "Chiu", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Raffel", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chung-Cheng Chiu and Colin Raffel. \"Monotonic chunkwise attention,\" ICLR, 2018.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "An Online Attention-based Model for Speech Recognition", |
| "authors": [ |
| { |
| "first": "Ruchao", |
| "middle": [], |
| "last": "Fan", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ruchao Fan et al., \"An Online Attention-based Model for Speech Recognition,\" arXiv, 2018.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Keywords: end-to-end speech recognition, acoustic model, mispronunciation detection, mispronunciation diagnosis. neural network-hidden Markov model, DNN-HMM) \u8a9e\u97f3\u8fa8\u8b58\u67b6\u69cb\u3002\u8a72\u67b6\u69cb\u4e3b\u8981\u7531\u8072\u5b78\u6a21\u578b(Acoustic model)\u3001\u8a9e\u8a00\u6a21\u578b(Language model)\u3001 \u767c\u97f3\u8a5e\u5178(Pronunciation lexicon)\u6240\u7d44\u6210\uff0c\u4e26\u4e14\u5728\u8a13\u7df4\u7684\u904e\u7a0b\u4e2d\uff0c\u5fc5\u9808\u5148\u7531\u50b3\u7d71\u7684\u9ad8\u65af\u6df7 \u5408 \u6a21 \u578b \u7d50 \u5408 \u96b1 \u85cf \u5f0f \u99ac \u53ef \u592b \u6a21 \u578b (Gaussian mixture model-hidden Markov model,", |
| "uris": null |
| }, |
| "TABREF1": { |
| "text": "\u7aef\u5c0d\u7aef\u8072\u5b78\u6a21\u578b\u65bc\u7b2c\u4e00\u968e\u6bb5\u5be6\u9a57\u4f7f\u7528\u7684\u662f CTC-Attention \u6df7\u5408\u6a21\u578b\uff0c\u67b6\u69cb\u4e3b\u8981\u53c3\u8003 [25][26]\uff0c\u6df7\u5408\u53c3\u6578\u70ba 0.5\u3002Encoder \u7684\u67b6\u69cb\u70ba\u5169\u5c64\u7684 VGG \u5c64\u52a0\u4e0a\u516d\u5c64 Long short-term memory projection(LSTMP)\u4e00\u5c64\u542b 320 \u500b\u795e\u7d93\u5143\uff0cDecoder \u7684\u67b6\u69cb\u5247\u70ba\u4e00\u5c64 LSTM \u542b 300", |
| "html": null, |
| "content": "<table><tr><td>3.1 \u57fa\u65bc\u5206\u6578\u4e4b\u767c\u97f3\u6aa2\u6e2c [21]\u3002\u5982\u4e0b\u5716 2 \u6240\u793a\uff0c\u7d05\u8272\u7bad\u865f\u4ee3\u8868\u66ff\u63db\u932f\u8aa4\uff0c\u85cd\u8272\u7bad\u865f\u70ba\u522a\u9664\u932f\u8aa4\uff0c\u7da0\u8272\u7bad\u865f\u70ba\u63d2\u5165 \u88681\u3001\u83ef\u8a9e\u5b78\u7fd2\u8005\u53e3\u8a9e\u8a9e\u6599\u5eab\u4e4b\u8a13\u7df4\u96c6\u3001\u767c\u5c55\u96c6\u8207\u6e2c\u8a66\u96c6 \u4e94\u3001\u5be6\u9a57\u7d50\u679c\u8207\u5206\u6790 \u679c\u505a\u5224\u65b7\uff0c\u7136\u800c\u96a8\u8457\u6a19\u6e96\u653e\u5bec\uff0c\u932f\u8aa4\u63a5\u53d7\u4e5f\u96a8\u4e4b\u4e0a\u5347\u4f7f\u5f97\u7d50\u679c\u7f3a\u4e4f\u9451\u5225\u529b\u3002\u70ba\u4e86\u4f7f\u6a21\u578b \u88688\u3001\u767c\u97f3\u8a3a\u65b7\u6548\u679c</td></tr><tr><td>\u932f\u8aa4\uff0c\u9ed1\u8272\u7bad\u865f\u5247\u70ba\u8207\u76ee\u6a19\u76f8\u7b26\u3002\u767c\u751f\u66ff\u63db\u932f\u8aa4\u8207\u522a\u9664\u932f\u8aa4\u6642\u5247\u4ee3\u8868\u767c\u751f\u4e86\u767c\u97f3\u932f\u8aa4\uff0c Initial Final Tone \u80fd\u5920\u5224\u65b7\u767c\u97f3\u597d\u58de\uff0c\u63a5\u4e0b\u4f86\u7684\u5be6\u9a57\u6211\u5011\u5c07 L2 \u8a9e\u6599\u52a0\u5165\u8a13\u7df4\u52a0\u5165\u8072\u5b78\u6a21\u578b\u8a13\u7df4\uff0c\u5e0c\u671b\u80fd \u904e\u53bb\u9032\u884c\u6aa2\u6e2c\u7684\u9996\u8981\u6b65\u9a5f\u5728\u65bc\u9032\u884c\u5f37\u5236\u5c0d\u9f4a\uff0c\u7121\u8ad6\u767c\u51fa\u5c0d\u8207\u932f\u7684\u97f3\u7d20\u7686\u89e3\u78bc\u6210\u76ee\u6a19\u97f3\u7d20\uff0c \u4e26\u6a19\u8a18\u97f3\u7d20\u51fa\u73fe\u65bc\u8072\u97f3\u4e4b\u6642\u9593\u6bb5\u3002\u5728\u7aef\u5c0d\u7aef\u7684\u8a9e\u97f3\u8fa8\u8b58\u63a1\u7528\u5149\u675f\u641c\u5c0b\u7b97\u6cd5\uff0c\u8f38\u51fa\u6642\u7d93\u7531 \u800c\u63d2\u5165\u932f\u8aa4\u7684\u767c\u751f\u60c5\u6cc1\u8f03\u70ba\u7279\u6b8a\uff0c\u7531\u65bc\u4e2d\u6587\u7684\u4e00\u5b57\u4e00\u97f3\u7bc0\u7279\u6027\uff0c\u767c\u751f\u63d2\u5165\u932f\u8aa4\u7684\u53ef\u80fd\u6027 
\u66f4\u4f4e\u3002\u6703\u767c\u751f\u7684\u60c5\u6cc1\u901a\u5e38\u662f\u5728\u5b78\u7fd2\u8005\u767c\u51fa\u8072\u97f3\u6642\u610f\u8b58\u5230\u81ea\u5df1\u5538\u5f97\u4e0d\u5920\u6a19\u6e96\uff0c\u60f3\u8981\u518d\u6b21\u767c \u6642\u9593(\u5c0f\u6642) \u8a9e\u8005\u6578 \u97f3\u7d20\u6578\u91cf \u932f\u8aa4\u767c\u97f3\u97f3\u7d20\u6578\u91cf \u8a13\u7df4\u96c6 L1 6.7 44 72,486 5.1 L1 \u8fa8\u8b58\u7d50\u679c DNN-HMM 0. 548 0.441 0.752 \u5920\u4f7f\u8072\u5b78\u6a21\u578b\u76f4\u63a5\u5b78\u7fd2\u5230 L2 \u7684\u932f\u8aa4\u6a21\u5f0f\u3002 -L2 17.4 82 133,102 29,377 \u500b\u795e\u7d93\u5143\uff0c\u4f7f\u7528\u7684\u6ce8\u610f\u529b\u6a5f\u5236\u70ba Location attention[12]\uff0c\u8a08\u7b97\u65b9\u5f0f\u5982\u4e0b\u5f0f 11 \u6240\u793a\uff0c\u70ba\u4e86 CTC 0.611 0.582 0.768 \u70ba\u4e86\u8b49\u5be6\u7aef\u5c0d\u7aef\u8a9e\u97f3\u8fa8\u8b58\u65bc\u767c\u97f3\u6aa2\u6e2c\u7684\u53ef\u884c\u6027\uff0c\u6211\u5011\u9996\u5148\u4e0d\u540c\u8072\u5b78\u6a21\u578b\u67b6\u69cb\u65bc L1 \u8a9e\u6599 \u52a0\u5165\u542b\u6709\u932f\u8aa4\u6a21\u5f0f\u7684 L2 \u8a13\u7df4\u96c6\u5f8c\uff0c\u767c\u97f3\u6aa2\u6e2c\u7684\u4efb\u52d9\u53ef\u4ee5\u76f4\u63a5\u8996\u70ba\u8a9e\u97f3\u8fa8\u8b58\u7684\u4efb\u52d9\uff0c Softmax \u51fd\u6578\u8f38\u51fa\u6240\u6709\u6a19\u7c64\u4e4b\u4e8b\u5f8c\u6a5f\u7387\uff0c\u4fdd\u7559\u6bcf\u4e00\u6b21\u8f38\u51fa\u524d n \u9ad8\u503c\u76f4\u5230\u51fa\u73fe\u53e5\u5c3e\u7b26\u865f <eos>\u70ba\u6b62\u3002\u70ba\u4e86\u9054\u5230\u5728\u641c\u5c0b\u6642\u80fd\u7522\u751f\u6211\u5011\u6240\u60f3\u8981\u7684\u76ee\u6a19\u97f3\u7d20\uff0c\u4f7f\u7528\u4e86\u9650\u5236\u89e3\u78bc\u7684\u65b9\u6cd5\uff0c \u5373\u5728\u6bcf\u4e00\u6b21 Softmax \u51fd\u6578\u8f38\u51fa\u6642\u53ea\u95dc\u6ce8\u6211\u5011\u6240\u60f3\u8981\u7684\u97f3\u7d20\u96c6\u5408\uff0c\u5982\u4e0b\u5716 1 \u6240\u793a\u3002\u5728\u89e3\u78bc 
a failure to produce the correct sound, or by environmental noise being mistaken for the speaker's speech. For this reason we ignore insertion errors and concentrate detection on substitution and deletion errors only.

3. End-to-End Speech Recognition for Mispronunciation Detection and Diagnosis

The idea of treating detection and diagnosis as a speech recognition task implies that a recognizer able to clearly distinguish the phones produced by first-language (L1) speakers from those produced by second-language (L2) learners would be a major step forward for mispronunciation detection and diagnosis. Such a method not only performs detection and diagnosis at the same time, but also dispenses with the traditional two-stage procedure in which pronunciation must additionally be assessed with a pronunciation score. The end-to-end detection and diagnosis methods are described below.

[Figure 1. Constrained decoding flow]

Because the attention model suffers from non-monotonic left-to-right alignment and slow convergence, while CTC needs an additional language model to perform well, the two have been combined [12][13]: CTC imposes a stronger left-to-right constraint on the attention model, and the outputs of both models are merged during beam search to obtain better decoding. During training, λ serves as the mixing weight of the two models, and the loss function is

ℒ_CTC-ATT = −(λ ln P_ctc(y|x) + (1 − λ) ln P_att(y|x))    (8)

To further strengthen the left-to-right alignment, besides the previous decoder state and the current encoder states, the attention score also takes a feature extracted by a one-dimensional convolution layer K from the previous attention vector α_{i−1} (location-aware attention):

e_{i,j} = g^T tanh(W_q q_{i−1} + W_h h_j + W_f f_{i,j})    (11)

We also apply label smoothing with a parameter of 0.05, so that the model does not become over-confident and labels that occur rarely still receive some probability mass, which makes the model generalize better.

3.1 Score-Based Detection

During decoding we find the phone combination with the highest summed phone posterior probability as the final target phone sequence, and record the posterior probability P(q*|o) of each phone. The phone posterior is fed into the decision function of Eq. (9), which projects it onto the range 0-1, and pronunciation quality is decided with a threshold τ:

D(P(q*|o)) = 1 / (1 + exp(P(q*|o)))    (9)

f(D(P(q*|o))) = 1 if D(P(q*|o)) ≥ τ, 0 otherwise    (10)
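The score-based decision rule of Eqs. (9) and (10) — mapping a phone posterior onto the (0, 1) range and thresholding it at τ — can be sketched as follows. This is an illustrative sketch, not the authors' code; the function names are hypothetical, and Eq. (9) is reproduced as written in the paper, with exp(P) rather than exp(−P), so D decreases as the (log-)posterior grows.

```python
import math

def decision_score(posterior: float) -> float:
    """Eq. (9): D(P) = 1 / (1 + exp(P)), projecting the posterior onto (0, 1)."""
    return 1.0 / (1.0 + math.exp(posterior))

def decide_pronunciation(posterior: float, tau: float = 0.1) -> int:
    """Eq. (10): label the phone as correctly pronounced (1) when
    D(P) >= tau, otherwise as mispronounced (0)."""
    return 1 if decision_score(posterior) >= tau else 0
```

For log-posteriors (P ≤ 0) this form yields D(P) ∈ [0.5, 1), so τ effectively selects an operating point on the ROC curve discussed in Section 5.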
3.2 Recognition-Based Detection and Diagnosis

Computer-assisted pronunciation detection can be viewed as a speech recognition problem: if the recognizer's accuracy were one hundred percent, the detection problem would be solved. Although current speech recognition technology is still not perfect, it has progressed greatly. In this method we align the recognition result with the phones of the target text prompt using the minimum edit distance algorithm (edit distance), take the phone sequence recognized by the acoustic model as the correct answer, and mark substitution and deletion errors as mispronunciations. Since insertion errors are already accounted for elsewhere in the computation, we do not consider insertions when aligning to the target phones with minimum edit distance.

[Figure 2. Minimum-edit-distance pronunciation detection]

4. Experimental Setup

4.1 Corpus

This paper uses the Mandarin learner speech corpus collected under National Taiwan Normal University's Aim for the Top University Project, which comprises native Mandarin speakers (L1 speakers) and non-native speakers (L2 speakers). The utterances are of three types: monosyllable (Mono syllable, MS), double syllable (Double syllable, DS), and essay (Essay, ES); detailed statistics are given in Table 1.

Table 1. Corpus statistics
Set | Hours | Speakers | Phone tokens | Mispronunciation labels
Dev (L1) | 1.4 | 10 | 14,186 | –
Train (L1) | 3.2 | 25 | 32,568 | –
Test (L2) | 7.5 | 44 | 55,190 | 14,247

4.2 Acoustic Models

The traditional baselines are trained with lattice-free maximum mutual information (LF-MMI) [24], and speed perturbation is used for data augmentation, speeding recordings up to 1.1× and slowing them down to 0.9×; the detailed architectures are listed in Table 2. Because end-to-end acoustic models usually require a larger amount of data, we apply the same speed perturbation when training them, which makes the results more comparable with TDNN-F LFMMI; the detailed architecture is shown in Figure 3.

Table 2. Traditional acoustic model architectures
Model | Number of layers | Neurons per layer
DNN-HMM | 6 | 2048
TDNN-F LFMMI | 13 | 768

[Figure 3. End-to-end speech recognition model architecture]

This study proceeds in two stages. The first stage compares mispronunciation detection with traditional acoustic models trained on L1 data against detection with end-to-end acoustic models. The second stage examines the effect of adding L2 data to the end-to-end acoustic model, based on the architecture of Figure 3, comparing models that use only CTC, only Attention, and the hybrid CTC-Attention model, and also comparing decoding with known L2 error patterns.
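The recognition-based detection of Section 3.2 — aligning the recognized phone sequence to the target phones by minimum edit distance, flagging substitutions and deletions while ignoring insertions — can be sketched as below. This is a minimal illustrative sketch under the assumptions stated in the paper, not the authors' implementation; the function name is hypothetical.

```python
def align_and_detect(target, hypothesis):
    """Align `hypothesis` (recognized phones) to `target` (prompt phones)
    with Levenshtein distance; return (index, target_phone, label) where
    label is 'correct', 'substituted', or 'deleted'. Insertions in the
    hypothesis are skipped, as in the paper."""
    n, m = len(target), len(hypothesis)
    # Standard edit-distance DP table.
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if target[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # match / substitution
    # Backtrace, recording a label for every target phone.
    labels = []
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (
                0 if target[i - 1] == hypothesis[j - 1] else 1):
            ok = target[i - 1] == hypothesis[j - 1]
            labels.append((i - 1, target[i - 1], "correct" if ok else "substituted"))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            labels.append((i - 1, target[i - 1], "deleted"))
            i -= 1
        else:
            j -= 1  # insertion in the hypothesis: ignored
    return list(reversed(labels))
```

Each target phone labeled 'substituted' or 'deleted' is then reported as a mispronunciation; the substituted phone itself doubles as the diagnosis.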
5. Experimental Results

5.1 Recognition Results

Table 3 lists the phone error rate (PER) and syllable error rate (SER) of the different acoustic-model architectures on the same test set; the lower the decoding error rate on the test set, the better the subsequent detection and diagnosis can be expected to be.

Table 3. PER and SER on the L1 test set (MS = mono syllable, DS = double syllable)
Model | SER (MS) | PER (MS) | SER (DS) | PER (DS)
DNN-HMM | 41.8 | 28.4 | 28.7 | 18.0
TDNN-F LFMMI | 34.2 | 26.7 | 25.3 | 22.5
CTC-Attention | 32.2 | 18.9 | 6.8 | 5.1

The CTC-Attention model recognizes better than any other model, possibly because the end-to-end acoustic model is not constrained by a pronunciation lexicon and, unlike the traditional DNN-HMM model, is not affected by unseen phone combinations; in addition, the Attention architecture carries a built-in language-model-like component, which improves recognition of known phone combinations. We first compare the phone error rates of the end-to-end variants, as shown in Table 6:

Table 6. Phone error rates of the end-to-end acoustic models
Model | Weight λ | Phone error rate
CTC | – | 28.3
Attention | – | 24.5
CTC-ATT | 0.2 | 24.8
CTC-ATT | 0.5 | 24.6
CTC-ATT | 0.8 | 24.9

This experiment shows that the CTC-Attention and Attention-only models differ little in phone error rate.

5.2 Score-Based Detection Results

For the threshold method, we first use ROC curves to observe how the threshold changes the detection metrics, as shown in Figure 4. We find that the smaller the threshold, the lower the false rejection rate, while the false acceptance rate rises, though by a smaller margin; when the global threshold is set to 0.1, the false acceptance and false rejection rates are equal. Although the threshold method is comparable to the GOP method for error detection, the overall classification performance is still inferior to GOP: rejecting too many correct pronunciations lowers the precision when judging errors. The detection results of the L1-trained acoustic models are shown in Table 5:

[Figure 4. ROC curves of the threshold-based method]

Table 5. Detection results of L1-trained acoustic models (Recall / Precision / F1 for correct pronunciation, then for mispronunciation)
DNN-HMM (GOP) | 0.88 | 0.85 | 0.86 | 0.55 | 0.61 | 0.58
End-to-end (Threshold) | 0.76 | 0.87 | 0.81 | 0.69 | 0.51 | 0.59

The table shows that judging L2 speakers' pronunciation with an acoustic model trained only on L1 data is too strict, possibly because accent differences cause the model to misjudge. Since the judgment is too strict, we gradually loosen the criterion and instead accept the N-best recognition results; the second-stage classification results are shown in Table 4:

Table 4. Second-stage classification results (Recall / Precision / F1 for correct pronunciation, then for mispronunciation)
1-best | 0.73 | 0.87 | 0.79 | 0.71 | 0.49 | 0.58
2-best | 0.80 | 0.84 | 0.82 | 0.59 | 0.52 | 0.55
3-best | 0.84 | 0.82 | 0.83 | 0.52 | 0.54 | 0.53
5-best | 0.87 | 0.81 | 0.84 | 0.43 | 0.55 | 0.49

5.3 Recognition-Based Detection and Diagnosis Results

The experiments of Section 5.1 show that the CTC-Attention model surpasses the traditional acoustic models in recognition accuracy. We therefore assume its recognition output to be the correct answer and align it to the target phones with minimum edit distance. Adding L2 data to training has a large effect on overall detection: with native-speaker data only, the trained model is too strict for L2 speakers, whereas adding L2 data enables the model not only to detect but also to diagnose mispronunciations, with diagnosis accuracy surpassing previous methods. The detection results are shown in Table 7. Overall, the CTC-Attention model performs about as well as the Attention-only model, and both outperform detection with CTC alone; we also compare against the CTC-based result of [19].

Table 7. Detection results of the end-to-end acoustic models (Recall / Precision / F1 for correct pronunciation, then for mispronunciation)
CTC | 0.831 | 0.893 | 0.861 | 0.706 | 0.656 | 0.680
CTC-Att | 0.873 | 0.893 | 0.883 | 0.714 | 0.672 | 0.693
Attention | 0.875 | 0.892 | 0.884 | 0.710 | 0.674 | 0.691
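The hybrid training objective of Eq. (8) — an interpolation of the CTC and attention negative log-likelihoods with mixing weight λ — can be sketched as follows. This is an illustrative scalar sketch, not the authors' training code; in practice P_ctc and P_att come from the two model branches.

```python
import math

def hybrid_ctc_attention_loss(p_ctc: float, p_att: float, lam: float = 0.5) -> float:
    """Eq. (8): L = -(lam * ln P_ctc(y|x) + (1 - lam) * ln P_att(y|x)).

    p_ctc, p_att: placeholder sequence likelihoods from the CTC and
    attention branches; lam: the mixing weight (0 = attention only,
    1 = CTC only)."""
    return -(lam * math.log(p_ctc) + (1.0 - lam) * math.log(p_att))
```

Table 6 above sweeps this weight (0.2, 0.5, 0.8); the same weight also governs how the two branches' scores are merged during beam search.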
Although detection already achieves good results, learners still cannot tell what their mispronunciation actually became, so we further examine the diagnosis performance of the end-to-end acoustic models, shown in Table 8:

Table 8. Diagnosis results of the end-to-end acoustic models
CTC-Attention | 0.661 | 0.612 | 0.801
Attention | 0.645 | 0.609 | 0.797

The diagnosis results show that although CTC-Attention and Attention differ little in detection, the joint CTC and Attention decoding helps diagnosis and makes it more accurate. A remaining difficulty is unknown errors: the L2 test set contains many error labels absent from the training set, often combinations of two phones or of tones; lacking such labels, our model can usually give only one of the two phones as feedback, and this situation remains hard to resolve.

6. Conclusion

This paper proposes two mispronunciation detection methods built on the end-to-end speech recognition architecture, one using confidence scores and the other using recognition results, and compares them with the traditional approach of detection with a speech recognizer. The confidence-score results are still inferior to the GOP method; in the future we hope to incorporate more pronunciation features, as in [27], which added features such as manner of articulation, articulation type, and aspiration to reduce the misjudgments caused by relying on pronunciation posteriors alone. For recognition-based detection, in contrast, we found that adding L2 training data greatly improves overall detection, and the diagnosis accuracy also surpasses previous methods. Treating mispronunciation detection as a speech recognition problem simplifies the research: one only needs to consider how to raise the model's recognition accuracy. Besides improving the acoustic model, future work should also handle unknown errors and find a correct feedback method for them. Moreover, the decoding speed of the end-to-end acoustic model is not yet real-time, so practical deployment is still some distance away; we hope to achieve real-time decoding, as in [28][29], so that our architecture can be applied in practice.</td></tr></table>",
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |