{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:54:51.374585Z"
},
"title": "Keynote Speaker I",
"authors": [
{
"first": "Jenq-Haur",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ying-Hui",
"middle": [],
"last": "Lai",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Toda",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shinnou",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Voice conversion is a technique for modifying speech waveforms to convert non-/paralinguistic information into any form we want while preserving linguistic content. It has been dramatically improved thanks to significant progress in machine learning techniques, such as deep learning, as well as significant efforts to develop freely available resources. In this talk, I will review recent progress of voice conversion techniques, overviewing recent research activities including Voice Conversion Challenges, and then, I will also discuss possible future directions of voice conversion research.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Voice conversion is a technique for modifying speech waveforms to convert non-/paralinguistic information into any form we want while preserving linguistic content. It has been dramatically improved thanks to significant progress in machine learning techniques, such as deep learning, as well as significant efforts to develop freely available resources. In this talk, I will review recent progress of voice conversion techniques, overviewing recent research activities including Voice Conversion Challenges, and then, I will also discuss possible future directions of voice conversion research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "On behalf of the organizing committee, it is our pleasure to welcome you to National Taipei University of Technology (NTUT), Taipei, Taiwan, for the 32nd Conference on Computational Linguistics and Speech Processing (ROCLING), the flagship conference on computational linguistics, natural language processing, and speech processing in Taiwan. ROCLING is the annual conference of the Association for Computational Linguistics and Chinese Language Processing (ACLCLP) which is regularly held by different universities in different cities of Taiwan.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Welcome Message from ROCLING 2020",
"sec_num": null
},
{
"text": "ROCLING 2020 features two distinguished keynote speeches from the renowned researchers in natural language processing as well as speech processing. Prof. Tomoki Toda (Professor, Information Technology Center, Nagoya University, Japan) will give a keynote on the \"Recent Trend of Voice Conversion Research and Its Possible Future Direction\". Prof. Hiroyuki Shinnou (Professor, Department of Computer and Information Sciences, Ibaraki University, Japan) will talk about the \"Use of BERT for NLP tasks by HuggingFace's transformers\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Welcome Message from ROCLING 2020",
"sec_num": null
},
{
"text": "ROCLING 2020 will provide an international forum for researchers and industry practitioners to share their new ideas, original research results, and practical development experiences from all NLP areas, including computational linguistics, information understanding, and speech processing. To facilitate more cross-domain communication and collaboration, we are organizing a special session on Natural Language Processing for Digital Humanities with the Taiwanese Association for Digital Humanities (TADH). In addition to the regular sessions during the first two days, the AI Tutorial organized by SIG-AI (Artificial Intelligence Special Interest Group) of ACLCLP and the Science & Technology Policy Research and Information Center (STPI) will provide Artificial Intelligence courses focusing on speech processing and NLP applications on the last day. It is sure to be an exciting event for all participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Welcome Message from ROCLING 2020",
"sec_num": null
},
{
"text": "This conference would not have been possible without the tremendous effort of the organizing and program committees, who worked closely together to put an attractive and intensive scientific program in place. Their great achievements have contributed much to the visibility of ROCLING 2020. We would like to express our sincere thanks and gratitude to all of them. Special thanks to the organizers who worked hard to produce the proceedings, communicate with participants and authors, and handle the registration, budget, local arrangements, and logistics. Thanks to all organizers, including Program Chairs: Lung-Hao Lee and Kuan-Yu Chen, Tutorial Chair: Hung-Yi Lee, Industry Chair: Chi-Chun Lee, Demo Chair: Syu-Siang Wang, Publication Chair: Hen-Hsen Huang, and Web Chair: Chuan-Ming Liu. Thanks to the special session organizer, Chao-Lin Liu, and the invited speakers: Jen-Jou Hung, Su-bing Chang, and Wan-Yi Wu. Thanks to all participants, authors, program committee members, and reviewers, who contributed their valuable time and effort to provide timely and comprehensive reviews. Finally, we thank the generous government, academic, and industry sponsors and appreciate your enthusiastic participation and support. With best wishes for a successful and fruitful ROCLING 2020 in Taipei, Taiwan.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Welcome Message from ROCLING 2020",
"sec_num": null
}
],
"back_matter": [
{
"text": "The pre-trained BERT model has been improving the state of the art on many NLP tasks. I believe that the use of BERT will be essential when we build NLP systems in the future. Initially, it was hard to use BERT because the concept of the pre-trained model was unfamiliar, and BERT was available only through TensorFlow, which is cumbersome for beginners. Today, however, there is HuggingFace's transformers library. Thanks to this library, everyone can utilize BERT easily. In this talk, I will first explain what BERT is and what we can do with it, and then show some examples of using BERT via HuggingFace's transformers. As an application, I will fine-tune BERT for a document classification task. Additionally, I will show a technique for training only some of the layers in BERT. As one line of improvement of BERT, studies on smaller BERT models have been active, for example Q8BERT, ALBERT, DistilBERT, and TinyBERT. Even simple pruning of BERT is effective. I will introduce these studies and show that some of these models are available through HuggingFace's transformers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "National Kaohsiung First University of Science and Technology",
"authors": [
{
"first": "Guo-Wei",
"middle": [],
"last": "Bian",
"suffix": ""
}
],
"year": null,
"venue": "Feng Chia University Shih-Hung Liu (\u5289\u58eb\u5f18)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guo-Wei Bian (\u908a\u570b\u7dad), Huafan University Chia-Hui Chang (\u5f35\u5609\u60e0), National Central University Ru-Yng Chang (\u5f35\u5982\u7469), AI Clerk International Co., LTD. Yu-Yun Chang (\u5f35\u745c\u82b8), National Chengchi University Yung-Chun Chang (\u5f35\u8a60\u6df3), Taipei Medical University Cheng-Hsien Alvin Chen (\u9673\u6b63\u8ce2), National Taiwan Normal University Chung-Chi Chen (\u9673\u91cd\u5409), National Taiwan University Fei Chen (\u9673\u970f), Southern University of Science and Technology Mei-Hua Chen (\u9673\u73ab\u6a3a), Tunghai University Yun-Nung (Vivian) Chen (\u9673\u7e15\u5102), National Taiwan University Tai-Shih Chi (\u5180\u6cf0\u77f3), National Chiao Tung University Jia-Fei Hong (\u6d2a\u5609\u99a1), National Taiwan University Shu-kai Hsieh (\u8b1d\u8212\u51f1), National Taiwan University Chun-Hsien Hsu (\u5f90\u5cfb\u8ce2), National Central University Yi-Chin Huang (\u9ec3\u5955\u6b3d), National Pingtung University Hen-Hsen Huang (\u9ec3\u701a\u8431), National Chengchi University Jeih-weih Hung (\u6d2a\u5fd7\u5049), National Chi Nan University Wen-Hsing Lai (\u8cf4\u739f\u674f), National Kaohsiung First university of Science and Technology Ying-Hui Lai (\u8cf4\u7a4e\u6689), National Yang Ming University Hong-Yi Lee (\u674e\u5b8f\u6bc5), National Taiwan University Lung-Hao Lee (\u674e\u9f8d\u8c6a), National Central University Yuan-Fu Liao (\u5ed6\u5143\u752b), National Taipei University of Technology Chuan-Jie Lin (\u6797\u5ddd\u5091), National Taiwan Ocean University Shu-Yen Lin (\u6797\u6dd1\u664f), National Taiwan Normal University Chao-Lin Liu (\u5289\u662d\u9e9f), National Chengchi University Yi-Fen Liu (\u5289\u6021\u82ac), Feng Chia University Shih-Hung Liu (\u5289\u58eb\u5f18), Delta Electronics, Inc. 
Wen-Hsiang Lu (\u76e7\u6587\u7965), National Cheng Kung University Ming-Hsiang Su (\u8607\u660e\u7965), Soochow University Richard Tzong-Han Tsai (\u8521\u5b97\u7ff0), National Central University Wei-Ho Tsai (\u8521\u5049\u548c), National Taipei University of Technology Yuen-Hsien Tseng (\u66fe\u5143\u986f), National Taiwan Normal University Jenq-Haur Wang (\u738b\u6b63\u8c6a), National Taipei University of Technology Syu-Siang Wang (\u738b\u7dd2\u7fd4), National Taiwan University Hsin-Min Wang (\u738b\u65b0\u6c11), Academia Sinica Jiun-Shiung Wu (\u5433\u4fca\u96c4), National Chung Cheng University Shih-Hung Wu (\u5433\u4e16\u5f18), Chaoyang University of Technology Jheng-Long Wu (\u5433\u653f\u9686), Soochow University Cheng-Zen Yang (\u694a\u6b63\u4ec1), Yuan Ze University Yi-Hsuan Yang (\u694a\u5955\u8ed2), Academia Sinica Jui-Feng Yeh (\u8449\u745e\u5cf0), National Chia-Yi University Liang-Chih Yu (\u79b9\u826f\u6cbb), Yuan Ze University",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "from the TAF, the 2007 ISS Best Paper Award from the IEICE, the 2009 Young Author Best Paper Award from the IEEE SPS, and the 2013 Best Paper Award (Speech Communication Journal) from EURASIP-ISCA. He also received the 10th Ericsson Young Scientist Award from Nippon Ericsson K.K., the 4th Itakura Prize Innovative Young Researcher Award from the ASJ, the 2012 Kiyasu Special Industrial Achievement Award from the IPSJ, and the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology, the Young Scientists' Prize in 2015. He served as a member of the Speech and Language Technical Committee of the IEEE SPS from 2007 to 2009 and 2014 to 2016. He has served as an Associate Editor of IEEE Signal Processing Letters since Nov. 2016. He is a member of IEEE, ISCA, IEICE, IPSJ, and ASJ.",
"num": null,
"uris": null
}
}
}
}