{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:06.681979Z"
},
"title": "Macsen: A Voice Assistant for Speakers of a Lesser Resourced Language",
"authors": [
{
"first": "Bryn",
"middle": [],
"last": "Dewi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Unit Bangor University",
"location": {
"country": "Wales"
}
},
"email": ""
},
{
"first": "",
"middle": [],
"last": "Jones",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Unit Bangor University",
"location": {
"country": "Wales"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper reports on the development of a voice assistant mobile app for speakers of a lesser resourced language-Welsh. An assistant with a smaller set of effective but useful skills is both desirable and urgent for the wider Welsh speaking community. Descriptions of the app's skills, architecture, design decisions and user interface is provided before elaborating on the most recent research and activities in open source speech technology for Welsh. The paper reports on the progress to date on crowdsourcing Welsh speech data in Mozilla Common Voice and of its suitability for training Mozilla's DeepSpeech speech recognition for a voice assistant application according to conventional and transfer learning methods. We demonstrate that with smaller datasets of speech data, transfer learning and a domain specific language model, acceptable speech recognition is achievable that facilitates, as confirmed by beta users, a practical and useful voice assistant for Welsh speakers. We hope that this work informs and serves as a model to researchers and developers in other lesserresourced linguistic communities and helps bring into being voice assistant apps for their languages.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper reports on the development of a voice assistant mobile app for speakers of a lesser resourced language-Welsh. An assistant with a smaller set of effective but useful skills is both desirable and urgent for the wider Welsh speaking community. Descriptions of the app's skills, architecture, design decisions and user interface is provided before elaborating on the most recent research and activities in open source speech technology for Welsh. The paper reports on the progress to date on crowdsourcing Welsh speech data in Mozilla Common Voice and of its suitability for training Mozilla's DeepSpeech speech recognition for a voice assistant application according to conventional and transfer learning methods. We demonstrate that with smaller datasets of speech data, transfer learning and a domain specific language model, acceptable speech recognition is achievable that facilitates, as confirmed by beta users, a practical and useful voice assistant for Welsh speakers. We hope that this work informs and serves as a model to researchers and developers in other lesserresourced linguistic communities and helps bring into being voice assistant apps for their languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Research and development of language technologies for lesser-spoken languages is characterised by not only a lack of data and human resources but also a lack of useful end user applications that address the technological needs and expectations of speakers as well as better mitigate linguistic digital extinction. In recent years the increasing popularity and capabilities of voice assistants such as Google Assistant, Alexa, Siri and Cortana for larger languages such as English have increased the urgency for language technologies and similar applications to serve speakers of other languages, in particular lesser resourced languages. (Evans, 2018) This paper reports on the ongoing work for developing a useful voice assistant for Welsh speakers. This work builds on a series of previous short term projects which had crowdsourced Welsh speech corpora (Cooper et. al., 2019) (1)) and developed a prototype voice assistant that could run on Raspberry Pis. Our initial prototype assistant was capable of responding to a very limited collection of questions regarding time, weather, news and a small set of article titles from the Welsh language Wikipedia. These 'skills' however were not fully implemented, and therefore not very useful due to the need to devote time and priorities towards speech recognition capabilities, to the neglect of other constituent components such as intent parsing, third party API service integration and generation of natural language answers. Despite attempts to make the Welsh voice assistant's implementation accessible 1 to an audience of developers not expert in language technologies, factors such as the complexity of speech recognition development kits, lack of simple documentation, hardware limitations and licensing restrictions collectively undermined 1 Our website at https://projectmacsen.github.io/ provided easy to follow information on how to setup and create your own (limited) Welsh voice assistant on Raspberry Pi equipment.",
"cite_spans": [
{
"start": 638,
"end": 651,
"text": "(Evans, 2018)",
"ref_id": "BIBREF1"
},
{
"start": 856,
"end": 878,
"text": "(Cooper et. al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 1797,
"end": 1798,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "stimulating wider development of voice based applications and services for the Welsh speaking community by other developers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Fortunately, progress in open source speech technology, machine learning and software development tools has accelerated in recent months and years to provide new opportunities for empowering lesser resourced language communities to develop their own improved voice assistants and other applications. (Jones, 2019) ",
"cite_spans": [
{
"start": 300,
"end": 313,
"text": "(Jones, 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "We have developed our own Welsh language voice assistant mobile application and have named it Macsen. Some existing open source digital assistants' projects and products, such as MyCroft 2 , support localization, allowing its software and supported skills to be translated into Welsh. A functionally complete and fully localized assistant is not feasible however if the underlying language technology components, in particular speech recognition and text-to-speech, are not yet as capable as English counterparts. MyCroft also requires specialised hardware, including Raspberry Pi, as a prerequisite for end users to use the software. We had observed from our previous projects that there was limited knowledge and usage of such equipment by the Welsh speaking community at large. Installations of any localized versions would thus be very limited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "Mobile devices however are very prevalent. Developing an assistant for such devices is a very obvious choice of platform for providing voice assistant functionality as easily and as wide as possible to the wider Welsh speaking community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "We designed the Macsen app with the following objectives in mind:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "-It must be able to run on Android and iOS phones (and possibly other platforms in the future) -It must provide complete and useful skills that users will want to use -Communication should be in as natural as possible language -Users should be able to easily ascertain what skills and questions the assistant supports -If the app is not able to recognise the user's speech, then it must \uf0a7 still be usable as a text-based (chatbot) assistant \uf0a7 provide opportunities for users to contribute to improve its abilities -It must be easy to add new skills with as few updates as possible -It must provide freedom to the user and respect privacy -The entire solution should be open source, be easy to integrate in part or as a whole into other solutions and permissively licensed. -It must be helpful to the research of language technologies for lesser-resourced languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "We have succeeded in developing the Macsen app for the two mobile platforms from more or less a single code base by using Flutter 3 by Google, an open source toolkit for building mobile, desktop and web applications. Platform specific code that access underlining OS services or integrate third-party SDK libraries is very limited but include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "geolocation detection for the weather skill creating scheduled notifications for the alarm skill -Spotify Android/iOS SDK integration 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "The source code can be found on GitHub 5 . The app is available on Apple AppStore 6 and on Google Play 7 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "The skills we anticipated being most useful and popular with end users were: tywydd (weather) -provides the latest weather forecast for today and tomorrow at device's geo-location. Conditions are retrieved from the OpenWeatherMap API.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "newyddion (news) -reads frontpage or category news headlines from RSS feeds provided by the Golwg360 Welsh language news website.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "amser (time) -ask the app what time it is. TimeZoneDB API was used so as to get the time for the app's geo coordinates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "larwm (alarm) -ask the app to ring an alarm at a particular time later in the next 24 hours.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "spotify -ask the app to play music by a particular Welsh musical artist on the device's Spotify app. Most popular artists were selected and whose names are challenging for English language assistants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "wicipedia -the assistant reads the first two sentences from the requested subject's Welsh language Wikipedia article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "For ease of implementation, components providing speech recognition, intent parsing, natural language generation and text-to-speech functionality are provided externally and are accessed by the app over the internet via specially crafted APIs. Separation of the key language technology components to hosted servers provides a more modular and flexible architecture. Such an architecture however might alarm users concerned with privacy. The app therefore provides reassurance with a page that states the privacy policy, explains what information the server uses and assures that no data is retained. Figure 1 shows the first screen presented to the user upon opening the app. Four tabs at the bottom provides the user with four ways to interact with the assistant. 'Siarad' (Speak), the first and primary tab, allows the user to speak to the app. The 'Teipio' (Typing) tab allows users to type in their command or question in case the user's speech has perhaps not been recognized. With the use of predictive text keyboards, frequently typed questions can be quickly learnt and gradually become less cumbersome. The 'Hyfforddiant' (Training) tab meanwhile allows the user to contribute recordings for aiding in improving the 6 Macsen on Apple AppStore : https://apps.apple.com/gb/app/macsen/id1489915663 7 Macsen on Google Play : https://play.google.com/store/apps/details?id=cymru.techi aith.flutter.macsen assistant's speech recognition. Finally the 'Help' tab provides the user with information on the skills and questions the app can respond to. The app also contains a burger menu on the top left, where further information can be found about Macsen, Privacy, server configuration and the Mozilla Common Voice project.",
"cite_spans": [
{
"start": 1225,
"end": 1226,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 600,
"end": 608,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "Similar to other mobile based assistants, such as Siri and Alexa, speech interaction with Macsen is initiated by pressing a microphone button to start listening and then again to stop listening. It does not rely on a wake word which aids ease of implementation and improves privacy. The first opening screen shown in Figure 1 explains the operation of the microphone button and also recommends some initial questions for users to try (e.g. \"What is the news?\"). First impressions are important and so these recommended questions are easier for the speech recognition to recognize and provide very useful and dynamic information in their responses. Figure 2 shows a screenshot of the app having responded to a more detailed question \"What's the news in Wales?\" This response is typical of a question and answering type skills (such as time, Wicipedia and weather). The recognised question is displayed first and above the text currently being uttered by the text-to-speech. A button may also be provided for each utterance that activates any hyperlinks to further related content. For example to the associated full news article or Wicipedia page. Displaying the uttered text is useful and probably necessary for the user to comprehend the speech produced from a simple MaryTTS based Welsh voice that may not yet be every time sufficiently naturally sounding and intelligible.",
"cite_spans": [],
"ref_spans": [
{
"start": 317,
"end": 325,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 648,
"end": 656,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "Further example usages are showcased in videos online 8 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "As mentioned, the assistant's language technology components are hosted externally to the app and are accessible via cloud based APIs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "Text to speech APIs from previous work in a project on providing online Welsh language text-to-speech services (Lleisiwr, 2019) was available and thus did not require further effort. Speech recognition required newly trained models and are described in subsequent sections.",
"cite_spans": [
{
"start": 111,
"end": 127,
"text": "(Lleisiwr, 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "Particular effort was required to implement intent parsing and the generation of natural language responses. We decided that utilising an existing library for intent parsing would limit development time and increase reusability in other projects and products. A number of chatbot platforms incorporate intent parsing, but we opted for the padatious 9 library developed by the MyCroft project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "Padatious is a very simple and fast neural network intent parser that is able to recognise the intent in a sentence and therefore identify the encompassing skill. Each intent is trained from example sentences generated from templates decorated with lists of associated entities. The entire collection of generated sentences 10 serves other purposes in the construction of our voice assistant, including as will be elaborated later in this paper, language modelling and transcripts for recording.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "For example, the intent to trigger playing music by certain artist can be trained by the following template sentences: All generated sentences can be obtained from the API at: https://api.techiaith.org/assistant/get_all_skills_intents_se ntences services do not provide their results in Welsh, therefore extra effort was required to translate data items. For example, our weather skill handler uses our localization of the weather conditions 11 such as 'windy', 'raining', 'sunny' returned from the OpenWeatherMap API. Figure 3 shows the app's 'Help' tab which consists of a collapsible tree like structure populated with the padatious intent parser with the skills and their generated sentences. This helps inform the user of Macsen's skills and of the sentences they can speak.",
"cite_spans": [],
"ref_spans": [
{
"start": 519,
"end": 527,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "Chwaraea",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "Finally, figure 4 shows the app's 'Hyfforddiant' (Training) tab which consists of a very simple interface for crowdsourcing recordings of Macsen sentences from users. A random sentence is selected from all the sentences generated by the intent parser and is presented for recording. The user uses the same microphone button operation to start and stop recording. Upon stopping the microphone, the recording is uploaded whilst the app presents the next random sentence. Before recording their first sentence, users have been presented with a disclaimer explaining the purpose of recording with assurances that no personal information is collected. Users have to confirm they consent to the sharing their recordings according to open and permissive licensing before recording can begin. Thus users are contributing their voices to an in-domain collection of speech data that, as can be seen in the next 11 Our translations for OpenWeatherMap API weather status can be found at: https://github.com/techiaith/macsen-section, serves as the test set for evaluating the voice assistant's speech recognition component.",
"cite_spans": [
{
"start": 901,
"end": 903,
"text": "11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Welsh Voice Assistant Application",
"sec_num": "2."
},
{
"text": "The data we used for training our assistant's speech recognition engine was sourced from a number of open speech and textual resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Analysis",
"sec_num": "3."
},
{
"text": "The primary speech data resource was the Welsh language data from Mozilla's Common Voice multilingual speech corpus (Ardila et. al., 2019) . Having made previous attempts at crowdsourcing speech corpora (Cooper S. et. al., 2019) , the Welsh community, consisting of our university research unit 12 and members of Welsh open source community 13 were ideally placed to enact Welsh as one of the first languages in the multilingual expansion of Common Voice in June 2018. (Henretty M., 2018). Table 4 shows the other three datasets. These datasets are recommended by Mozilla as the most suitable as training, development and training sets for creating Mozilla DeepSpeech models. They are very small in comparison to the entire Welsh data size, and have not increased much during 2019. Mozilla's explanation and justification 14 is that DeepSpeech models may exhibit bias towards recognizing sentences that have been recorded multiple times and therefore would not be as optimal as a general purpose speech recognition engine. Figure 5 shows that the Welsh data in Common Voice contains just over 2000 sentence and therefore a significant proportion have been recorded many tens of times (one sentence having been recorded 69 times) by multiple speakers. During releases CV1, CV2 and CV3 all sentences were being recorded 10, then 20, then 30 times on average. The number of sentences available for recording increased by version CV4, thanks to efforts that began in August 14 \"Why train.tsv includes few files (just 3% of validated set)?\": https://discourse.mozilla.org/t/why-train-tsvincludes-a-few-files-just-3-of-validated-set/36471 2019 to add significant amounts of new sentences via the Mozilla Sentence Collector website 15 . Sentences were sourced from various public domain collections (Gutenburg, OCRed out of copyright books), donations of portions of copyrighted works and other corpora resources at the disposal of our research unit (Prys et. al., 2016) . 
Since each sentence has to follow in the Mozilla Sentence Collector website a review process to confirm its suitability for reading and training speech recognition models, progress on adding to Common Voice has been slow. We aim to continue our co-ordinated efforts to submit and review sentences at a rate that is at least up with or ahead of the rate of new recordings.",
"cite_spans": [
{
"start": 116,
"end": 138,
"text": "(Ardila et. al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 203,
"end": 228,
"text": "(Cooper S. et. al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 1943,
"end": 1963,
"text": "(Prys et. al., 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 490,
"end": 497,
"text": "Table 4",
"ref_id": null
},
{
"start": 1023,
"end": 1031,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Data Collection and Analysis",
"sec_num": "3."
},
{
"text": "In the meantime, despite containing repeat recordings, the number of hours provided in Common Voice's validated (and 'other') datasets represent a high number of hours of speech data for a lesser resourced language such as Welsh, which cannot be dismissed and must be evaluated for training a speech recognition engine for a simple voice assistant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Analysis",
"sec_num": "3."
},
{
"text": "In our speech recognition experiments we formed the following training sets: additional speech dataset 16 which to date contains 433 recordings, totalling approximately 20 minutes, from 19 beta testers. Each recording has been validated internally and can be used as an in-domain test set. In the presentation of experiment results in section 4, this dataset is denoted by the label 'MACSEN'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Analysis",
"sec_num": "3."
},
{
"text": "SET_1: Mozilla'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Analysis",
"sec_num": "3."
},
{
"text": "The entire collection of sentences generated for training intent parser training also serves as a text corpus for training a domain specific language model. Consisting of 1033 sentences with 570 unique tokens/words, the language model is denoted in the experiment results in section 4 with the label LM_MACSEN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Analysis",
"sec_num": "3."
},
{
"text": "We have also trained and used in our experiments a general purpose language model using Welsh texts sourced from the OSCAR multilingual corpus (Su\u00e1rez et. al., 2019) , which is derived from the CommonCrawl 17 corpus. Segments containing any illegal characters such as numbers were excluded and so reduced the training corpus word count from 37 million to 11 million. In the results in section 4, the usage of the OSCAR based language model is denoted by the label LM_OSCAR.",
"cite_spans": [
{
"start": 143,
"end": 165,
"text": "(Su\u00e1rez et. al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Analysis",
"sec_num": "3."
},
{
"text": "We decided to evaluate and use Mozilla DeepSpeech (Mozilla) for our assistant's speech recognition engine. Other speech recognition kits, such as Kaldi (Povey, 2011) , may also perform sufficiently. Mozilla DeepSpeech however has the advantage of being much easier to use, is very developer friendly and easy to integrate into projects implemented in a wide range of programming languages and technologies.",
"cite_spans": [
{
"start": 152,
"end": 165,
"text": "(Povey, 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Recognition Effectiveness",
"sec_num": "4."
},
{
"text": "Mozilla DeepSpeech is based on Tensorflow and is end-toend neural network architecture which maps audio features directly to characters in words. While the general underlying architecture of DeepSpeech is language independent, the alphabet of possible output characters may be language specific, as is the case with Welsh. The following shows the alphabet used in our experiments: <space>,a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,r,s,t,u,v,w,y,z,\u00e1,\u00e2,\u00e4,\u00e9, \u00ea,\u00eb,\u00ee,\u00ef,\u00f4,\u00f6,\u00f4,\u00fb,\u0175,\u0177,' In our experiments we have evaluated the effectiveness of speech recognition for our assistant with two versions of DeepSpeech, each supporting two approaches of machine learning. First the conventional learning approach with the latest release of DeepSpeech (0.6.1 18 ). Secondly, a branched work from DeepSpeech 0.5.1 that implements transfer learning 19 .",
"cite_spans": [
{
"start": 381,
"end": 467,
"text": "<space>,a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,r,s,t,u,v,w,y,z,\u00e1,\u00e2,\u00e4,\u00e9, \u00ea,\u00eb,\u00ee,\u00ef,\u00f4,\u00f6,\u00f4,\u00fb,\u0175,\u0177,'",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Recognition Effectiveness",
"sec_num": "4."
},
{
"text": "Transfer learning provides a mechanism to exploit models trained on much larger collections of data from a larger language in the training of a new model for new and lesser resourced language. Typically, the bottom layers of a model trained with English language data are kept while a 16 Macsen test set data is available at http://techiaith.cymru/deepspeech/macsen/datasets/ 17 https://commoncrawl.org 18 Mozilla DeepSpeech 0.6.1 release : https://github.com/mozilla/DeepSpeech/releases/tag/v0.6. number of top layers are replaced by training with data from a lesser resourced language such as Welsh. Through initial trials and experimentation we found that the optimal number of top layers to drop (our value for the --drop_source_layers flag) was 2. The English model provided by Mozilla for DeepSpeech 0.5.1 was used as the source model. 19 Mozilla DeepSpeech transfer learning forked branch : https://github.com/mozilla/DeepSpeech/tree/transfer-learning2",
"cite_spans": [
{
"start": 285,
"end": 287,
"text": "16",
"ref_id": null
},
{
"start": 403,
"end": 405,
"text": "18",
"ref_id": null
},
{
"start": 842,
"end": 844,
"text": "19",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Recognition Effectiveness",
"sec_num": "4."
},
{
"text": "Default values for flags affecting learning rate and dimensions were used in all experiments. The results are shown in Table 6 . After initial experimenting with Mozilla's recommended train, dev and train (first three rows in Table 6 ) we observed that 10 epochs would suffice for experiments where a development set was not possible or appropriate.",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table 6",
"ref_id": null
},
{
"start": 226,
"end": 233,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "ID",
"sec_num": null
},
{
"text": "Our training setup consisted of a single workstation containing two NVIDIA GTX 1080Ti graphics cards operated by our own crafted scripts 20 that made full use of Dockerfiles provided in each Mozilla DeepSpeech release.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ID",
"sec_num": null
},
{
"text": "Training times ranged from a couple of minutes for SET_1 based experiments to approximately an hour for SET_3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ID",
"sec_num": null
},
{
"text": "In Table 6 , the k-fold cross validation (k=10) evaluation method (shown in bold), a commonly used resampling procedure to evaluate with a limited data set, was used as a means for reliably confirming the optimal model training configuration for each approach. Thus, the best WER scores are achieved by utilising as much speech data as possible (even if not yet validated by Common Voice volunteers) with transfer learning machine learning approach and a domain specific language model.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "ID",
"sec_num": null
},
{
"text": "We learn from contrasting experiment N1 (WER 100%) with its corresponding transfer learning experiment -TL7 (WER 26.01%) in table 6 that transfer learning with only one hour of training speech data (CV4_SET_1) can provide an immediate and drastic reduction in WER. When we utilise all of Mozilla's recommended data sets of nonrepeated recordings (CV4_SET_4, approximately 3 hours) and contrast results between experiments N5 and TL11, we observe there is no significant reduction in WER from the transfer learning method -0.26% -compared to a 3.63% reduction achieved in the WER for conventional machine learning approach. Additional hours of training data brings down the conventional learning method's WER to a best score of 25.21% in N3. The same amount of data however brings the transfer learning method in TL9 to its lowest WER at 15.97%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ID",
"sec_num": null
},
{
"text": "We can also observe also from these results that a domain specific language model aids significantly in reducing the WER. Experiment TL6 shows that running a DeepSpeech decoder, trained with transfer learning and with an OSCAR based language model only achieves a WER of 41.83% for the Macsen voice assistant domain. When using instead a domain specific language model, as in TL9, a WER of 15.97% is achieved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ID",
"sec_num": null
},
{
"text": "These results reinforce that for a lesser resourced language, transfer learning and domain specific language models provide the best and only feasible means at present to achieve effective speech recognition capability for voice assistants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ID",
"sec_num": null
},
{
"text": "This paper has reported on a project to develop a Welsh language voice assistant app for Apple iOS and Android mobile devices as well present how Mozilla's Common 20 Our scripts and information for reproducing our experiments can be found on GitHub at https://github.com/techiaith/docker-deepspeech-cy Voice and DeepSpeech projects were exploited to provide an effective speech recognition engine component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "Our speech recognition engine still has a WER above 10 whereas a score of below 10 -as is regularly reported for engines for larger languages -may be considered as a prerequisite. We believe however that our speech recognition engine is sufficiently practical and effective in a voice assistant application setting. As the output from speech recognition in a voice assistant provides the input to the intent parser, the intent parser's tolerance and flexibility on sentence variations may mitigate any issues arising from less than perfect recognition results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "Initial user feedback has informed us that the Macsen voice assistant app is able to recognize and respond to nearly all of their questions and commands. The Spotify skill is particularly popular. Users also commented that they routinely ask for and obtain the latest news and weather from Macsen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "We aim to continue analysing and improving the Welsh data in Mozilla Common Voice in collaboration with Mozilla and the Welsh open source volunteer community. Analysis in this paper has highlighted the scale of repeated recordings and so an immediate task for further training of Welsh speech recognition is to understand and mitigate any bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "We also aim to continue working on Macsen by adding more skills and making it more useful as a resource to the developer community. A particular ambition is for Macsen to be useful for researchers and developers in other lesser resourced language communities to bootstrap voice assistants for their languages. In this regard, the app's user interface is easily localizable with Flutter's i18n support. Crucially however, this evaluation of Mozilla's DeepSpeech suggests that, even with as little as an hour of speech data from the Mozilla Common Voice project, utilising a transfer learning approach to training, along with possibly a reduced number of skills, and therefore a smaller language model and possibly a smaller alphabet, an effective and reliable speech recognition component for a voice assistant is achievable and can enable the more immediate availability of voice assistant apps for other lesser resourced languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "The work on developing the Macsen app was made possible with financial support from the Welsh government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "MyCroft -an open source voice assistant: https://mycroft.ai",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Flutter: https://flutter.dev/ 4 Spotify for Developers: https://developer.spotify.com 5 Source code for the app can be found on GitHub at: https://github.com/techiaith/macsen-flutter",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://vimeo.com/showcase/6772051 9 https://github.com/MycroftAI/padatious",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.meddal.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank all the volunteers who have recorded and contributed their voices to the Welsh Common Voice effort. We in particular thank Rhoslyn Prys who has given a great deal of his time and energy in a volunteer capacity to initially translate the Common Voice website to Welsh and subsequently to promoting the project relentlessly. Recruitment campaigns have been supported by the Welsh Government, Welsh Language Commissioner, large public organisations in Wales as well as coverage by local media.We thank also Mozilla for the opportunities it has provided via its Common Voice and DeepSpeech projects, as open and decentralized speech technologies, to empower lesser",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "9."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Crowdsourcing the Paldaruo Speech Corpus of Welsh for Speech Technology",
"authors": [
{
"first": "S",
"middle": [],
"last": "Cooper",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Jones",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Prys",
"suffix": ""
}
],
"year": 2019,
"venue": "Information",
"volume": "10",
"issue": "8",
"pages": "",
"other_ids": {
"DOI": [
"10.3390/info10080247"
]
},
"num": null,
"urls": [],
"raw_text": "Cooper,S. Jones, D.B. and Prys, D. (2019) Crowdsourcing the Paldaruo Speech Corpus of Welsh for Speech Technology. Information, 10(8), p.247. Available at: http://dx.doi.org/10.3390/info10080247",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Report on language equality in the digital age",
"authors": [
{
"first": "J",
"middle": [],
"last": "Evans",
"suffix": ""
}
],
"year": 2018,
"venue": "European Parliament Report",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evans, J. (2018) Report on language equality in the digital age. European Parliament Report 2018/2028 INI. Available at http://www.europarl.europa.eu/doceo/document/A-8- 2018-0228_EN.pdf Henretty, M. (2018) More Common Voices. https://medium.com/mozilla-open-innovation/more- common-voices-24a80c879944 [Accessed Feb 5, 2020]",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Voice recognition project offers big opportunity to Welsh language",
"authors": [
{
"first": "A",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jones, A. (2019) Voice recognition project offers big opportunity to Welsh language.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Building Intelligent Digitial Assistants for Speakers of a Lesser-Resourced Language",
"authors": [
{
"first": "D",
"middle": [
"B"
],
"last": "Jones",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Cooper",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of LREC 2016 Workshop \"CCURL 2016 -Towards an Alliance for Digital Language Diversity",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jones,D.B. Cooper,S. (2016) Building Intelligent Digitial Assistants for Speakers of a Lesser-Resourced Language. Proceedings of LREC 2016 Workshop \"CCURL 2016 -Towards an Alliance for Digital Language Diversity\" p.74. Claudia Soria et. al. (eds). Available at: http://www.lrec- conf.org/proceedings/lrec2016/workshops/LREC2016 Workshop-CCURL2016_Proceedings.pdf",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Synthetic voices for patients that are about to loose their ability to speak as a result of diseases such as Motor Neurone Disease or throat cancer",
"authors": [
{
"first": "",
"middle": [],
"last": "Lleisiwr",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lleisiwr (2019) Synthetic voices for patients that are about to loose their ability to speak as a result of diseases such as Motor Neurone Disease or throat cancer. Available at: https://lleisiwr.techiaith.cymru/?lang=en [Accessed: Feb 5, 2020]",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A TensorFlow implementation of Baidu's DeepSpeech architecture",
"authors": [
{
"first": "",
"middle": [],
"last": "Mozilla",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mozilla (n.d.). A TensorFlow implementation of Baidu's DeepSpeech architecture. Available at https://github.com/mozilla/DeepSpeech [Accessed: Feb 5, 2020].",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The Kaldi speech recognition toolkit",
"authors": [
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Glem",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Silovsk\u00fd",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Stemmer",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Vesel\u00fd",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glem, Qian,Y., Schwarz, P., Silovsk\u00fd, J., Stemmer, G., and Vesel\u00fd,K. (2011). The Kaldi speech recognition toolkit. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). Waikoloa, HI, USA",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Cysill Ar-lein Corpus: A corpus of written contemporary Welsh compiled from an online spelling and grammar checker",
"authors": [
{
"first": "D",
"middle": [],
"last": "Prys",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Prys",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Jones",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prys, D., Prys, G., Jones, D.B. (2016). Cysill Ar-lein Corpus: A corpus of written contemporary Welsh compiled from an online spelling and grammar checker. Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Gathering Data for Speech Technology in the Welsh Language: A Case Study",
"authors": [
{
"first": "D",
"middle": [],
"last": "Prys",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Jones",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the LREC 2018 Workshop \"CCURL 2018 -Sustaining Knowledge Diversity in the Digital Age",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prys,D. Jones, D.B. (2018(1)). Gathering Data for Speech Technology in the Welsh Language: A Case Study. Proceedings of the LREC 2018 Workshop \"CCURL 2018 -Sustaining Knowledge Diversity in the Digital Age\", p.56. Claudia Soria, Laurent Besacier and Laurette Pretorius (eds.). Available at: http://lrec- conf.org/workshops/lrec2018/W26/pdf/book_of_proce edings.pdf",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "National Language Technologies Portals for LRLs: A Case Study",
"authors": [
{
"first": "D",
"middle": [],
"last": "Prys",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Jones",
"suffix": ""
}
],
"year": 2018,
"venue": "Human Language Technology. Challenges for Computer Science and Linguistics. LTC 2015",
"volume": "10930",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prys, D., Jones D.B. (2018(2)) National Language Technologies Portals for LRLs: A Case Study. In: Vetulani Z., Mariani J., Kubis M. (eds) Human Language Technology. Challenges for Computer Science and Linguistics. LTC 2015. Lecture Notes in Computer Science, vol 10930. Springer, Cham 7. Language Resource References",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Common Voice: A Massively-Multilingual Speech Corpus",
"authors": [
{
"first": "R",
"middle": [],
"last": "Ardila",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Branson",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Henretty",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kohler",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Meyer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Morais",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Saunders",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.06670v1"
]
},
"num": null,
"urls": [],
"raw_text": "Ardila R., Branson M., Davis K., Henretty M., Kohler M., Meyer J., Morais R., Saunders L., Tyers F., Weber G. (2019) Common Voice: A Massively-Multilingual Speech Corpus. arXiv:1912.06670v1 [cs.CL]",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Asynchronous Pipeline for Processing Huge Corpora on Medium to Low Resource Infrastructures. 7 th Workshop on the Challenges in the Management of Large Corpora (CMLC-7)",
"authors": [
{
"first": "P",
"middle": [
"J O"
],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Romary",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su\u00e1rez, P.J.O., Sagot,B., Romary, L. (2019) Asynchronous Pipeline for Processing Huge Corpora on Medium to Low Resource Infrastructures. 7 th Workshop on the Challenges in the Management of Large Corpora (CMLC-7), Jul 2019, Cardiff, United Kingdom. (hal-02148693)",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Screenshot of Macsen upon opening"
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Example result from asking : \"What's the news in Wales?\""
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Repeat recordings in each Welsh Common Voice release (validated recordings) (x -no. of sentences, y -no of recordings)"
},
"TABREF1": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>In comparison to previous efforts at crowdsourcing Welsh</td></tr><tr><td>speech data, and relative to some larger languages in</td></tr><tr><td>Common Voice, 77 hours of speech data for Welsh is a</td></tr><tr><td>significant achievement and is evidence of the hard work</td></tr><tr><td>done by Welsh open source volunteers in successfully</td></tr><tr><td>promoting and attracting the wider Welsh language</td></tr><tr><td>sgwrsfot/blob/master/online-</td></tr><tr><td>api/assistant/skills/tywydd/owm/status.cy</td></tr></table>",
"text": "shows how Welsh has progressed through each Common Voice release since June 2018. Simple interface for crowdsourcing recordings of Macsen domain sentences from users. \"Ask Welsh Wikipedia what is humanism?\" speaking community to contribute their voices to the Mozilla Common Voice project.",
"html": null
}
}
}
}