{
"paper_id": "2019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:55:10.154779Z"
},
"title": "Building of children speech corpus for improving automatic subtitling services",
"authors": [
{
"first": "Matus",
"middle": [],
"last": "Pleva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Communications Technical University of Kosice",
"location": {
"country": "Slovakia"
}
},
"email": "matus.pleva@tuke.sk"
},
{
"first": "Stanislav",
"middle": [],
"last": "Ondas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Communications Technical University of Kosice",
"location": {
"country": "Slovakia"
}
},
"email": "stanislav.ondas@tuke.sk"
},
{
"first": "Daniel",
"middle": [],
"last": "Hl\u00e1dek",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Communications Technical University of Kosice",
"location": {
"country": "Slovakia"
}
},
"email": "daniel.hladek@tuke.sk"
},
{
"first": "Jozef",
"middle": [],
"last": "Juhar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Communications Technical University of Kosice",
"location": {
"country": "Slovakia"
}
},
"email": "jozef.juhar@tuke.sk"
},
{
"first": "J\u00e1n",
"middle": [],
"last": "Sta\u0161",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Communications Technical University of Kosice",
"location": {
"country": "Slovakia"
}
},
"email": "jan.stas@tuke.sk"
},
{
"first": "Yuan-Fu",
"middle": [],
"last": "Liao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taipei University of Technology",
"location": {}
},
"email": "yfliao@mail.ntut.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the development and first evaluation of a new Slovak children's speech audio corpus for improving the automatic broadcast news subtitling engine developed at the Technical University of Kosice in cooperation with the Slovak Academy of Sciences. Current automatic speech recognition (ASR) systems are reliable for clean, prepared adult speech without very long pauses within sentences. Recognition of children's speech remains a challenge for several reasons: children use more slang and diminutive words, their pronunciation is not fully developed, their vocal tract is shorter (yielding different speech parameters), and their sentence syntax differs. The paper presents the results of automatic recognition of children's speech with the system built for broadcast news transcription.",
"pdf_parse": {
"paper_id": "2019",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the development and first evaluation of a new Slovak children's speech audio corpus for improving the automatic broadcast news subtitling engine developed at the Technical University of Kosice in cooperation with the Slovak Academy of Sciences. Current automatic speech recognition (ASR) systems are reliable for clean, prepared adult speech without very long pauses within sentences. Recognition of children's speech remains a challenge for several reasons: children use more slang and diminutive words, their pronunciation is not fully developed, their vocal tract is shorter (yielding different speech parameters), and their sentence syntax differs. The paper presents the results of automatic recognition of children's speech with the system built for broadcast news transcription.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Speech technology has significant potential and currently attracts growing interest among children and technically enthusiastic people [1] . A special session called Spoken Language Processing for Children's Speech will be held at the prestigious Interspeech conference in Graz in September 2019 [3] .",
"cite_spans": [
{
"start": 134,
"end": 137,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 283,
"end": 286,
"text": "[3]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The development of children's speech corpora for different languages is in progress: [4] (British English, German, and Swedish), [5] (non-native English), [6] (Chinese), [7] (Cantonese), [8] (Jamaican English), [9] (interactive emotional children's speech), and many others. In the European Union, small European languages are also essential for electronic communication, so we decided to start building a Slovak children's speech corpus to improve the Slovak automatic speech recognition engines already built [10, 11] . Of course, the speech parameters differ for children's speech because of different vocal tract sizes [12] , and there are many algorithms (e.g., Vocal-Tract Length Normalization, VTLN) to handle this [13] . For children's speech, the formant frequencies are higher, the speech rate is slower or faster than in adult speech, and the language contains more home slang, garbled, and imaginary words.",
"cite_spans": [
{
"start": 84,
"end": 87,
"text": "[4]",
"ref_id": "BIBREF2"
},
{
"start": 127,
"end": 130,
"text": "[5]",
"ref_id": "BIBREF3"
},
{
"start": 153,
"end": 156,
"text": "[6]",
"ref_id": "BIBREF4"
},
{
"start": 166,
"end": 169,
"text": "[7]",
"ref_id": "BIBREF5"
},
{
"start": 181,
"end": 184,
"text": "[8]",
"ref_id": "BIBREF6"
},
{
"start": 203,
"end": 206,
"text": "[9]",
"ref_id": "BIBREF7"
},
{
"start": 505,
"end": 509,
"text": "[10,",
"ref_id": "BIBREF8"
},
{
"start": 510,
"end": 513,
"text": "11]",
"ref_id": "BIBREF9"
},
{
"start": 618,
"end": 622,
"text": "[12]",
"ref_id": "BIBREF10"
},
{
"start": 712,
"end": 716,
"text": "[13]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The Slovak language belongs to the group of Slavic languages, which are characterized by inflection and free word order; it is therefore morphologically rich and uses a very large vocabulary [10, 14] . These features make Slovak automatic speech recognition very complicated, and a large amount of data is required for automatic large-vocabulary spontaneous speech recognition [14] .",
"cite_spans": [
{
"start": 185,
"end": 189,
"text": "[10,",
"ref_id": "BIBREF8"
},
{
"start": 190,
"end": 193,
"text": "14]",
"ref_id": "BIBREF12"
},
{
"start": 380,
"end": 384,
"text": "[14]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "This article describes the first step, the collection of the first data, manual annotation, and testing of the current ASR system with children and adult speech recordings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "For children's speech, there are very few freely available recordings on the Internet, especially in a form suitable for training the acoustic model of a speech recognition system. We decided to use recordings of TV series, which brought several problems. The vast majority of segments are tinged with music, which would not matter if we were only trying to build a model that recognizes where the sound begins and ends.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building the database",
"sec_num": "2."
},
{
"text": "However, when it comes to recognizing children's speech, it can cause distortions that will be undesirable for our purpose, and our results will be affected to some extent [15] .",
"cite_spans": [
{
"start": 172,
"end": 176,
"text": "[15]",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Building the database",
"sec_num": "2."
},
{
"text": "The main problem with building a database suitable for acoustic model training is the resources needed for quality data annotation. The task is very time-consuming; this, together with the lack of publicly available data, is why our database is of a more modest size [15] .",
"cite_spans": [
{
"start": 270,
"end": 274,
"text": "[15]",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Building the database",
"sec_num": "2."
},
{
"text": "The database of children's recordings is made up of segments of children's speech from TV series of the commercial Slovak broadcasters Mark\u00edza and JoJ. Specifically, they are the series Daddy (Oteckovia), broadcast on Mark\u00edza since early 2018, and Holidays (Pr\u00e1zdniny) from JoJ, whose first episode aired on January 18, 2017.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building the database",
"sec_num": "2."
},
{
"text": "The recordings were downloaded from premium archives of the TV broadcasters in Full HD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building the database",
"sec_num": "2."
},
{
"text": "We cut the utterances with children's speech out of the .MP4 recordings and merged the parts without background music. The audio codec used in the original files was AAC LC (Advanced Audio Coding, Low Complexity profile) at 48 kHz, 257.05 kbps, stereo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building the database",
"sec_num": "2."
},
{
"text": "Then the WAV file was exported in 48 kHz stereo PCM format and annotated with the Transcriber [16] application (Figure 1.) . The collected database statistics are summarized in Table 1 . ",
"cite_spans": [
{
"start": 93,
"end": 97,
"text": "[16]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 110,
"end": 121,
"text": "(Figure 1.)",
"ref_id": "FIGREF0"
},
{
"start": 178,
"end": 185,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Building the database",
"sec_num": "2."
},
{
"text": "In our database, we have annotated the age and the real names and surnames of the publicly known child actors, so that we can see how the system performs with different ages of children (Figure 1. ). The gender, dialect, and mother tongue (native or non-native speaker) were annotated for each speaker.",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 188,
"text": "(Figure 1.",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Transcription process",
"sec_num": "3."
},
{
"text": "The mode field was set for each speaker turn. We use the spontaneous option to indicate spontaneous, unprepared speech or conversation; mainly spontaneous speech was annotated for the children, while planned speech is commonly used by studio moderators and sports news anchors. We follow the rules of standard broadcast news transcription [14] for fidelity and channel quality. Similarly, annotations mark background noise and intermittent noise (see Figure 2. ).",
"cite_spans": [
{
"start": 350,
"end": 354,
"text": "[14]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 474,
"end": 483,
"text": "Figure 2.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transcription process",
"sec_num": "3."
},
{
"text": "Annotations of speaker turns follow the rule that one speaker turn should be no longer than 5 seconds. Capital letters at the beginning of sentences were not used, to make named entity recognition easier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transcription process",
"sec_num": "3."
},
{
"text": "The transcription process was performed manually by a bachelor student and verified by his advisor. The plan is to extend the database following this process next year, using more student annotators and expert verification. The current engine [11] for broadcast news transcription [17] was also made available online for public testing and evaluation [18] , as seen in Figure 3 . It achieves 14.6% WER (Word Error Rate, the number of errors divided by the number of words in the ground truth data) for broadcast news transcription, where the variety of speakers and speech styles is wide [17] . For comparison, the dictation engine achieves 3.93% WER for prepared Slovak dictation [10] .",
"cite_spans": [
{
"start": 233,
"end": 237,
"text": "[11]",
"ref_id": "BIBREF9"
},
{
"start": 271,
"end": 275,
"text": "[17]",
"ref_id": "BIBREF15"
},
{
"start": 349,
"end": 353,
"text": "[18]",
"ref_id": "BIBREF17"
},
{
"start": 600,
"end": 604,
"text": "[17]",
"ref_id": "BIBREF15"
},
{
"start": 698,
"end": 702,
"text": "[10]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 365,
"end": 373,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Transcription process",
"sec_num": "3."
},
{
"text": "After uploading the presented Slovak children's speech database and evaluating the results, we achieved only 47.81% WER, mainly because of a 9.18% OOV (out-of-vocabulary) rate and very spontaneous speech segments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transcription process",
"sec_num": "3."
},
{
"text": "Resources of children's speech are scarce, even for major languages [7] . The presented database of children's speech is the first one for the Slovak language and provides essential experience with the acoustic and, mainly, linguistic features of Slovak children's speech. The development of adapted acoustic and language models for Slovak automatic children's speech recognition is in progress. There are several goals ahead, but mainly the extension of the presented dataset is planned for next year, using more undergraduate students and expert verification of the transcriptions. The goal is to present a special version of the SARRA models [17, 18] for children's speech and an evaluation by real users, also for dictation and human-robot interaction purposes [19] , based on the running international collaboration and projects.",
"cite_spans": [
{
"start": 70,
"end": 73,
"text": "[7]",
"ref_id": "BIBREF5"
},
{
"start": 641,
"end": 645,
"text": "[17,",
"ref_id": "BIBREF15"
},
{
"start": 646,
"end": 649,
"text": "18]",
"ref_id": "BIBREF17"
},
{
"start": 755,
"end": 759,
"text": "[19]",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A review of ASR technologies for children's speech",
"authors": [
{
"first": "M",
"middle": [],
"last": "Gerosa",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Giuliani",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Narayanan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Potamianos",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2nd Workshop on Child",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerosa, M., Giuliani, D., Narayanan, S., & Potamianos, A.: A review of ASR technologies for children's speech. In Proceedings of the 2nd Workshop on Child, Computer and Interaction, ACM, p. 7, 2009.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Spoken Language Processing for Children's Speech, Interspeech 2019 Special session proposal",
"authors": [],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Spoken Language Processing for Children's Speech, Interspeech 2019 Special session proposal. [Online]. Available: https://sites.google.com/view/wocci/home/interspeech-2019-special-session. [Accessed: July 30, 2019].",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The PF_STAR children's speech corpus",
"authors": [
{
"first": "A",
"middle": [],
"last": "Batliner",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blomberg",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "D'arcy",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Elenius",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Giuliani",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gerosa",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Hacker",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Russell",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Steidl",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2005,
"venue": "Ninth European Conference on Speech Communication and Technology -INTERSPEECH 2005",
"volume": "",
"issue": "",
"pages": "3761--3764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Batliner, A., Blomberg, M., D'Arcy, S., Elenius, D., Giuliani, D., Gerosa, M., Hacker, C., Russell, M., Steidl, S., & Wong, M.: The PF_STAR children's speech corpus. In Ninth European Conference on Speech Communication and Technology -INTERSPEECH 2005. pp. 3761-3764, 2005.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "TBALL data collection: the making of a young children's speech corpus",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kazemzadeh",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Iseli",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Heritage",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Price",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Narayanan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Alwan",
"suffix": ""
}
],
"year": 2005,
"venue": "Ninth European Conference on Speech Communication and Technology -INTERSPEECH 2005",
"volume": "",
"issue": "",
"pages": "1581--1584",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazemzadeh, A., You, H., Iseli, M., Jones, B., Cui, X., Heritage, M., Price, P., Anderson, E., Narayanan, S., & Alwan, A.: TBALL data collection: the making of a young children's speech corpus. In Ninth European Conference on Speech Communication and Technology -INTERSPEECH 2005. pp. 1581-1584, 2005.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A multimedia corpus of child Mandarin: The Tong corpus",
"authors": [
{
"first": "D",
"middle": [],
"last": "Xiangjun",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Yip",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Chinese Linguistics",
"volume": "46",
"issue": "1",
"pages": "69--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangjun, D., & Yip, V.: A multimedia corpus of child Mandarin: The Tong corpus. Journal of Chinese Linguistics, 46(1), pp. 69-92, 2018.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A study on acoustic modeling for child speech based on multi-task learning",
"authors": [
{
"first": "J",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "S",
"middle": [
"I"
],
"last": "Ng",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "W",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 11th International Symposium on Chinese Spoken Language Processing (ISCSLP). Taipei, IEEE",
"volume": "",
"issue": "",
"pages": "389--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, J., Ng, S. I., Tao, D., Ng, W. Y., & Lee, T.: A study on acoustic modeling for child speech based on multi-task learning. In 2018 11th International Symposium on Chinese Spoken Language Processing (ISCSLP). Taipei, IEEE, pp. 389-393, 2018.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "JAMLIT: A Corpus of Jamaican Standard English for Automatic Speech Recognition of Children's Speech",
"authors": [
{
"first": "S",
"middle": [],
"last": "Watson",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Coy",
"suffix": ""
}
],
"year": 2018,
"venue": "SLTU",
"volume": "",
"issue": "",
"pages": "243--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Watson, S., & Coy, A.: JAMLIT: A Corpus of Jamaican Standard English for Automatic Speech Recognition of Children's Speech. In SLTU. pp. 243-247, 2018.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "IESC-Child: An Interactive Emotional Children's Speech Corpus",
"authors": [
{
"first": "H",
"middle": [],
"last": "P\u00e9rez-Espinosa",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mart\u00ednez-Miranda",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Espinosa-Curiel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Rodr\u00edguez-Jacobo",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Villase\u00f1or-Pineda",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Avila-George",
"suffix": ""
}
],
"year": 2020,
"venue": "Computer Speech & Language",
"volume": "59",
"issue": "",
"pages": "55--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P\u00e9rez-Espinosa, H., Mart\u00ednez-Miranda, J., Espinosa-Curiel, I., Rodr\u00edguez-Jacobo, J., Villase\u00f1or-Pineda, L., & Avila-George, H.: IESC-Child: An Interactive Emotional Children's Speech Corpus. Computer Speech & Language, 59, pp. 55-74, 2020.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Human Language Technology. Challenges for Computer Science and Linguistics. LTC 2013 -Revised selected papers",
"authors": [
{
"first": "M",
"middle": [],
"last": "Rusko",
"suffix": ""
}
],
"year": 2016,
"venue": "Lecture Notes in Computer Science",
"volume": "9561",
"issue": "",
"pages": "55--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rusko, M. et al.: Advances in the Slovak Judicial Domain Dictation System. In: Vetulani Z., Uszkoreit H., Kubis M. (eds) Human Language Technology. Challenges for Computer Science and Linguistics. LTC 2013 -Revised selected papers. Lecture Notes in Computer Science, vol 9561. Springer, Cham, pp. 55-67, 2016.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic subtitling system for transcription, archiving and indexing of Slovak audiovisual recordings",
"authors": [
{
"first": "J",
"middle": [],
"last": "Sta\u0161",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 7th Language & Technology Conference, LTC 2015",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sta\u0161, J. et al.: Automatic subtitling system for transcription, archiving and indexing of Slovak audiovisual recordings. In Proceedings of the 7th Language & Technology Conference, LTC 2015. pp. 186-191, 2015.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Acoustics of children's speech: Developmental changes of temporal and spectral parameters",
"authors": [
{
"first": "S",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Potamianos",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 1999,
"venue": "The Journal of the Acoustical Society of America",
"volume": "105",
"issue": "3",
"pages": "1455--1468",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, S., Potamianos, A., and Narayanan, S.: Acoustics of children's speech: Developmental changes of temporal and spectral parameters. The Journal of the Acoustical Society of America, vol. 105, no. 3, pp.1455-1468, 1999.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improving speech recognition for children using acoustic adaptation and pronunciation modeling",
"authors": [
{
"first": "P",
"middle": [
"G"
],
"last": "Shivakumar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Potamianos",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. Workshop on Child, Computer and Interaction (WOCCI)",
"volume": "",
"issue": "",
"pages": "15--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shivakumar, P. G., Potamianos, A., Lee, S., and Narayanan, S.: Improving speech recognition for children using acoustic adaptation and pronunciation modeling. In: Proc. Workshop on Child, Computer and Interaction (WOCCI), pp. 15-19, 2014.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "TUKE-BNews-SK: Slovak Broadcast News Corpus Construction and Evaluation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Pleva",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Juh\u00e1r",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "1709--1713",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pleva, M. and Juh\u00e1r, J.: TUKE-BNews-SK: Slovak Broadcast News Corpus Construction and Evaluation. In: LREC, Reykjavik, pp. 1709-1713, 2014.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic speech recognition for children",
"authors": [
{
"first": "M",
"middle": [],
"last": "Feh\u00e9r",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feh\u00e9r, M.: Automatic speech recognition for children. Bachelor thesis, Technical University of Kosice, p. 39, 2019.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Transcriber -a tool for segmenting, labeling and transcribing speech",
"authors": [],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Transcriber -a tool for segmenting, labeling and transcribing speech. [Online]. Available: http://trans.sourceforge.net/en/presentation.php [Accessed: July 30, 2019].",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Slovak Broadcast News Speech Recognition and Transcription System",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lojka",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Viszlay",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sta\u0161",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hl\u00e1dek",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Juh\u00e1r",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Network-Based Information Systems. NBiS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lojka M., Viszlay P., Sta\u0161 J., Hl\u00e1dek D., Juh\u00e1r J.: Slovak Broadcast News Speech Recognition and Transcription System. In: Barolli L., Kryvinska N., Enokido T., Takizawa M. (eds) Advances in Network-Based Information Systems. NBiS 2018.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "SARRA -the automatic subtitling system for transcription, archiving, and indexing of Slovak audiovisual recordings",
"authors": [],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "SARRA -the automatic subtitling system for transcription, archiving, and indexing of Slovak audiovisual recordings. [Online]. Available: https://marhula.fei.tuke.sk/sarra/ [Accessed: July 30, 2019].",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Novice User Experiences with a Voice-Enabled Human-Robot Interaction Tool",
"authors": [
{
"first": "M",
"middle": [],
"last": "Pleva",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Juhar",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ondas",
"suffix": ""
},
{
"first": "C",
"middle": [
"R"
],
"last": "Hudson",
"suffix": ""
},
{
"first": "C",
"middle": [
"L"
],
"last": "Bethel",
"suffix": ""
},
{
"first": "D",
"middle": [
"W"
],
"last": "Carruth",
"suffix": ""
}
],
"year": 2019,
"venue": "29th International Conference Radioelektronika",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pleva, M., Juhar, J., Ondas, S., Hudson, C. R., Bethel, C. L., & Carruth, D. W.: Novice User Experiences with a Voice-Enabled Human-Robot Interaction Tool. In 2019 29th International Conference Radioelektronika, Pardubice. IEEE, pp. 1-5, 2019.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Transcription software used (Transcriber 1.5.1)",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Transcriber window for speaker turn metadata. Evaluation of the current subtitling system with children's recordings: The current automatic subtitling system for Slovak TV broadcasters was developed thanks to many years of Slovak automatic speech recognition development by the consortium of the Technical University of Kosice and the Slovak Academy of Sciences. The previous system was based on Julius [10] and was mainly intended for speech dictation into a word processing editor. The next generation was built on Time Delay Deep Neural Network (TDNN) models based on Kaldi",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "SARRA web user interface for automatically subtitled content. The SARRA system is built to work in a multitasking, scalable environment, so a user's task can run on multiple instances of the recognition toolkit at once. The first part of the process is voice activity detection and speaker diarization, for better segmentation of large audio uploads; the smaller segmented parts of the audio can be scaled better. The next part is the primary automatic speech recognition process, built on models trained on about 600 hours of Slovak speech from broadcast news and TV discussion recordings. The acoustic model uses 40 MFCC coefficients with online cepstral mean normalization (CMN, in the first training phase) and a 100-dimensional i-vector. The language model was built from a 1.89-billion-token corpus with a vocabulary of 500 thousand unique words, smoothed by the Witten-Bell algorithm [17] . The last part is post-processing of the recognized text to convert it into a form suitable for TV subtitles. The requirements were that the amount of text in one subtitle caption is limited and that each caption is shown long enough for viewers to read it. Finally, the subtitles are expected to be adapted to speaker changes, using the diarization engine results.",
"type_str": "figure",
"num": null
},
"TABREF0": {
"content": "<table/>",
"num": null,
"text": "The International Speech and Communication Association (ISCA) has a Special Interest Group (SIG) for Child Computer Interaction (CHILD) [2] and organizes a special Workshop on Child Computer Interaction (WOCCI) and, in recent years, also Language Teaching, Learning and Technology (LTLT). The 2019 Conference on Computational Linguistics and Speech Processing ROCLING 2019, pp. 325-333 \u00a9The Association for Computational Linguistics and Chinese Language Processing",
"type_str": "table",
"html": null
},
"TABREF1": {
"content": "<table><tr><td>TV Series_Episode (Date)</td><td>Length [min:sec]</td><td>Number of words</td></tr><tr><td>Oteckovia_E1(1.1.2018)</td><td>2:59</td><td>298</td></tr><tr><td>Oteckovia_E2(2.1.2018)</td><td>5:15</td><td>550</td></tr><tr><td>Oteckovia_E3(3.1.2018)</td><td>4:35</td><td>601</td></tr><tr><td>Oteckovia_E4(4.1.2018)</td><td>2:56</td><td>364</td></tr><tr><td>Oteckovia_E5(5.1.2018)</td><td>4:14</td><td>510</td></tr><tr><td>Oteckovia_E6(8.1.2018)</td><td>3:49</td><td>519</td></tr><tr><td>Oteckovia_E7(9.1.2018)</td><td>4:57</td><td>650</td></tr><tr><td>Prazdniny_E1(18.1.2017)</td><td>4:23</td><td>513</td></tr><tr><td>Prazdniny_E2(25.1.2017)</td><td>7:58</td><td>739</td></tr><tr><td>Total</td><td>41:01</td><td>4744</td></tr></table>",
"num": null,
"text": "Database statistics",
"type_str": "table",
"html": null
}
}
}
}