| { |
| "paper_id": "Y17-1024", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:33:07.021172Z" |
| }, |
| "title": "Unsupervised Method for Improving Arabic Speech Recognition Systems", |
| "authors": [ |
| { |
| "first": "Mohamed", |
| "middle": [], |
| "last": "Labidi", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "LaTICE laboratory Unit of Monastir", |
| "institution": "", |
| "location": { |
| "postCode": "5000", |
| "settlement": "Monastir", |
| "country": "Tunisia" |
| } |
| }, |
| "email": "labidi8mohamed@gmail.c" |
| }, |
| { |
| "first": "Mohsen", |
| "middle": [], |
| "last": "Maraoui", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "maraoui.mohsen@gmail.c" |
| }, |
| { |
| "first": "Mounir", |
| "middle": [], |
| "last": "Zrigui", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "LaTICE laboratory Unit of Monastir", |
| "institution": "", |
| "location": { |
| "postCode": "5000", |
| "settlement": "Monastir", |
| "country": "Tunisia" |
| } |
| }, |
| "email": "mounir.zrigui@fsm.rnu.tn" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "One of the big challenges connected to large vocabulary Arabic speech recognition is the limit of vocabulary, which causes high outof-vocabulary words. Also, the Arabic language characteristics are another challenge. These challenges negatively affect the performance of the created systems. In this work we try to handle these challenges by proposing a new unsupervised graph-base method. Finally, we have obtained a 4.6% relative reduction in the word error rate. Comparing our suggested method with other methods in the literature, it has given better results. Moreover, it has presented a major step towards solving this problem. In addition, it can be easily adaptable to other languages.", |
| "pdf_parse": { |
| "paper_id": "Y17-1024", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "One of the big challenges connected to large vocabulary Arabic speech recognition is the limit of vocabulary, which causes high outof-vocabulary words. Also, the Arabic language characteristics are another challenge. These challenges negatively affect the performance of the created systems. In this work we try to handle these challenges by proposing a new unsupervised graph-base method. Finally, we have obtained a 4.6% relative reduction in the word error rate. Comparing our suggested method with other methods in the literature, it has given better results. Moreover, it has presented a major step towards solving this problem. In addition, it can be easily adaptable to other languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "One of the big challenges in speech recognition is how to cover all possible words by a speech recognition system. The vocabulary of a conventional large-vocabulary continuous speech recognition system is finite, and this vocabulary limits the terms that appear in speech transcriptions. The words that do not occur in the vocabulary of the recognizer are called \"out-ofvocabulary\" words. This problem is a perennial challenge for speech recognition, where the outof-vocabulary words are badly recognized. A larger vocabulary for the automatic speech recognition system is not the solution, since language is in constant growth and new words are steadily enriching the vocabulary. In (Ng and Zue, 2000) , an analysis of news text demonstrated that the vocabulary size would continue to grow as the dataset got larger. In other words, it was not possible to create single large vocabulary that would eliminate the out-ofvocabulary problem. Consequently, it was not possible to create a language model that would cover all the words of any language. Furthermore, under certain conditions, adding more words could compromise the recognition performance of words already in the vocabulary. According to (Logan et al., 2005) , up to 10% of all query words in a typical application that used a word-based recognizer with large vocabulary could be out-of-vocabulary words. Of course it was possible to update the vocabulary of the Automatic Speech Recognition (ASR) systems by adding new words to the language model. However, as noted by (Logan et al., 2005) , it could be difficult to obtain enough training data to train the language model for new words. Additionally, for most application scenarios, it would not be feasible to re-recognize spoken content once the initial transcription was generated, due to the high computation cost of the ASR process and the huge sizes of daily spoken content collections. For these reasons, the out-of-vocabulary problem was a formidable one.", |
| "cite_spans": [ |
| { |
| "start": 684, |
| "end": 702, |
| "text": "(Ng and Zue, 2000)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 1199, |
| "end": 1219, |
| "text": "(Logan et al., 2005)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1531, |
| "end": 1551, |
| "text": "(Logan et al., 2005)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction and state of the art", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For the Arabic language, this problem limits the performances of speech recognition systems. As noted in the previous paragraph, it is not practical to recreate a new language model each time we want to enrich our systems by new vocabulary. To deal with these problems, some superficial work has been done. In (Novotney et al., 2011) , a morpho-base language model was used in speech recognition systems for four morphologically rich languages which were Turkish, Finnish, colloquial Egyptian Arabic and Estonian. The authors said that the experiments showed that the morph models performed fairly well on out-of-vocabulary words without compromising the recognition accuracy on invocabulary ones. Nevertheless, they reported that the Arabic language was the exception where their proposed method failed. They noted that this might be due to the Arabic language characteristics. The second work belongs to (El-Desoky et al., 2009) , where the authors addressed the out-of-vocabulary problem and the non-appearance of diacritical-marks at the Arabic written transcriptions. The authors introduced a morphological decomposition, as well as a diacritization in Arabic language modeling. Their experiments showed a reduction in the Word Error Rate (WER) by 3.7%. However, they still suffer from the new words in languages and diacritical marks in the Arabic words, which present a big problem for Arabic speech recognition. Other work related to this topic has been done in other domains, as in (Al-Shareef and Hain, 2012) , (Razmara et al., 2013) , (Creutz et al., 2007) , (Diehl et al., 2009) and (Habash, 2009) .", |
| "cite_spans": [ |
| { |
| "start": 310, |
| "end": 333, |
| "text": "(Novotney et al., 2011)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 906, |
| "end": 930, |
| "text": "(El-Desoky et al., 2009)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1491, |
| "end": 1518, |
| "text": "(Al-Shareef and Hain, 2012)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1521, |
| "end": 1543, |
| "text": "(Razmara et al., 2013)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1546, |
| "end": 1567, |
| "text": "(Creutz et al., 2007)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1570, |
| "end": 1590, |
| "text": "(Diehl et al., 2009)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1595, |
| "end": 1609, |
| "text": "(Habash, 2009)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction and state of the art", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In our work, we investigate a graph-based method to deal with the present challenge. We use our web crawler to collect text data from the Internet on a regular, continuous and up-to-date basis. We use the collected text for the construction of an oriented weighted graph, where each node presents a word and each arc presents the relationship of succession between two words in the Arabic language. After that, we use a graph search method to detect the false words in the transcription. Finally, we discover the best words that can be replacements.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction and state of the art", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The paper is organized as follows. In section 2, we present our methodology of performing false-word correction and we deal with out-ofvocabulary words. Our experiments are discussed in section 3, while section 4 gives the conclusions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction and state of the art", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section we describe how the corrections of false words are performed. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our acoustic model is built with the help of the CMU Sphinx (Lamere et al., 2003) . We train it using 51h of audio material for the modern standard Arabic, recorded by 41 native speakers. Each audio file is accompanied by its transcription. The audio files are converted to 16 kHz, 16 bits, mono speakers, and in an MS WAV format, as required by the Sphinx trainer. The phonetic dictionary is similarly used by almost all researchers in the construction of Arabic speech recognition systems (Ali et al., 2009) .", |
| "cite_spans": [ |
| { |
| "start": 60, |
| "end": 81, |
| "text": "(Lamere et al., 2003)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 491, |
| "end": 509, |
| "text": "(Ali et al., 2009)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linguistic tools", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Our language model training corpora consist of around 200 million running full words including data from Ajdir Corpora, Tashkeela corpora (Zerrouki and Balla, 2017) , Abbas corpora (Abbas et al., 2011) , OSAC corpora (Saad and Ashour, 2010) and collected corpora. Our statistical language model is constructed using the SRILM toolkit (Stolcke and others, 2002) .", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 164, |
| "text": "(Zerrouki and Balla, 2017)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 181, |
| "end": 201, |
| "text": "(Abbas et al., 2011)", |
| "ref_id": null |
| }, |
| { |
| "start": 334, |
| "end": 360, |
| "text": "(Stolcke and others, 2002)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linguistic tools", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "To evaluate the recognition performance, our small audio corpus of 8h for all our experiments is divided into 12 audio files. Each one contains almost 40 minutes of speech. They contain almost 48,000 Arabic words where 2,000 of them are out of vocabulary (they do not exist in the vocabulary of the system).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linguistic tools", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "For the construction of the oriented weighted graph we use our web crawler to collect text from the Internet and our Java implementation to construct the graph, where each sentence in the collected corpus is transformed to a set of connected words in the graph (i.e., each node of the graph contains one word).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linguistic tools", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "To make the speech correction, it is much easier to work on the text more than spoken documents. For this reason, we have to use a speech recognition system to get the transcriptions of the spoken documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Speech recognition (B0)", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We use the CMU Sphinx tools to construct our speech recognition systems. The utilized data are described in the linguistic tools section (section 2.1) and the obtained results are described in section 3. The system gives us the transcriptions for the recognized speech files.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Speech recognition (B0)", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The text collection is a process to collect Arabic texts from the Internet to establish a corpus of Arabic text. We use our web crawler in this task. It proceeds as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text collection (B1)", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\uf0b7 Search for the addresses of Arabic web sites in the Internet using API search engines.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text collection (B1)", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\uf0b7 Only Keep addresses of authentic sites: (using the WOT tool, which is a tool powered by 140 million users, machine learning, which is a free browser extensions, and mobile app and API, which let us check whether a website is safe and contains correct information before reaching it).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text collection (B1)", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\uf0b7 Save the authentic addresses in a database.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text collection (B1)", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\uf0b7 Parse the authentic web pages and collect the Arabic texts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text collection (B1)", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\uf0b7 Save the collected Arabic texts in files (text corpus).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text collection (B1)", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The first successful execution of our web crawler allows collecting more than 2,981 Arabic text files. The advantage of our web crawler is that it systematically updates the corpus. That way we guarantee that our corpus is updated and increased each time. We guarantee also that each new word in the language will be added as soon as possible. The collected corpus is used to create our oriented weighted graph in the next section.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text collection (B1)", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The graph is systematically auto-updated by new texts from the Internet, which make it bigger day after day. The update of the corpus follows the next steps:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text collection (B1)", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\uf0b7 Search for the addresses of Arabic web sites in the Internet using API search engines.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text collection (B1)", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\uf0b7 Only Keep addresses of authentic site: (using the WOT tool).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text collection (B1)", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\uf0b7 For each found authentic address check whether it does not exist in our database, then save it; else do not save it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text collection (B1)", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\uf0b7 Parse the authentic web pages and collect the Arabic texts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text collection (B1)", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\uf0b7 Save the collected Arabic texts in files (text corpus).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text collection (B1)", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Using the collected corpus in the previous section, where our web crawler is issued, we create an oriented weighted graph that depicts the Arabic language words succession ( Figure 2 ). Each word in the corpus is transformed to a node in the graph. And each two words that succeed in the corpus they will linked by an arc in the graph as described in the following table. The graph of Figure 2 presents the relationship of succession between the four words and the probabilities of these successions. Where the value (0.5) that exist on the arc between \"word 1\" and \"word 2\" presents the probability P(\"word 2\"| \"word 1\"). It is systematically autoupdated by new texts from the Internet, which make it bigger day after day. This graph is used to correct false words in the transcription.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 174, |
| "end": 182, |
| "text": "Figure 2", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 385, |
| "end": 393, |
| "text": "Figure 2", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Graph construction (B2)", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Each node in the graph is a word from the corpus. Also, it contains only one Arabic word and the information related to it. (a) describes the node structure and its fields. Hence, each sentence in the corpus is transformed to a set of connected nodes in the graph. The following points describe the following node fields. To create our graph we pass by the following steps:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph construction (B2)", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "\uf0b7 Create for each word in the corpus a node in the graph. Each word has only one node in the graph, even if it exists several times.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph construction (B2)", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "\uf0b7 If a word \"X\" comes after another word \"Y\" in the textual corpus, then the node of the word \"X\" will be linked by an arc to the node of the word \"Y\" in the graph.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph construction (B2)", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "The following example explains how two words can be transformed to the graph and how we make the link between them.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph construction (B2)", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "In the graph \u00ab Hello word \u00bb Table 1 : Illustration of the arc construction between words.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 28, |
| "end": 35, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "In the text", |
| "sec_num": null |
| }, |
| { |
| "text": "The arc between any two words \"W\" and \"Y\" is weighted by P(W|Y), which is the probability of the appearance of \"W\" and \"Y\" together such as that \"Y\" arrives after \"W\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "In the text", |
| "sec_num": null |
| }, |
| { |
| "text": "Our goal in this section is to correct the false words in the transcriptions using the graph created in section 2.4. The correction passes by the steps explained in the next sections:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word correction (B3)", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "Suppose we have the following sentence, which contains a false word (Word 3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word correction (B3)", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "To correct the false word (Word 3) we follow the following steps:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word1 Word2 Word3 Word4 Word5 Word6", |
| "sec_num": null |
| }, |
| { |
| "text": "First of all, we should detect the false words in the transcriptions, for that we use the oriented weighted graph created in section 2.4. The graph contains the Arabic words collected from the Internet, books, journals, etc. Added to that, the graph is automatically updated by the new words that appear in the language. Logically, any correct word in the transcription should be presented in the graph. To know whether a word is false or not, we search for it in our created graph. If it exists, then it will be correct. Else, it will be considered as a false word and it will pass to the correction step.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "False-word detection", |
| "sec_num": "2.5.1" |
| }, |
| { |
| "text": "The context window is a set of words that appears with the false word in the same sentence or in the same phrase. It contains N words from both the left and the right of the false word. The context window is used to search correct words that appear in the same context as our false word. Table 2 gives an example of the context-window construction.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 288, |
| "end": 295, |
| "text": "Table 2", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Context-window construction", |
| "sec_num": "2.5.2" |
| }, |
| { |
| "text": "Therefore, each false word has more than one context window. Each context window has a different size. The size of the context windows for a false word starts from N=1 (one word from the left and one word from the right of the false words) and reaches N=N, which is the maximum number of words that appear with the false word in the transcription.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Context-window construction", |
| "sec_num": "2.5.2" |
| }, |
| { |
| "text": "We vary the size of the context window for each false word in order to search for the most appropriate context window size that filters out the best possible replacements for the false word. We consider the best context window size as the size that gives us the minimum of possible replacements. We make this choice because we consider that the context window which gives the minimum number of replacements is a better semantic filter than the windows which give more replacements. The false word (Word 3) in the following example :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Context-window construction", |
| "sec_num": "2.5.2" |
| }, |
| { |
| "text": "Word1 Word2 Word3 Word4 Word5 Word6 has 2 words on the left and 3 words on the right. Two or more of these words can describe the context of the false word that we want to replace. The number of words of the context window (N words) cannot exceed 3 in the example provided in section 2.5, because this is the maximum number of words that can be found with the false Word X Word Y word (Word 3) in one of its two sides (left and right).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Context-window construction", |
| "sec_num": "2.5.2" |
| }, |
| { |
| "text": "Left side ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "N=1 N=2 N=3", |
| "sec_num": null |
| }, |
| { |
| "text": "Word 2 (Word2- Word1) (Word2- Word1) Right side Word 4 (Word4- Word5) (Word4- Word5- Word6)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "N=1 N=2 N=3", |
| "sec_num": null |
| }, |
| { |
| "text": "After the construction of the context windows, we search for possible replacements of false words, using the context windows created in the previous section. We search in the graph for the word that has the same context window as our false word. We take the words order of the context windows into consideration. For example, if the false word \"word3\" appears between the two words \"word4\" and \"word2\" in the transcription, then we search in the graph for replacements that appear between \"word4\" and \"word2\". The result of this search step is a set of words. Each set contains a set of possible replacements for the false word. Also, each set presents the search results using one of the context windows of the false word; i.e., for each context window for the false word, this step will give us a set of possible replacements. Table 3 describes the created context windows for the false word (Word 3) given in as example in the following sentence : \"Word1 Word2 Word3 Word4 Word5 Word6\". The next section describes the selection of the best set of replacements for the false word.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 829, |
| "end": 836, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Search for possible replacements", |
| "sec_num": "2.5.3" |
| }, |
| { |
| "text": "The best context window is the one that gives us the replacements that are semantically the closest to the false word in its context. Then, the best context window will give us the minimum possible of replacements because it filters the words well and it proposes only the semantically closest words to the false one. Thus, the best set of replacements is the one that contains the minimum number of replacements. This step is explained in Table 3 and Table 4 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 440, |
| "end": 459, |
| "text": "Table 3 and Table 4", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Selection of best set of replacements", |
| "sec_num": "2.5.4" |
| }, |
| { |
| "text": "N= 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Context window size", |
| "sec_num": null |
| }, |
| { |
| "text": "Word X Word Z Table 4 : Example of selecting the best replacement set.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 14, |
| "end": 21, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Possible replacement", |
| "sec_num": null |
| }, |
| { |
| "text": "In the previous step, we chose the set of replacements that were semantically closest to the false word because they have the same context and it works as a semantic filter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Replacement of false word", |
| "sec_num": "2.5.5" |
| }, |
| { |
| "text": "Researchers usually choose one word as a substitute to the wrong one. For us, we opt for replacing the false word by all possible replacements selected from the previous step. On the other hand, each replacement is put with its probability of succession that appears in the graph. This probability defines its relationship of succession of the replacement with its successor and predecessor. This process is explained in the following example. We suppose that the replacements appear in the graph as represented in Figure 3 where \"Word X\" and \"Word Z\" are the possible correct replacements of the false word. These two possible replacements will replace the false word in the transcription as described in Table 5 . Where, the false word is replaced by its possible replacements. And each replacement is accompanied by its probabilities of successions between it and the words of the contextual window. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 515, |
| "end": 523, |
| "text": "Figure 3", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 706, |
| "end": 713, |
| "text": "Table 5", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Replacement of false word", |
| "sec_num": "2.5.5" |
| }, |
| { |
| "text": "Word1 Word2 (65%)WordX (45%) Word4 Word5 Word6 Word1 Word2(35%)WordZ (55%) Word4 Word5 Word6", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Replacing false word by the selected ones", |
| "sec_num": null |
| }, |
| { |
| "text": "Our experiments are decomposed in two parts. The first one is the post-correction experiments where we evaluate our speech recognition system performance before the use of our proposed method. The second one is the correction experiments where we evaluate our suggested method. We evaluate our correction method twice: the first one before updating the graph and the second one after updating it. The material used in the experiments is described in the experimental setup section just after the introduction. We use the WER metric, because it is mostly used by researchers to evaluate automatic speech recognition systems (Ali et al., 2009) , (Diehl et al., 2009) .", |
| "cite_spans": [ |
| { |
| "start": 623, |
| "end": 641, |
| "text": "(Ali et al., 2009)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 644, |
| "end": 664, |
| "text": "(Diehl et al., 2009)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "12.5% WER% after first correction 8.11%", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WER% before correction", |
| "sec_num": null |
| }, |
| { |
| "text": "WER% after second correction (after updating the graph )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WER% before correction", |
| "sec_num": null |
| }, |
| { |
| "text": "7.9% Table 6 : Tests results. Table 6 shows the obtained results. The first line describes the WER obtained with our speech recognition system before the correction step. The obtained WER is 12.5% ,which means that the transcription contains 6,000 wrongly recognized words, including the 2,000 out-ofvocabulary words. After that, to decrease the WER we execute our proposed method. The second line of Table 6 contains the WER% obtained after the execution of our correction approach, which is 8.11%. This execution was released with the graph constructed in section 3.3. We notice that the WER is decreased. We have recorded a gain of 4.39% in terms of WER, which means a reduction in the number of the false words. We pass from 6,000 to 3,896 false words in the transcriptions. Then, 2,104 words are corrected and 956 of them are out-ofvocabulary words.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 5, |
| "end": 12, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 30, |
| "end": 37, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 401, |
| "end": 408, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "WER% before correction", |
| "sec_num": null |
| }, |
| { |
| "text": "After the correction step, we update our graph automatically. Then, we relaunch the correction again, but this time with a richer graph. Line 3 of Table 6 indicates the obtained results. The WER becomes 7.9%, with a reduction of 0.21% from the previous correction; i.e., we pass from 6,000 false words in the transcription to 3,792 ones. However, the number of the corrected out-ofvocabulary words is bigger this time. We pass from 956 corrected out-of-vocabulary words in the first correction to 1,148 ones in the second correction, which proves that the update of the graph has added new words and has positively influenced the correction process.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 147, |
| "end": 154, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "WER% before correction", |
| "sec_num": null |
| }, |
| { |
| "text": "Gain in WER% (El-Desoky et al., 2009) 3.7%", |
| "cite_spans": [ |
| { |
| "start": 13, |
| "end": 37, |
| "text": "(El-Desoky et al., 2009)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Work", |
| "sec_num": null |
| }, |
| { |
| "text": "Our method 4.6% (Messaoudi et al., 2006) 1.2% (Afify et al., 2005) 1.4% Table 7 : Comparison between methods.", |
| "cite_spans": [ |
| { |
| "start": 16, |
| "end": 40, |
| "text": "(Messaoudi et al., 2006)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 46, |
| "end": 66, |
| "text": "(Afify et al., 2005)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 72, |
| "end": 79, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Work", |
| "sec_num": null |
| }, |
| { |
| "text": "The obtained results show the efficiency of our proposed method in detecting and correcting false words. They also show the ease, speed and performance of our method in enriching the corpus and in correction, in contrast to classical language models, which are difficult to enrich. As described in the methodology section, our method does not replace a false word with a single word chosen from the possible replacements; instead, it replaces it with all possible replacements together with their probabilities. This gives the transcription a major advantage, since it can be used in various fields and any researcher can apply any selection method to pick the most suitable word. Furthermore, Table 8 shows that our proposed method gives better results and deals better with false and out-of-vocabulary words in Arabic speech recognition systems than the most recent work in the field.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 734, |
| "end": 741, |
| "text": "Table 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Work", |
| "sec_num": null |
| }, |
| { |
| "text": "We have proposed a method to correct words wrongly recognized by any Arabic speech recognition system. Our method performs well in the correction task and shows an admirable ability to deal with out-of-vocabulary words, thanks to our proposed graph, which is automatically updated with new vocabulary and texts from the Internet and which gives a probabilistic description of word succession in the language. Our method shows a better correction rate than other methods in the literature (El-Desoky et al., 2009) and (Creutz et al., 2007) , especially for out-of-vocabulary words. In addition, our proposed method provides better results because it takes the characteristics of the Arabic language into consideration. All this gives our method a great advantage over other methods. Besides, our proposed method can easily be adapted to other languages.", |
| "cite_spans": [ |
| { |
| "start": 531, |
| "end": 555, |
| "text": "(El-Desoky et al., 2009)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 558, |
| "end": 579, |
| "text": "(Creutz et al., 2007)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We believe that the correction of falsely recognized words in any transcription produced by an Arabic automatic speech recognition system should take two major points into account: first, the language characteristics, and second, the new vocabulary that appears in the language day after day. Our proposed method is a good step in this field, and it can be improved by other methods, such as rule-based ones. This will be our goal in future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In this paper we have addressed the challenges of limited vocabulary and of the Arabic language characteristics in large-vocabulary Arabic speech recognition systems. We have tested a graph-based method, which has yielded a good reduction of 4.6% in terms of WER and has dealt well with the Arabic language characteristics. The proposed method is a good step in this field and in meeting these challenges. Another important point is that our method can be easily adapted to work with other languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "4" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Recent progress in Arabic broadcast news transcription at BBN", |
| "authors": [ |
| { |
| "first": "Mohamed", |
| "middle": [], |
| "last": "Afify", |
| "suffix": "" |
| }, |
| { |
| "first": "Long", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Xiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Sherif", |
| "middle": [], |
| "last": "Abdou", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Makhoul", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Interspeech", |
| "volume": "5", |
| "issue": "", |
| "pages": "1637--1640", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohamed Afify, Long Nguyen, Bing Xiang, Sherif Abdou, and John Makhoul. 2005. Recent progress in Arabic broadcast news transcription at BBN. In Interspeech, volume 5, pages 1637-1640.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Arabic phonetic dictionaries for speech recognition", |
| "authors": [ |
| { |
| "first": "Mohamed", |
| "middle": [], |
| "last": "Ali", |
| "suffix": "" |
| }, |
| { |
| "first": "Moustafa", |
| "middle": [], |
| "last": "Elshafei", |
| "suffix": "" |
| }, |
| { |
| "first": "Mansour", |
| "middle": [], |
| "last": "Al-Ghamdi", |
| "suffix": "" |
| }, |
| { |
| "first": "Husni", |
| "middle": [], |
| "last": "Al-Muhtaseb", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Journal of Information Technology Research", |
| "volume": "2", |
| "issue": "4", |
| "pages": "67--80", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohamed Ali, Moustafa Elshafei, Mansour Al- Ghamdi, and Husni Al-Muhtaseb. 2009. Arabic phonetic dictionaries for speech recognition. Journal of Information Technology Research (JITR), 2(4):67-80.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "CRF-based Diacritisation of Colloquial Arabic for Automatic Speech Recognition", |
| "authors": [ |
| { |
| "first": "Sarah", |
| "middle": [], |
| "last": "Al-Shareef", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Hain", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "INTERSPEECH", |
| "volume": "", |
| "issue": "", |
| "pages": "1824--1827", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sarah Al-Shareef and Thomas Hain. 2012. CRF-based Diacritisation of Colloquial Arabic for Automatic Speech Recognition. In INTERSPEECH, pages 1824-1827.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Morph-based speech recognition and modeling of out-of-vocabulary words across languages", |
| "authors": [ |
| { |
| "first": "Mathias", |
| "middle": [], |
| "last": "Creutz", |
| "suffix": "" |
| }, |
| { |
| "first": "Teemu", |
| "middle": [], |
| "last": "Hirsim\u00e4ki", |
| "suffix": "" |
| }, |
| { |
| "first": "Mikko", |
| "middle": [], |
| "last": "Kurimo", |
| "suffix": "" |
| }, |
| { |
| "first": "Antti", |
| "middle": [], |
| "last": "Puurula", |
| "suffix": "" |
| }, |
| { |
| "first": "Janne", |
| "middle": [], |
| "last": "Pylkk\u00f6nen", |
| "suffix": "" |
| }, |
| { |
| "first": "Vesa", |
| "middle": [], |
| "last": "Siivola", |
| "suffix": "" |
| }, |
| { |
| "first": "Matti", |
| "middle": [], |
| "last": "Varjokallio", |
| "suffix": "" |
| }, |
| { |
| "first": "Ebru", |
| "middle": [], |
| "last": "Arisoy", |
| "suffix": "" |
| }, |
| { |
| "first": "Murat", |
| "middle": [], |
| "last": "Sara\u00e7lar", |
| "suffix": "" |
| }, |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "ACM Transactions on Speech and Language Processing (TSLP)", |
| "volume": "5", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mathias Creutz, Teemu Hirsim\u00e4ki, Mikko Kurimo, Antti Puurula, Janne Pylkk\u00f6nen, Vesa Siivola, Matti Varjokallio, Ebru Arisoy, Murat Sara\u00e7lar, and Andreas Stolcke. 2007. Morph-based speech recognition and modeling of out-of-vocabulary words across languages. ACM Transactions on Speech and Language Processing (TSLP), 5(1):3.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Morphological analysis and decomposition for Arabic speech-to-text systems", |
| "authors": [ |
| { |
| "first": "Frank", |
| "middle": [], |
| "last": "Diehl", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [ |
| "J", |
| "F" |
| ], |
| "last": "Gales", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcus", |
| "middle": [], |
| "last": "Tomalin", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [ |
| "C" |
| ], |
| "last": "Woodland", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "INTERSPEECH", |
| "volume": "", |
| "issue": "", |
| "pages": "2675--2678", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frank Diehl, Mark JF Gales, Marcus Tomalin, and Philip C Woodland. 2009. Morphological analysis and decomposition for Arabic speech-to-text systems. In INTERSPEECH, pages 2675-2678.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Investigating the use of morphological decomposition and diacritization for improving Arabic LVCSR", |
| "authors": [ |
| { |
| "first": "Amr", |
| "middle": [], |
| "last": "El-Desoky", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Gollan", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Rybach", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralf", |
| "middle": [], |
| "last": "Schl\u00fcter", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Interspeech", |
| "volume": "", |
| "issue": "", |
| "pages": "2679--2682", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amr El-Desoky, Christian Gollan, David Rybach, Ralf Schl\u00fcter, and Hermann Ney. 2009. Investigating the use of morphological decomposition and diacritization for improving Arabic LVCSR. In Interspeech, pages 2679-2682.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "REMOOV: A tool for online handling of out-of-vocabulary words in machine translation", |
| "authors": [ |
| { |
| "first": "Nizar", |
| "middle": [], |
| "last": "Habash", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2nd International Conference on Arabic Language Resources and Tools (MEDAR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nizar Habash. 2009. REMOOV: A tool for online handling of out-of-vocabulary words in machine translation. In Proceedings of the 2nd International Conference on Arabic Language Resources and Tools (MEDAR), Cairo, Egypt.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Design of the CMU sphinx-4 decoder", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Lamere", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Kwok", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Walker", |
| "suffix": "" |
| }, |
| { |
| "first": "Evandro", |
| "middle": [ |
| "B" |
| ], |
| "last": "Gouv\u00eaa", |
| "suffix": "" |
| }, |
| { |
| "first": "Rita", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Bhiksha", |
| "middle": [], |
| "last": "Raj", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Lamere, Philip Kwok, William Walker, Evandro B Gouv\u00eaa, Rita Singh, Bhiksha Raj, and Peter Wolf. 2003. Design of the CMU sphinx-4 decoder. In INTERSPEECH. Citeseer.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Approaches to reduce the effects of OOV queries on indexed spoken audio", |
| "authors": [ |
| { |
| "first": "Beth", |
| "middle": [], |
| "last": "Logan", |
| "suffix": "" |
| }, |
| { |
| "first": "J-M", |
| "middle": [], |
| "last": "Van Thong", |
| "suffix": "" |
| }, |
| { |
| "first": "Pedro", |
| "middle": [ |
| "J" |
| ], |
| "last": "Moreno", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "IEEE transactions on multimedia", |
| "volume": "7", |
| "issue": "", |
| "pages": "899--906", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Beth Logan, J-M Van Thong, and Pedro J Moreno. 2005. Approaches to reduce the effects of OOV queries on indexed spoken audio. IEEE transactions on multimedia, 7(5):899-906.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Arabic broadcast news transcription using a one million word vocalized vocabulary", |
| "authors": [ |
| { |
| "first": "Abdelkhalek", |
| "middle": [], |
| "last": "Messaoudi", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Gauvain", |
| "suffix": "" |
| }, |
| { |
| "first": "Lori", |
| "middle": [], |
| "last": "Lamel", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "2006 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abdelkhalek Messaoudi, J Gauvain, and Lori Lamel. 2006. Arabic broadcast news transcription using a one million word vocalized vocabulary. In Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on, volume 1, pages I-I. IEEE.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Subword-based approaches for spoken document retrieval", |
| "authors": [ |
| { |
| "first": "Kenney", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [ |
| "W" |
| ], |
| "last": "Zue", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Speech Communication", |
| "volume": "32", |
| "issue": "3", |
| "pages": "157--186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenney Ng and Victor W Zue. 2000. Subword-based approaches for spoken document retrieval. Speech Communication, 32(3):157-186.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Unsupervised Arabic Dialect Adaptation with Self-Training", |
| "authors": [ |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Novotney", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [ |
| "M" |
| ], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanjeev", |
| "middle": [], |
| "last": "Khudanpur", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "INTERSPEECH", |
| "volume": "", |
| "issue": "", |
| "pages": "541--544", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Scott Novotney, Richard M Schwartz, and Sanjeev Khudanpur. 2011. Unsupervised Arabic Dialect Adaptation with Self-Training. In INTERSPEECH, pages 541-544.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Graph Propagation for Paraphrasing Out-of-Vocabulary Words in Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Majid", |
| "middle": [], |
| "last": "Razmara", |
| "suffix": "" |
| }, |
| { |
| "first": "Maryam", |
| "middle": [], |
| "last": "Siahbani", |
| "suffix": "" |
| }, |
| { |
| "first": "Reza", |
| "middle": [], |
| "last": "Haffari", |
| "suffix": "" |
| }, |
| { |
| "first": "Anoop", |
| "middle": [], |
| "last": "Sarkar", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "ACL (1)", |
| "volume": "", |
| "issue": "", |
| "pages": "1105--1115", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Majid Razmara, Maryam Siahbani, Reza Haffari, and Anoop Sarkar. 2013. Graph Propagation for Paraphrasing Out-of-Vocabulary Words in Statistical Machine Translation. In ACL (1), pages 1105-1115. Citeseer.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Arabic morphological tools for text mining", |
| "authors": [ |
| { |
| "first": "Motaz", |
| "middle": [ |
| "K" |
| ], |
| "last": "Saad", |
| "suffix": "" |
| }, |
| { |
| "first": "Wesam", |
| "middle": [], |
| "last": "Ashour", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Corpora", |
| "volume": "18", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Motaz K Saad and Wesam Ashour. 2010. Arabic morphological tools for text mining. Corpora, 18:19.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "SRILM-an extensible language modeling toolkit", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Interspeech", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andreas Stolcke and others. 2002. SRILM-an extensible language modeling toolkit. In Interspeech, volume 2002, page 2002.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Tashkeela: Novel corpus of Arabic vocalized texts, data for auto-diacritization systems", |
| "authors": [ |
| { |
| "first": "Taha", |
| "middle": [], |
| "last": "Zerrouki", |
| "suffix": "" |
| }, |
| { |
| "first": "Amar", |
| "middle": [], |
| "last": "Balla", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Data in Brief", |
| "volume": "11", |
| "issue": "", |
| "pages": "147--151", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taha Zerrouki and Amar Balla. 2017. Tashkeela: Novel corpus of Arabic vocalized texts, data for auto-diacritization systems. Data in Brief, 11:147- 151.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Figure 1 describes the steps of the work.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF1": { |
| "text": "Architecture of the proposed system.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF3": { |
| "text": "Illustration of the constructed oriented weighted graph and the structure of its nodes.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF4": { |
| "text": "Word: field containing the word; Number of occurrences: field containing the number of occurrences of the word; Date of first use: field containing the first appearance of the word on the Internet or in documents; Next nodes: links to the next nodes; Weight of the next relations: field containing the weights of the relations between the current word and the next words.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF5": { |
| "text": "Replacement relations in the graph.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF0": { |
| "text": "Example of context-window construction.", |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "content": "<table/>" |
| }, |
| "TABREF2": { |
| "text": "Example of searching possible replacements.", |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "content": "<table/>" |
| }, |
| "TABREF3": { |
| "text": "Replacing the false word.", |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "content": "<table/>" |
| } |
| } |
| } |
| } |