| { |
| "paper_id": "P16-1038", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:59:31.778098Z" |
| }, |
| "title": "Grapheme-to-Phoneme Models for (Almost) Any Language", |
| "authors": [ |
| { |
| "first": "Aliya", |
| "middle": [], |
| "last": "Deri", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "aderi@isi.edu" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "knight@isi.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Grapheme-to-phoneme (g2p) models are rarely available in low-resource languages, as the creation of training and evaluation data is expensive and time-consuming. We use Wiktionary to obtain more than 650k word-pronunciation pairs in more than 500 languages. We then develop phoneme and language distance metrics based on phonological and linguistic knowledge; applying those, we adapt g2p models for high-resource languages to create models for related low-resource languages. We provide results for models for 229 adapted languages.", |
| "pdf_parse": { |
| "paper_id": "P16-1038", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Grapheme-to-phoneme (g2p) models are rarely available in low-resource languages, as the creation of training and evaluation data is expensive and time-consuming. We use Wiktionary to obtain more than 650k word-pronunciation pairs in more than 500 languages. We then develop phoneme and language distance metrics based on phonological and linguistic knowledge; applying those, we adapt g2p models for high-resource languages to create models for related low-resource languages. We provide results for models for 229 adapted languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Grapheme-to-phoneme (g2p) models convert words into pronunciations, and are ubiquitous in speech- and text-processing systems. Due to the diversity of scripts, phoneme inventories, phonotactic constraints, and spelling conventions among the world's languages, they are typically language-specific. Thus, while most statistical g2p learning methods are language-agnostic, they are trained on language-specific data, namely a pronunciation dictionary consisting of word-pronunciation pairs, as in Table 1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 493, |
| "end": 500, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Building such a dictionary for a new language is both time-consuming and expensive, because it requires expertise in both the language and a notation system like the International Phonetic Alphabet, applied to thousands of word-pronunciation pairs. Unsurprisingly, resources have been allocated only to the most heavily-researched languages. GlobalPhone, one of the most extensive multilingual text and speech databases, has pronunciation dictionaries in only 20 languages 1 . eng: anybody \u2192 e\u031e n i\u02d0 b \u0252 d i\u02d0; pol: \u017co\u0142\u0105dka \u2192 z\u033b o w o n\u032a t\u032a k a; ben: \u09b6\u0995\u09cd \u2192 s\u032a \u0254 k t\u032a \u0254; heb: \u202b\u05d7\u05dc\u05d5\u05de\u05d5\u05ea\u202c \u2192 \u0281 a l o m o t. Table 1 : Examples of English, Polish, Bengali, and Hebrew pronunciation dictionary entries, with pronunciations represented with the International Phonetic Alphabet (IPA).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 611, |
| "end": 618, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "gift: eng \u0261 \u026a f t\u02b0, deu \u0261 \u026a f t, nld \u0263 \u026a f t; class: eng k\u02b0 l ae s, deu k l a\u02d0 s, nld k l \u0251 s; send: eng s e\u031e n d, deu z \u025b n t, nld s \u025b n t. Table 2 : Example pronunciations of English words using English, German, and Dutch g2p models.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 110, |
| "end": 117, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For most of the world's more than 7,100 languages (Lewis et al., 2009) , no data exists and the many technologies enabled by g2p models are inaccessible.", |
| "cite_spans": [ |
| { |
| "start": 50, |
| "end": 70, |
| "text": "(Lewis et al., 2009)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Intuitively, however, pronouncing an unknown language should not necessarily require large amounts of language-specific knowledge or data. A native German or Dutch speaker, with no knowledge of English, can approximate the pronunciations of an English word, albeit with slightly different phonemes. Table 2 demonstrates that German and Dutch g2p models can do the same.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 299, |
| "end": 306, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Motivated by this, we create and evaluate g2p models for low-resource languages by adapting existing g2p models for high-resource languages using linguistic and phonological information. To facilitate our experiments, we create several notable data resources, including a multilingual pronunciation dictionary with entries for more than 500 languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The contributions of this work are:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Using data scraped from Wiktionary, we clean and normalize pronunciation dictionaries for 531 languages. To our knowledge, this is the most comprehensive multilingual pronunciation dictionary available. \u2022 We synthesize several named-entity corpora to create a multilingual corpus covering 384 languages. \u2022 We develop a language-independent distance metric between IPA phonemes. \u2022 We extend previous metrics for language-language distance with additional information and metrics. \u2022 We create two sets of g2p models for \"high resource\" languages: 97 simple rule-based models extracted from Wikipedia's \"IPA Help\" pages, and 85 data-driven models built from Wiktionary data. \u2022 We develop methods for adapting these g2p models to related languages, and describe results for 229 adapted models. \u2022 We release all data and models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Because of the severe lack of multilingual pronunciation dictionaries and g2p models, different methods of rapid resource generation have been proposed. Schultz (2009) reduces the amount of expertise needed to build a pronunciation dictionary, by providing a native speaker with an intuitive rule-generation user interface. Schlippe et al. (2010) crawl web resources like Wiktionary for word-pronunciation pairs. More recently, attempts have been made to automatically extract pronunciation dictionaries directly from audio data (Stahlberg et al., 2016) . However, the requirement of a native speaker, web resources, or audio data specific to the language still blocks development, and the number of g2p resources remains very low. Our method avoids these issues by relying only on text data from high-resource languages.", |
| "cite_spans": [ |
| { |
| "start": 153, |
| "end": 167, |
| "text": "Schultz (2009)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 323, |
| "end": 345, |
| "text": "Schlippe et al. (2010)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 527, |
| "end": 551, |
| "text": "(Stahlberg et al., 2016)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Instead of generating language-specific resources, we are inspired by research on cross-lingual automatic speech recognition (ASR) by Vu et al. (2014) and others, who exploit linguistic and phonetic relationships in low-resource scenarios. Although these works focus on ASR instead of g2p models and rely on audio data, they demonstrate that speech technology is portable across related languages. ", |
| "cite_spans": [ |
| { |
| "start": 146, |
| "end": 162, |
| "text": "Vu et al. (2014)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Figure 1: Two ways to adapt high-resource g2p resources: (a) apply g2p_h to a word, then map the resulting pron_h to pron_l with M_h\u2192l; (b) map h's training pronunciations with M_h\u2192l, then train g2p_h\u2192l on the adapted data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Given a low-resource language l without g2p rules or training data, we adapt resources (either an existing g2p model or a pronunciation dictionary) from a high-resource language h to create a g2p model for l. We assume the existence of two modules: a phoneme-to-phoneme distance metric phon2phon, which allows us to map from the phonemes used by h to the phonemes used by l, and a closest-language module lang2lang, which provides us with a related high-resource language h.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Using these resources, we adapt resources from h to l in two different ways:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Output mapping ( Figure 1a ): We use g2p h to pronounce word l , then map the output to the phonemes used by l with phon2phon. \u2022 Training data mapping ( Figure 1b ): We use phon2phon to map the pronunciations in h's pronunciation dictionary to the phonemes used by l, then train a g2p model using the adapted data. The next sections describe how we collect data, create phoneme-to-phoneme and languageto-language distance metrics, and build highresource g2p models.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 19, |
| "end": 28, |
| "text": "Figure 1a", |
| "ref_id": null |
| }, |
| { |
| "start": 155, |
| "end": 164, |
| "text": "Figure 1b", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "3" |
| }, |
| { |
| "text": "This section describes our data sources, which are summarized in Table 3 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 65, |
| "end": 72, |
| "text": "Table 3", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Phoible (Moran et al., 2014) is an online repository of cross-lingual phonological data. We use two of its components: language phoneme inventories and phonetic features.", |
| "cite_spans": [ |
| { |
| "start": 8, |
| "end": 28, |
| "text": "(Moran et al., 2014)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phoible", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "A phoneme inventory is the set of phonemes used to pronounce a language, represented in IPA. Phoible provides 2156 phoneme inventories for 1674 languages. (Some languages have multiple inventories from different linguistic studies.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phoneme inventories", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "For each phoneme included in its phoneme inventories, Phoible provides information about 37 phonological features, such as whether the phoneme is nasal, consonantal, sonorant, or a tone. Each phoneme thus maps to a unique feature vector, with features expressed as +, -, or 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phoneme feature vectors", |
| "sec_num": "4.1.2" |
| }, |
| { |
| "text": "For our language-to-language distance metric, it is useful to have written text in many languages. The most easily accessible source of this data is multilingual named entity (NE) resources.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Named Entity Resources", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We synthesize 7 different NE corpora: Chinese-English names (Ji et al., 2009) , Geonames (Vatant and Wick, 2006) , JRC names (Steinberger et al., 2011) , corpora from LDC 2 , NEWS 2015 (Banchs et al., 2015), Wikipedia names (Irvine et al., 2010) , and Wikipedia titles (Lin et al., 2011) ; to this, we also add multilingual Wikipedia titles for place names from an online English-language gazetteer (Everett-Heath, 2014). This yields a list of 9.9m named entities (8.9m not including English data) across 384 languages, which include the En-", |
| "cite_spans": [ |
| { |
| "start": 60, |
| "end": 77, |
| "text": "(Ji et al., 2009)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 89, |
| "end": 112, |
| "text": "(Vatant and Wick, 2006)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 125, |
| "end": 151, |
| "text": "(Steinberger et al., 2011)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 224, |
| "end": 245, |
| "text": "(Irvine et al., 2010)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 269, |
| "end": 287, |
| "text": "(Lin et al., 2011)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Named Entity Resources", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "2 LDC2015E13, LDC2015E70, LDC2015E82, LDC2015E90,", |
| "eq_num": "LDC2015E84, LDC2014E115, and LDC2015E91" |
| } |
| ], |
| "section": "Named Entity Resources", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "glish translation, named entity type, and script information where possible.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Named Entity Resources", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "To explain different languages' phonetic notations, Wikipedia users have created \"IPA Help\" pages, 3 which provide tables of simple grapheme examples of a language's phonemes. For example, on the English page, the phoneme z has the examples \"zoo\" and \"has.\" We automatically scrape these tables for 97 languages to create simple grapheme-phoneme rules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Wikipedia IPA Help tables", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Using the phon2phon distance metric and mapping technique described in Section 5, we clean each table by mapping its IPA phonemes to the language's Phoible phoneme inventory, if it exists. If it does not exist, we map the phonemes to valid Phoible phonemes and create a phoneme inventory for that language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Wikipedia IPA Help tables", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Ironically, to train data-driven g2p models for high-resource languages, and to evaluate our low-resource g2p models, we require pronunciation dictionaries for many languages. A common and successful technique for obtaining this data (Schlippe et al., 2010; Schlippe et al., 2012a; Yao and Kondrak, 2015) is scraping Wiktionary, an open-source multilingual dictionary maintained by Wikimedia. We extract unique word-pronunciation pairs from the English, German, Greek, Japanese, Korean, and Russian sites of Wiktionary. (Each Wiktionary site, while written in its respective language, contains word entries in multiple languages.)", |
| "cite_spans": [ |
| { |
| "start": 234, |
| "end": 257, |
| "text": "(Schlippe et al., 2010;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 258, |
| "end": 281, |
| "text": "Schlippe et al., 2012a;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 282, |
| "end": 304, |
| "text": "Yao and Kondrak, 2015)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Wiktionary pronunciation dictionaries", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Since Wiktionary data is very noisy, we apply length filtering as discussed by Schlippe et al. (2012b) , as well as simple regular expression filters for HTML. We also map Wiktionary pronunciations to valid Phoible phonemes and language phoneme inventories, if they exist, as discussed in Section 5. This yields 658k word-pronunciation pairs for 531 languages. However, this data is not uniformly distributed across languages: German, English, and French account for 51% of the data.", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 102, |
| "text": "Schlippe et al. (2012b)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Wiktionary pronunciation dictionaries", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "We extract test and training data as follows: for each language with at least one word-pronunciation pair with a valid word (at least 3 letters and alphabetic), we extract a test set of at most 200 valid words. From the remaining data, for every language with 50 or more entries, we create a training set with the available data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Wiktionary pronunciation dictionaries", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Ultimately, this yields a training set with 629k word-pronunciation pairs in 85 languages, and a test set with 26k pairs in 501 languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Wiktionary pronunciation dictionaries", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Automatically comparing pronunciations across languages is especially difficult in text form. Although two versions of the \"sh\" sound, \"\u0283\" and \"\u0255,\" sound very similar to most people and very different from \"m,\" to a machine all three characters seem equidistant.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phonetic Distance Metric", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Previous research (\u00d6zbal and Strapparava, 2012; Vu et al., 2014) has addressed this issue by matching exact phonemes by character or manually selecting comparison features; however, we are interested in an automatic metric covering all possible IPA phoneme pairs.", |
| "cite_spans": [ |
| { |
| "start": 18, |
| "end": 47, |
| "text": "(\u00d6zbal and Strapparava, 2012;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 48, |
| "end": 64, |
| "text": "Vu et al., 2014)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phonetic Distance Metric", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We handle this problem by using Phoible's phoneme feature vectors to create phon2phon, a distance metric between IPA phonemes. In this section we also describe how we use this metric to clean open-source data and build phoneme-mapping models between languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phonetic Distance Metric", |
| "sec_num": "5" |
| }, |
| { |
| "text": "As described in Section 4.1.2, each phoneme in Phoible maps to a unique feature vector; each feature value is +, -, or 0, representing whether a feature is present, not present, or not applicable. (Tones, for example, can never be syllabic or stressed.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "phon2phon", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We convert each feature vector into a bit representation by mapping each value to 3 bits: + to 110, - to 101, and 0 to 000. This captures the idea that ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "phon2phon", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "= ASCII(p_s); end p_p = min_{p_t \u2208 T} (phon2phon(p_s, p_t));", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "phon2phon", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "add p s \u2192 p p to M ; end Algorithm 1: A condensed version of our procedure for mapping scraped phoneme sets from Wikipedia and Wiktionary to Phoible language inventories. The full algorithm handles segmentation of the scraped pronunciation and heuristically promotes coverage of the Phoible inventory.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "phon2phon", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "the features + and - are more similar to each other than to 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "phon2phon", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We then compute the normalized Hamming distance between every phoneme pair p 1,2 with feature vectors f 1,2 and feature vector length n as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "phon2phon", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "phon2phon(p_1, p_2) = (1/n) \u2211_{i=1}^{n} 1[f_i^1 \u2260 f_i^2]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "phon2phon", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We now combine phon2phon distances and Phoible phoneme inventories to map phonemes from scraped Wikipedia IPA help tables and Wiktionary pronunciation dictionaries to Phoible phonemes and inventories. We describe a condensed version of our procedure in Algorithm 1, and provide examples of cleaned Wiktionary output in Table 4 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 319, |
| "end": 326, |
| "text": "Table 4", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data cleaning", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Another application of phon2phon is to transform pronunciations in one language to another language's phoneme inventory. We can do this by ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phoneme mapping models", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "\u2200 p_i \u2208 I, p_o \u2208 O: W.add(p_i, p_o, 1 \u2212 phon2phon(p_i, p_o))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phoneme mapping models", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "W can then be used to map a pronunciation to a new language; this has the interesting effect of modeling accents by foreign-language speakers: think in English (pronounced \"\u03b8 \u026a \u014b k\u02b0\") becomes \"s\u032a \u025b \u014b k\" in German; the capital city Dhaka (pronounced in Bengali with a voiced aspirated \"\u0256 \u0324 \") becomes the unaspirated \"d ae k\u02b0 ae\" in English.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phoneme mapping models", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Since we are interested in mapping high-resource languages to low-resource related languages, an important subtask is finding the related languages of a given language. The URIEL Typological Compendium (Littell et al., 2016) is an invaluable resource for this task. By using features from linguistic databases (including Phoible), URIEL provides 5 distance metrics between languages: genetic, geographic, composite (a weighted composite of genetic and geographic), syntactic, and phonetic. We extend URIEL by adding two additional metrics, providing averaged distances over all metrics, and adding additional information about resources. This creates lang2lang, a table which provides distances between and information about 2,790 languages.", |
| "cite_spans": [ |
| { |
| "start": 202, |
| "end": 224, |
| "text": "(Littell et al., 2016)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language Distance Metric", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Although URIEL provides a distance metric between languages based on Phoible features, it only takes into account broad phonetic features, such as whether each language has voiced plosives. This can result in some non-intuitive results: based on this metric, there are almost 100 languages phonetically equivalent to the South Asian language Gujarati, among them Arawak and Chechen.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phoneme inventory distance", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "To provide a more fine-grained phonetic distance metric, we create a phoneme inventory distance metric using phon2phon. For each pair of language phoneme inventories L 1,2 in Phoible, we compute the following:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phoneme inventory distance", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "d(L_1, L_2) = \u2211_{p_1 \u2208 L_1} min_{p_2 \u2208 L_2} (phon2phon(p_1, p_2))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phoneme inventory distance", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "and normalize by dividing by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phoneme inventory distance", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "\u2211_i d(L_1, L_i).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phoneme inventory distance", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Although Urdu is very similar to Hindi, its different alphabet and writing conventions would make it difficult to transfer an Urdu g2p model to Hindi. A better candidate language would be Nepali, which shares the Devanagari script, or even Bengali, which uses a similar South Asian script. A metric comparing the character sets used by two languages is very useful for capturing this relationship. We first use our multilingual named entity data to extract character sets for the 232 languages with more than 500 NE pairs; then, we note that Unicode character names are similar for linguistically related scripts. This is most notable in South Asian scripts: for example, the Bengali \u0995, Gujarati \u0a95, and Hindi \u0915 have Unicode names BENGALI LETTER KA, GUJARATI LETTER KA, and DEVANAGARI LETTER KA, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Script distance", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "We remove script, accent, and form identifiers from the Unicode names of all characters in our character sets, to create a set of reduced character names used across languages. Then we create a binary feature vector f for every language, with each feature indicating the language's use of a reduced character (like LETTER KA). The distance between two languages L 1,2 can then be computed with a spatial cosine distance:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Script distance", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "d(L_1, L_2) = 1 \u2212 (f_1 \u2022 f_2) / (\u2225f_1\u2225_2 \u2225f_2\u2225_2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Script distance", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Each entry in our lang2lang distance table also includes the following features for the second language: the number of named entities, whether it is in Europarl (Koehn, 2005) , whether it has its own Wikipedia, whether it is primarily written in the same script as the first language, whether it has an IPA Help page, whether it is in our Wiktionary test set, and whether it is in our Wiktionary training set. Table 5 shows examples of the closest languages to English, Hindi, and Vietnamese, according to different lang2lang metrics. ", |
| "cite_spans": [ |
| { |
| "start": 161, |
| "end": 174, |
| "text": "(Koehn, 2005)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 410, |
| "end": 417, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Resource information", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "The next two sections describe our high-resource and adapted g2p models. To evaluate these models, we compute the following metrics:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "7" |
| }, |
| { |
| "text": "\u2022 % of words skipped: This shows the coverage of the g2p model. Some g2p models do not cover all character sequences. All other metrics are computed over non-skipped words. \u2022 word error rate (WER): The percent of incorrect 1-best pronunciations. \u2022 word error rate 100-best (WER 100): The percent of 100-best lists without the correct pronunciation. \u2022 phoneme error rate (PER): The percent of errors per phoneme. A PER of 15.0 indicates that, on average, a linguist would have to edit 15 out of 100 phonemes of the output. We then average these metrics across all languages (weighting each language equally).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We now build and evaluate g2p models for the \"high-resource\" languages for which we have either IPA Help tables or sufficient training data from Wiktionary. Table 6 shows our evaluation of these models on Wiktionary test data, and Table 7 shows results for individual languages.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 157, |
| "end": 164, |
| "text": "Table 6", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 231, |
| "end": 238, |
| "text": "Table 7", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "High Resource g2p Models", |
| "sec_num": "8" |
| }, |
| { |
| "text": "We first use the rules scraped from Wikipedia's IPA Help pages to build rule-based g2p models. We build a wFST for each language, with a path for each rule g \u2192 p and weight w = 1/count(g).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "IPA Help models", |
| "sec_num": "8.1" |
| }, |
| { |
| "text": "This method prefers rules with longer grapheme segments; for example, for the word tin, the output \"\u0283 n\" is preferred over the correct \"t\u02b0 \u026a n\" because of the rule ti\u2192\u0283. We build 97 IPA Help models, but have test data for only 91; some languages, like Mayan, do not have any Wiktionary entries.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "IPA Help models", |
| "sec_num": "8.1" |
| }, |
| { |
| "text": "As shown in Table 6 , these rule-based models do not perform very well, suffering especially from a high percentage of skipped words. This is because IPA Help tables explain phonemes' relationships to graphemes, rather than vice versa. Thus, the English letter x is omitted, since its composite phonemes are better explained by other letters.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 19, |
| "text": "Table 6", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "IPA Help models", |
| "sec_num": "8.1" |
| }, |
| { |
| "text": "We next build models for the 85 languages in our Wiktionary train data set, using the wFST-based Phonetisaurus (Novak et al., 2011) and MITLM (Hsu and Glass, 2008) , as described by Novak et al (2012) . We use a maximum of 10k pairs of training data, a 7-gram language model, and 50 iterations of EM.", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 130, |
| "text": "(Novak et al., 2011)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 141, |
| "end": 162, |
| "text": "(Hsu and Glass, 2008)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 181, |
| "end": 199, |
| "text": "Novak et al (2012)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Wiktionary-trained models", |
| "sec_num": "8.2" |
| }, |
| { |
| "text": "These data-driven models considerably outperform the IPA Help models, achieving an average WER of 44.69 and PER of 15.06 across all 85 languages. Restricting training to languages with 2.5k or more examples improves these to a WER of 28.02 and PER of 7.20, but yields models for only 29 languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Wiktionary-trained models", |
| "sec_num": "8.2" |
| }, |
| { |
| "text": "However, in some languages good results are obtained with very limited data; Figure 2 shows the varying quality across languages and data availability.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 77, |
| "end": 85, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Wiktionary-trained models", |
| "sec_num": "8.2" |
| }, |
| { |
| "text": "We also use our rule-based IPA Help tables to improve Wiktionary model performance. We do this very simply: we prepend IPA Help rules such as the German sch\u2192\u0283 to the Wiktionary training data as word-pronunciation pairs, then run the Phonetisaurus pipeline.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unioned models", |
| "sec_num": "8.3" |
| }, |
| { |
| "text": "Overall, the unioned g2p models outperform both the IPA Help and Wiktionary models; however, as shown in Table 7 (WER scores for Bengali, Tagalog, Turkish, and German models), unioned models with IPA Help rules tend to perform better than Wiktionary-only models, but not consistently.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 105, |
| "end": 112, |
| "text": "Table 7", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Unioned models", |
| "sec_num": "8.3" |
| }, |
| { |
| "text": "Having created a set of high-resource models and our phon2phon and lang2lang metrics, we now explore different methods for adapting highresource models and data for related low-resource languages. For comparable results, we restrict the set of high-resource languages to those covered by both our IPA Help and Wiktionary data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adapted g2p Models", |
| "sec_num": "9" |
| }, |
| { |
| "text": "The simplest experiment is to run our g2p models on related low-resource languages, without adaptation. For each language l in our test set, we use the averaged lang2lang metric to determine the most closely related high-resource languages h 1,2,... that have both IPA Help and Wiktionary data and use the same script (excluding l itself). For IPA Help models, we choose the 3 most related languages h 1,2,3 and build a g2p model from their combined g-p rules. For Wiktionary and unioned models, we compile 5k words from the closest languages h 1,2,... such that each h contributes no more than one third of the data (adding IPA Help rules for unioned models), and train a model from the combined data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "No mapping", |
| "sec_num": "9.1" |
| }, |
| { |
| "text": "For each test word-pronunciation pair, we trivially map the word's letters to the characters used in h 1,2,... by removing accents where necessary; we then use the high-resource g2p model to produce a pronunciation for the word. For example, our Czech IPA Help model is built from g-p rules from Serbo-Croatian, Polish, and Slovenian; the Wiktionary and unioned models use data and rules from these languages and Latin as well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "No mapping", |
| "sec_num": "9.1" |
| }, |
| { |
| "text": "This expands 56 g2p models (the languages covered by both IPA Help and Wiktionary models) to models for 211 languages. However, as shown in Table 8, results are very poor, with a very high WER of 92% using the unioned models and a PER of more than 50%. Interestingly, IPA Help models perform better than the unioned models, but this is primarily due to their high skip rate.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 140, |
| "end": 147, |
| "text": "Table 8", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "No mapping", |
| "sec_num": "9.1" |
| }, |
| { |
| "text": "We next attempt to improve these results by creating a wFST that maps phonemes from the inventories of h 1,2... to l (as described in Section 5.3). As shown in Figure 1a, by chaining this wFST to h 1,2... 's g2p model, we map the g2p model's output phonemes to the phonemes used by l. In each base model type, this process considerably improves accuracy over the no mapping approach; however, the IPA Help skip rate increases (Table 8).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 160, |
| "end": 169, |
| "text": "Figure 1a", |
| "ref_id": null |
| }, |
| { |
| "start": 426, |
| "end": 434, |
| "text": "(Table 8", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Output mapping", |
| "sec_num": "9.2" |
| }, |
| { |
| "text": "We now build g2p models for l by creating synthetic data for the Wiktionary and unioned models, as in Figure 1b. After compiling word-pronunciation pairs and IPA Help g-p rules from closest languages h 1,2,... , we then map the pronunciations to l and use the new pronunciations as training data. We again create unioned models by adding the related languages' IPA Help rules to the training data. This method performs slightly worse in accuracy than output mapping, a WER of 87%, but has a much lower skip rate of 7%. Table 9: Sample words, gold pronunciations, and hypothesis pronunciations for English, Egyptian Arabic, Afrikaans, Yakut, Kannada, and Gujarati.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 102, |
| "end": 111, |
| "text": "Figure 1b", |
| "ref_id": null |
| }, |
| { |
| "start": 519, |
| "end": 526, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training data mapping", |
| "sec_num": "9.3" |
| }, |
| { |
| "text": "Adaptation methods thus far have required that h and l share a script. However, this excludes languages with related scripts, like Hindi and Bengali. We replicate our data mapping experiment, but now allow related languages h 1,2,... with different scripts from l, provided the script distance is less than 0.2. We then build a simple \"rescripting\" table based on matching Unicode character names; we can then map not only h's pronunciations to l's phoneme set, but also h's words to l's script.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rescripting", |
| "sec_num": "9.4" |
| }, |
| { |
| "text": "Although performance is relatively poor, rescripting adds 10 new languages, including Telugu, Gujarati, and Marwari. Table 8 shows evaluation metrics for all adaptation methods. We also show results using all 85 Wiktionary models (using unioned where IPA Help is available) and rescripting, which increases the total number of languages to 229. Table 9 provides examples of output with different languages.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 117, |
| "end": 124, |
| "text": "Table 8", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 345, |
| "end": 352, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Rescripting", |
| "sec_num": "9.4" |
| }, |
| { |
| "text": "In general, mapping combined with IPA Help rules in unioned models provides the best results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "9.5" |
| }, |
| { |
| "text": "Training data mapping achieves scores similar to output mapping's, as well as a lower skip rate. Word skipping remains problematic, but could be reduced by collecting g-p rules for the low-resource language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "9.5" |
| }, |
| { |
| "text": "Although the adapted g2p models make many individual phonetic errors, they nevertheless capture overall pronunciation conventions, without requiring language-specific data or rules. Specific points of failure include rules that do not exist in related languages (e.g., the silent \"e\" at the end of \"fuse\" and the conversion of \"d\u032a \u0283\" to \"\u0261\" in Egyptian Arabic), mistakes in phoneme mapping, and overall \"pronounceability\" of the output.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "9.5" |
| }, |
| { |
| "text": "Although our adaptation strategies are flexible, several limitations prevent us from building a g2p model for every language. If there is not enough information about a language, our lang2lang table cannot provide related high-resource languages. Additionally, if the language's script is not closely related to another language's and thus cannot be rescripted (as with Thai and Armenian), we are not able to adapt related g2p data or models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Limitations", |
| "sec_num": "9.6" |
| }, |
| { |
| "text": "Using a large multilingual pronunciation dictionary from Wiktionary and rule tables from Wikipedia, we build high-resource g2p models and show that adding g-p rules as training data can improve g2p performance. We then leverage lang2lang distance metrics and phon2phon phoneme distances to adapt g2p resources from high-resource languages to 229 related low-resource languages. Our experiments show that adapting training data for low-resource languages outperforms adapting output. To our knowledge, these are the most broadly multilingual g2p experiments to date.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "10" |
| }, |
| { |
| "text": "With this publication, we release a number of resources to the NLP community: a large multilingual Wiktionary pronunciation dictionary, scraped Wikipedia IPA Help tables, compiled named entity resources (including a multilingual gazetteer), and our phon2phon and lang2lang distance tables. 4 Future directions for this work include further improving the number and quality of g2p models, as well as performing external evaluations of the models in speech-and text-processing tasks. We plan to use the presented data and methods for other areas of multilingual natural language processing.", |
| "cite_spans": [ |
| { |
| "start": 290, |
| "end": 291, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "10" |
| }, |
| { |
| "text": "We have been unable to obtain this dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://en.wikipedia.org/wiki/Category: International_Phonetic_Alphabet_help", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Instructions for obtaining this data are available at the authors' websites. John Everett-Heath. 2014. The Concise Dictionary of World Place-Names. Oxford University Press, 2nd edition.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank the anonymous reviewers for their helpful comments, as well as our colleagues Marjan Ghazvininejad, Jonathan May, Nima Pourdamghani, Xing Shi, and Ashish Vaswani for their advice. We would also like to thank Deniz Yuret for his invaluable help with data collection. This work was supported in part by DARPA (HR0011-15-C-0115) and ARL/ARO (W911NF-10-1-0533). Computation for the work described in this paper was supported by the University of Southern California's Center for High-Performance Computing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": "11" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Report of NEWS 2015 machine transliteration shared task", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Rafael", |
| "suffix": "" |
| }, |
| { |
| "first": "Min", |
| "middle": [], |
| "last": "Banchs", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiangyu", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Haizhou", |
| "middle": [], |
| "last": "Duan", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kumaran", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proc. NEWS Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rafael E Banchs, Min Zhang, Xiangyu Duan, Haizhou Li, and A Kumaran. 2015. Report of NEWS 2015 machine transliteration shared task. In Proc. NEWS Workshop.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Iterative language model estimation: efficient data structure & algorithms", |
| "authors": [ |
| { |
| "first": "Bo-June Paul", |
| "middle": [], |
| "last": "Hsu", |
| "suffix": "" |
| }, |
| { |
| "first": "James R", |
| "middle": [], |
| "last": "Glass", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proc. Interspeech", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bo-June Paul Hsu and James R Glass. 2008. Iterative language model estimation: efficient data structure & algorithms. In Proc. Interspeech.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Transliterating from all languages", |
| "authors": [ |
| { |
| "first": "Ann", |
| "middle": [], |
| "last": "Irvine", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Klementiev", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. AMTA", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ann Irvine, Chris Callison-Burch, and Alexandre Kle- mentiev. 2010. Transliterating from all languages. In Proc. AMTA.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Name extraction and translation for distillation. Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation", |
| "authors": [ |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| }, |
| { |
| "first": "Dayne", |
| "middle": [], |
| "last": "Freitag", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthias", |
| "middle": [], |
| "last": "Blume", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shahram", |
| "middle": [], |
| "last": "Khadivi", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Zens", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Heng Ji, Ralph Grishman, Dayne Freitag, Matthias Blume, John Wang, Shahram Khadivi, Richard Zens, and Hermann Ney. 2009. Name extraction and translation for distillation. Handbook of Natu- ral Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Europarl: A parallel corpus for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. MT Summit", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. MT Summit.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Ethnologue: Languages of the world. SIL international", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Gary", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [ |
| "D" |
| ], |
| "last": "Simons", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Fennig", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M Paul Lewis, Gary F Simons, and Charles D Fennig. 2009. Ethnologue: Languages of the world. SIL international, Dallas.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Unsupervised language-independent name translation mining from Wikipedia infoboxes", |
| "authors": [ |
| { |
| "first": "Wen-Pin", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Snover", |
| "suffix": "" |
| }, |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proc. Workshop on Unsupervised Learning in NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wen-Pin Lin, Matthew Snover, and Heng Ji. 2011. Un- supervised language-independent name translation mining from Wikipedia infoboxes. In Proc. Work- shop on Unsupervised Learning in NLP.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "URIEL. Pittsburgh: Carnegie Mellon University", |
| "authors": [ |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Littell", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mortensen", |
| "suffix": "" |
| }, |
| { |
| "first": "Lori", |
| "middle": [], |
| "last": "Levin", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "2016--2019", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patrick Littell, David Mortensen, and Lori Levin. 2016. URIEL. Pittsburgh: Carnegie Mellon Uni- versity. http://www.cs.cmu.edu/~dmortens/ uriel.html. Accessed: 2016-03-19.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Phonetisaurus: A WFST-driven phoneticizer", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Josef R Novak", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Minematsu", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hirose", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Josef R Novak, D Yang, N Minematsu, and K Hirose. 2011. Phonetisaurus: A WFST-driven phoneticizer. The University of Tokyo, Tokyo Institute of Technol- ogy.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "WFST-based grapheme-to-phoneme conversion: open source tools for alignment, modelbuilding and decoding", |
| "authors": [ |
| { |
| "first": "Nobuaki", |
| "middle": [], |
| "last": "Josef R Novak", |
| "suffix": "" |
| }, |
| { |
| "first": "Keikichi", |
| "middle": [], |
| "last": "Minematsu", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hirose", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proc. International Workshop on Finite State Methods and Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Josef R Novak, Nobuaki Minematsu, and Keikichi Hi- rose. 2012. WFST-based grapheme-to-phoneme conversion: open source tools for alignment, model- building and decoding. In Proc. International Work- shop on Finite State Methods and Natural Language Processing.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A computational approach to the automation of creative naming", |
| "authors": [ |
| { |
| "first": "G\u00f6zde", |
| "middle": [], |
| "last": "\u00d6zbal", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlo", |
| "middle": [], |
| "last": "Strapparava", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proc. ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G\u00f6zde \u00d6zbal and Carlo Strapparava. 2012. A compu- tational approach to the automation of creative nam- ing. In Proc. ACL.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Wiktionary as a source for automatic pronunciation extraction", |
| "authors": [ |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Schlippe", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Ochs", |
| "suffix": "" |
| }, |
| { |
| "first": "Tanja", |
| "middle": [], |
| "last": "Schultz", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. Interspeech", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tim Schlippe, Sebastian Ochs, and Tanja Schultz. 2010. Wiktionary as a source for automatic pronun- ciation extraction. In Proc. Interspeech.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Grapheme-to-phoneme model generation for Indo-European languages", |
| "authors": [ |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Schlippe", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Ochs", |
| "suffix": "" |
| }, |
| { |
| "first": "Tanja", |
| "middle": [], |
| "last": "Schultz", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proc. ICASSP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tim Schlippe, Sebastian Ochs, and Tanja Schultz. 2012a. Grapheme-to-phoneme model generation for Indo-European languages. In Proc. ICASSP.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Automatic error recovery for pronunciation dictionaries", |
| "authors": [ |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Schlippe", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Ochs", |
| "suffix": "" |
| }, |
| { |
| "first": "Ngoc", |
| "middle": [ |
| "Thang" |
| ], |
| "last": "Vu", |
| "suffix": "" |
| }, |
| { |
| "first": "Tanja", |
| "middle": [], |
| "last": "Schultz", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proc. Interspeech", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tim Schlippe, Sebastian Ochs, Ngoc Thang Vu, and Tanja Schultz. 2012b. Automatic error recovery for pronunciation dictionaries. In Proc. Interspeech.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "GlobalPhone: A multilingual text & speech database in 20 languages", |
| "authors": [ |
| { |
| "first": "Tanja", |
| "middle": [], |
| "last": "Schultz", |
| "suffix": "" |
| }, |
| { |
| "first": "Ngoc", |
| "middle": [ |
| "Thang" |
| ], |
| "last": "Vu", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Schlippe", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proc. ICASSP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tanja Schultz, Ngoc Thang Vu, and Tim Schlippe. 2013. GlobalPhone: A multilingual text & speech database in 20 languages. In Proc. ICASSP.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Rapid language adaptation tools and technologies for multilingual speech processing systems", |
| "authors": [ |
| { |
| "first": "Tanja", |
| "middle": [], |
| "last": "Schultz", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proc. IEEE Workshop on Automatic Speech Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tanja Schultz. 2009. Rapid language adaptation tools and technologies for multilingual speech processing systems. In Proc. IEEE Workshop on Automatic Speech Recognition.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Word segmentation and pronunciation extraction from phoneme sequences through cross-lingual word-to-phoneme alignment", |
| "authors": [ |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Stahlberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Schlippe", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| }, |
| { |
| "first": "Tanja", |
| "middle": [], |
| "last": "Schultz", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Computer Speech & Language", |
| "volume": "35", |
| "issue": "", |
| "pages": "234--261", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Stahlberg, Tim Schlippe, Stephan Vogel, and Tanja Schultz. 2016. Word segmentation and pronunciation extraction from phoneme sequences through cross-lingual word-to-phoneme alignment. Computer Speech & Language, 35:234 -261.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "JRC-Names: A freely available, highly multilingual named entity resource", |
| "authors": [ |
| { |
| "first": "Ralf", |
| "middle": [], |
| "last": "Steinberger", |
| "suffix": "" |
| }, |
| { |
| "first": "Bruno", |
| "middle": [], |
| "last": "Pouliquen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mijail", |
| "middle": [], |
| "last": "Kabadjov", |
| "suffix": "" |
| }, |
| { |
| "first": "Erik", |
| "middle": [], |
| "last": "Van Der Goot", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proc. Recent Advances in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ralf Steinberger, Bruno Pouliquen, Mijail Kabadjov, and Erik Van der Goot. 2011. JRC-Names: A freely available, highly multilingual named entity resource. In Proc. Recent Advances in Natural Language Pro- cessing.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Geonames ontology", |
| "authors": [ |
| { |
| "first": "Bernard", |
| "middle": [], |
| "last": "Vatant", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Wick", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernard Vatant and Marc Wick. 2006. Geonames on- tology. Online at http://www.geonames.org/ontology.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Multilingual multilayer perceptron for rapid language adaptation between and across language families", |
| "authors": [ |
| { |
| "first": "Ngoc", |
| "middle": [ |
| "Thang" |
| ], |
| "last": "Vu", |
| "suffix": "" |
| }, |
| { |
| "first": "Tanja", |
| "middle": [], |
| "last": "Schultz", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proc. Interspeech", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ngoc Thang Vu and Tanja Schultz. 2013. Multilingual multilayer perceptron for rapid language adaptation between and across language families. In Proc. In- terspeech.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Multilingual deep neural network based acoustic modeling for rapid language adaptation", |
| "authors": [ |
| { |
| "first": "Ngoc", |
| "middle": [ |
| "Thang" |
| ], |
| "last": "Vu", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Imseng", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Povey", |
| "suffix": "" |
| }, |
| { |
| "first": "Petr", |
| "middle": [ |
| "Motlicek" |
| ], |
| "last": "Motlicek", |
| "suffix": "" |
| }, |
| { |
| "first": "Tanja", |
| "middle": [], |
| "last": "Schultz", |
| "suffix": "" |
| }, |
| { |
| "first": "Herv\u00e9", |
| "middle": [], |
| "last": "Bourlard", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proc. ICASSP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ngoc Thang Vu, David Imseng, Daniel Povey, Petr Motlicek Motlicek, Tanja Schultz, and Herv\u00e9 Bourlard. 2014. Multilingual deep neural network based acoustic modeling for rapid language adapta- tion. In Proc. ICASSP.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Joint generation of transliterations from multiple representations", |
| "authors": [ |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Grzegorz", |
| "middle": [], |
| "last": "Kondrak", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proc. NAACL HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lei Yao and Grzegorz Kondrak. 2015. Joint generation of transliterations from multiple representations. In Proc. NAACL HLT.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Figure 1: Strategies for adapting existing language resources through output mapping (a) and training data mapping (b)." |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Training data size vs. PER for 85 models trained from Wiktionary. Labeled languages: English (eng), Serbo-Croatian (hbs), Russian (rus), Tagalog (tgl), and Chinese macrolanguage (zho)." |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>", |
| "text": "Summary of data resources obtained from Phoible, named entity resources, Wikipedia IPA Help tables, and Wiktionary. Note that, although our Wiktionary data technically covers over 500 languages, fewer than 100 include more than 250 entries (Wiktionary train).", |
| "html": null |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Data: all phonemes P , scraped phoneme set</td></tr><tr><td>S, language inventory T</td></tr><tr><td>Result: Mapping table M</td></tr><tr><td>initialize empty table M ;</td></tr><tr><td>for p s in S do</td></tr><tr><td>if p s / \u2208 P and ASCII(p s ) \u2208 P then</td></tr><tr><td>p s</td></tr></table>", |
| "text": "Examples of scraped and cleaned Wiktionary pronunciation data in Czech, Pashto, Kannada, Armenian, and Ukrainian.", |
| "html": null |
| }, |
| "TABREF5": { |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>", |
| "text": "Closest languages with Wikipedia versions, based on lang2lang averaged metrics, phonetic inventory distance, and script distance.", |
| "html": null |
| }, |
| "TABREF6": { |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>, the effects vary across</td></tr><tr><td>different languages. It is unclear what effect lan-</td></tr><tr><td>guage characteristics, quality of IPA Help rules,</td></tr><tr><td>and training data size have on unioned model im-</td></tr><tr><td>provement.</td></tr></table>", |
| "text": "", |
| "html": null |
| }, |
| "TABREF7": { |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>lang</td><td>ben</td><td>tgl</td><td>tur</td><td>deu</td></tr><tr><td># train</td><td>114</td><td colspan=\"3\">126 2.5k 10k</td></tr><tr><td colspan=\"5\">ipa-help 100.0 64.8 69.0 40.2</td></tr><tr><td>wikt</td><td colspan=\"4\">85.6 34.2 39.0 32.5</td></tr><tr><td colspan=\"5\">unioned 66.2 36.2 39.0 24.5</td></tr></table>", |
| "text": "Results for high-resource models. The top portion of the table shows results for all models; the bottom shows results only for languages with both IPA Help and Wiktionary models.", |
| "html": null |
| }, |
| "TABREF9": { |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>lang eng arz afr training mapping method no mapping output mapping sah training mapping kan rescripted guj rescripted</td><td>base model ipa-help unioned unioned unioned unioned unioned</td><td>rel langs deu, nld, swe fas, urd nld, lat, isl rus, bul, ukr hin, ben san, ben, hin \u0a97\u0ab3\u0ae0\u0acb\u0a8f\u0abf\u0ab6\u0a86 k \u027e o e \u00e7 \u026a a k \u027e \u00f5\u02d0 \u0259 \u0282 \u026a a word gold hyp fuse f j u\u02d0 z f \u028f s \u025b \u202b\u062c\u0648\u202c \u202b\u0628\u0627\u0646\u202c b ae\u02d0 n\u032a \u0261 u\u02d0 b a n\u032a d\u032a \u0283 u\u02d0 dood d \u0254 t d u\u02d0 t \u0445\u0430\u0442\u044b\u0440\u044b\u043a k a t \u032a \u026f r\u032a \u026f k k a t \u032a i r\u032a i k \u0ca6\u0cc1\u0cb7 d\u032a u \u0282 \u0288\u02b0 a d\u032a \u0324 u\u02d0 \u0282 \u0288\u02b0</td></tr></table>", |
| "text": "Results for adapted g2p models. Final adapted results (using the 85 languages covered by Wiktionary and unioned high-resource models, as well as rescripting) cover 229 languages.", |
| "html": null |
| } |
| } |
| } |
| } |