{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:11:51.016232Z" }, "title": "G i 2P i : Rule-based, index-preserving grapheme-to-phoneme transformations Aidan Pine 1 aidan.pine", "authors": [ { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Research Council", "location": {} }, "email": "" }, { "first": "Eric", "middle": [], "last": "Joanis", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Research Council", "location": {} }, "email": "" }, { "first": "David", "middle": [], "last": "Huggins-Daines", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Christopher", "middle": [], "last": "Cox", "suffix": "", "affiliation": { "laboratory": "Carleton University\u037e @carleton.ca 4 Wiichihitotaak ILR Inc. 5 University College Dublin", "institution": "", "location": {} }, "email": "" }, { "first": "Eddie", "middle": [ "Antonio" ], "last": "Santos", "suffix": "", "affiliation": {}, "email": "eddie.santos@ucdconnect.ie" }, { "first": "Delasie", "middle": [], "last": "Torkornoo", "suffix": "", "affiliation": { "laboratory": "Carleton University\u037e @carleton.ca 4 Wiichihitotaak ILR Inc. 5 University College Dublin", "institution": "", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes the motivation and implementation details for a rule-based, indexpreserving grapheme-to-phoneme engine 'G i 2P i ' implemented in pure Python and released under the open source MIT license 8. The engine and interface have been designed to prioritize the developer experience of potential contributors without requiring a high level of programming knowledge. 
G i 2P i already provides mappings for 30 (mostly Indigenous) languages, and the package is accompanied by a web-based interactive development environment, a RESTful API, and extensive documentation to encourage the addition of more mappings in the future. We also present three downstream applications of G i 2P i and show results of a preliminary evaluation.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "This paper describes the motivation and implementation details for a rule-based, indexpreserving grapheme-to-phoneme engine 'G i 2P i ' implemented in pure Python and released under the open source MIT license 8. The engine and interface have been designed to prioritize the developer experience of potential contributors without requiring a high level of programming knowledge. G i 2P i already provides mappings for 30 (mostly Indigenous) languages, and the package is accompanied by a web-based interactive development environment, a RESTful API, and extensive documentation to encourage the addition of more mappings in the future. We also present three downstream applications of G i 2P i and show results of a preliminary evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "G i 2P i is a library 9 for grapheme-to-phoneme and orthographic transformation, with a particular focus on the needs of digital humanities projects. While libraries for general-purpose G2P exist, we found that our downstream projects had special needs that existing libraries did not entirely meet. In particular, 1. 
Subject-matter experts for these languages are often teachers or linguists without a background in computer science, who are unfamiliar with the conventions of programming and need more intuitive interfaces to convert their knowledge into executable code ( \u00a72.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and motivation", "sec_num": "1" }, { "text": "2. Most existing libraries operate on unstructured text, under the assumption that the original document will be discarded after its linguistic information is extracted. Our downstream use-cases, however, often involve the augmentation of the original document with downstream results (e.g., with pronunciations, alternative orthographies, or time-aligned highlighting). We need to be able to trace results backward to their original counterparts (e.g. by using indices as shown in Figure 1) , maintaining this information through every step of transduction, so that markup, IDs, punctuation, and other features of the original document can be preserved ( \u00a72.2). Figure 1 : Screenshot from G2P Studio of interactive visualization of the indices preserved when composing transductions of the French word \"deux\", between the orthographic form, a phonetic representation in the International Phonetic Alphabet (IPA), the closest English phonemes according to PanPhon (Mortensen et al., 2016) , and finally to English ARPABET (see \u00a72.3).", "cite_spans": [ { "start": 481, "end": 490, "text": "Figure 1)", "ref_id": null }, { "start": 954, "end": 978, "text": "(Mortensen et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and motivation", "sec_num": "1" }, { "text": "3. 
Software packages for linguistic transformation, and their dependencies, can be difficult to compile and install, or cannot be installed on all operating systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and motivation", "sec_num": "1" }, { "text": "The need for such specialized knowledge presents a bottleneck in the development of G2P engines, particularly when we venture away from the NLP space and into the digital humanities space; experts of a particular language's sound patterns should not necessarily need experience in programming and compiling software in order to translate their knowledge of the language into a machine-readable format.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and motivation", "sec_num": "1" }, { "text": "Meanwhile, however, the languages that we are concentrating on (in particular, Indigenous languages spoken in Canada) do not typically have extensive, publicly-available corpora of parallel orthographic and phonetic renderings, from which we could learn a weighted FST (Novak et al., 2016; Deri and Knight, 2016) or neural model (Rao et al., 2015; Peters et al., 2017). For most of these languages, rule-based approaches based on expert knowledge will be the norm for the foreseeable future. 
Fortunately, these are mostly languages with regular, linguistically-informed orthographies, such that rule-based approaches are adequate.", "cite_spans": [ { "start": 269, "end": 298, "text": "(Novak et al., 2016\u037e Deri and", "ref_id": null }, { "start": 299, "end": 312, "text": "Knight, 2016)", "ref_id": "BIBREF0" }, { "start": 329, "end": 367, "text": "(Rao et al., 2015\u037e Peters et al., 2017", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and motivation", "sec_num": "1" }, { "text": "In broad strokes, our library is most similar to Epitran (Mortensen et al., 2018), which shares some of these design decisions; it prioritizes ease of installation and adopts a method for defining rule-based G2P mappings inspired by phonological re-write rule syntax that would be familiar to linguists. Our work differs by allowing rules to be written in a spreadsheet format ( \u00a72.1), by having a core engine that preserves the indices between inputs and output transductions ( \u00a72.2), and by providing a bundled web interface for writing and running G2P mappings ( \u00a72.4) with an accompanying RESTful API ( \u00a72.6.1).", "cite_spans": [ { "start": 57, "end": 81, "text": "(Mortensen et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and motivation", "sec_num": "1" }, { "text": "This section briefly describes the process for writing rules ( \u00a72.1), the motivation and implementation details for preserving indices between inputs and outputs ( \u00a72.2), the automatic generation of cross-linguistic phoneme-to-phoneme mappings ( \u00a72.3), the 'G2P Studio' development environment ( \u00a72.4), a list of currently supported languages ( \u00a72.5), documentation information ( \u00a72.6), and a description of various applications ( \u00a72.7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "G i 2P i", "sec_num": "2" }, { "text": "Rules are written in either a tabular, 
spreadsheet format or in JSON (see Figure 2). The core functionality of G i 2P i is expressible in the spreadsheet format (CSV), while the JSON format allows for more functionality. Each mapping is also accompanied by a configuration file written in YAML. In its most basic form, a rule just has an input and an output, like in Figure 2.", "cite_spans": [], "ref_spans": [ { "start": 74, "end": 83, "text": "Figure 2)", "ref_id": "FIGREF2" }, { "start": 368, "end": 376, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Writing Rules", "sec_num": "2.1" }, { "text": "Figure 2 : (a) a minimal CSV rule, 'a,b'; (b) the same minimal rule in JSON format, { \"in\": \"a\", \"out\": \"b\" }. Context-sensitive rules can also be written, which conditionally apply based on whether a pattern is matched before or after the input, as shown in Figure 3. { \"in\": \"a\", \"out\": \"b\", \"context_before\": \"b\", \"context_after\": \"c\" } Figure 3 : A minimal context-sensitive rule in JSON format for converting 'a' to 'b' only when 'a' is preceded by 'b' and followed by 'c'. The equivalent rule written in the CSV format is 'a,b,b,c'.", "cite_spans": [], "ref_spans": [ { "start": 200, "end": 208, "text": "Figure 3", "ref_id": null }, { "start": 283, "end": 291, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "a,b", "sec_num": null }, { "text": "Under the hood, these rules are compiled into regular expressions, where the input is the pattern to match, and the 'context before' and 'context after' values are turned into positive lookbehinds and lookaheads respectively. Lookbehinds are first converted to be fixed-width, and several other preprocessing steps are applied before constructing the regular expression. 
Namely, any explicit indices ( \u00a72.2) are removed, optional case-insensitivity flags are applied, Unicode normalization (NFC or NFD) is done, and special characters can be escaped.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "a,b", "sec_num": null }, { "text": "A collection of rules with a configuration constitutes a 'mapping', which can then be run either in the sequence the rules are defined or in an automatically generated order that runs the rules in reverse order of input length. This mode is intended to help prevent particular rule 'bleeding' relationships: if the input to a hypothetical rule r 1 is a substring of the input to rule r 2 and is ordered to apply first, it will remove, or 'bleed', the context for r 2 to apply, erroneously preventing the application of r 2 , as shown in example (2) in Figure 4 .", "cite_spans": [], "ref_spans": [ { "start": 186, "end": 194, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "a,b", "sec_num": null }, { "text": "(1) baata r 2 baeta r 1 baet@ baet@ (2) baata r 1 b@@t@ r 2 b@@t@ Figure 4 : Example of rule ordering relationships in a made-up language. r 1 is the rule a \u2192 @, and r 2 is the rule aa \u2192 ae. In this made-up language, 'baet@' is the correct transduced form of 'baata', and therefore we want to order rule r 2 before r 1 , as shown in (1), so that r 1 does not bleed the context for r 2 to apply, as shown in (2).", "cite_spans": [], "ref_spans": [ { "start": 66, "end": 74, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "a,b", "sec_num": null }, { "text": "Another type of rule interaction that can be avoided is a feeding relationship between rules, i.e., where the output of one rule creates the context for another rule to apply. 
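To make the feeding pattern concrete, here is a minimal sketch in plain Python (invented toy rules applied naively with string replacement; this is not the library's actual rule engine):

```python
# Minimal sketch of a feeding interaction between two toy rules.
rules = [('a', 'b'), ('b', 'c')]  # rule 1: a -> b, rule 2: b -> c

def apply_rules(text, rules):
    # Apply each rule everywhere in the string, in order.
    for src, tgt in rules:
        text = text.replace(src, tgt)
    return text

# Rule 1 feeds rule 2: the 'b' produced by rule 1 matches rule 2's input,
# so an original 'a' surfaces as 'c' rather than 'b'.
print(apply_rules('a', rules))  # -> c
```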
In some situations this is desired, but it can also create unintended interactions between rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preventing Feeding Relationships", "sec_num": "2.1.1" }, { "text": "To handle this, we allow prevent_feeding to be declared either for an individual rule in a JSON-formatted mapping or for each rule in a mapping. When prevent_feeding is set to true, the output of a rule is replaced with a character from the Supplementary Private Use Area-A Unicode block, offset by the index of the rule in a given mapping. Thus, these placeholder characters will never match the input or context of other rules. After applying all rules in a mapping, these intermediate representations are transformed back into the appropriate values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preventing Feeding Relationships", "sec_num": "2.1.1" }, { "text": "In practice, many real-world transduction tasks comprise a sequence of simpler transductions. For example, a G2P transduction used in ReadAlong Studio ( \u00a72.7.3) might start with converting a font-encoded orthography into a Unicode-compliant form, then replacing confusable characters, converting the orthographic form into IPA, mapping those characters onto their closest English equivalents, and finally mapping the English IPA characters into the ARPABET alphabet used by the acoustic model. G i 2P i is built with this in mind; an arbitrary number of transducers can be combined, and chains of transducers can be inferred automatically. If the user requests a mapping from one language code 10 to another, e.g., from alq (Algonquin) to eng-arpabet, and that particular mapping does not exist, the software can search for the shortest possible chain of transducers with those endpoints, and it will act as if it were an ordinary transducer (including maintaining indices between the ultimate inputs and outputs, and all intermediate forms, as seen in Fig. 1 ). 
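The shortest-chain search can be sketched as a breadth-first search over mapping endpoints. This is a simplified illustration with an invented helper name, not the library's actual implementation (the intermediate codes shown are plausible but assumed):

```python
from collections import deque

# Simplified sketch of finding the shortest chain of mappings between
# two language codes, treating mappings as directed edges in a graph.
def shortest_chain(mappings, start, goal):
    graph = {}
    for src, tgt in mappings:  # mappings: iterable of (in_code, out_code)
        graph.setdefault(src, []).append(tgt)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path  # first path found by BFS is a shortest one
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain with these endpoints

chain = shortest_chain(
    [('alq', 'alq-ipa'), ('alq-ipa', 'eng-ipa'), ('eng-ipa', 'eng-arpabet')],
    'alq', 'eng-arpabet')
print(chain)  # -> ['alq', 'alq-ipa', 'eng-ipa', 'eng-arpabet']
```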
", "cite_spans": [], "ref_spans": [ { "start": 1052, "end": 1058, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Composite Transducers", "sec_num": "2.1.2" }, { "text": "G i 2P i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composite Transducers", "sec_num": "2.1.2" }, { "text": "This modularity is intended, in part, to help subject-matter experts contribute their domain knowledge (e.g., the pronunciation of their language's orthography) without having to understand the other specialized components of the transduction pipeline (e.g., confusable Unicode characters or ARPABET), or even the structure of the pipeline as a whole. They only have to contribute their particular piece\u037e G i 2P i can compose the pipeline as a whole, and even auto-generate certain kinds of missing pieces ( \u00a72.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composite Transducers", "sec_num": "2.1.2" }, { "text": "Debugging transductions can be a difficult task when there are multiple mappings involved, each with possibly dozens of rules. In order to help ease the burden on developers, multiple debugging tools have been developed to assist contributors. In the G2P Studio ( \u00a72.4), there is an automatic visualization mode which allows users to visualize transductions in an interactive way\u037e Figure 1 is a screenshot of this visualization.", "cite_spans": [], "ref_spans": [ { "start": 381, "end": 389, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Debugging", "sec_num": "2.1.3" }, { "text": "There are also two alternative options for de-bugging\u037e the --debugger flag used in either the CLI or REST API shows each transduction applied in sequence along with any intermediate steps ( \u00a72.1.1). 
Additionally, there is a g2p doctor command in the CLI that checks a specific mapping for a list of common errors, such as the declaration of IPA characters not recognized by PanPhon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Debugging", "sec_num": "2.1.3" }, { "text": "The concerns in ( \u00a71) are not as pressing when considering a G2P transformation used to create artifacts such as training data for a speech recognition model. There, once the necessary information has been extracted from the document, features in the original document like punctuation can be ignored, as only the transformed version is used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preserving Indices", "sec_num": "2.2" }, { "text": "However, consider how the project needs differ when force-aligning a storybook with an accompanying recording, such that a beginner reader can see words highlighted when they are read, click to hear words in isolation, etc. If our transformation pipeline has thrown out all non-speech features of the document on the way to the ARPABET needed by the decoder, we are left with timestamps that correspond only to a text document with an unclear relationship to the original structured data. This would be fine if the storybook were only to be used as training data, but if we want to re-associate those timestamps with the original document, we would have an additional problem of re-alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preserving Indices", "sec_num": "2.2" }, { "text": "We could potentially try to learn an alignment model between the output and the original document, but data is extremely scarce in most of our target languages, and in any case these alignments are something that the model itself could have maintained. Therefore, we designed the G i 2P i library to maintain index alignments throughout the process, even when transformations are composed. 
This is also true below the level of the word. Many of our target languages are very morphologically complex, and long words are the norm. Therefore, educational material in these languages often has subword highlighting. For example, educational material from the Onkwawenna Kentyohkwa immersion school for the Kanyen'k\u00e9ha (Mohawk) language systematically associates particular kinds of morphemes with colors. Another example is when the downstream project involves subword phenomena: a bouncing-ball sing-along video requires syllable-level alignment to get the bounce at the correct place and time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preserving Indices", "sec_num": "2.2" }, { "text": "However, in both of these examples, the unit of transformation is still the word; the word is the domain over which most phonological transformations apply. Splitting the original word into subword units before processing will not necessarily produce correct results, as this would introduce new \"word\" boundaries. Therefore, we must also keep and compose indices below the level of the word, so that even when transforming whole words we can associate the resulting pronunciations, timestamps, etc. with subword markup in the original document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preserving Indices", "sec_num": "2.2" }, { "text": "Maintaining these indices is also useful for a debugging visualization in the bundled development environment ( \u00a72.4), making clear in composed transductions how input, intermediate, and output forms correspond to each other (Fig. 1).", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 234, "text": "(Fig. 
1)", "ref_id": null } ], "eq_spans": [], "section": "Preserving Indices", "sec_num": "2.2" }, { "text": "The default interpretation of rules is to assign indices evenly between inputs and outputs\u037e if there is a mismatch in length between inputs and outputs, excess characters are assigned the index of the last character of the shorter string, as seen in Figures 6a and 6b . However, G i 2P i also allows a more explicit syntax for defining indexing relationships between inputs and outputs: rules can be marked up with curly braces to indicate a specific indexing of characters between inputs and outputs as seen in Figure 6c .", "cite_spans": [], "ref_spans": [ { "start": 250, "end": 268, "text": "Figures 6a and 6b", "ref_id": "FIGREF4" }, { "start": 513, "end": 523, "text": "Figure 6c", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Default and Explicit Indexing", "sec_num": "2.2.1" }, { "text": "Another use of the G i 2P i library, beyond grapheme-to-phoneme transformation or orthographic transliteration, is to map the sounds of one language onto the sounds of another, for cross-linguistic comparison. This is used, for example, in ReadAlong Studio ( \u00a72.7.3) to align text and speech in an arbitrary language using only an English-language acoustic model. While these mappings can be written by hand, it is somewhat of a specialized art, typically performed by speech technology specialists. 
Therefore, the G i 2P i library also includes functionality to automatically generate phone-to-phone mappings, by leveraging the phone-to-phone distance metrics included in PanPhon (Mortensen et al., 2016) to serve as \"glue\" in a composite transducer ( \u00a72.1.2).", "cite_spans": [ { "start": 681, "end": 704, "text": "(Mortensen et al., 2016", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Phoneme-to-Phoneme Mappings", "sec_num": "2.3" }, { "text": "For composing a mapping A with a mapping B, where both the output vocabulary of A and the input vocabulary of B represent IPA characters (but not necessarily the same inventory of IPA characters), the G i 2P i library can generate a mapping in which each character in the output of A is mapped to its nearest neighbor in the input of B, according to PanPhon's calculated phone-to-phone distance between the characters' phonological feature vector representations. PanPhon allows for a variety of distance metrics between IPA characters; by default we use PanPhon's Hamming distance metric between IPA phonological feature vector representations. This allows non-specialist users to generate cross-linguistic transductions of the sort used in cross-lingual speech synthesis ( \u00a72.7.2) or ReadAlong Studio ( \u00a72.7.3), without necessarily having to be a linguist familiar with the IPA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Phoneme-to-Phoneme Mappings", "sec_num": "2.3" }, { "text": "In addition to developing rules locally as described in \u00a72.1, writing and running mappings can be performed in a web interface called 'G2P Studio'. 
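The nearest-neighbour generation just described might be sketched as follows. The feature vectors and phone sets below are invented toys for illustration; PanPhon's real feature vectors are far richer, and this is not the library's actual code:

```python
# Toy sketch of generating a phone-to-phone mapping by nearest-neighbour
# search over binary feature vectors (invented values, not PanPhon's).
toy_features = {
    'p': (0, 0, 1, 0), 'b': (1, 0, 1, 0),
    't': (0, 0, 0, 1), 'd': (1, 0, 0, 1),
}

def hamming(u, v):
    # Hamming distance: number of feature positions where u and v differ.
    return sum(a != b for a, b in zip(u, v))

def nearest(phone, inventory):
    return min(inventory, key=lambda q: hamming(toy_features[phone], toy_features[q]))

# Map each output phone of mapping A onto its closest input phone of mapping B.
a_output, b_input = ['b', 'd'], ['p', 't']
generated = {ph: nearest(ph, b_input) for ph in a_output}
print(generated)  # -> {'b': 'p', 'd': 't'}
```

In the library itself, PanPhon supplies the real feature vectors and the distance metric that play the roles of `toy_features` and `hamming` here.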
The G2P Studio is written using a vanilla JavaScript front-end with Skeleton CSS 11 and a Python backend written in Flask with low-latency, bidirectional communication handled through WebSockets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "G2P Studio", "sec_num": "2.4" }, { "text": "The G2P Studio is hosted at https://bit.ly/ g2p-studio but can also be deployed in a distributed fashion, as the lightweight server/app code is bundled in the Python package.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "G2P Studio", "sec_num": "2.4" }, { "text": "In addition to creating rules in spreadsheets or JSON files, G2P Studio includes a visual program- ming interface for authoring rules (Figure 7 ). This visual programming interface was created with Blockly 12 (Fraser, 2015\u037e Pasternak et al., 2017 .", "cite_spans": [ { "start": 209, "end": 246, "text": "(Fraser, 2015\u037e Pasternak et al., 2017", "ref_id": null } ], "ref_spans": [ { "start": 134, "end": 143, "text": "(Figure 7", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Visual Programming Rule Creator", "sec_num": "2.4.1" }, { "text": "At the time of writing, 30 languages are supported: Anishin\u00e0bemiwin (alq), Atikamekw (atj), Michif (crg), Southern & Northern East Cree (crj), Plains Cree (crk), Moose Cree (crm), Swampy Cree (csw), Western Highland Chatino (ctp), Danish (dan), French (fra), Gitksan (git), Scottish Gaelic (gla), Gwich'in (gwi), H\u00e4n (haa), Inuinnaqtun (ikt), Inuktitut (iku), Kaska (kkz), Kwak'wala (kwk), Raga (lml), Mi'kmaq (mic), Kanien'k\u00e9ha (moh), Anishinaabemowin (oji), Seneca (see), Tsuut'ina (srs), SEN\u0106O\u0166EN (str), Upper Tanana (tau), Southern Tutchone (tce), Northern Tutchone (ttm), Tagish (tgx), Tlingit (tli). 
G i 2P i is also bundled with other mappings, such as mappings from font-encoded writing systems in Heiltsuk, Tsilqot'in, and Navajo to Unicode-compliant versions, as well as a mapping from English IPA to English ARPABET.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supported Languages", "sec_num": "2.5" }, { "text": "Documentation on primary use cases and edge cases is an important part of the G i 2P i project. Without contributions to the mappings, the project will be less accessible and more difficult to maintain. Technical documentation is therefore provided through ReadTheDocs 13 as well as a 7-part blog series 14 written for a more general audience. ( Figure 8 : Screenshot of the Convertextract GUI for macOS.)", "cite_spans": [], "ref_spans": [ { "start": 293, "end": 301, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Documentation", "sec_num": "2.6" }, { "text": "The core functionality of G i 2P i is also exposed through a RESTful API. The API and its documentation are generated dynamically using Swagger 15 to provide up-to-date, interactive documentation 16 . The documentation allows users to interactively make requests to the API, see available mappings, and copy the related curl commands, along with other information like the request URLs and the response body and headers, from their requests.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RESTful API documentation", "sec_num": "2.6.1" }, { "text": "Grapheme-to-phoneme transformations are used in a wide variety of natural language processing tasks, and so G i 2P i can be used for any such use case. Below, we briefly discuss three projects that are implemented using G i 2P i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "2.7" }, { "text": "Convertextract (Pine and Turin, 2018) is a tool that performs find/replace operations on Microsoft Office documents while preserving the original formatting of the file. 
Convertextract is available for the command line and with a macOS GUI (Figure 8). Integration is fully automated between the libraries: whenever a new version of G i 2P i is released, it triggers a new build and release of convertextract, which is able to perform conversions between any of the mappings defined in G i 2P i .", "cite_spans": [ { "start": 15, "end": 37, "text": "(Pine and Turin, 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 242, "end": 252, "text": "(Figure 8)", "ref_id": null } ], "eq_spans": [], "section": "Convertextract", "sec_num": "2.7.1" }, { "text": "We have built speech synthesis models for SEN\u0106O\u0166EN, Kanyen'k\u00e9ha, and Gitksan using mappings developed with G i 2P i (Pine et al., 2022). Part of the pipeline for these models involves transforming the orthographic form of utterances to a phonetic representation. This is necessary for the pre-processing step of forced alignment, and the phonetic representation of the text is used as input to the feature prediction network in the speech synthesis pipeline. The phonetic form is represented either as one-hot encodings or as multi-hot phonological feature vectors derived from PanPhon. This is another use case for generated mappings ( \u00a72.3); the IPA symbols of a target language could be mapped onto the IPA symbols of one or more languages in a pre-trained model using G i 2P i to facilitate fine-tuning on a language that was not present in the pre-trained model. This allows for a principled, rule-based method for mapping between symbol spaces in cross-lingual speech synthesis without the use of a learned phonetic transformation network like the one described by Tu et al. 
(2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Speech Synthesis Front End", "sec_num": "2.7.2" }, { "text": "ReadAlong Studio 17 is a library for the creation of time-aligned \"read-along\" audiobooks, intended to make text/speech alignment easy for non-specialist users. It utilizes a zero-shot speech alignment paradigm in which target-language text is converted to English-language phonemes, and then force-aligned using the default English-language acoustic model from PocketSphinx (Huggins-Daines et al., 2006) .", "cite_spans": [ { "start": 375, "end": 404, "text": "(Huggins-Daines et al., 2006)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "ReadAlong Studio", "sec_num": "2.7.3" }, { "text": "By its nature, this text conversion is a composite transduction ( \u00a72.1.2) -first converting the target-language text to target-language phonemes, then converting the target-language phonemes into similar English-language phonemes ( \u00a72.3), and finally converting the English-language phonemes into the ARPABET symbols that the aligner expects as shown previously in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 365, "end": 373, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "ReadAlong Studio", "sec_num": "2.7.3" }, { "text": "While none of these steps is, in itself, difficult to specify by hand, in combination they require a relatively rare expertise: (1) understanding of a specific language's orthography, (2) understanding how sounds map to each other between languages, and (3) familiarity with the ARPABET conventions and the specific phone vocabulary of the English-language acoustic model used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ReadAlong Studio", "sec_num": "2.7.3" }, { "text": "By automating the second and third steps, and automating their composition, the G i 2P i library only requires the user to be able to do the first, putting it within reach of 
a linguistically-informed teacher or other knowledge worker. It does require knowledge of the IPA, but this is relatively widespread knowledge, and IPA-equivalence charts for many languages are easy to come by in books and online.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ReadAlong Studio", "sec_num": "2.7.3" }, { "text": "As mentioned previously, our work shares many similarities with Epitran; however, we cannot evaluate our system using the same method. Epitran leverages baseline data available in some of the languages it supports to evaluate the system indirectly using the downstream task of developing ASR systems. The word error rates (WER) of ASR systems created using letter-to-sound rules from Epitran are then compared against those created using the available baseline. The primary focus for this library is on extremely low-resource languages, and we do not possess baseline data to recreate the evaluation procedure implemented by Epitran.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "As a crude replacement, we evaluate two of our mappings by reporting the accuracy of a downstream forced alignment task using ReadAlong Studio ( \u00a72.7.3). We manually annotated data from SEN\u0106O\u0166EN and Kanyen'k\u00e9ha with word-level alignments in Praat. Given the time-consuming nature of manual alignment, we were limited to a single document from each language; the SEN\u0106O\u0166EN document is 5:47 long and contains 417 words, and the Kanyen'k\u00e9ha document is 5:07 long and contains 249 words. Both documents are private materials owned by the language communities and shared with us by linguist Timothy Montler and Kanyen'k\u00e9ha educator Owennatekha Brian Maracle, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "We evaluate our hand-written mappings against a baseline zero-shot G2P method. 
The baseline we use is ReadAlong Studio's fallback method for 'und' (ISO 639-3 for 'undetermined') text. This fallback method uses the text-unidecode 18 package to convert all characters to ASCII equivalents, and then applies a rule-based mapping from ASCII to IPA. For our hand-written SEN\u0106O\u0166EN and Kanyen'k\u00e9ha mappings, we use G i 2P i 's built-in automatic mapping functionality to map the IPA inventories to the closest English IPA equivalents (\u00a72.3). All methods are then mapped from IPA to the ARPABET vocabulary used by the decoder. Following McAuliffe et al. (2017), we evaluate the system by reporting the accuracy of the word boundaries predicted by the aligner within thresholds of < 10, < 25, < 50, and < 100 milliseconds; for example, a result of 0.88 with a threshold of < 100 ms means that 88% of system boundaries were within 100 ms of the reference boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "As shown in Table 1, the results for SEN\u0106O\u0166EN and Kanyen'k\u00e9ha differ. While alignment created from handmade mappings for SEN\u0106O\u0166EN outperforms the baseline by 26% at the 100 ms tolerance threshold, the results for Kanyen'k\u00e9ha are less clear and do not show an improvement over the baseline. We suspect this is in part because, while the Kanyen'k\u00e9ha orthography is quite consistent with other Latin-based orthographies, the SEN\u0106O\u0166EN orthography is considerably different (for example, \u0106 corresponds to the sound /\u00d9/), which would affect the text-unidecode library's ability to predict reasonable ASCII equivalents. 
These results could point to a finding that for simpler orthographies that are strongly aligned with English (Kanyen'k\u00e9ha, for example, contains fewer than half as many segmental phonemes as SEN\u0106O\u0166EN), the text-unidecode technique could be sufficient; however, these preliminary results should be interpreted with caution, and further evaluation with additional languages, data, and downstream tasks is needed.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "In this paper we presented G i 2P i along with its motivations and several of its use cases. The library is written in pure Python for (relatively) easy installation, and ships with support for 30 different languages, index preservation between inputs and outputs, an accompanying graphical web interface, a RESTful API, and extensive documentation to encourage the development of mappings for more languages in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "We recognize that language experts are the people best suited to write mappings between a language's orthography and the IPA, and we hope that the features such a contributor \"gets for free\" make G i 2P i an attractive option for rule-based G2P. 
To summarize, by contributing a mapping, a contributor acquires the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "\u2022 Integration into the broader G i 2P i transduction network for cross-lingual purposes (\u00a72.1.2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "\u2022 Debugging tools (\u00a72.1.3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "\u2022 Index preservation for transductions (\u00a72.2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "\u2022 A graphical interface (\u00a72.4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "\u2022 A RESTful API (\u00a72.6.1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "\u2022 Automatic downstream support in Convertextract (\u00a72.7.1) and ReadAlong Studio (\u00a72.7.3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "Each time a mapping is added, G i 2P i becomes more useful, so we have prioritized the developer experience of contributing a mapping through documentation, debugging tools, and the features described above. We hope that these measures will make using and contributing to G i 2P i more accessible, and we will measure the success of the project by the number of mappings contributed by collaborators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "In G i 2P i , a mapping is a collection of rules with a defined input language code and output language code. By \"code\" we mean an arbitrary label for the language. 
By convention, we start with the ISO 639-3 code and add a descriptive suffix (e.g., -ipa) when required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://getskeleton.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://developers.google.com/blockly 13 https://g2p.readthedocs.io/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://blog.mothertongues.org/g2p-background/ 15 https://swagger.io/ 16 https://bit.ly/g2p-api-docs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/ReadAlongs/Studio", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://pypi.org/project/text-unidecode/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to acknowledge the Yukon Native Language Centre and the W S\u00c1NE\u0106 School Board for extremely helpful contributions in providing alphabet charts, sample text and other materials that aided in the development of some of the mappings described in \u00a72.5. We would also like to thank Bradley Ellert for his contributions in helping develop mappings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Grapheme-to-phoneme models for (almost) any language", "authors": [ { "first": "Aliya", "middle": [], "last": "Deri", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "399--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aliya Deri and Kevin Knight. 2016. 
Grapheme-to-phoneme models for (almost) any language. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 399-408.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Ten things we've learned from Blockly", "authors": [ { "first": "Neil", "middle": [], "last": "Fraser", "suffix": "" } ], "year": 2015, "venue": "2015 IEEE Blocks and Beyond Workshop", "volume": "", "issue": "", "pages": "49--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neil Fraser. 2015. Ten things we've learned from Blockly. In 2015 IEEE Blocks and Beyond Workshop, pages 49-50.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Pocketsphinx: A free, real-time continuous speech recognition system for hand-held devices", "authors": [ { "first": "David", "middle": [], "last": "Huggins-Daines", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Chan", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Mosur", "middle": [], "last": "Ravishankar", "suffix": "" }, { "first": "Alexander", "middle": [ "I" ], "last": "Rudnicky", "suffix": "" } ], "year": 2006, "venue": "2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Huggins-Daines, Mohit Kumar, Arthur Chan, Alan W Black, Mosur Ravishankar, and Alexander I Rudnicky. 2006. Pocketsphinx: A free, real-time continuous speech recognition system for hand-held devices. In 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, volume 1, pages I-I. 
IEEE.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Montreal forced aligner: Trainable text-speech alignment using kaldi", "authors": [ { "first": "Michael", "middle": [], "last": "Mcauliffe", "suffix": "" }, { "first": "Michaela", "middle": [], "last": "Socolof", "suffix": "" }, { "first": "Sarah", "middle": [], "last": "Mihuc", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wagner", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Sonderegger", "suffix": "" } ], "year": 2017, "venue": "Interspeech", "volume": "2017", "issue": "", "pages": "498--502", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael McAuliffe, Michaela Socolof, Sarah Mihuc, Michael Wagner, and Morgan Sonderegger. 2017. Montreal forced aligner: Trainable text-speech alignment using kaldi. In Interspeech, volume 2017, pages 498-502.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Epitran: Precision G2P for many languages", "authors": [ { "first": "David", "middle": [ "R" ], "last": "Mortensen", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Dalmia", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David R. Mortensen, Siddharth Dalmia, and Patrick Littell. 2018. Epitran: Precision G2P for many languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. 
European Language Resources Association.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Panphon: A resource for mapping IPA segments to articulatory feature vectors", "authors": [ { "first": "David", "middle": [ "R" ], "last": "Mortensen", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "" }, { "first": "Akash", "middle": [], "last": "Bharadwaj", "suffix": "" }, { "first": "Kartik", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Lori", "middle": [ "S" ], "last": "Levin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "3475--3484", "other_ids": {}, "num": null, "urls": [], "raw_text": "David R. Mortensen, Patrick Littell, Akash Bharadwaj, Kartik Goyal, Chris Dyer, and Lori S. Levin. 2016. Panphon: A resource for mapping IPA segments to articulatory feature vectors. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3475-3484. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Phonetisaurus: Exploring grapheme-to-phoneme conversion with joint n-gram models in the WFST framework", "authors": [ { "first": "Josef", "middle": [ "Robert" ], "last": "Novak", "suffix": "" }, { "first": "Nobuaki", "middle": [], "last": "Minematsu", "suffix": "" }, { "first": "Keikichi", "middle": [], "last": "Hirose", "suffix": "" } ], "year": 2016, "venue": "Natural Language Engineering", "volume": "22", "issue": "6", "pages": "907--938", "other_ids": {}, "num": null, "urls": [], "raw_text": "Josef Robert Novak, Nobuaki Minematsu, and Keikichi Hirose. 2016. Phonetisaurus: Exploring grapheme-to-phoneme conversion with joint n-gram models in the WFST framework. 
Natural Language Engineering, 22(6):907-938.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Tips for creating a block language with Blockly", "authors": [ { "first": "Eric", "middle": [], "last": "Pasternak", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Fenichel", "suffix": "" }, { "first": "Andrew", "middle": [ "N" ], "last": "Marshall", "suffix": "" } ], "year": 2017, "venue": "2017 IEEE Blocks and Beyond Workshop", "volume": "", "issue": "", "pages": "21--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Pasternak, Rachel Fenichel, and Andrew N. Marshall. 2017. Tips for creating a block language with Blockly. In 2017 IEEE Blocks and Beyond Workshop, pages 21-24.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Massively Multilingual Neural Grapheme-to-Phoneme Conversion", "authors": [ { "first": "Ben", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Dehdari", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems", "volume": "", "issue": "", "pages": "19--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Peters, Jon Dehdari, and Josef van Genabith. 2017. Massively Multilingual Neural Grapheme-to-Phoneme Conversion. In Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems, pages 19-26, Copenhagen, Denmark. 
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Seeing the Heiltsuk orthography from font encoding through to Unicode: A case study using convertextract", "authors": [ { "first": "Aidan", "middle": [], "last": "Pine", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Turin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the LREC 2018 Workshop \"CCURL 2018 -Sustaining knowledge diversity in the digital age", "volume": "", "issue": "", "pages": "27--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aidan Pine and Mark Turin. 2018. Seeing the Heiltsuk orthography from font encoding through to Unicode: A case study using convertextract. In Claudia Soria, Laurent Besacier, and Laurette Pretorius, editors, Proceedings of the LREC 2018 Workshop \"CCURL 2018 -Sustaining knowledge diversity in the digital age\", pages 27-30. European Language Resources Association.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Requirements and Motivations for Low Resource Speech Synthesis", "authors": [ { "first": "Aidan", "middle": [], "last": "Pine", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Wells", "suffix": "" }, { "first": "Nathan", "middle": [ "Thanyeht\u00e9nhas" ], "last": "Brinklow", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "" }, { "first": "Korin", "middle": [], "last": "Richmond", "suffix": "" } ], "year": 2022, "venue": "Proceedings of ACL 2022", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aidan Pine, Dan Wells, Nathan Thanyeht\u00e9nhas Brinklow, Patrick Littell, and Korin Richmond. 2022. Requirements and Motivations for Low Resource Speech Synthesis. In Proceedings of ACL 2022, Dublin, Ireland. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Grapheme-to-phoneme conversion using long short-term memory recurrent neural networks", "authors": [ { "first": "Kanishka", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Fuchun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Ha\u015fim", "middle": [], "last": "Sak", "suffix": "" }, { "first": "Fran\u00e7oise", "middle": [], "last": "Beaufays", "suffix": "" } ], "year": 2015, "venue": "2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "4225--4229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kanishka Rao, Fuchun Peng, Ha\u015fim Sak, and Fran\u00e7oise Beaufays. 2015. Grapheme-to-phoneme conversion using long short-term memory recurrent neural networks. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4225-4229.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "End-to-end Text-to-speech for", "authors": [ { "first": "Tao", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Yuan-Jui", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Cheng-Chieh", "middle": [], "last": "Yeh", "suffix": "" }, { "first": "Hung-Yi", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Tu, Yuan-Jui Chen, Cheng-chieh Yeh, and Hung-yi Lee. 2019. End-to-end Text-to-speech for", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Low-resource Languages by Cross-Lingual Transfer Learning", "authors": [], "year": null, "venue": "Interspeech 2019", "volume": "", "issue": "", "pages": "2075--2079", "other_ids": {}, "num": null, "urls": [], "raw_text": "Low-resource Languages by Cross-Lingual Transfer Learning. 
In Interspeech 2019, pages 2075-2079.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Figure 1: Screenshot from G2P Studio of interactive visualization of the indices preserved when composing transductions of the French word \"deux\", between the orthographic form, a phonetic representation in the International Phonetic Alphabet (IPA), the closest English phonemes according to PanPhon (Mortensen et al., 2016), and finally to English ARPABET (see \u00a72.3).", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "A minimal rule converting 'a' to 'b' expressed in both the CSV syntax (a) and JSON syntax (b)", "type_str": "figure" }, "FIGREF3": { "num": null, "uris": null, "text": "on the following page illustrates the current network of transducers possible in G i 2P i", "type_str": "figure" }, "FIGREF4": { "num": null, "uris": null, "text": "Examples of various strategies for assigning indices between inputs and outputs; default assignment of indices is shown in 6a and 6b and explicit assignment of indices is shown in 6c.", "type_str": "figure" }, "FIGREF5": { "num": null, "uris": null, "text": "Screenshot of the \"Rule Creator\" interface in G2P Studio showing a toy set of rules being made that take each vowel in the Vowels variable (declared elsewhere in G2P Studio) as input and return the same vowel prefixed by 't' as output.", "type_str": "figure" }, "FIGREF6": { "num": null, "uris": null, "text": "", "type_str": "figure" }, "TABREF0": { "text": "Visualization of the network created by G2P. Nodes represent orthographies or phonetic representations. They are labelled and colour coded according to the associated language's ISO 639-3 code. Arcs represent mappings. Nodes are sized relative to the number of upstream nodes. 
English (eng) has the largest nodes due to the large number of generated mappings into English for the purpose of the ReadAlong Studio project (see \u00a72.3, \u00a72.7.3).", "html": null, "content": "
Figure 5 (network graph; node labels such as hei, hei-doulos, hei-times-font, crx-sro, crx-syl, crm-ipa, eng-ipa, eng-arpabet, and und-ipa omitted)
", "type_str": "table", "num": null } } } }