{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:21:10.545949Z"
},
"title": "OTEANN: Estimating the Transparency of Orthographies with an Artificial Neural Network",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Marjou",
"suffix": "",
"affiliation": {},
"email": "xavier.marjou@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "To transcribe spoken language to written medium, most alphabets enable an unambiguous sound-to-letter rule. However, some writing systems have distanced themselves from this simple concept and little work exists in Natural Language Processing (NLP) on measuring such distance. In this study, we use an Artificial Neural Network (ANN) model to evaluate the transparency between written words and their pronunciation, hence its name Orthographic Transparency Estimation with an ANN (OTEANN). Based on datasets derived from Wikimedia dictionaries, we trained and tested this model to score the percentage of correct predictions in phoneme-tographeme and grapheme-to-phoneme translation tasks. The scores obtained on 17 orthographies were in line with the estimations of other studies. Interestingly, the model also provided insight into typical mistakes made by learners who only consider the phonemic rule in reading and writing.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "To transcribe spoken language to written medium, most alphabets enable an unambiguous sound-to-letter rule. However, some writing systems have distanced themselves from this simple concept and little work exists in Natural Language Processing (NLP) on measuring such distance. In this study, we use an Artificial Neural Network (ANN) model to evaluate the transparency between written words and their pronunciation, hence its name Orthographic Transparency Estimation with an ANN (OTEANN). Based on datasets derived from Wikimedia dictionaries, we trained and tested this model to score the percentage of correct predictions in phoneme-tographeme and grapheme-to-phoneme translation tasks. The scores obtained on 17 orthographies were in line with the estimations of other studies. Interestingly, the model also provided insight into typical mistakes made by learners who only consider the phonemic rule in reading and writing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "An alphabet is a standard set of letters that represent the basic significant sounds of the spoken language it is used to write. When a spelling system (also referred as orthography) systematically uses a oneto-one correspondence between its sounds and its letters, the encoding of a sound (also referred as phoneme) into a letter (also referred as grapheme) leads to a single possibility; similarly the decoding of a letter into a sound leads to a single possibility as well. Such orthography is thus transparent with regards to phonemes with the advantage of offering no ambiguity when writing or reading the letters of a word, as illustrated in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 648,
"end": 656,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In real life, no existing orthography is fully transparent phonemically. One reason is that a word spoken alone is sometimes different from a word spoken in a sentence. An even more consequen-tial reason is that some orthographies like English 1 and French 2 have incorporated deeper depth rules that have moved them away from a transparent orthography (Seymour et al., 2003) ; this has created ambiguities when trying to write or read phonemically, as illustrated in Figure 2 .",
"cite_spans": [
{
"start": 353,
"end": 375,
"text": "(Seymour et al., 2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 468,
"end": 476,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many studies have discussed the degree of transparency of orthographies (Borleffs et al., 2017) . These studies are mainly motivated by the estimation of the ease of reading and writing when learning a new language (Defior et al., 2002) . Finnish, Korean, Serbo-Croatian and Turkish orthographies are often referred as highly transparent (Aro, 2004) (Wang and Tsai, 2009) , (Turvey et al., 1984) , (\u00d6ney and Durgunoglu, 1997) , whereas English and French orthographies are referred as opaque (van den Bosch et al., 1994) . However, little work exists in NLP about measuring the level of transparency of an orthography. One noticeable exception is the work of van den Bosch et al. (1994) who have created grapheme-to-phoneme scores and tested them on three orthographies (Dutch, English and French).",
"cite_spans": [
{
"start": 72,
"end": 95,
"text": "(Borleffs et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 215,
"end": 236,
"text": "(Defior et al., 2002)",
"ref_id": "BIBREF3"
},
{
"start": 338,
"end": 349,
"text": "(Aro, 2004)",
"ref_id": "BIBREF0"
},
{
"start": 350,
"end": 371,
"text": "(Wang and Tsai, 2009)",
"ref_id": "BIBREF14"
},
{
"start": 374,
"end": 395,
"text": "(Turvey et al., 1984)",
"ref_id": "BIBREF11"
},
{
"start": 398,
"end": 425,
"text": "(\u00d6ney and Durgunoglu, 1997)",
"ref_id": "BIBREF6"
},
{
"start": 492,
"end": 520,
"text": "(van den Bosch et al., 1994)",
"ref_id": "BIBREF12"
},
{
"start": 667,
"end": 686,
"text": "Bosch et al. (1994)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This study extends such work with a method called OTEANN, which models a word-based phoneme-to-grapheme task and a word-based grapheme-to-phoneme task using an ANN. For the sake of simplicity, the former task is called a writing task while the latter task is called a reading task. The goal is not to build a perfect spelling translator or a spell checker. Instead the goal is to build a translator which can indicate a degree of phonemic transparency and thus make it possible to rank orthographies according to this criterion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Interestingly, recent years have seen tremendous progress regarding NLP with ANNs (Otter et al., 2018) . Sutskever et al. (2014) proposed an ANN /t/ <t> <t> /t/ ambiguous correspondence during writing and reading tasks in French. The /t/ phoneme can correspond to multiple graphemes, depending on the nature of the word and also depending on the nature of neighboring words in the sentence or even in a previous sentence. Similarly, the <t> grapheme can correspond to multiple phonemes. called a Sequence-to-sequence (seq2seq) model that has proven to be very successful on language translation tasks. More recently, ANNs based on as attention (Bahdanau et al., 2014) , (Vaswani et al., 2017) and transformers like Bidirectional Encoder Representations from Transformers (BERT), (Devlin et al., 2018) and Generative Pre-Training (GPT) (Radford, 2018) have again enhanced and outperformed seq2seqs. Considering writing a word and reading a word as two translations tasks allows re-using the transformers for our work. To this purpose, we used a minimalist GPT implementation (Karpathy, 2020) called minGPT. Notice that since we don't aim at building a perfect spelling translator, we do not have to translate a sequence of words into another sequence of words; our model only requires translating a spoken word into a spelled word (writing task) and a spelled word into a spoken word (reading task). In other words, our ANN operates at the character level within a sequence of characters of single words. The pronunciation and spelling of the word are both encoded as a sequence of UTF-8 characters; a pronounced word is encoded with the characters belonging to the set of phonemes of the target language, whereas a spelled word is encoded with the characters belonging to the alphabet of the target orthography. We directly re-used minGPT code with no modification. The only differences were the training data and the code for extracting the prediction at inference time.",
"cite_spans": [
{
"start": 77,
"end": 102,
"text": "ANNs (Otter et al., 2018)",
"ref_id": null
},
{
"start": 105,
"end": 128,
"text": "Sutskever et al. (2014)",
"ref_id": "BIBREF10"
},
{
"start": 644,
"end": 667,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 670,
"end": 692,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 779,
"end": 800,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 835,
"end": 850,
"text": "(Radford, 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We used OTEANN to test seventeen orthographies in order to evaluate their degree of phonemic transparency. Sixteen of them are the official orthographies of their respective language (Arabic, Breton, Chinese, Dutch, English, Esperanto, Finnish, French, German, Italian, Korean, Portuguese, Russian, Serbo-Croatian, Spanish, and Turkish) while the seventeenth is a phonemic orthography proposed for French.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A unique multi-orthography ANN model instance was trained to learn the writing and reading tasks on all languages at the same time. In other words, we used a single dataset containing samples of all studied orthographies. The multi-orthography ANN model was then tested for each orthography and each task with new samples, which allowed calculating an average percentage of correct translations. A score of 0% of correct translations represented a fully opaque orthography (no correlation between the input and the target), whereas a score close to 100% represented a fully transparent orthography (full correlation between the input and the target).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our study first confirms that orthographies like Arabic, Finnish, Korean, Serbo-Croatian and Turkish are highly transparent whereas other ones like Chinese, French and English are highly opaque. For example, when solely based on a phoneme-grapheme correspondence, we estimated the chances of correctly writing a French word at 28%; similarly, when solely based on a grapheme-phoneme correspondence, we estimated the chances of correctly pronouncing an English word at 31%. For Dutch, English and French reading tasks, our obtained ranking is in line with the one of van den Bosch et al. (1994) . One unexpected finding is that OTEANN also allows discovering Orthography Task Input Output en write dZ6b job en read job dZ6b ",
"cite_spans": [
{
"start": 587,
"end": 593,
"text": "(1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to evaluate a level of transparency of some orthographies two main steps were necessary: obtaining datasets and carrying out the training and testing experiments with the ANN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "As displayed in Table 1 , we needed a multiorthography dataset with four features per sample: the orthography, the task (write or read), the input word (pronunciation or spelled word) and the output word (spelled word or pronunciation). A spelled word was represented by a sequence of graphemes whereas a pronunciation was represented by a sequence of phonemes. The characters representing phonemes are also called International Phonetic Alphabet (IPA) characters. Having a single dataset with multiple orthographies and tasks allows a single multi-orthography ANN model to learn to read and write all orthographies; otherwise, it would require one ANN model per orthographytask pair.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.1"
},
{
"text": "In order to build such dataset, we first generated one sub-dataset per orthography (e.g. one 'en' sub-dataset for English), each containing the pronunciation and the spelled word (e.g. 'dZ6b' and 'job').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.1"
},
{
"text": "We first created baselines representing a fully transparent orthography and a fully opaque orthography.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Orthographies",
"sec_num": "2.1.1"
},
{
"text": "Regarding a fully transparent orthography, we created a new artificial orthography called Entirely Transparent ('ent') orthography. We generated its samples by using the IPA pronunciation of real Esperanto words both as the pronunciation and as the spelled word, which resulted in a sub-dataset containing an 'ent' bijective orthography.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Orthographies",
"sec_num": "2.1.1"
},
{
"text": "Regarding a fully opaque orthography, we also created a new artificial orthography called Entirely Opaque ('eno') orthography. We generated its samples by taking the IPA pronunciation of real Esperanto words mapping each of theirs phonemes to a random grapheme from a list of 25 graphemes, which resulted in a sub-dataset containing an 'eno' orthography with no correlation between the pronunciation and the spelled word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Orthographies",
"sec_num": "2.1.1"
},
{
"text": "A sub-dataset was created for each of the following orthographies: Arabic ('ar'), Breton ('br'), German ('de'), English ('en'), Esperanto ('eo'), Spanish ('es'), Finnish ('fi'), French ('fr'), Italian ('it'), Korean ('ko'), Dutch ('nl'), Portuguese ('pt'), Russian ('ru'), Serbo-Croatian ('sh'), Turkish ('tr') and Chinese ('zh').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Studied Orthographies",
"sec_num": "2.1.2"
},
{
"text": "We incorporated the words from the corresponding Wiktionary 3 dump 4 , with the exception of the following ones:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Studied Orthographies",
"sec_num": "2.1.2"
},
{
"text": "\u2022 Words containing space characters;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Studied Orthographies",
"sec_num": "2.1.2"
},
{
"text": "\u2022 Words containing more than 25 characters;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Studied Orthographies",
"sec_num": "2.1.2"
},
{
"text": "\u2022 Words containing capital letters (except for German words);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Studied Orthographies",
"sec_num": "2.1.2"
},
{
"text": "\u2022 Words containing non-standard characters with regard to the orthography's alphabet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Studied Orthographies",
"sec_num": "2.1.2"
},
{
"text": "Two orthographies required additional processing:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Studied Orthographies",
"sec_num": "2.1.2"
},
{
"text": "\u2022 For German, proper nouns were discarded and the capital letter of common nouns was transformed into lower case;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Studied Orthographies",
"sec_num": "2.1.2"
},
{
"text": "\u2022 For Korean, the syllabic blocks words were converted in a series of two or three letters (one vowel and one or two consonants) pertaining to the Korean alphabet with ko_pron 5 Python library.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Studied Orthographies",
"sec_num": "2.1.2"
},
{
"text": "Regarding pronunciation, we directly extracted the IPA pronunciation when available in the associated Wiktionary dump, which was the case for 'br', 'de', 'en', 'es', 'fr', 'it', 'nl', 'pt' and 'sh'. The Esperanto ('eo') pronunciation came from the French Wiktionary. For the others ('ar', 'ko', 'ru', 'fi', 'tr'), we had to derive it from the spelled word with additional software. For Russian, the Russian Wiktionary dump did not contain the IPA. We thus used wikt2pron ru_pron module 6 to obtain a pronunciation similar to the one displayed in the Russian Wiktionary web pages. For Chinese, we only selected Mandarin words in simplified Chinese and limited to one or two symbols (a.k.a. Hanzis); we then obtained their pronunciation from the CEDICT 7 dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Studied Orthographies",
"sec_num": "2.1.2"
},
{
"text": "Extracting the phonemic pronunciation from Wiktionary may raise concerns given than IPA symbols can be used both for phonetic and phonemic notations and that there is no unified consistency between the different dictionaries. When processing the IPA strings, we nonetheless took care of preserving the highest surface pronunciation as possible: most pitches were removed since they represent no useful hint during the writing task (i.e. no consequence on the spelled word) and especially since they are generally impossible to predict when translating the spelled word into a pronunciation during the reading task. Nevertheless the /:/ pitch was noticed as indispensable for some orthographies, for instance for predicting double vowels in the spelling of Finnish words or the alif letter in Arabic. Regarding the /\"/ pitch, it can slightly influence Spanish translation scores: it can lead to a better writing score as it can be a hint for predicting accented letters, but it can also lead to a lower reading score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Studied Orthographies",
"sec_num": "2.1.2"
},
{
"text": "Another interesting orthography was a proposal of an alternative orthography for French called French Ortofasil ('fro') 8 , which seeks to be phonemically transparent. Although not fully bijective (e.g. both /o/ and /O/ map to <o> letter), it indeed seems highly transparent. We therefore used it to generate a sub-dataset for the 'fro' orthography.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Studied Orthographies",
"sec_num": "2.1.2"
},
{
"text": "It is debatable whether Chinese should be included in this study given the term alphabet is usually reserved for largely phonographic systems that have a small number of elements. We decided to include it because our ANN model allowed for alphabets with thousands of graphemes. Table 2 summarizes the sub-datasets obtained.",
"cite_spans": [],
"ref_spans": [
{
"start": 278,
"end": 285,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Studied Orthographies",
"sec_num": "2.1.2"
},
{
"text": "11, 000 samples were randomly selected in each of the 17 sub-datasets. Each sample from a sub-dataset produced two samples in the multiorthography dataset: one sample for write task and one sample for the read task, as illustrated in Table 1 . This multi-orthography dataset was subsequently divided into a training dataset (10, 000 * 17 * 2 samples) and a test dataset (1, 000 * 17 * 2 samples).",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 241,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Training and test datasets",
"sec_num": "2.1.3"
},
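As a hypothetical sketch (the function name and tuple layout are illustrative assumptions, not the paper's actual code), the expansion of one sub-dataset pair into the write and read samples described above could look like:

```python
# Illustrative sketch: one (pronunciation, spelling) pair from an orthography
# sub-dataset yields two samples in the multi-orthography dataset, one per
# task. The tuple layout mirrors the four features listed in the paper.

def expand_pair(orthography, pronunciation, spelling):
    """Return one write-task and one read-task sample for a single word."""
    return [
        (orthography, "write", pronunciation, spelling),  # phoneme-to-grapheme
        (orthography, "read", spelling, pronunciation),   # grapheme-to-phoneme
    ]

samples = expand_pair("en", "dZ6b", "job")
```

Applied to 11,000 words per orthography, this doubling yields the 10,000 * 17 * 2 training samples and 1,000 * 17 * 2 test samples reported above.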
{
"text": "We used minGTP (Karpathy, 2020) which runs on PyTorch 9 . Regarding the hyper-parameters, we configured a block size of 63 characters, 4 layers, 4 heads and 336 embedding tokens, which resulted in an ANN of 9, 589, 536 trainable parameters and an episode training time of 2 hours and 10 minutes on a 4 GPU node. No effort was spent to shrink or prune the ANN, so its size could still be optimized. The data and code are available on Github 10 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ANN architecture",
"sec_num": "2.1.4"
},
{
"text": "We used a simple score in order to assess the performance of the ANN prediction during the testing step. When all the predicted characters were equal to those of the true target, a prediction was considered successful, hence allowing to score the percentage of successful predictions performed for each orthography-task pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance metric",
"sec_num": "2.1.5"
},
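A minimal sketch of this all-or-nothing metric (the function name is an illustrative assumption, not the paper's actual code):

```python
# Illustrative sketch of the exact-match score: a prediction is successful
# only if every predicted character equals the corresponding target character.

def exact_match_score(predictions, targets):
    """Percentage of predictions that are identical to their targets."""
    correct = sum(p == t for p, t in zip(predictions, targets))
    return 100.0 * correct / len(targets)
```

A single wrong character, e.g. predicting "liv9l" for the target "lEv9l", counts the whole prediction as a failure.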
{
"text": "We specified an episode as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and testing",
"sec_num": "2.1.6"
},
{
"text": "\u2022 Generating the training and test datasets. At the end of this step, each character present in these datasets was provisioned in the inventory of the ANN instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and testing",
"sec_num": "2.1.6"
},
{
"text": "\u2022 Training the ANN model. The full training dataset was processed to be used as text blocks containing the concatenation of the four features (orthography, task, input and output) separated by a comma. Therefore, a single instance of the model was used to learn to write and read all 17 orthographies in one training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and testing",
"sec_num": "2.1.6"
},
{
"text": "\u2022 Testing the ANN model for each orthography-task pair. For each orthography-task pair, 1, 000 new samples were tested. Each sample was fed into the model with the concatenation of the three first features (orthography, task and input) separated by a comma. predict a value equal to the output feature, which was the target to be found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and testing",
"sec_num": "2.1.6"
},
{
"text": "We performed 11 episodes to measure the mean and standard deviation of each orthography-task pair and thus assess the consistency of our results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and testing",
"sec_num": "2.1.6"
},
{
"text": "Future work may use more test samples to gain a statistical insight on the different types of errors depending on the orthography at hand.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and testing",
"sec_num": "2.1.6"
},
{
"text": "First, regarding the results of the two baseline orthographies, the 'eno' opaque orthography obtained a score of 0% in both writing and reading, which was in line with the expectations given that there was no correlation between its phonemes and its graphemes; on the other hand, the 'ent' transparent orthography scored above 99.6% on the writing and reading tasks, which indicated a high level of correlation between its phonemes and its graphemes. We thus considered our ANN model satisfactory for our objective of comparing the performance of different orthographies. Figure 3 and Table 3 present our main results. They are significantly different between writing and reading since these tasks are generally not symmetrical. Two features are likely to influence the symmetry, and therefore the efficiency of each task. As recalled by Figure 2 , the most important feature would undoubtedly be the number of possible phoneme-to-grapheme and grapheme-to-phoneme ambiguities per tested orthography. Unfortunately we did not possess such data. Another impacting feature may be the number of possible values (graphemes or phonemes) for a given target character. The higher the number of values, the harder the prediction should be for the ANN. Future work should investigate the relative importance of these features on the OTEANN performances.",
"cite_spans": [],
"ref_spans": [
{
"start": 572,
"end": 580,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 585,
"end": 592,
"text": "Table 3",
"ref_id": null
},
{
"start": 838,
"end": 846,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "Comparing OTEANN's reading results with those of van den Bosch et al. (1994) , OTEANN first seems to naturally assimilate the grapheme complexity (e.g. for French, it successfully learnt that \"cadeau\" should be pronounced /kado/). Regarding grapheme-to-phoneme complexity (G-P complexity), they ranked English (G-P complex-ity=90%) more complex than Dutch (G-P com-plexity=25%) which, in turn, was more complex than French (G-P complexity=15%). OTEANN results preserved the same ranking with transparency scores of 31%, 57% and 79s% for English, Dutch and French. Admittedly, OTEANN's scores were different in terms of scale but OTEANN had to deal (OTEANN trained with 10, 000 samples)",
"cite_spans": [
{
"start": 57,
"end": 76,
"text": "Bosch et al. (1994)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "\u2022 Breton, German, Italian, Portuguese and Spanish: With all their scores above 65% their orthography was also measured as fairly transparent. For Spanish, the detailed results showed that the most common failure during writing occurs with accents: the ANN had great difficulty predicting whether a vowel should contain an accent or not. For Italian, typical errors observed in the results were the prediction of /E/ instead of a /e/ and /O/ instead of a /o/, which were harder to discriminate. Future work may revise the scoring formula to reduce the cost of some of these errors in the performance calculation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "\u2022 Dutch: The Dutch reading score (56%) is low but might be slightly enhanced given a possible lack of consistency regarding the phonemes used in the Dutch sub-dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "\u2022 Russian: The Russian writing score (41%) may seem low. However, Russian has strong stress-related vowel reduction, which makes it hard to know how to write a word without knowing the morphemes involved. Nevertheless, future work should either study their subdataset more in depth or use a different data source like wikipron 11 to possibly improve its scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "\u2022 Chinese: The results indicated a low writing score (20%), which is not surprising given than some phonemes can have multiple corresponding graphemes and that there are thousands of graphemes (Hanzis) to be learnt. However, it turns out that its reading score is much higher (79%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "\u2022 French: With a low writing score (28%), the results showed that the chances of correctly writing a French word on the sole basis of its pronunciation were rare, as anticipated given the high number of phoneme-to-grapheme possibilities. Without being able to access a broader context than the word itself, the ANN was not able to reliably predict how to write a French word. With a much higher reading score (80%), the ANN obtained good reading results. As a comparison, for the same language, the alternative 'fro' orthography obtained excellent writing score (99%) and reading score (90%). Recall that the difference between its two scores is due to the fact that the 'fro' orthography is not bijective. For instance, in the reading direction, the <o> letter can be translated into /o/ or /O/).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "\u2022 English: With a low writing score (36%) and a low reading score (31%), the results showed that English orthography is also highly opaque, which is consistent with most studies. As a reminder, a phonemic reading of an English word often does not work because of its high number of grapheme-to-phoneme possibilities. For instance the grapheme <u> can either correspond to /2/ (as in \"hug\"), to /ju:/ (as in \"huge\"), to /3:r/ (as in \"cur\") or /jU@:/ as in \"cure\". As for Russian, additional work should be dedicated to check the English sub-dataset and possibly enhance it if necessary, which could improve 'en' scores by a few percent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "Observing the detailed result of each prediction also made it possible to study the phonemic correspondences learned or not learned by the OTEANN model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "\u2022 For task-orthographies with a high transparency score, the model successfully predicted most pronunciations or spellings even when the correspondences involved more than one letter. For instance, OTEANN predicted that the Italian word \"cerchia\" should be pronounced /\u00d9erkja/, hence showing that the model had successfully learned that <c>, when followed by <e>, should be pronounced as /\u00d9/ and also that <c>, when followed by <h>, should be pronounced as /k/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "\u2022 For task-orthographies with a low transparency score, the model generally failed on letters involved in ambiguous correspondences (recall Figure 2) . For instance, it incorrectly predicted that the pronunciation of the English word \"level\" was \"liv9l\" instead of \"lEv9l\", which might be a bad generalization from words like \"lever\" learned at training time. OTEANN also incorrectly predicted that the spelling of the French word /ale/ was \"allez\" when the expected target was \"aller\" (another French homophone); this type of error is inevitable since the OTEANN model intentionally use single word input samples and therefore cannot rely on neighboring words as additional context to discriminate between homophones with different spelling.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 149,
"text": "Figure 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "\u2022 Surprisingly, the model also predicted spellings that do not exist but who could have existed, in the same vein as ThisWordDoes-NotExist.com 12 . For instance, OTEANN predicted that the spelling of the French word /swaKe/\" was \"soirer\", which does not exist but looks like a French infinitive verb that would mean \"to celebrate at a party\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "In addition, the results in Table 3 also showed that the ANN has less than a 30% chance of correctly writing a word in French or Chinese after training on 10000 samples while Figure 9 shows that the same ANN has more than a 85% chance of correctly writing a word in Finnish, Italian, Serbo-Croatian or Turkish after training only on 1000 samples. Such a discrepancy highlights the enormous additional cost in terms of time and energy for learning a non-transparent orthography.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 3",
"ref_id": null
},
{
"start": 175,
"end": 183,
"text": "Figure 9",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "Among the tested orthographies, some shared the grapheme inventory. Given that they are all trained together, there might be an impact on performance. Although some of our preliminary experiments with a single ANN instance per orthography did not seem to lead to significant differences, it could be interesting to formally compare both approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "4"
},
{
"text": "The accuracy metric we used is all or nothing. Additional work could also study alternative accuracy metrics and compare their results on the different orthographies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "4"
},
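To make the metric concrete, here is a sketch (not the paper's code) contrasting the all-or-nothing word accuracy with one possible alternative, a symbol-level accuracy based on edit distance; the example predictions below are hypothetical:

```python
def exact_match_accuracy(preds, refs):
    """All-or-nothing: a prediction scores only if it equals the reference."""
    return sum(p == r for p, r in zip(preds, refs)) / len(refs)

def levenshtein(a, b):
    """Edit distance between two symbol sequences (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def symbol_accuracy(preds, refs):
    """Partial credit: 1 - normalized edit distance, averaged over pairs."""
    scores = [1 - levenshtein(p, r) / max(len(p), len(r), 1)
              for p, r in zip(preds, refs)]
    return sum(scores) / len(scores)

preds = ["lEv9l", "liv9l", "aller"]
refs  = ["lEv9l", "lEv9l", "aller"]
print(exact_match_accuracy(preds, refs))  # 2/3: "liv9l" counts as fully wrong
print(symbol_accuracy(preds, refs))       # higher: only one symbol differs
```

Under the symbol-level metric a near-miss like "liv9l" for "lEv9l" loses only one fifth of a point instead of a whole word, so the two metrics could rank orthographies differently.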
{
"text": "Although Wiktionary data may be inconsistent in quality and therefore positively or negatively impact the measured metric, the results obtained for Dutch, English and French orthographies reasonably extended those of van den Bosch et al. (1994) while the other results reflected the perception of several other studies. Consequently, our OTEANN model showed that an ANN can convincingly estimate a level of phonemic transparency for multiple orthographies both for the phoneme-to-grapheme and grapheme-to-phoneme directions.",
"cite_spans": [
{
"start": 225,
"end": 244,
"text": "Bosch et al. (1994)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "4"
},
{
"text": "This method should be easily applicable to other orthographies beyond those tested in this study. However, since the superfluous IPA symbols slightly influence the score results, future work should closely examine and discuss the phonemes to use depending on the orthography to be tested.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "4"
},
{
"text": "As OTEANN also points out some possible grapheme or phoneme errors when writing or reading phonemically, it could also be used to detect possible errors in the dictionaries of transparent orthographies; it could also be used to evaluate proposals for improving opaque orthographies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "4"
},
{
"text": "Finally, it would be beneficial to investigate if our ANN and its artificial neural units somehow imitate the way a beginner learns to write and read a language. If so, it might suggest that a transparent orthography would be easier and faster to learn than an opaque orthography.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "4"
},
{
"text": "In addition to testing our OTEANN model trained on 10, 000 samples, we also tested the same OTEANN model but trained with fewer samples (1, 000, 2, 000, 3, 000, and 5, 000), each time following the methodology described in section 2. We then aggregated the results to summarize them in Figure 9 , which shows the learning curve of the studied orthographies as a function of the number of training samples. ",
"cite_spans": [],
"ref_spans": [
{
"start": 286,
"end": 294,
"text": "Figure 9",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "A Additional Experiments and Results",
"sec_num": null
},
{
"text": "https://en.wikipedia.org/wiki/ English_orthography#Spelling_patterns 2 https://fr.wiktionary.org/wiki/Annexe: Prononciation/fran\u00e7ais",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://wiktionary.org 4 https://dumps.wikimedia.org/ 5 https://pypi.org/project/ko-pron",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://wikt2pron.readthedocs.io/en/ latest/_modules/IPA/ru_pron.html 7 https://github.com/msavva/ transphoner/blob/master/data/ 8 https://fon\u00e9tik.fr/v0/faq-en.html# mapping-table",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pytorch.org/ 10 https://github.com/marxav/oteann4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pypi.org/project/wikipron/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.thisworddoesnotexist.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Write Read ent 99.6 \u00b1 0.3 99.8 \u00b1 0.1 eno 0.0 \u00b1 0.0 0.0 \u00b1 0.0 ar 84.3 \u00b1 0.8 99.4 \u00b1 0.3 br 80.6 \u00b1 0.6 77.2 \u00b1 1.6 de 69.1 \u00b1 1.0 78.0 \u00b1 1.5 en 36.1 \u00b1 1.5 31.1 \u00b1 1.3 eo 99.3 \u00b1 0.2 99.7 \u00b1 0.1 es 66.9 \u00b1 2.0 85.3 \u00b1 1.3 fi 97.7 \u00b1 0.3 92.3 \u00b1 0.8 fr 28.0 \u00b1 1.4 79.6 \u00b1 1.7 fro 99.0 \u00b1 0.3 89.7 \u00b1 1.1 it 94.5 \u00b1 0.8 71.6 \u00b1 0.9 ko 81.9 \u00b1 1.0 97.5 \u00b1 0.5 nl 72.9 \u00b1 1.7 55.7 \u00b1 2.2 pt 75.8 \u00b1 1.0 82.4 \u00b1 0.9 ru 41.3 \u00b1 1.6 97.2 \u00b1 0.5 sh 99.2 \u00b1 0.3 99.3 \u00b1 0.3 tr 95.4 \u00b1 0.7 95.9 \u00b1 0.6 zh 19.9 \u00b1 1.4 78.7 \u00b1 0.9 with more orthographies as well as with the writing task. Figure 3 also allows categorizing the studied orthographies with respect to their degree of transparency:\u2022 Esperanto: With scores above 99.3%, Esperanto orthography is nearly as transparent as the 'ent' baseline. The most common error occurred on a doubled letter in the input, which was incorrectly translated to a single letter.\u2022 Arabic, Finnish, Korean, Serbo-Croatian and Turkish: Their scores above 80% both in writing and reading confirmed that their orthography is highly transparent as indicated in (Aro, 2004) , (Wang and Tsai, 2009) and (\u00d6ney and Durgunoglu, 1997) . The Arabic score is high on in the read direction, which is likely due to the use of diacritics in the dataset; without them, the score would undoubtedly be lower. Regarding Korean, its orthography became a little less transparent during the twentieth century; its high scores suggest that further work should check the dataset and evaluate new scores.",
"cite_spans": [
{
"start": 1052,
"end": 1063,
"text": "(Aro, 2004)",
"ref_id": "BIBREF0"
},
{
"start": 1066,
"end": 1087,
"text": "(Wang and Tsai, 2009)",
"ref_id": "BIBREF14"
},
{
"start": 1092,
"end": 1119,
"text": "(\u00d6ney and Durgunoglu, 1997)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 545,
"end": 553,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Orthography",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning to read: The effect of orthography. 237",
"authors": [
{
"first": "Mikko",
"middle": [],
"last": "Aro",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikko Aro. 2004. Learning to read: The effect of or- thography. 237. Jyv\u00e4skyl\u00e4n yliopisto.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Measuring orthographic transparency and morphological-syllabic complexity in alphabetic orthographies: a narrative review",
"authors": [
{
"first": "Elisabeth",
"middle": [],
"last": "Borleffs",
"suffix": ""
},
{
"first": "A",
"middle": [
"M"
],
"last": "Ben",
"suffix": ""
},
{
"first": "Heikki",
"middle": [],
"last": "Maassen",
"suffix": ""
},
{
"first": "Frans",
"middle": [],
"last": "Lyytinen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zwarts",
"suffix": ""
}
],
"year": 2017,
"venue": "Reading and writing",
"volume": "30",
"issue": "8",
"pages": "1617--1638",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabeth Borleffs, Ben AM Maassen, Heikki Lyytinen, and Frans Zwarts. 2017. Measuring orthographic transparency and morphological-syllabic complex- ity in alphabetic orthographies: a narrative review. Reading and writing, 30(8):1617-1638.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Differences in reading acquisition development in two shallow orthographies: Portuguese and spanish",
"authors": [
{
"first": "Sylvia",
"middle": [],
"last": "Defior",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Martos",
"suffix": ""
},
{
"first": "Luz",
"middle": [],
"last": "Cary",
"suffix": ""
}
],
"year": 2002,
"venue": "Applied Psycholinguistics",
"volume": "23",
"issue": "1",
"pages": "135--148",
"other_ids": {
"DOI": [
"10.1017/S0142716402000073"
]
},
"num": null,
"urls": [],
"raw_text": "Sylvia Defior, Francisco Martos, and Luz Cary. 2002. Differences in reading acquisition development in two shallow orthographies: Portuguese and spanish. Applied Psycholinguistics, 23(1):135-148.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Available at https: //github.com/karpathy/minGPT, MIT licence",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrej Karpathy. 2020. mingtp. Available at https: //github.com/karpathy/minGPT, MIT li- cence.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Beginning to read in turkish: A phonologically transparent orthography",
"authors": [
{
"first": "Banu",
"middle": [],
"last": "\u00d6ney",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Aydin Y\u00fccesan Durgunoglu",
"suffix": ""
}
],
"year": 1997,
"venue": "Applied psycholinguistics",
"volume": "18",
"issue": "1",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Banu \u00d6ney and Aydin Y\u00fccesan Durgunoglu. 1997. Beginning to read in turkish: A phonologically transparent orthography. Applied psycholinguistics, 18(1):1-15.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A survey of the usages of deep learning in natural language processing",
"authors": [
{
"first": "W",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Otter",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Julian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Medina",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kalita",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1807.10854"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel W Otter, Julian R Medina, and Jugal K Kalita. 2018. A survey of the usages of deep learning in natural language processing. arXiv preprint arXiv:1807.10854.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "A",
"middle": [],
"last": "Radford",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Radford. 2018. Improving language understanding by generative pre-training.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Jane M Erskine, and Collaboration with COST Action A8 Network",
"authors": [
{
"first": "H",
"middle": [
"K"
],
"last": "Philip",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Seymour",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Aro",
"suffix": ""
}
],
"year": 2003,
"venue": "British Journal of psychology",
"volume": "94",
"issue": "2",
"pages": "143--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip HK Seymour, Mikko Aro, Jane M Erskine, and Collaboration with COST Action A8 Network. 2003. Foundation literacy acquisition in euro- pean orthographies. British Journal of psychology, 94(2):143-174.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The serbo-croatian orthography constrains the reader to a phonologically analytic strategy",
"authors": [
{
"first": "Laurie",
"middle": [
"B"
],
"last": "Mt Turvey",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lukatela++",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MT Turvey, Laurie B Feldman, and G Lukatela++. 1984. The serbo-croatian orthography constrains the reader to a phonologically analytic strategy. Sta- tus Report on Speech Research: A Report on the, page 17.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Analysing orthographic depth of different languages using dataoriented algorithms",
"authors": [
{
"first": "Antal",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "Alain",
"middle": [],
"last": "Bosch",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Content",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antal van den Bosch, Alain Content, Walter Daele- mans, and Beatrice de Gelder. 1994. Analysing or- thographic depth of different languages using data- oriented algorithms.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Rule-based korean grapheme to phoneme conversion using sound patterns",
"authors": [
{
"first": "Yu-Chun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Richard Tzong-Han",
"middle": [],
"last": "Tsai",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation",
"volume": "2",
"issue": "",
"pages": "843--850",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu-Chun Wang and Richard Tzong-Han Tsai. 2009. Rule-based korean grapheme to phoneme conver- sion using sound patterns. In Proceedings of the 23rd Pacific Asia Conference on Language, Informa- tion and Computation, Volume 2, pages 843-850.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Example of unambiguous correspondence during writing and reading tasks in Esperanto.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Scatterplot of the mean scores.",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Scores with 1, 000 training samples.",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Scores with 2, 000 training samples.",
"num": null
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": "Scores with 3, 000 training samples.",
"num": null
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"text": "Scores with 5, 000 training samples.",
"num": null
},
"FIGREF6": {
"type_str": "figure",
"uris": null,
"text": "Scores with 10, 000 training samples.",
"num": null
},
"FIGREF7": {
"type_str": "figure",
"uris": null,
"text": "Scores according to the number of training samples",
"num": null
},
"TABREF0": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Features of the multi-orthography dataset certain mistakes performed by a new learner during writing and reading.Remarkably, our method should apply to any orthography, provided a dataset is available."
},
"TABREF1": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>ar</td><td>12,057</td><td>32</td><td>47</td><td>8.0 \u00b1 2.0</td><td>8.9 \u00b1 2.3</td></tr><tr><td>br</td><td>17,343</td><td>45</td><td>29</td><td>6.6 \u00b1 1.9</td><td>7.5 \u00b1 2.2</td></tr><tr><td>de</td><td>529,740</td><td>41</td><td>30</td><td>10.2 \u00b1 3.1</td><td>11.5 \u00b1 3.4</td></tr><tr><td>en</td><td>42,206</td><td>42</td><td>29</td><td>7.3 \u00b1 2.7</td><td>7.6 \u00b1 2.6</td></tr><tr><td>eo</td><td>26,845</td><td>25</td><td>28</td><td>8.8 \u00b1 2.6</td><td>8.6 \u00b1 2.5</td></tr><tr><td>es</td><td>40,824</td><td>34</td><td>33</td><td>8.1 \u00b1 2.7</td><td>8.7 \u00b1 2.6</td></tr><tr><td>fi</td><td>105,352</td><td>28</td><td>27</td><td>10.4 \u00b1 3.5</td><td>10.4 \u00b1 3.5</td></tr><tr><td>fr</td><td>1,214,248</td><td>35</td><td>41</td><td>9.0 \u00b1 2.7</td><td>11.2 \u00b1 2.9</td></tr><tr><td>fro</td><td>1,214,262</td><td>35</td><td>32</td><td>9.0 \u00b1 2.7</td><td>8.6 \u00b1 2.6</td></tr><tr><td>it</td><td>26,798</td><td>34</td><td>32</td><td>9.1 \u00b1 2.8</td><td>9.1 \u00b1 2.6</td></tr><tr><td>ko</td><td>64,669</td><td>41</td><td>67</td><td>10.6 \u00b1 4.0</td><td>8.3 \u00b1 3.0</td></tr><tr><td>nl</td><td>13,340</td><td>45</td><td>28</td><td>7.8 \u00b1 3.1</td><td>8.6 \u00b1 3.4</td></tr><tr><td>pt</td><td>12,190</td><td>37</td><td>38</td><td>7.7 \u00b1 2.3</td><td>7.9 \u00b1 2.3</td></tr><tr><td>ru</td><td>304,514</td><td>30</td><td>33</td><td>10.5 \u00b1 3.1</td><td>10.7 \u00b1 3.1</td></tr><tr><td>sh</td><td>98,575</td><td>27</td><td>27</td><td>9.1 \u00b1 2.8</td><td>8.9 \u00b1 2.7</td></tr><tr><td>tr</td><td>117,841</td><td>36</td><td>31</td><td>10.3 \u00b1 3.7</td><td>10.1 \u00b1 3.6</td></tr><tr><td>zh</td><td>27,688</td><td>32</td><td>4813</td><td>9.9 \u00b1 2.2</td><td>1.8 \u00b1 0.3</td></tr><tr><td>eno</td><td>26,845</td><td>25</td><td>25</td><td>8.8 \u00b1 2.6</td><td>8.8 \u00b1 2.6</td></tr><tr><td>ent</td><td>26,845</td><td>25</td><td>25</td><td>8.8 \u00b1 2.6</td><td>8.8 \u00b1 
2.6</td></tr></table>",
"text": "The model had to Orthography Samples Phonemes Graphemes Nb. of Phonemes Nb of Graphemes"
},
"TABREF2": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Summary of the sub-datasets. For each sub-dataset, a line indicates the number of samples available, the number of different phoneme UTF-8 characters, the number of different grapheme UTF-8 characters, the mean number of phonemes in words, and the mean number of graphemes in words."
}
}
}
}