{
"paper_id": "U05-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:08:21.605068Z"
},
"title": "Word Prediction in a Running Text: A Statistical Language Modeling for the Persian Language",
"authors": [
{
"first": "Masood",
"middle": [],
"last": "Ghayoomi",
"suffix": "",
"affiliation": {},
"email": "masood29@yahoo.com"
},
{
"first": "Seyyed",
"middle": [
"Mostafa"
],
"last": "Assi",
"suffix": "",
"affiliation": {},
"email": "s_m_assi@ihcs.ac.ir"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word prediction is the problem of guessing which words are likely to follow in a given segment of a text to help a user with disabilities. As the user enters each letters of the required word, the system displays a list of the most probable words that could appear in that position. In our research we designed and implemented a word predictor for the Persian language. Three standard performance metrics were used to evaluate the system including keystroke saving, the most important one. The system achieved 57.57% saving in keystrokes.",
"pdf_parse": {
"paper_id": "U05-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "Word prediction is the problem of guessing which words are likely to follow in a given segment of a text to help a user with disabilities. As the user enters each letters of the required word, the system displays a list of the most probable words that could appear in that position. In our research we designed and implemented a word predictor for the Persian language. Three standard performance metrics were used to evaluate the system including keystroke saving, the most important one. The system achieved 57.57% saving in keystrokes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A word prediction system facilitates typing of a text for a user with physical or cognitive disabilities. As the user enters each letter of the required word, the system displays a list of the most likely completions of the partially typed word. As the user continues typing more letters, the system updates the suggestion list accordingly based on the new context. If the required word is in the list, the user can select it with a single keystroke. Then, the system tries to predict the next word. It displays a list of suggestions to the user. If he finds the next intended word, he selects it; otherwise he enters the first letter of the next word to restrict the suggestions. The process continues to complete the text. For someone with physical disabilities, each keystroke is an effort; as a result, the prediction system saves the user's energy by reducing his physical effort and also the system assists the user in the composition of the well-formed text qualitatively and quantitatively (Fazly, 2002) . Moreover, the system increases user's concentration (Klund and Novak, 2001) . Traditionally, word predictors have been built based on statistical language modeling (SLM) (Gustavii and Pederssen, 2003) . SLM is based on the probability of a sequence of n given words (n-gram). A number of word prediction systems are available today for English, Swedish, and other European languages. Most of these systems have used n-gram language modeling. The current research deals with the design and implementation of a word prediction system based on SLM for the Persian language.",
"cite_spans": [
{
"start": 998,
"end": 1011,
"text": "(Fazly, 2002)",
"ref_id": null
},
{
"start": 1066,
"end": 1089,
"text": "(Klund and Novak, 2001)",
"ref_id": null
},
{
"start": 1184,
"end": 1214,
"text": "(Gustavii and Pederssen, 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "By looking back, early prediction systems mostly were developed in the 1980s. They were used as a writing assistance system for the one with disabilities. In the early systems, they only suggested the high frequency words that matched the partially typed word and ignored the entire previous context (Swiffin et al, 1985) . SoothSayer is such a system. To make suggestions more appropriate, some systems look at a larger context by exploiting word bigram language model beside the word unigram. WordQ (Nantais et al, 2001; Shein et al, 2001 ) is a system which is developed for English. Profet (Carlberger et al, 1997a; Carlberger et al, 1997b ) is a system developed in four languages: English, Norwegian, Polish and French. PAL (Predicative Adaptive Lexicon) is one of the major projects at ACSD (Applied Computer Studies Division) at Dundee University, Scotland (Booth et al, 1990) . These systems have used word unigrams and bigrams; also, the systems try being adapted to the user's typing behavior by employing information on the user's recency and frequency of use. Since there are no previous works of any developed word prediction systems for Persian, what we have done is the first attempt to design and implement a word predictor for this language. We have used the experience of the developed systems for the English and Swedish languages in our research. Details are presented in Ghayoomi (2004) .",
"cite_spans": [
{
"start": 300,
"end": 321,
"text": "(Swiffin et al, 1985)",
"ref_id": null
},
{
"start": 501,
"end": 522,
"text": "(Nantais et al, 2001;",
"ref_id": null
},
{
"start": 523,
"end": 540,
"text": "Shein et al, 2001",
"ref_id": null
},
{
"start": 594,
"end": 619,
"text": "(Carlberger et al, 1997a;",
"ref_id": null
},
{
"start": 620,
"end": 643,
"text": "Carlberger et al, 1997b",
"ref_id": "BIBREF1"
},
{
"start": 865,
"end": 884,
"text": "(Booth et al, 1990)",
"ref_id": null
},
{
"start": 1393,
"end": 1408,
"text": "Ghayoomi (2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "Persian is a member of the Indo-European languages and has many features in common with them in morphology, syntax, the sound system, and the lexicon. Arabic is from the Semitic family and differs from Persian in many respects. The Persian alphabet is a modified version of the Arabic alphabet. Hence it is more appropriate to the Arabic sound system and less suitable for Persian. For instance \u202b,'\u0632'\u202c \u202b,'\u0630'\u202c \u202b'\u0636'\u202c and \u202b'\u0638'\u202c are four alphabets both in Persian and Arabic, but all pronounced the same /z/ in Persian and differently in Arabic. So there is a little correspondence between Persian letters and sounds. Although some alphabets are written differently and there is no difference in their pronunciations, they make differentiations in the meanings of words. Letters have joined or disjoined forms; i.e. based on the position that the letters appear in a word, they have different forms. Persian writing system is right to left, the same as Arabic; but quite contrary to the European languages that have left to right writing system. The vocabularies have been greatly influenced by Arabic and to some extent by French, and a great amount of words are borrowed from these languages. Talking about Persian syntax, only verbs are inflected in the language. The subjective mood is widely used in it. It is an SOV language, and also a free word order language. The language does not make use of gender; not even the third person of he or she distinctions that exists in English (Assi, 2004) .",
"cite_spans": [
{
"start": 1482,
"end": 1494,
"text": "(Assi, 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Some Facts about the Persian Language",
"sec_num": "3."
},
{
"text": "The task of predicting the next word can be stated as attempting to estimate the probability function P:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Word Model",
"sec_num": "4."
},
{
"text": "P(W n |W 1 ,\u2026, W n-1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Word Model",
"sec_num": "4."
},
{
"text": "In such a stochastic problem, we use the previous word(s), the history, to predict the next word. To give reasonable prediction to the words which appear together, we try to use Markov assumption that only the last few words affect the next word. So if we construct a model where all histories restrict the word that would appear in the next position, we have then an (n-1) th order Markov model or an n-gram word model. (Manning and Sch\u00fcdze, 1999; Jurafsky and Martin, 2000) The aim of our study is to design a word predictor that uses a unigram (n=1), bigram (n=2), and trigram (n=3) word model for Persian.",
"cite_spans": [
{
"start": 421,
"end": 448,
"text": "(Manning and Sch\u00fcdze, 1999;",
"ref_id": null
},
{
"start": 449,
"end": 475,
"text": "Jurafsky and Martin, 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Word Model",
"sec_num": "4."
},
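The maximum-likelihood estimate of P(Wn | W1, …, Wn-1) under the (n-1)th order Markov assumption reduces to counting n-grams. As an illustrative sketch only (this Python code and its names are ours, not the paper's):

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count every n-gram (as a tuple) in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def mle_prob(tokens, history, word):
    """Maximum-likelihood estimate of P(word | history) for an n-gram model,
    where the history holds the previous n-1 words."""
    n = len(history) + 1
    num = ngram_counts(tokens, n)[tuple(history) + (word,)]
    den = ngram_counts(tokens, n - 1)[tuple(history)] if history else len(tokens)
    return num / den if den else 0.0

corpus = "the cat sat on the mat the cat ran".split()
p = mle_prob(corpus, ["the"], "cat")  # C(the, cat) / C(the) = 2/3
```

Recounting on every call is of course wasteful; a real predictor precomputes the counts once, as the paper does with NSP.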
{
"text": "Suppose the user is typing a sentence and the following sequence has been entered so far from right to left based on Persian writing system:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Prediction Algorithm",
"sec_num": "4.1."
},
{
"text": "CW i W i-1 W i-2 \u2026",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Prediction Algorithm",
"sec_num": "4.1."
},
{
"text": "where W i-2 and W i-1 are the most recently completed words and CW i is the current word that is going to be predicted or completed. Let W be the set of all words in the lexicon that likely would appear in that position. A statistical word prediction algorithm attempts to select the N most appropriate words from W that are likely to be the user's intended words, where N is usually between 1, 5, 9 or 10 based on the experiment done by Soede and Foulds (1986) . The general approach is to estimate the probability of each candidate word, w i \u2208 W, being the user's required word in that context.",
"cite_spans": [
{
"start": 438,
"end": 461,
"text": "Soede and Foulds (1986)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Prediction Algorithm",
"sec_num": "4.1."
},
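Selecting the N most appropriate candidates from W given the partially typed prefix can be sketched as below. This is our own illustrative Python, not the authors' implementation; the toy counts stand in for statistics extracted from a corpus:

```python
from collections import Counter

def suggest(prefix, history, bigrams, unigrams, n=5):
    """Rank the lexicon words that complete `prefix`, preferring bigram
    evidence for the given one-word history, then unigram frequency;
    return the N best candidates."""
    candidates = [w for w in unigrams if w.startswith(prefix)]
    def score(w):
        return (bigrams.get((history, w), 0), unigrams[w])
    return sorted(candidates, key=score, reverse=True)[:n]

# Toy counts standing in for corpus statistics.
unigrams = Counter({"prediction": 5, "predictor": 3, "present": 7})
bigrams = {("word", "prediction"): 4, ("word", "predictor"): 1}
top2 = suggest("pre", "word", bigrams, unigrams, n=2)  # ['prediction', 'predictor']
```

With an empty prefix the same function yields whole-word prediction for the next position, which is how a predictor switches from completion to prediction after a word is selected.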
{
"text": "To do our research, we made a balanced corpus in different genres from 8 months of the on-line Hamshahri newspaper archive on the web. Although the corpus was small, it was a good representative for the Persian language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5.1.Corpus",
"sec_num": null
},
{
"text": "The corpus contained approximately 8 million tokens. After downloading the web pages, HTML pages were converted to their plain text equivalents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5.1.Corpus",
"sec_num": null
},
{
"text": "The plain text corpus was annotated. One of the annotations was replacing various spellings of a word by a selected spelling. In Persian, some words have various spellings without any changes in the meaning. To choose one spelling among various ones, the highest frequency of use was used to consider the word as the default spelling, and the various spellings were replaced by the selected one. Replacing was done manually. By doing so, the distribution of frequencies of a word with different spellings would be gathered together to assign a single frequency to the selected spelling; because of the smallness of the corpus. For example, these four words were found in the corpus: \u202b\u0627\ufee3\ufeae\ufef3\ufb91\ufe8e\ufef3\ufbfd\"\u202c /?emrik\u0101yi/\", \u202b\u0627\ufee3\ufeae\ufef3\ufb91\ufe8e\ufe8b\ufbfd\"\u202c /?emrik\u0101?i/\", \u202b\ufe81\ufee3\ufeae\ufef3\ufb91\ufe8e\ufef3\ufbfd\"\u202c /?\u0101mrik\u0101yi/\", \u202b\ufe81\ufee3\ufeae\ufef3\ufb91\ufe8e\ufe8b\ufbfd\"\u202c /?\u0101mrik\u0101?i/\". All the words mean \"American\". Between them, only the spelling \u202b\"\ufe81\ufee3\ufeae\ufef3\ufb91\ufe8e\ufef3\ufbfd\"\u202c with the highest frequency of use was selected as default and the other spellings were replaced to that.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "5.2."
},
{
"text": "The other annotation was removing words or phrases in the corpus from other languages or other Persian dialects comparing to the standard language that do not belong to Persian at all and not be used by native speakers of the language. Email or internet addresses were removed from the corpus. Headlines, footnotes and references in the articles were also removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "5.2."
},
{
"text": "After annotation, the corpus was divided into three sections: one was the training corpus that contained 6258000 tokens, and 72494 types; the other section was used as the developing corpus which contained 872450 tokens, and the last section was used as the test corpus which contained 11960 tokens. To do the tokenization process, the training corpus was ran on NSP (N-gram Statistic Package), a program which was written in Perl in Linux (Banerjee and Pedersen, 2003) , and uni-, bi-, and trigram statistics were extracted. Words with frequency of one and two regarded as Out-Of-Vocabulary (OOV) and only the most common sequence of words with the frequency of three and more were taken into account and the statistics of word uni-, bi-, and trigrams were extracted. In NSP a token is defined as a continuance sequence of characters to be space delimited alphanumeric strings or individual characters.",
"cite_spans": [
{
"start": 434,
"end": 469,
"text": "Linux (Banerjee and Pedersen, 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "5.3.Tokenization",
"sec_num": null
},
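The frequency cutoff described above (treating frequency-one and frequency-two items as OOV and keeping only n-grams seen three or more times) can be sketched as follows. This Python stand-in only illustrates the idea and is not NSP itself:

```python
from collections import Counter

def extract_ngrams(tokens, n, min_freq=3):
    """Extract n-gram statistics, discarding n-grams seen fewer than
    min_freq times (frequency-1 and -2 items treated as OOV here)."""
    counts = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return {g: c for g, c in counts.items() if c >= min_freq}

kept = extract_ngrams("a b a b a b c".split(), 2)  # {('a', 'b'): 3}
```

The cutoff shrinks the model tables considerably on a small corpus, at the cost of never suggesting rare words.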
{
"text": "Since a big corpus includes only a fraction of n-grams, increasing n makes the distribution of the events rarer. We have used the Simple Linear Interpolation (SLI) method (Manning and Sch\u00fcdze, 1999) to smooth the probability distribution.",
"cite_spans": [
{
"start": 171,
"end": 198,
"text": "(Manning and Sch\u00fcdze, 1999)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solving Sparseness",
"sec_num": "5.4."
},
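Simple Linear Interpolation mixes the unigram, bigram, and trigram maximum-likelihood estimates with weights l1 + l2 + l3 = 1 (in the paper's setup, the lambda values are tuned on the developing corpus). A minimal sketch in our own notation, not the authors' code:

```python
def sli_prob(w3, w2, w1, uni, bi, tri, lambdas, total):
    """P(w3 | w1, w2) smoothed by Simple Linear Interpolation:
    l1*P(w3) + l2*P(w3|w2) + l3*P(w3|w1,w2), with l1 + l2 + l3 = 1."""
    l1, l2, l3 = lambdas
    p_uni = uni.get(w3, 0) / total
    p_bi = bi.get((w2, w3), 0) / max(uni.get(w2, 0), 1)
    p_tri = tri.get((w1, w2, w3), 0) / max(bi.get((w1, w2), 0), 1)
    return l1 * p_uni + l2 * p_bi + l3 * p_tri

# Toy counts; total is the corpus size in tokens.
uni = {"the": 3, "cat": 2}
bi = {("the", "cat"): 2, ("on", "the"): 2}
tri = {("on", "the", "cat"): 1}
p = sli_prob("cat", "the", "on", uni, bi, tri, (0.2, 0.3, 0.5), total=10)
# 0.2*(2/10) + 0.3*(2/3) + 0.5*(1/2) = 0.49
```

Because the result is non-zero whenever the unigram count is non-zero, an unseen trigram no longer forces a zero probability, which is the point of the smoothing.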
{
"text": "The architecture of our algorithm is shown in figure 1. The system we developed has four major components: a) the statistical information extracted from the training corpus for the prediction algorithm; b) the predictive program that tries to suggest words to the simulated user. This component has two parts: one is word completion and the other one is word prediction. The prediction algorithm first completes the partially spelled word and then it predicts the probable words and present them in the suggestion list; c) a simulate user that types the test text. The simulated typist is a perfect user who always chooses the desired word when it is available in the prediction list and does not miss it; d) the component of updating the statistics of the words' recency of use and adding new words along with their frequency of use. To get the system adaptive to the user, two processes will be done. One is extracting word uni-, bi-, and trigrams from the current text that is being entered. The other process is saving and updating the recent extracted statistical information in a dynamic file. The recent information is related to the static file which keeps the statistical information resulted from the training corpus. When the predictor tries to predict words, first it weight to the words that are recently used; then, it uses the statistical information of the static file. Gradually as the user enters more texts, the system saves and updates the information and gets adapted to the user's style of writing and brings up more appropriate suggestions in the prediction list. searches the dynamic file and gives more In themselves, the parameter that varied in our experiments was the number of suggestions in the prediction list. 
It is assumed that the higher number of words in the suggestion list, the greater the chance of having the intended word among the suggestions; but it imposes a cognitive load on the user, because it takes the search time for the desired word longer and it is more likely that the user would miss the word they are looking for. Different users of word prediction systems may prefer different values for this parameter according to their type and level of disabilities. As it has been stated in section 4.1, Soede and Foulds (1986) which the most probable words would appear on the top of the list. Also, in our research we designed a word processor to be compatible w specifications such as having a right to left writing system to have the cursor in its right direction.",
"cite_spans": [
{
"start": 2249,
"end": 2272,
"text": "Soede and Foulds (1986)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm",
"sec_num": "6.1."
},
{
"text": "To evaluate our system, t p research (Woods, 1996; Fazly, 2002) : Keystroke Saving (KSS): The percentage of keystrokes that the user saves by usi word prediction system. A higher value for keystroke saving implies a better performance. Hit Rate (HR): The percentage of correct words that a without entering any letters of the next word. A higher hit rate implies a better performance. Keystroke until Prediction (KuP): The average numb enters for each word before it appears in the prediction list. A lower value for this measure implies a better performance.",
"cite_spans": [
{
"start": 37,
"end": 50,
"text": "(Woods, 1996;",
"ref_id": null
},
{
"start": 51,
"end": 63,
"text": "Fazly, 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Perfor",
"sec_num": "6.3."
},
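The three metrics reduce to simple ratios. In the sketch below the names are ours, and the keystroke count passed to keystroke_saving is a hypothetical figure back-calculated to be consistent with the reported 57.57% KSS, purely for illustration:

```python
def keystroke_saving(chars_total, keys_pressed):
    """KSS: percentage of keystrokes saved versus typing every character."""
    return 100.0 * (chars_total - keys_pressed) / chars_total

def hit_rate(hits_before_any_letter, words_total):
    """HR: percentage of words offered in the list before any letter is typed."""
    return 100.0 * hits_before_any_letter / words_total

def keystrokes_until_prediction(total_keys_before_hit, words_total):
    """KuP: average keystrokes entered per word before it appears in the list."""
    return total_keys_before_hit / words_total

# 46637 is the test-corpus character count from the paper; 19788 is a
# hypothetical number of pressed keys chosen to reproduce KSS ≈ 57.57.
kss = keystroke_saving(46637, 19788)
```

Note that whether the selection keystroke (F1–F9) and the automatic space are counted changes the figures, as the paper discusses.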
{
"text": "To test our s th corpus was 11960 words and contained 46637 characters without considering space as a character. The reason of not considering space is that after selecting any words a space will be entered automatically and the result is having a keystroke saving. On the other hand, to select a word from the list one of the Function Keys, F1 to F9, are required to be pressed to drag and drop the intended word to the text being typed. The result is that the keystroke which is saved by entering the automatic space would be lost. The virtual typist is a Visual C ++ program that reads in each text letter by let reading each letters, it determines what the correct prediction for the current position is. The prediction program then is called and a list of suggestions is returned to the user. The user searches the prediction list for the correct prediction. If it is found in the list, the user increases the amount of correct predictions by the predictor. The correctly predicted word is then completed and the user continues to read the rest of the text. The gained results are presented in table 1 for 1, 5 and 9 numbers of suggestions: Based on ta n percentage of KSS, and HR; and decreasing KuP. The highest KSS is achieved when the numbers of suggestions are 9. The 57.57% KSS means for each 100 characters that the user is required to type to enter a text segment, more than half of the text is entered by the system, and the rest by the user. 24.42% of words appeared in the prediction list before entering any letters of the next word. On average 1.65 keystrokes were needed to be pressed by the user to type any words on the system. There is no valid average word length for Persian, but based on a sampling method from our Persian corpus, the average length is 3.91.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7."
},
{
"text": "We conducted d on their subjects (genres) in the newspaper. Each text segment equally contained 1000 characters, without considering space. Then each text was given to the virtual typist one by one. The results are available in table 2. Using a development set, we found that by using 9 numbers of suggestions we gained the highest KSS. Therefore our final setup uses the same 9 numbers of suggestions as its default. By comparing the results, we observed when K decreases; and vice versa. This observation shows that there is a one-to-one correspondence between KSS and HR but they are quite contrary to KuP. Some subjects (genres) such as Education achieved the highest KSS, the highest HR and the lowest KuP. But Music achieved the lowest KSS, the lowest HR, and the highest KuP. In general, we saw based on the sequence of words in different genres, it has differen effects on the gained results. It seems that the texts on the subjects of Thought, Sports News, Political News, and Education which gained the keystroke saving of more than 70%, have more words and sequences of words in common, and the words are more predictable as a result. It means the dependency of words with each other being collocated is high. But the texts on the genres of Music, Literature, Arts and Literature, Science and Culture, and Social News which gained the keystroke saving of less than 60% have some words that are not available in the lexicon of the program and/or the sequence of the words that come together on these genres are rare and less predictable consequently. Of course by adapting the system for a special purpose, a better gained as it was described in section 6.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8."
},
{
"text": "We have designe w knowledge this is the first attempt for the language. Using such a system saved a great number of keystrokes; and it led to reduction of user's effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9."
},
{
"text": "Our future work is a th word to the available word in the lexicon of the system, adding syntactic and later semantic information of the Persian language to the system to make predictions more appropriate syntactically and semantically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further Wor",
"sec_num": "10."
},
{
"text": "Assi, S.M. (2004 P Technology and Its Disciplines (WITID), Kish Island, Iran, Feb. 24-26, 2004, pp. 85-94. Banerjee, S. and T. Pedersen. 2003 ",
"cite_spans": [
{
"start": 11,
"end": 33,
"text": "(2004 P Technology and",
"ref_id": null
},
{
"start": 34,
"end": 106,
"text": "Its Disciplines (WITID), Kish Island, Iran, Feb. 24-26, 2004, pp. 85-94.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bibliography",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "dyslexia",
"authors": [
{
"first": "R",
"middle": [],
"last": "Barner",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Heldner",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Sullivan",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Wretling",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of Fonetik '97 Conference",
"volume": "4",
"issue": "",
"pages": "17--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "dyslexia.\" In Barner, R., Heldner, M., Sullivan, K., and Wretling, P., editors, Proceedings of Fonetik '97 Conference, Volume 4, pp. 17-20.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "If word rediction can help, which program do you ) oundations of Statistical Natural Language",
"authors": [
{
"first": "A",
"middle": [],
"last": "Carlberger",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carlberger",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Agnuson",
"suffix": ""
},
{
"first": "M",
"middle": [
"S"
],
"last": "Hunnicutt",
"suffix": ""
},
{
"first": "S",
"middle": [
"E"
],
"last": "",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Nishiyama",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Tam",
"suffix": ""
},
{
"first": "J",
"middle": [
"; L A"
],
"last": "Marshall",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Pickering",
"suffix": ""
},
{
"first": "A",
"middle": [
"F"
],
"last": "Arnott",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Newell",
"suffix": ""
}
],
"year": 1985,
"venue": "Word ompletion Utilities. Master dissertation. Prediction in omputational Processing of the Persian wedish rammar for Word Prediction",
"volume": "",
"issue": "",
"pages": "23--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlberger, A. and J. Carlberger and T. agnuson and M.S. Hunnicutt and S.E. e of Syntax in Word ompletion Utilities. Master dissertation. Prediction in omputational Processing of the Persian wedish rammar for Word Prediction. Stockholm: . Martin. (2000) Speech and anguage Processing: An Introduction to Novak (2001) \"If word rediction can help, which program do you ) oundations of Statistical Natural Language and M. Johansson. 001) \"Efficacy of the word prediction ntais and R. Nishiyama and . Tam and P. Marshall. (2001) \"Word cueing oulds (1986) \"Dilemma of rediction in communication aids and mental in, A.L. and J.A. Pickering and J. L. Arnott, nd A. F. Newell (1985) \"PAL: An effort tactic Pre-Processing Single-Word Prediction for Disabled People. M Palazuelos-Cagigas and S.A. Navarro. (1997b) \"Profet, a new generation of word prediction: An evaluation study.\" Copestake, A., Langer, S. and Palazuelos-Cagigas S., editors, Natural Language Processing for Communication aids, In Proceedings of a workshop sponsored by ACL, Madrid, Spain, pp 23-28.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Natural Language Processing, Computational Linguistics, and Speech Recognition",
"authors": [
{
"first": "E",
"middle": [],
"last": "Gustavii",
"suffix": ""
},
{
"first": "; A S G Uppsala University",
"middle": [],
"last": "Pettersson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gustavii, E. and E Pettersson (2003) A S G Uppsala University Jurafsky, D. and J.H L Natural Language Processing, Computational Linguistics, and Speech Recognition. New Jersey: Prentice-Hall.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "F Processing",
"authors": [
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manning, C.D., and H. Sch\u00fctze. (1999 F Processing. The MIT Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "algorithm in WordQ TM",
"authors": [
{
"first": "T",
"middle": [],
"last": "Nantais",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Shein",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 24 th Annual Conference on Technology and Disability",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nantais, T. and F. Shein (2 algorithm in WordQ TM .\" In Proceedings of the 24 th Annual Conference on Technology and Disability, RESNA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Na C for persons with writing difficulties: WordQ",
"authors": [
{
"first": "F",
"middle": [],
"last": "Shein",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "The16 th Annual International Conference on Technology and Persons with Disabilities",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shein, F. and T. Na C for persons with writing difficulties: WordQ.\" The16 th Annual International Conference on Technology and Persons with Disabilities, California State University at Northridge, Los Angeles, CA, March.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Swiff a efficient portable communication aid and keyboard emulator",
"authors": [
{
"first": "M",
"middle": [],
"last": "Soede",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 9 th Annual Conference on Rehabilitation Technology",
"volume": "",
"issue": "",
"pages": "197--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soede, M. and R.A. F p load.\" In Proceedings of the 9 th Annual Conference on Rehabilitation Technology, 357- 359. Swiff a efficient portable communication aid and keyboard emulator.\" In Proceedings of the 8 th Annual Conference on Rehabilitation Technology, pp. 197-199.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "The architecture of our algorithm suggestions. In our system, the sorting order of words in the list is based on the frequency of use in ith the Persian mance Measures hree standard erformance metrics have been used in our ng the ppear in the suggestion list er of keystrokes that the user ystem, test corpus was given to e simulated typist. The length of the test ter. After",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "dding a spell-checker to e system to replace various spellings of a ) \"Persian language and IT\" In roceedings of the 2 nd Workshop on Information esign, implementation and use of the Ngram 1990) I know what you mean\". Special Children, pp. er, A. and T. Magnuson and J. Carlberger nd H. Wachtmeister and S.Hunnicutt. (1997a)",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"text": "experimentally identified the number of suggestions. In our work, we selected the",
"content": "<table><tr><td/><td>Training Corpus</td><td/></tr><tr><td/><td>Extracting Statistics</td><td/></tr><tr><td/><td>N-gram Statistics</td><td>Developing Corpus</td></tr><tr><td>Updating,</td><td>Computing</td><td>Computing Lambda</td></tr><tr><td>Adding new word</td><td>Probability</td><td>Value</td></tr><tr><td/><td>Prediction</td><td>Setting</td></tr><tr><td/><td>Simulated Typist</td><td>Test Corpus</td></tr><tr><td/><td>Test Result</td><td/></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF3": {
"text": "",
"content": "<table><tr><td/><td/><td/><td/><td>result would be</td></tr><tr><td/><td/><td/><td/><td>d, implemented and tested a</td></tr><tr><td/><td/><td/><td/><td>ord predictor for Persian. To the best of our</td></tr><tr><td colspan=\"2\">: KSS, HR</td><td>and K</td><td>orma</td><td>sur</td></tr><tr><td>ifferent genres</td><td colspan=\"2\">of test co</td><td colspan=\"2\">1000 c aracter</td></tr><tr><td colspan=\"5\">SS increases, HR increases, and KuP</td></tr><tr><td/><td/><td/><td/><td>t</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}