{
"paper_id": "J06-3001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:01:02.195173Z"
},
"title": "Orthographic Errors in Web Pages: Toward Cleaner Web Corpora",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Ringlstetter",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Klaus",
"middle": [
"U"
],
"last": "Schulz",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Stoyan",
"middle": [],
"last": "Mihov",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Since the Web by far represents the largest public repository of natural language texts, recent experiments, methods, and tools in the area of corpus linguistics often use the Web as a corpus. For applications where high accuracy is crucial, the problem has to be faced that a non-negligible number of orthographic and grammatical errors occur in Web documents. In this article we investigate the distribution of orthographic errors of various types in Web pages. As a by-product, methods are developed for efficiently detecting erroneous pages and for marking orthographic errors in acceptable Web documents, thus reducing the number of errors in corpora and linguistic knowledge bases automatically retrieved from the Web.",
"pdf_parse": {
"paper_id": "J06-3001",
"_pdf_hash": "",
"abstract": [
{
"text": "Since the Web by far represents the largest public repository of natural language texts, recent experiments, methods, and tools in the area of corpus linguistics often use the Web as a corpus. For applications where high accuracy is crucial, the problem has to be faced that a non-negligible number of orthographic and grammatical errors occur in Web documents. In this article we investigate the distribution of orthographic errors of various types in Web pages. As a by-product, methods are developed for efficiently detecting erroneous pages and for marking orthographic errors in acceptable Web documents, thus reducing the number of errors in corpora and linguistic knowledge bases automatically retrieved from the Web.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The automated analysis of large corpora has many useful applications (Church and Mercer 1993) . Suitable language repositories can be used for deriving models of a given natural language, as needed for speech recognition (Ostendorf, Digalakis, and Kimball 1996; Jelinek 1997; Chelba and Jelinek 2002) , language generation (Oh and Rudickny 2000) , and text correction (Kukich 1992; Amengual and Vidal 1998; Strohmaier et al. 2003b) . Other corpus-based methods determine associations between words (Grefenstette 1992; Dunning 1993; Lin et al. 1998) , which yields a basis for computing thesauri, or dictionaries of terminological expressions and multiword lexemes (Gaizauskas, Demetriou, and Humphreys 2000; Grefenstette 2001) .",
"cite_spans": [
{
"start": 69,
"end": 93,
"text": "(Church and Mercer 1993)",
"ref_id": null
},
{
"start": 221,
"end": 261,
"text": "(Ostendorf, Digalakis, and Kimball 1996;",
"ref_id": "BIBREF29"
},
{
"start": 262,
"end": 275,
"text": "Jelinek 1997;",
"ref_id": "BIBREF18"
},
{
"start": 276,
"end": 300,
"text": "Chelba and Jelinek 2002)",
"ref_id": "BIBREF5"
},
{
"start": 323,
"end": 345,
"text": "(Oh and Rudickny 2000)",
"ref_id": "BIBREF28"
},
{
"start": 382,
"end": 406,
"text": "Amengual and Vidal 1998;",
"ref_id": "BIBREF0"
},
{
"start": 407,
"end": 431,
"text": "Strohmaier et al. 2003b)",
"ref_id": null
},
{
"start": 498,
"end": 517,
"text": "(Grefenstette 1992;",
"ref_id": "BIBREF14"
},
{
"start": 518,
"end": 531,
"text": "Dunning 1993;",
"ref_id": "BIBREF7"
},
{
"start": 532,
"end": 548,
"text": "Lin et al. 1998)",
"ref_id": "BIBREF25"
},
{
"start": 664,
"end": 707,
"text": "(Gaizauskas, Demetriou, and Humphreys 2000;",
"ref_id": "BIBREF11"
},
{
"start": 708,
"end": 726,
"text": "Grefenstette 2001)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "From multilingual texts, translation lexica can be generated (Gale and Church 1991; Kupiec 1993; Kumano and Hirakawa 1994; Boutsis, Piperidis, and Demiros 1999; Grefenstette 1999) . The analysis of technical texts is used to automatically build dictionaries of acronyms for a given field (Taghva and Gilbreth 1999; Yeates, Bainbridge, and Witten 2000) , and related methods help to compute dictionaries that cover the special vocabulary of a given thematic area (Strohmaier et al. 2003a) . In computer-assisted language learning (CALL), mining techniques for corpora are used to create individualized and user-centric exercises for grammar and text understanding (Schwartz, Aikawa, and Pahud 2004; Brown and Eskenazi 2004; Fletcher 2004a) .",
"cite_spans": [
{
"start": 61,
"end": 83,
"text": "(Gale and Church 1991;",
"ref_id": "BIBREF12"
},
{
"start": 84,
"end": 96,
"text": "Kupiec 1993;",
"ref_id": "BIBREF24"
},
{
"start": 97,
"end": 122,
"text": "Kumano and Hirakawa 1994;",
"ref_id": "BIBREF23"
},
{
"start": 123,
"end": 160,
"text": "Boutsis, Piperidis, and Demiros 1999;",
"ref_id": "BIBREF2"
},
{
"start": 161,
"end": 179,
"text": "Grefenstette 1999)",
"ref_id": "BIBREF15"
},
{
"start": 288,
"end": 314,
"text": "(Taghva and Gilbreth 1999;",
"ref_id": "BIBREF36"
},
{
"start": 315,
"end": 351,
"text": "Yeates, Bainbridge, and Witten 2000)",
"ref_id": "BIBREF39"
},
{
"start": 462,
"end": 487,
"text": "(Strohmaier et al. 2003a)",
"ref_id": "BIBREF34"
},
{
"start": 663,
"end": 697,
"text": "(Schwartz, Aikawa, and Pahud 2004;",
"ref_id": "BIBREF32"
},
{
"start": 698,
"end": 722,
"text": "Brown and Eskenazi 2004;",
"ref_id": "BIBREF4"
},
{
"start": 723,
"end": 738,
"text": "Fletcher 2004a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "By Zipf's law, most words, phrases, and specific grammatical constructions have a very low frequency. Furthermore, the number of text genres and special thematic areas that come with their own picture of language is large. This explains that most of the aforementioned applications can only work when built on top of huge heterogeneous corpora. Since the Web represents by far the largest public repository for natural language texts, and since Web search engines such as Google offer simple access to pages where language material of a given orthographic, grammatical, or thematic kind is found, many recent experiments and technologies use the Web as a corpus (Kehoe and Renouf 2002; Morley, Renouf, and Kehoe 2003; Kilgarriff and Grefenstette 2003; Resnik and Smith 2003; Way and Gough 2003; Fletcher 2004b) .",
"cite_spans": [
{
"start": 662,
"end": 685,
"text": "(Kehoe and Renouf 2002;",
"ref_id": "BIBREF19"
},
{
"start": 686,
"end": 717,
"text": "Morley, Renouf, and Kehoe 2003;",
"ref_id": "BIBREF27"
},
{
"start": 718,
"end": 751,
"text": "Kilgarriff and Grefenstette 2003;",
"ref_id": null
},
{
"start": 752,
"end": 774,
"text": "Resnik and Smith 2003;",
"ref_id": "BIBREF30"
},
{
"start": 775,
"end": 794,
"text": "Way and Gough 2003;",
"ref_id": "BIBREF38"
},
{
"start": 795,
"end": 810,
"text": "Fletcher 2004b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "One potential problem for Web-based corpus linguistics is caused by the fact that words and phrases occurring in Web pages are sometimes erroneous. Typing errors represent one widespread phenomenon. Many Web pages, say, in English, are written by non-native speakers, or by persons with very modest language competence. As a consequence, spelling errors and grammatical mistakes result. The character sets that are used for writing Web pages are often not fully adequate for the alphabet of a given language, which represents another systematic source of inaccuracies. Furthermore, a small number of texts found on the Web are obtained via optical character recognition (OCR), which may again lead to garbled words. As a consequence of these and other error sources, the Web contains a considerable number of \"bad\" pages with language material that is inappropriate for corpus construction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In one way or another, all the aforementioned applications are affected by these inadequacies. While the problem is probably not too serious for approaches that merely collect statistical information about given language items, the construction of dictionaries and related linguistic knowledge bases-which are, after all, meant to be used in different scenarios of automated language processing-becomes problematic if too many erroneous entries are retrieved from Web pages. Obviously, in computer-assisted language learning it is a principal concern that words and phrases from the Web that are presented to the user are error free.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In discussions we found that problems resulting from erroneous language material in Web pages for distinct applications are broadly acknowledged (see also Section 4.4 of Kilgarriff and Grefenstette [2003]). Still, to the best of our knowledge, a serious analysis of the frequency and distribution of orthographic errors in the Web is missing, and no general methods have been developed that help to detect and exclude pages with too many erroneous words. In this article we first report on a series of experiments that try to answer the following questions:",
"cite_spans": [
{
"start": 170,
"end": 204,
"text": "Kilgarriff and Grefenstette [2003]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "What are important types of orthographic errors found in Web pages?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "2. How frequent are errors of a given kind? For a given error level (percentage of erroneous tokens) \u03c4, which percentage of Web pages exceeds error level \u03c4?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "3. How do these figures depend on the language, on the thematic area, and on the genre of the Web pages that are considered? How do these figures depend on the document format of the Web pages that are considered?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "We then look at the problem indicated above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "Which methods help to automatically detect Web pages with many orthographic errors? Which methods help to mark orthographic errors found in Web pages?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "To answer questions 1-3, we retrieved and analyzed a collection of large English and German corpora from the Web, using suitable queries to Web search engines. In our error statistics we wanted to distinguish (1) between \"general\" Web pages collected without any specific thematic focus and Web pages from specific thematic areas, and (2) between Web pages written in HTML and Web documents written in PDF. To cover the first difference, for both languages we retrieved two general corpora as well as a number of corpora for specific thematic areas. All these corpora only contain HTML pages. A parallel series of general corpora was collected that are composed of PDF documents. Details are provided in Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "Special Vocabulary. Web pages often contain tokens that do not belong to the standard vocabulary of the respective language. Typical categories are, for example, special names, slang, archaic language, expressions from foreign languages, and special expressions from computer science/programming. Classification and detection of special vocabulary is outside the scope of the present article. Since sometimes a clear separation between special vocabulary and errors is difficult, we briefly come back to this problem in Section 5.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "Proper Errors. Focusing on garbled standard vocabulary, tokens may be seriously damaged in an \"unexplainable\" way. Most of the remaining errors can be assigned to one of the four classes mentioned above:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "\u2022 typing errors (i.e., errors caused by a confusion of keys when typing a document), \u2022 spelling errors (\"cognitive\" errors resulting from insufficient language competence), \u2022 errors resulting from inadequate character encoding, and \u2022 OCR errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "In order to estimate the number of errors of a given kind in the corpora, special error dictionaries were built. These dictionaries, which only list garbled words of a given language that do not accidentally represent correct words, try to cover a high number of the conventional errors of each type that are typically found in Web pages and other documents. Section 3 motivates the use of error dictionaries for error detection. Details of the construction of the error dictionaries are discussed in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "In Section 5, we estimate the number of orthographic errors in the corpora that remain undetected because they do not occur in the error dictionaries. We also estimate the percentage of correct tokens of the corpora that are erroneously treated as errors since they appear in the error dictionaries. Our results show that the number of tokens of a text that appear in the error dictionaries can be considered as a lower approximation of the number of real orthographic errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "In Section 6, we describe the distribution of orthographic errors of the types distinguished above in the general test corpora, counting occurrences of entries of the error dictionaries. Section 7 summarizes the most important differences that arise when using PDF corpora, or corpora for special thematic areas. Section 8 presents various results that illuminate the relationship between the error rate of a document and its genre.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "In our experiments we observed in all corpora a rich spectrum of error rates, ranging from perfect documents to a small number of clearly unacceptable pages. This motivates the design of filters that efficiently recognize and reject pages with an error rate beyond a user-specified threshold. The construction of appropriate filters is described in Section 9, where we also demonstrate the effect of using these filters, comparing the figures obtained in Section 6 with the corresponding figures for filtered corpora. Filters work surprisingly well due to a Zipf-like distribution of error frequencies in Web pages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "In Section 10, we present two experiments that exemplify how the methods developed in the article may in fact help to improve corpus-based methods. The general questions of how deeply distinct methods from computational linguistics based on Web corpora are affected by orthographic errors in Web pages and to what extent the methods developed in the article help to remedy these deficiencies are too complex to be discussed here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "The main insights and contributions are summarized in the Conclusion (Section 11) where we also comment on future work and on some practical difficulties one has to face when collecting and analyzing large corpora from the Web.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "The basis for the evaluations described below is a collection of corpora, each composed of Web pages retrieved with Web search engines (Google/AllTheWeb). In order to study how specific features of a language might influence the distribution of orthographic errors, all corpora were built up in two variants. The English and German variant, respectively, contain Web pages that were classified as English and German Web pages by the search engine. As described above, for both languages we collected general corpora with Web pages without any thematic focus and, in addition, corpora that cover five specific thematic areas to be described below. Statements on the \"representativeness\" of corpora derived from the Web are notoriously difficult. The composition of corpora retrieved with Web search engines depends on the kind of queries that are used, on the ranking mechanisms of the engine, and on the details of the collection strategy. We mainly concentrated on simple queries and straightforward collection strategies. Still, the large number of subcorpora and pages that were evaluated should guarantee that accidental results are avoided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "2."
},
{
"text": "In a first attempt, we tried to obtain a general German HTML corpus using the meaningless query der die das, i.e., the three German definite articles. However, queries of this and a similar form did not lead to satisfactory results: As a consequence of Google's ranking mechanism, which prefers \"authorities\" (Brin and Page 1998), mainly portals of big organizations, companies, and others were retrieved. These pages are often dominated by graphical elements. Portions of text are usually small and carefully edited, which means that orthographic errors are less frequent than in other \"less official\" pages.",
"cite_spans": [
{
"start": 309,
"end": 329,
"text": "(Brin and Page 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General Web Corpora",
"sec_num": "2.1"
},
{
"text": "To achieve a more realistic scenario we randomly generated quintuples, each combining five of the 10,000 most frequent German words. We used Google to retrieve 10 pages per query (quintuple) until we obtained 1,000 pages. A considerable number of the URLs were found to be inactive. After conversion to ASCII and a preliminary analysis of error rates with methods described below, some of the remaining pages were found to contain very large lists of general keywords, including many orthographic errors. Apparently these lists and errors were only added to improve the ranking of the page in search engines, even for ill-formed queries. We excluded these pages. The remaining documents represent the \"primary\" general German HTML corpus. Since we wanted to know how results depend on the peculiarities of the selected set of pages, a second series of queries of the same type was sent to Google to retrieve a \"secondary\" general German HTML corpus with a completely disjoint set of pages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Web Corpora",
"sec_num": "2.1"
},
{
"text": "Similar procedures were used to obtain a primary and a secondary general English HTML corpus, a general German PDF corpus, and a general English PDF corpus. The translation from PDF to ASCII was found to be error prone, in particular for German documents (cf. Gartner 2003). Due to this process, some converted PDF documents were seriously damaged. Since we focus on errors in original Web pages (as opposed to converted versions of such pages), these files were excluded as well. We found these pages when computing error rates based on error dictionaries as described in Sections 6 and 7.",
"cite_spans": [
{
"start": 260,
"end": 273,
"text": "Gartner 2003)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General Web Corpora",
"sec_num": "2.1"
},
{
"text": "The number of Web pages and the number of normal tokens (i.e., tokens composed of standard letters only) in the resulting six corpora are shown in Table 1. Numbers (1) and (2) stand for the primary and secondary corpora, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 154,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "General Web Corpora",
"sec_num": "2.1"
},
{
"text": "We looked at the thematic areas \"Middle Ages,\" \"Holocaust,\" \"Fish,\" \"Mushrooms,\" and \"Neurology.\" The given selection of topics tries to cover scientific areas as well as history and hobbies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Web Corpora for Specific Thematic Areas",
"sec_num": "2.2"
},
{
"text": "Simple Crawl. A first series of corpora was collected by sending a query with 25 \"terminological\" keywords mechanically found in a small corpus of the given area to the AllTheWeb search engine and collecting the answer set. For example, the queries mushrooms mushroom pine edible harvesting morels harvested harvesters dried chanterelle matsutake poisonous flavor chanterelles caps fungi drying stuffing humidity varieties boletes recipes spores conifers pickers (Table 1: Number of Web pages, number of normal tokens (tokens composed of standard letters only), and sizes in megabytes of the \"general\" corpora. Numbers (1) and (2) refer to primary and secondary corpora, respectively.)",
"cite_spans": [],
"ref_spans": [
{
"start": 464,
"end": 471,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Web Corpora for Specific Thematic Areas",
"sec_num": "2.2"
},
{
"text": "and disorder disorders anxiety self hallucinations delusions anatomy cortex delusion neuroscience disturbance conscious psychotic stimulus hallucination unconsciously receptors cognitive psychoanalytic unconscious consciously stimuli ego schizophrenia impairment were respectively used for collecting the corpora Mushrooms E and Neurology E. The ranking mechanism of AllTheWeb prefers pages containing hits for several keywords of a disjunctive query. Since this form of corpus construction is straightforward, not all pages in the resulting corpora belong to the given thematic area.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General corpora",
"sec_num": null
},
{
"text": "Refined Crawl. We wanted to see how results are affected when using less naive crawling methods. For the three areas \"Fish,\" \"Mushrooms,\" and \"Neurology,\" the secondary corpora were retrieved using the following refined procedure: Starting from a small tagged seed corpus for the given domain, we mechanically extracted terminological open compounds for English (Sornlertlamvanich and Tanaka 1996; Smadja and McKeown 1990) and compound nouns for German. Examples are amino group, action potential, defense mechanism (English, neurology), truffle species, morel areas, harvesting tips (English, mushrooms), Koffeinstoffwechsel, and Eisenkonzentration (German, neurology). Each of these expressions was sent as a query to Google. From each answer set we collected a maximum of 30 top-ranked hits (many answer sets were smaller). For each document in the resulting corpus, the similarity with the seed corpus was controlled, using a cosine measure (in practice, almost all documents passed the similarity filter).",
"cite_spans": [
{
"start": 362,
"end": 397,
"text": "(Sornlertlamvanich and Tanaka 1996;",
"ref_id": null
},
{
"start": 398,
"end": 422,
"text": "Smadja and McKeown 1990)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General corpora",
"sec_num": null
},
{
"text": "Our method can be considered as a variant of Baroni and Bernardini's (2004) and leads to corpora with a strong thematic focus. The statistics for all thematic corpora are summarized in Table 2. Numbers (1) and (2) stand for corpora crawled with the simple and the refined crawling strategy, respectively. The numbers indicate one interesting effect: Documents in the thematic corpora obtained with the refined crawling strategy turned out to be typically rather short. Since we only used the 30 top-ranked documents for each single query, this probably points to a special feature of Google's ranking mechanism. A manual inspection of hundreds of documents for both the simple and the refined crawl did not lead to additional insights.",
"cite_spans": [
{
"start": 45,
"end": 75,
"text": "Baroni and Bernardini's (2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 185,
"end": 192,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "General corpora",
"sec_num": null
},
{
"text": "For detecting orthographic errors of a particular type in texts, two naive base methods may be distinguished.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Detection",
"sec_num": "3."
},
{
"text": "A representative list of errors of the respective type is created and manually checked. Each token of the text appearing in the list represents an error (lower approximation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "A spell checker or a large-scale dictionary is used to detect \"suspicious\" words (error candidates). For each such token W we manually check if W really represents an error and we determine its type (upper approximation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "For large corpora, both methods have serious deficiencies. With Method 1, only a small percentage of all errors is detected. On this basis, it is difficult to estimate the real number of errors. When using Method 2, the number of tokens that have to be manually checked becomes too large. (Table 2: Selected topics and statistics of English (E) and German (G) corpora for specific thematic areas. Numbers (1) and (2) refer to corpora crawled with the simple and the refined strategy, respectively.) In practice, a large number of error candidates represent correct tokens. This is mainly due to special names and other types of nonstandard vocabulary found in Web pages, as mentioned in the introduction. We decided to use a third strategy, which can be considered as a synthesis and compromise between the above two approaches. As a starting point, we took standard dictionaries of English, D(English); German, D(German); French, D(French); and Spanish, D(Spanish); and a dictionary of geographic entities, D(Geos); a dictionary of proper names, D(Names); and a dictionary of abbreviations and acronyms, D(Abbreviations). 1 The number of entries in the dictionaries is described in Table 3. The German dictionary contains compound nouns, which explains the large number of entries.",
"cite_spans": [
{
"start": 1123,
"end": 1124,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 290,
"end": 297,
"text": "Table 2",
"ref_id": null
},
{
"start": 1183,
"end": 1190,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "From these standard dictionaries, we derived special error dictionaries that were used in the experiments described later. First, for each of the four error types mentioned above we manually collected a number of general patterns that \"explain\" possible mutations from correct words to erroneous entries. In a second step, these patterns were used to garble the words of the given background dictionaries. Third, garbled words that were found to correspond to correct words (entries of the above dictionaries) were excluded (filtering step). Collecting the remaining erroneous strings, we obtained large error dictionaries for each type of orthographic error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "Experiments described in Section 5 show that our error dictionaries cover the major part of all orthographic errors occurring in the English and German Web pages. At the same time, the number of tokens that are erroneously treated as errors due to the unavoidable incompleteness of the filtering step remains acceptable. On this basis, an estimate of the number of conventional orthographic errors occurring in Web pages is possible, counting the number of occurrences of entries of the error dictionaries. 2 Before we comment on these points, we describe the construction of the error dictionaries in more detail. In the remainder of the article, by D conv we denote the union of all the conventional dictionaries listed above. [Footnote 1: These dictionaries are nonpublic. They have been built up at the Centre for Information and Language Processing (CIS) during the last two decades (Maier-Meyer 1995; Guenthner 1996). Each entry comes with a frequency value that describes the number of occurrences in a 1.5-terabyte subcorpus of the Web from 1999. Dictionaries for French and Spanish were included to improve the filtering step. Suitable dictionaries for other languages were not available.]",
"cite_spans": [
{
"start": 888,
"end": 906,
"text": "(Maier-Meyer 1995;",
"ref_id": "BIBREF26"
},
{
"start": 907,
"end": 922,
"text": "Guenthner 1996)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "For the construction of error dictionaries, the most important error patterns for each type of error were determined. For typing errors and errors caused by character encoding problems, error patterns were obtained analytically. For spelling errors and Optical Character Recognition (OCR) errors, important mutation patterns were collected empirically. As a general rule, all error dictionaries were restricted to entries of length >4. Many tokens of length \u22644 occurring in texts represent acronyms, special names, and abbreviations, and it is difficult to mechanically distinguish between this special kind of vocabulary and errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construction of Error Dictionaries",
"sec_num": "4."
},
{
"text": "Typing errors can be partitioned into transpositions, deletions, substitutions, and insertions. Transpositions of two letters occur if two keys are hit in the wrong order. Deletions result if a key is not properly pushed down. Substitutions occur if a neighbor key is pressed down instead of the intended one. Horizontal and vertical shifts of fingers may be distinguished. If a finger hits the middle between two keys, a neighbor key may be pressed in addition to the intended one. The wrong letter may occur before or after the correct letter. Transpositions, deletions, substitutions, and insertions cover most of the typing errors discussed in the literature (Kukich 1992). We ignored homologous errors, that is, substitutions that are traced back to a confusion of the left and right hand. Since there are many possible positions for both hands, this kind of error leads to large confusion sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Dictionaries for Typing Errors",
"sec_num": "4.1"
},
{
"text": "Since we did not find other patterns in the texts, only mutation variants that are exclusively composed of standard letters (as opposed to digits and other special symbols) were taken into account. Furthermore, since typing errors in general do not affect the first letter of a word, 3 we left this letter unmodified. We analyzed the number of mutated variants of a given word. Both for the American and for the German keyboard we have approximately 16l variants for a word of length l. This shows that the above patterns for typing errors are very productive. It is not possible to garble all the words of our background dictionary for constructing the error dictionaries. For the generation of the dictionary of English typing errors, D err (English,typing), we took the 100,000 entries of the English background dictionary with the highest frequency. Applying the above mutation patterns we generated 10,785,675 strings. After removal of duplicates and deletion of words in D conv (filtering step), we obtained 9,427,051 entries for the dictionary D err (English,typing).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Dictionaries for Typing Errors",
"sec_num": "4.1"
},
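The four mutation patterns described above (transposition, deletion, neighbor-key substitution, and neighbor-key insertion, keeping the first letter fixed and filtering against the conventional dictionary) can be sketched as follows. This is an illustrative re-implementation, not the authors' code; the `NEIGHBORS` map covers only a handful of QWERTY keys, whereas a real generator would enumerate the full keyboard layout.

```python
# Sketch of the typing-error generator (illustrative, not the authors' code).
# The keyboard-neighbor map below covers only a few QWERTY keys.
NEIGHBORS = {
    "a": "qwsz", "e": "wsdr", "i": "ujko", "o": "iklp",
    "n": "bhjm", "t": "rfgy", "s": "awedxz",
}

def typing_variants(word, lexicon=frozenset()):
    """Transposition/deletion/substitution/insertion variants of `word`.

    The first letter is left unmodified, and variants that are correct
    words (members of `lexicon`, playing the role of D_conv) are removed.
    """
    variants = set()
    for i in range(1, len(word)):
        # transposition: two adjacent keys hit in the wrong order
        if i + 1 < len(word):
            variants.add(word[:i] + word[i + 1] + word[i] + word[i + 2:])
        # deletion: key not properly pushed down
        variants.add(word[:i] + word[i + 1:])
        for nb in NEIGHBORS.get(word[i], ""):
            # substitution: neighbor key pressed instead of the intended one
            variants.add(word[:i] + nb + word[i + 1:])
            # insertion: neighbor key pressed in addition, before or after
            variants.add(word[:i] + nb + word[i:])
            variants.add(word[:i + 1] + nb + word[i + 1:])
    variants.discard(word)
    return variants - set(lexicon)

errs = typing_variants("test", lexicon={"tst", "tet"})
print(len(errs))
```

With a full neighbor map, the number of variants per position is roughly constant, which is where the ~16 \u00b7 l estimate for a word of length l comes from.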
{
"text": "The same procedure was used for creating the dictionary of German typing errors, D err (German,typing). Since German words are on average longer, we obtained 13,656,866 entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Dictionaries for Typing Errors",
"sec_num": "4.1"
},
{
"text": "English. In order to find the most characteristic patterns for English spelling errors, a bootstrapping procedure was used to compute an initial list of errors. We started with the misspelled English words verry, noticable, arguement, and inteligence. For each term we retrieved 20 Web documents. After conversion to ASCII we computed the list of all normal tokens occurring in these documents. The resulting list was sorted by frequency, and words in D conv were filtered out. After a manual selection of new errors with high Google counts, the procedure was iterated until we did not find new erroneous words with high frequency. During the bootstrapping procedure, we also found Web pages that listed some \"common misspelled words\" of English. The most frequent errors mentioned in these lists were also added. Table 4 presents some strings that were found with a large number of Google hits. 4 Most of the errors that we found can be traced back to a rule set partially described in Table 5 . The full rule set contains 95 rules. We applied each rule to D(English), introducing one error at the first possible position, for each entry of the appropriate form. As a result we obtained a list with 1,223,128 garbled strings. After applying the standard filtering procedure, we obtained the dictionary D err (English,spell) of English spelling errors with 1,202,997 entries.",
"cite_spans": [
{
"start": 896,
"end": 897,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 814,
"end": 821,
"text": "Table 4",
"ref_id": "TABREF2"
},
{
"start": 987,
"end": 994,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Error Dictionaries for Spelling Errors",
"sec_num": "4.2"
},
{
"text": "As for English, we built an initial error list. Bootstrapping was started with the misspelled German terms n\u00e4hmlich, addresse, resourcen, and vorraus. Table 6 shows some of the resulting German words, the misspelled variant, and the number of Google hits of the garbled version. From the initial error list, we obtained a set of 65 rules, partially described in Table 7. We applied these rules to D(German), introducing one error for each entry of the appropriate form. Each rule was applied to each entry using the first possible position for mutation. For example, for the lexical entry Adresse of the German standard dictionary we obtained the following error terms: adrese, ahdresse, adrehsse, addresse, adrresse. As a result we obtained a list of 19,265,271 strings. The large size is mainly caused by the rules for reduplication of consonants, which are not restricted by word context. The filtering procedure led to an error dictionary with 18,970,716 entries. ",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 168,
"text": "Table 6",
"ref_id": "TABREF4"
},
{
"start": 371,
"end": 378,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "German.",
"sec_num": null
},
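The rule application described above (each rule introduces one error at the first possible position) is simple to state in code. The sketch below uses three sample rules sufficient to reproduce three of the Adresse variants from the text; the authors' full set has 65 rules, and the function names here are mine.

```python
# Apply a rewrite rule once, at the first possible position (sketch).
def apply_rule_once(word, lhs, rhs):
    """Return the garbled variant, or None if the rule does not match."""
    pos = word.find(lhs)
    if pos == -1:
        return None
    return word[:pos] + rhs + word[pos + len(lhs):]

# Three sample rules: consonant de-duplication and reduplication
RULES = [("ss", "s"), ("d", "dd"), ("r", "rr")]

def garble(word, lexicon=frozenset()):
    """All single-error variants of `word`, minus correct words (D_conv)."""
    variants = {apply_rule_once(word, lhs, rhs) for lhs, rhs in RULES}
    variants.discard(None)
    variants.discard(word)
    return variants - set(lexicon)

print(sorted(garble("adresse")))
```

Applied to the entry adresse, these three rules yield exactly adrese, addresse, and adrresse, matching the examples in the text.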
{
"text": "As a starting point we used a list of typical OCR errors that we found in a corpus with 200 pages of OCR output (Ringlstetter 2003). Error types are shown in Table 8. Table 7 Rule set (incomplete) for the generation of German spelling errors. The symbol \u02c6t means that t is not the preceding letter. Table 8 List of typical OCR errors.",
"cite_spans": [
{
"start": 112,
"end": 131,
"text": "(Ringlstetter 2003)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 159,
"end": 166,
"text": "Table 8",
"ref_id": null
},
{
"start": 169,
"end": 176,
"text": "Table 7",
"ref_id": null
},
{
"start": 300,
"end": 307,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Dictionaries for OCR Errors",
"sec_num": "4.3"
},
{
"text": "Deletion of doubled consonants: dd \u2192 d (Kuddelmuddel \u2192 Kudelmuddel); mm \u2192 m (Kommando \u2192 Komando). Special rules for deletion of consonants: mn \u2192 m (Kolumne \u2192 Kolume); \u00e4h \u2192 \u00e4 (\u00e4hnlich \u2192 \u00e4nlich). Deletion of vowels: ie \u2192 i (ziemlich \u2192 zimlich); aa \u2192 a (Aal \u2192 Al). Substitution of consonants: nt \u2192 nd (eigentlich \u2192 eigendlich); rd \u2192 rt (Standard \u2192 Standart). Substitution of vowels: a \u2192 e (Empf\u00e4nger \u2192 Empfenger); era \u2192 ara (Temperatur \u2192 Temparatur). Insertion/reduplication of consonants: [aeiou\u00e4\u00f6\u00fc] \u2194 [aeiou\u00e4\u00f6\u00fc]h (viel \u2192 viehl); [aeiou\u00e4\u00f6\u00fc]k \u2192 [aeiou\u00e4\u00f6\u00fc]ck (direkt \u2192 direckt); \u03ba \u2192 \u03ba\u03ba for \u03ba \u2208 {d,f,l,n,m,p,r,t} (Gro\u00dfbritannien \u2192 Gro\u00dfbrittannien); \u02c6tz \u2192 tz (Schweiz \u2192 Schweitz). Insertion of vowels: i \u2192 ie (Maschine \u2192 Maschiene). Shifting: \u00e4u \u2192 a\u00fc (\u00e4u\u00dferst \u2192 a\u00fc\u00dferst); llel \u2192 lell (parallel \u2192 paralell).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deletion of doubled consonants",
"sec_num": null
},
{
"text": "Character substitutions Character merges Character splits",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deletion of doubled consonants",
"sec_num": null
},
{
"text": "Character substitutions: l \u2192 i, i \u2192 l, g \u2192 q, o \u2192 p, l \u2192 t, v \u2192 y, y \u2192 v, o \u2192 c, e \u2192 c, l \u2192 1. Character merges: rn \u2192 m, ri \u2192 n, cl \u2192 d. Character splits: m \u2192 rn, n \u2192 ri, \u00fc \u2192 ii.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deletion of doubled consonants",
"sec_num": null
},
{
"text": "English. The error dictionary D err (English,ocr) was generated by applying to the entries of D(English) the transformation rules listed in Table 8. The transformation of D(English) with its 315,300 entries led to a list of 1,697,189 entries. The filtering procedure, in which words occurring in D conv are erased, led to the error dictionary D err (English, ocr) with 1,532,741 entries. Table 9 shows some of the most frequent English words, the transformation result, and the number of Google hits of the garbled variant.",
"cite_spans": [
{
"start": 336,
"end": 350,
"text": "(English, ocr)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 8",
"ref_id": null
},
{
"start": 377,
"end": 384,
"text": "Table 9",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Deletion of doubled consonants",
"sec_num": null
},
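OCR-error generation follows the same first-position rule application, using the character substitutions, merges, and splits of Table 8. The sketch below uses a small sample of those patterns (the helper names are mine):

```python
# Sketch of OCR-error generation with a few Table 8 patterns:
# substitutions (l -> 1, o -> c), merges (rn -> m, ri -> n, cl -> d),
# and splits (m -> rn).
OCR_RULES = [("rn", "m"), ("ri", "n"), ("cl", "d"),
             ("l", "1"), ("m", "rn"), ("o", "c")]

def ocr_variants(word, lexicon=frozenset()):
    """Garbled OCR variants of `word`, one rule applied at its first match."""
    out = set()
    for lhs, rhs in OCR_RULES:
        pos = word.find(lhs)
        if pos != -1:
            out.add(word[:pos] + rhs + word[pos + len(lhs):])
    out.discard(word)
    return out - set(lexicon)

# "modern" -> "modem" is the classic rn -> m merge
print(sorted(ocr_variants("modern")))
```

Note that in the real pipeline the variant modem would be erased in the filtering step, since it is itself a correct English word; this is precisely the overproduction problem discussed in Section 5.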
{
"text": "German. When scanning German texts, the vowels \u00e4, \u00f6, and \u00fc are often replaced by their counterparts a, o, u. However, even more frequently, this kind of replacement occurs as the result of a character encoding problem (see below). Since we wanted to avoid having our statistics for OCR errors heavily inflated by errors caused by character encoding problems, we did not add these patterns to the list of typical OCR errors for German texts. This means that we applied to D(German) only the transformation rules mentioned in Table 8. The transformation of D(German) with its 2,235,136 entries led to a list of 11,623,989 strings. After filtering, we obtained the error dictionary D err (German,ocr) with 10,608,635 entries. Table 10 shows some frequent German words, the transformation result, and the number of Google hits of the garbled variant. ",
"cite_spans": [],
"ref_spans": [
{
"start": 522,
"end": 529,
"text": "Table 8",
"ref_id": null
},
{
"start": 728,
"end": 736,
"text": "Table 10",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Deletion of doubled consonants",
"sec_num": null
},
{
"text": "In character sets used for the encoding of Web pages, the German letters \u00c4, \u00d6, \u00dc, \u00e4, \u00f6, \u00fc, and \u00df (\"sharp s\") are often not available. In many of these cases, the vowels are replaced following the substitution scheme (e-transformation):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Dictionaries with Erroneous Character Encoding of German Words",
"sec_num": "4.4"
},
{
"text": "\u00c4 \u2192 Ae, \u00d6 \u2192 Oe, \u00dc \u2192 Ue, \u00e4 \u2192 ae, \u00f6 \u2192 oe, \u00fc \u2192 ue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Dictionaries with Erroneous Character Encoding of German Words",
"sec_num": "4.4"
},
{
"text": "In other Web pages, the aforementioned vowels are replaced using the following scheme:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Dictionaries with Erroneous Character Encoding of German Words",
"sec_num": "4.4"
},
{
"text": "\u00c4 \u2192 A, \u00d6 \u2192 O, \u00dc \u2192 U, \u00e4 \u2192 a, \u00f6 \u2192 o, \u00fc \u2192 u.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Dictionaries with Erroneous Character Encoding of German Words",
"sec_num": "4.4"
},
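Both substitution schemes are plain character-to-string translations and can be sketched directly. The second scheme carries a symbol that did not survive text extraction here, so the sketch names it `plain_transform` (my name, not the paper's):

```python
# The two encoding substitution schemes as simple translations (sketch).
E_MAP = {"\u00c4": "Ae", "\u00d6": "Oe", "\u00dc": "Ue",
         "\u00e4": "ae", "\u00f6": "oe", "\u00fc": "ue"}
PLAIN_MAP = {"\u00c4": "A", "\u00d6": "O", "\u00dc": "U",
             "\u00e4": "a", "\u00f6": "o", "\u00fc": "u"}

def e_transform(word):
    """e-transformation: umlauts rewritten with a trailing e."""
    return "".join(E_MAP.get(ch, ch) for ch in word)

def plain_transform(word):
    """Second scheme: umlauts simply lose their diaeresis."""
    return "".join(PLAIN_MAP.get(ch, ch) for ch in word)

# \u00df is handled separately (s-transformation, Section 4.4): \u00df -> ss
print(e_transform("\u00dcbergr\u00f6\u00dfen".replace("\u00df", "ss")))
```

Applying the e-transformation (with the s-transformation for \u00df) to a dictionary entry yields exactly the garbled forms collected in D err (German, enc-e).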
{
"text": "This transformation, which is typically found in Web pages written by non-native speakers of German, will be called -transformation. Table 11 shows some transformed terms of the top 1,000 German words and gives the number of Google hits for correct and incorrect spellings. The right-hand side of the table gives the corresponding numbers for PDF documents. The numbers show that misspellings caused by e-transformation are a widespread phenomenon. Note that the quality of PDF corpora is much better in this respect.",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 141,
"text": "Table 11",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Error Dictionaries with Erroneous Character Encoding of German Words",
"sec_num": "4.4"
},
{
"text": "When applying the e- or -transformation, the letter \u00df is typically replaced by ss (s-transformation). For two reasons, the distinction between \u00df and ss is a delicate matter. First, since the Swiss spelling is ss, a string representing an erroneous German word may be a correct Swiss word. Second, to make things even more complicated, the correct spelling of many German words was changed by the so-called \"Rechtschreibreform\" some years ago, which affected the choice between \u00df and ss (e.g., Mi\u00dfverst\u00e4ndnis became Missverst\u00e4ndnis). Still, though no longer official, the old spelling variant is widely used. In what follows, a token written with ss that is officially written with \u00df is treated as an error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Dictionaries with Erroneous Character Encoding of German Words",
"sec_num": "4.4"
},
{
"text": "We built two error dictionaries, respectively representing errors introduced via e-transformation and -transformation. All vowels of the form \u00e4, \u00f6, \u00fc (or their uppercase variants) in the German dictionary were replaced by their images under the respective transformation. The letter \u00df occurring in the entries was categorically replaced by ss. For the e-transformation we obtained a list of 436,198 strings. The filtering procedure led to an error dictionary D err (German, enc-e) with 432,987 entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Dictionaries with Erroneous Character Encoding of German Words",
"sec_num": "4.4"
},
{
"text": "Applying the -transformation and the usual filtering step, we generated the error dictionary D err (German,enc-) with 407,013 entries. A considerable number of well-formed words was generated and filtered out. The rules of German morphology yield a partial explanation: for so-called strong verbs, some paradigmatic forms differ only by a vowel mutation (m\u00f6chte-mochte).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Dictionaries with Erroneous Character Encoding of German Words",
"sec_num": "4.4"
},
{
"text": "An extra error dictionary D err (German,enc-s) was built by replacing \u00df by ss in German dictionary entries without occurrences of the vowels \u00c4, \u00d6, \u00dc, \u00e4, \u00f6, \u00fc. The dictionary has 42,340 entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Dictionaries with Erroneous Character Encoding of German Words",
"sec_num": "4.4"
},
{
"text": "Using the union of all error dictionaries for both languages, we constructed the maximal error dictionaries D err (English,all) and D err (German,all) . Table 12 summarizes the sizes of all error dictionaries.",
"cite_spans": [
{
"start": 114,
"end": 127,
"text": "(English,all)",
"ref_id": null
},
{
"start": 138,
"end": 150,
"text": "(German,all)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 153,
"end": 161,
"text": "Table 12",
"ref_id": null
}
],
"eq_spans": [],
"section": "Summary and Maximal Error Dictionaries",
"sec_num": "4.5"
},
{
"text": "Before we analyze the number of tokens in the corpora that represent entries of the error dictionaries, we comment on the limitations of this kind of analysis. Obviously, not all orthographic errors of a given type occur in the respective error dictionary (underproduction). On the other hand, some tokens classified as errors by the error dictionary might in fact be correct words (overproduction) due to the incompleteness of the final filtering step in the construction of the error dictionaries. From the construction of the error dictionaries we may expect that incompleteness/underproduction is mainly caused by (a) missing patterns for spelling errors and OCR errors, and (b) the fact that we do not seriously damage words when constructing the error dictionaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Overproduction and Underproduction",
"sec_num": "5."
},
{
"text": "For both English and German, to estimate under/overproduction of the error dictionaries, the primary general HTML corpus was split into four subclasses. The class Best contains all documents where the number of hits (tokens representing entries of the maximal error dictionary) per 1,000 tokens is \u22641. For class Good (Bad, Worst, respectively), the number of hits per 1,000 tokens is 1-5 (5-10, >10, respectively). The number of documents in each class is found in Tables 13 and 14.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Overproduction and Underproduction",
"sec_num": "5."
},
{
"text": "To estimate underproduction of the English error dictionaries, the English general HTML corpus was split into subfiles, each containing 300 tokens. We then randomly selected such subfiles and analyzed the proper errors found in these portions. Since we wanted to avoid an unbalanced selection where most errors are from the document class Worst, a maximum of three errors from each subfile was used for the analysis. Error candidates were found with the help of a spell checker, using our standard dictionaries as a second control. Slang and special vocabulary were not counted in the statistics. We also excluded errors where two words were merged, since we found that most of these errors were caused by the conversion process from HTML to ASCII. Each candidate was manually checked; in difficult cases we consulted Merriam-Webster Online. We continued the search until 1,000 proper errors were isolated. Of these, 624 (62.4%) turned out to be entries of the maximal English error dictionary. Table 13 refines these statistics and shows the number of errors and the percentage of errors found in the error dictionary for the four quality classes of documents. As a tendency, recall of the error dictionary is better in \"bad\" documents.",
"cite_spans": [],
"ref_spans": [
{
"start": 1000,
"end": 1008,
"text": "Table 13",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Estimating Underproduction",
"sec_num": "5.1"
},
{
"text": "The same procedure was used for German and confirmed this tendency. Of 1,000 errors in the German general HTML corpus, 638 (63.8%) were found in the maximal German error dictionary. The statistics for the four quality classes of documents are presented in Table 14.",
"cite_spans": [],
"ref_spans": [
{
"start": 258,
"end": 266,
"text": "Table 14",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Estimating Underproduction",
"sec_num": "5.1"
},
{
"text": "In our first experiment with English texts we found that a considerable number of hits corresponded to special names introduced in the documents. Many of these names are artificial (e.g., Hitty). To avoid all difficulties with special names, we decided to restrict the error analysis in English texts to words starting with a lowercase letter. In each of the four classes, 1,000 hits of this form were randomly selected. We then manually checked which of these tokens represent correct words, reading the contexts and consulting Merriam-Webster Online in difficult cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Overproduction",
"sec_num": "5.2"
},
{
"text": "The results are presented in Table 15 and show a clear tendency: the percentage of proper errors is larger in documents with a large number of hits. In the class Worst, 95% of all hits are proper errors; in the class Best, only 60% of the hits represent orthographic errors. Most of the remaining hits could be assigned to one of the following categories: correct standard expressions (missing entries of the standard dictionaries), names and geographic expressions, foreign language expressions, archaic and literary word forms, and abbreviations. The number of hits in each category is found in Table 15. The large number of standard words among the hits in the class Best is caused by an incompleteness of our English dictionary, which does not always contain both the British and the American spelling variants. In the German general HTML corpus, where we could not restrict the experiment to tokens starting with a lowercase letter, a more mixed picture is obtained (Table 16). For the classes Best (61% proper errors), Good (62% proper errors), and Worst (88% proper errors), results are similar to the English case and confirm the above-mentioned general tendency. Due to the large number of names, foreign language expressions, and archaic/literary word forms found in class Bad, we have only 56% proper errors there. The results show that overproduction could be considerably reduced by filtering error dictionaries with better standard dictionaries for geographic entities, personal names, foreign language expressions, and archaic and literary word forms. ",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Table 15",
"ref_id": "TABREF3"
},
{
"start": 597,
"end": 605,
"text": "Table 15",
"ref_id": "TABREF3"
},
{
"start": 974,
"end": 983,
"text": "(Table 16",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Estimating Overproduction",
"sec_num": "5.2"
},
{
"text": "From the above percentages we obtain a naive estimate for the ratio between the real number of errors and the number of hits of the error dictionaries, which is presented in Table 17 . The results show that the number of hits can be seen as a lower approximation of the real number of errors. The ratio between both numbers is larger for English. It does not differ dramatically between the distinct quality classes. However, since both over-and underproduction are larger for \"good\" documents, error estimates for these classes come with a larger degree of uncertainty.",
"cite_spans": [],
"ref_spans": [
{
"start": 174,
"end": 182,
"text": "Table 17",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Summary So Far",
"sec_num": "5.3"
},
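The naive estimate behind Table 17 combines the two measurements of Sections 5.1 and 5.2: among the dictionary hits, only a fraction are proper errors (precision, from the overproduction study), and the dictionary catches only a fraction of the real errors (recall, from the underproduction study). A back-of-the-envelope correction, using the overall English recall of 62.4% and the class Worst precision of 95% as sample inputs (the function name is mine, and these particular figures are not the ones tabulated in Table 17):

```python
# Naive correction of dictionary hit counts (sketch).
def estimated_true_errors(hits, precision, recall):
    """hits * precision = proper errors among the hits;
    dividing by recall extrapolates to the errors the dictionary misses."""
    return hits * precision / recall

# e.g., 1,000 hits with ~95% precision (class Worst) and ~62.4% recall:
print(round(estimated_true_errors(1000, 0.95, 0.624)))
```

Whenever precision exceeds recall, the estimated true error count exceeds the hit count, which is why the hit count serves as a lower approximation of the real number of errors.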
{
"text": "The above analysis turned out to be much more time-consuming and difficult than it might appear. One problem is caused by the fact that nonstandard vocabulary and errors do not represent disjoint categories. Orthographic errors are sometimes \"abused\" as slang expressions. A separation between archaic/foreign language expressions and orthographic errors is often only possible when taking the sentence context into account. Examples like these show why demarcation issues are sometimes difficult to resolve. The construction of special dictionaries for slang, foreign language expressions, special names, and archaic word forms represents an important step for future work. Using these dictionaries in the filtering step of the construction of the error dictionaries, overproduction can probably be reduced significantly. Furthermore, such dictionaries should help to detect Web pages with nonstandard vocabulary of a particular type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Difficulties",
"sec_num": "5.4"
},
{
"text": "Overproduction of the maximal error dictionary in the German general HTML corpus. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 16",
"sec_num": null
},
{
"text": "We define the error rate of a text with respect to an error dictionary D err as the average number of entries of D err that are found among 1,000 tokens of the text. In this section we describe the distribution of error rates for all types of errors in the general HTML corpora. Experiments for other corpora are summarized in the following section. The results of the previous section indicate that the error rate represents a reasonable lower approximation for the real number of errors per 1,000 tokens in the document. Incompleteness of the rule sets for generating spelling errors and OCR errors should be kept in mind. Recall that in English documents, only words starting with a lowercase letter are taken into account.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distribution of Orthographic Errors in the General HTML Corpora",
"sec_num": "6."
},
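The error rate defined above, together with the four quality classes used throughout Sections 5 and 6, amounts to a few lines of code. A minimal sketch (function and variable names are mine):

```python
# Error rate: hits of the error dictionary per 1,000 tokens (sketch),
# plus the Best/Good/Bad/Worst classes from Section 5.
def error_rate(tokens, err_dict):
    hits = sum(1 for t in tokens if t in err_dict)
    return 1000.0 * hits / len(tokens)

def quality_class(rate):
    if rate < 1:
        return "Best"     # fewer than 1 hit per 1,000 tokens
    if rate < 5:
        return "Good"
    if rate < 10:
        return "Bad"
    return "Worst"

toks = ["the", "quick", "teh", "fox"] * 250   # 1,000 tokens, 250 hits
r = error_rate(toks, err_dict={"teh"})
print(r, quality_class(r))
```

In practice the membership test against a multi-million-entry error dictionary would use a finite-state lexicon rather than a Python set, but the rate computation is the same.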
{
"text": "In the first test, we consider orthographic errors, that is, errors of arbitrary type. Accordingly, error rates for documents are computed with respect to the maximal error dictionaries. For a coarse survey, as in the previous section, we distinguish four quality classes Best, Good, Bad, Worst, respectively, covering pages with error rates in the intervals [0, 1), [1, 5), [5, 10) , and [10, \u221e).",
"cite_spans": [
{
"start": 375,
"end": 378,
"text": "[5,",
"ref_id": null
},
{
"start": 379,
"end": 382,
"text": "10)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distribution of Error Rates for Orthographic Errors",
"sec_num": "6.1"
},
{
"text": "English. The histograms in Figure 1 show the percentage of documents in each class in the primary (left-hand side) and secondary (right-hand side) English corpora. Remarkably, the differences between the two corpora are almost negligible. In both cases, most documents belong to class Best; only a small percentage of documents belongs to classes Bad and Worst. Table 18 presents the average error rate for various document classes. Document lengths in the corpora differ drastically. We did not find a correlation between document length and error rates, with the following eye-catching exception: small (larger) documents of excellent quality tend to have an error rate of 0 (close to 0, but >0). 5 In order to avoid a dominating influence of long documents, we simply computed the arithmetic mean of all error rates obtained for the single documents. The class Best 80% collects the 80% of all documents with the lowest error rates, and similarly for the class Best 90%.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 35,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 362,
"end": 370,
"text": "Table 18",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Distribution of Error Rates for Orthographic Errors",
"sec_num": "6.1"
},
{
"text": "Note that a significant difference exists between the average rate for all documents (2.47, 2.24, respectively) and the means for the Best 80% classes (0.67, 0.68, respectively). These numbers point to an effect that will be found again in other figures and experiments: The large majority of all documents in the corpora have a very good quality. Yet, at the \"bad end\" of the spectrum we find a considerable number of unacceptable documents with a very large number of errors. The phenomenon becomes even more apparent in Figure 2 (left diagram) where we depict the error rates of all documents. In what follows we often describe mean error rates for all documents and for the class Best 80%. When comparing distinct corpora, the two values help to see if deviations concern the class of all documents or if they are rather caused by a small number of \"bad\" documents.",
"cite_spans": [],
"ref_spans": [
{
"start": 523,
"end": 531,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distribution of Error Rates for Orthographic Errors",
"sec_num": "6.1"
},
{
"text": "Note also that all corresponding average error rates obtained for the primary and secondary corpora are almost identical. This gives at least some evidence to the conjec- ture that for corpora crawled with similar queries and collection strategies, error rates will not differ too much. As we see next, the situation for the German corpora is more complex. Figure 3 shows the percentage of documents in each class of the primary (left-hand side) and secondary (right-hand side) German corpora. A large number of documents belongs to class Good. We now find a larger difference between the primary and secondary corpora. Several phenomena might be responsible. As mentioned above, for the German corpora we did not restrict the analysis to tokens starting with a lowercase letter. Hence, documents with many names can cause special effects and lead to differences between corpora. Second, errors caused by encoding of special characters represent an important extra source for errors in German documents where numbers may vary from one corpus to another. This is seen in Table 20 where we analyze all error types occurring in the primary and secondary German corpus. The means for e-transformation are 0.62 for the primary corpus and 1.40 for the secondary corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 357,
"end": 365,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 1067,
"end": 1078,
"text": "in Table 20",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distribution of Error Rates for Orthographic Errors",
"sec_num": "6.1"
},
{
"text": "The average error rates obtained for the distinct document classes of the German corpus, which are presented in Table 19, show that (a) for all classes we have more errors than in the English documents, and (b) for different corpora, sometimes nontrivial deviations must be expected. A more detailed picture of the error rates in the primary German corpus is given in Figure 2 (right diagram). The two curves of the figure show that despite the aforementioned differences between English and German, basic features of the error rate distribution are very similar.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 117,
"text": "Table 19",
"ref_id": "TABREF5"
},
{
"start": 362,
"end": 370,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "German. The histogram in",
"sec_num": null
},
{
"text": "Typographic Errors. The most widespread subclass of errors found in the corpora is typographic errors. For the primary English corpus, as many as 2.31 of 2.47 hits (93.5%) can be classified as typing errors. 6 The percentage is lower in the German corpus (2.15/3.86, 55.7%), where e-transformation, -transformation, and s-transformation represent additional important sources of errors (see below). In absolute numbers, the error rates for typographic errors observed in the two corpora are similar.",
"cite_spans": [
{
"start": 209,
"end": 210,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Rates for Particular Error Classes",
"sec_num": "6.2"
},
{
"text": "The histograms in Figure 4 show the percentage of documents with error rates for typographic errors in the four intervals [0, 1), [1, 5), [5, 10), and [10, \u221e) for the primary and secondary English corpora (upper diagrams of Figure 4) and the corresponding German corpora (lower diagrams of Figure 4). Note again the close similarity between the two English corpora. The detailed distribution curves, which are similar to the curves obtained for orthographic errors in Figure 2, are omitted. The histograms in Figure 5 show that the error rates found in the primary English corpus (mean 0.39) are similar to the ones found in the primary German corpus (mean 0.45). The results presented in Section 5.1 indicate that our error dictionaries for spelling errors are incomplete. Hence the real number of spelling errors is probably larger. We also computed error rates for spelling errors in the secondary corpora; results are presented in Table 20. The tendency observed earlier for orthographic errors was confirmed: the difference between the two English corpora (mean 0.39 versus mean 0.38) is negligible; for the two German corpora, the difference is larger (mean 0.45 versus mean 0.58). The histograms in Figure 6 show that most documents do not contain any OCR errors. Of course this result is not surprising. Probably not all errors that contribute to the two diagrams are really caused by wrong character recognition. Although some of the documents with the highest error rates were explicitly marked to contain scanned",
"cite_spans": [
{
"start": 138,
"end": 141,
"text": "[5,",
"ref_id": null
},
{
"start": 142,
"end": 145,
"text": "10)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 4",
"ref_id": null
},
{
"start": 225,
"end": 233,
"text": "Figure 4",
"ref_id": null
},
{
"start": 292,
"end": 301,
"text": "Figure 4)",
"ref_id": null
},
{
"start": 471,
"end": 479,
"text": "Figure 2",
"ref_id": null
},
{
"start": 495,
"end": 503,
"text": "Figure 5",
"ref_id": "FIGREF2"
},
{
"start": 921,
"end": 929,
"text": "Table 20",
"ref_id": null
},
{
"start": 1175,
"end": 1183,
"text": "Figure 6",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Error Rates for Particular Error Classes",
"sec_num": "6.2"
},
{
"text": "Typographic errors: the percentage of documents in the four quality classes in the general English (upper part) and German (lower part) HTML corpora. Quality classes refer to error rates for typographic errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "text, it is natural to assume that the total number of such documents in the corpus is very small. Ambiguous error types might explain some of the errors found in Figure 6; see the discussion below. As a matter of fact, the number of OCR errors will grow when analyzing corpora with many OCRed pages. e-transformation and -transformation. Figures 7 and 8 show some interesting differences between the use of the two transformations in German Web pages. In the primary German corpus, e-transformation errors are concentrated in a small class of documents (documents with rank >480) where we have a nontrivial number of occurrences, leading to a mean error rate of 0.62. The mean error rate for -transformation is much smaller (0.19). Still, there are more documents containing an -transformation error. This indicates that e-transformation is applied more systematically. The small plateau in Figure 7 is generated by some portion of text that was found in several documents. The error rates that arise when applying e-transformation in a completely systematic way are typically larger. In the corpus we found some documents of this kind; since their rates are far off the scale, these documents are not depicted in the figure. We also looked for e- and -transformation errors in the documents of the English general HTML corpus. These errors, which mutate German words, occur in only a small number of English documents. Whereas German writers strongly prefer the e-transformation in situations where the correct characters are not available, we find a clear preference for the -transformation in the English documents. s-Transformation. Figure 9 shows the distribution of error rates for s-transformation in the primary German general HTML corpus. Since the corpus contains some Swiss documents, where \"\u00df\" is categorically written \"ss\" (cf. Section 4.4), the high mean (0.76) has to be put into perspective. Table 20 summarizes the error rates of all types of errors in the general HTML corpora. The numbers show that not all errors can be traced back to a unique error type.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 171,
"text": "Figure 6",
"ref_id": "FIGREF3"
},
{
"start": 340,
"end": 355,
"text": "Figures 7 and 8",
"ref_id": "FIGREF4"
},
{
"start": 890,
"end": 898,
"text": "Figure 7",
"ref_id": "FIGREF4"
},
{
"start": 1624,
"end": 1632,
"text": "Figure 9",
"ref_id": null
},
{
"start": 1886,
"end": 1894,
"text": "Table 20",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "Distribution of error rates for s-transformation in the primary German general HTML corpus. Mean 0.76. One document with error rate 11.46 is omitted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "For both languages, the large majority of all documents has a small number of orthographic errors. On the other hand, at the \"bad end\" of the spectrum, a considerable number of unacceptable documents with high error rates is found. Mean values for error rates are strongly influenced by the latter documents; the average error rate for the Best 80% class is usually much lower. The latter rate should also be considered when comparing results obtained for two corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary So Far",
"sec_num": "6.3"
},
{
"text": "Phenomena observed in English corpora seem to be more stable than those for German: Results obtained for the primary and the secondary English general HTML corpus are almost identical. Differences between the two German corpora may partially be explained by names occurring in texts and by special character encoding problems. Table 20 illustrates this effect, showing the mean error rates for all error types in the primary and secondary HTML corpora.",
"cite_spans": [],
"ref_spans": [
{
"start": 327,
"end": 335,
"text": "Table 20",
"ref_id": null
}
],
"eq_spans": [],
"section": "Summary So Far",
"sec_num": "6.3"
},
{
"text": "The most important error class are typographic errors. In the German documents, e-transformation and s-transformation represent another typical error source. Whereas the number of spelling errors is significant, OCR errors do not play an essential role.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary So Far",
"sec_num": "6.3"
},
{
"text": "Interestingly, the basic form of the distribution curves in Figure 2 is found again in all corresponding curves for other error types and other corpora (see also Figures 14 and 15) ; although the absolute numbers for error rates and details are of course distinct. The close similarity of all distribution curves is striking and gives some evidence to the assumption that relevant features of the error rate distribution are stable, regardless of the corpora that are investigated.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 68,
"text": "Figure 2",
"ref_id": null
},
{
"start": 162,
"end": 181,
"text": "Figures 14 and 15)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Summary So Far",
"sec_num": "6.3"
},
{
"text": "We summarize the error rates that we found in PDF corpora and in corpora for special thematic fields. In Figures 14 and 15 , we present a small selection of distribution curves for error rates. Similarities of the distribution curves mentioned in the previous section should also be noted. Figure 10 presents the mean error rates for distinct error types found in the general PDF and (primary) HTML corpus for English. The results show that PDF documents in general have a better quality than HTML documents. Whereas we have a mean error rate of 2.47 for orthographic errors in the HTML documents, the corresponding mean is only 1.38 for PDF. For the Best 80% documents the means are 0.67 (HTML) and 0.38 (PDF).",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 122,
"text": "Figures 14 and 15",
"ref_id": "FIGREF0"
},
{
"start": 290,
"end": 299,
"text": "Figure 10",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Differences for Special Corpora",
"sec_num": "7."
},
{
"text": "In principle, the same tendency was observed in the documents of the parallel German corpora. However, special effects polluted the picture. As we mentioned in Section 2.1, the conversion of the German PDF documents to ASCII is very error prone. Although we excluded all converted documents that were obviously garbled by the conversion, we also found in the remaining documents examples of errors that were caused by the conversion process. In this sense, the error rates in the original PDF documents are probably smaller. Mean error rates are 2.15 (HTML) versus 2.04 (PDF) for typographic errors, 0.45 versus 0.41 for spelling errors, 0.13 versus 0.09 for OCR errors, 0.62 versus 0.07 for e-transformation errors, and 0.19 versus 0.16 for -transformation errors. Since the conversion tool categorically replaces letter \"\u00df\" by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distribution of Orthographic Errors in the General PDF Corpora",
"sec_num": "7.1"
},
{
"text": "PDF versus HTML: mean error rates for distinct error types in the general corpora (English). Black rectangles describe mean error rates for the Best 80% subclass.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 10",
"sec_num": null
},
{
"text": "\"ss\", a very high number of s-transformation errors led to the effect that the overall mean error rate for the German PDF (3.95) is even larger than the rate for the German HTML (3.86). Figure 11 describes the average error rates for orthographic errors and spelling errors in the English corpora. In almost all thematic areas, mean error rates are larger than the corresponding means in the general corpora; the differences are significant and remarkable. With a mean error rate of 2.05 (0.30) for orthographic (spelling) errors, the English Neurology corpus is very clean and represents an exception. For the Fish corpus, even the mean error rate for the Best 80% subclass is 2.72. We conjecture that corpora that are collected without a special thematic focus often contain a large number of \"professional\" and carefully edited Web pages. Web pages for special thematic areas",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 195,
"text": "Figure 11",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Figure 10",
"sec_num": null
},
{
"text": "Thematic corpora versus general corpora: mean error rates for orthographic errors and spelling errors in distinct English corpora. All results refer to the primary thematic corpora crawled with the simple strategy (cf. Section 2.2). Black rectangles represent mean error rates for the Best 80% subclass.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "are perhaps less \"publicity oriented.\" Furthermore, as a rule of thumb, documents in thematic fields related to hobbies (e.g. Fish) contain more orthographic errors than documents in scientific fields (Holocaust, Neurology) . Corpora with a focus on history seem to occupy a middle position.",
"cite_spans": [
{
"start": 201,
"end": 223,
"text": "(Holocaust, Neurology)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "In the German corpora we have the means for orthographic/spelling error rates presented in Table 21 ; numbers in brackets refer to the Best 80% subclass. The second column shows that, by and large, the ranking order for thematic areas induced by mean error rates observed in the English corpora is found again in the German part. The German corpus Neurology, with its high error rate, does not follow this line. The high means for the Best 80% subclasses in the German corpora are remarkable and show that the low quality is not caused by a small number of bad documents. Table 22 summarizes the differences for the English corpora retrieved with the simple strategy on the one hand and the corpora retrieved with the refined strategy on the other hand. Numbers represent average error rates for the corpora. Numbers in brackets refer to the Best 80% subclass.",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 99,
"text": "Table 21",
"ref_id": "TABREF13"
},
{
"start": 572,
"end": 580,
"text": "Table 22",
"ref_id": "TABREF14"
}
],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "Surprisingly, all corpora crawled with the refined strategy always have a better (smaller) average error rate than those retrieved with the simple strategy, pointing to a significant difference between the two types of collection strategies. An analysis of the document genres found in the two types of corpora presented in Section 8 offers a good explanation; see Table 26 . Figures 14 and 15 show that the corpora crawled with the refined strategy have a large number of documents with error rate 0. This special effect is caused by the large number of short documents that are obtained. For example, the mean length of all the documents with error rate 0 in the corpus Fish E (2) is 322 (number of lowercase normal tokens), whereas the average length of the documents in the corpus Fish E (1) is 14,196 (cf. Table 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 365,
"end": 373,
"text": "Table 26",
"ref_id": "TABREF4"
},
{
"start": 376,
"end": 393,
"text": "Figures 14 and 15",
"ref_id": "FIGREF0"
},
{
"start": 811,
"end": 818,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Differences between the Two Crawling Strategies",
"sec_num": "7.3"
},
{
"text": "The relative order between the three thematic fields was not affected by the crawling strategy. For both crawls, the Neurology corpus achieves the best results, followed by Mushrooms and Fish. The excellent quality of the Best 80% classes obtained with the refined crawl are remarkable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differences between the Two Crawling Strategies",
"sec_num": "7.3"
},
{
"text": "For the German variant of the corpora, as Table 23 shows, a more shallow picture is obtained. For two thematic areas, the simple crawl even leads to lower error rates, although the difference is small. The ranking order between the three thematic areas obtained from the two crawls is not the same. Figure 14 presents the error rates for orthographic errors in the English HTML corpora Fish, Mushrooms, and Neurology, comparing the simple strategy (left-hand side diagrams) with the refined strategy (right-hand side diagrams). Figure 15 gives the error rates for spelling errors in the German HTML corpora Fish, Mushrooms, and Neurology, again comparing the simple and the refined strategies.",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 50,
"text": "Table 23",
"ref_id": "TABREF1"
},
{
"start": 299,
"end": 308,
"text": "Figure 14",
"ref_id": "FIGREF0"
},
{
"start": 528,
"end": 537,
"text": "Figure 15",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Differences between the Two Crawling Strategies",
"sec_num": "7.3"
},
{
"text": "PDF corpora were found to have lower error rates. Corpora covering pages from nonscientific thematic areas often have higher error rates than corpora crawled without a fixed thematic focus. Error rates in the corpora are influenced by the crawling strategy. For English texts, refined crawling strategies that collect pages with a strong thematic focus seem to produce better corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary So Far",
"sec_num": "7.4"
},
{
"text": "Classifying Web documents by genre (Kessler, Nunberg, and Sch\u00fctze 1997; Finn and Kushmerick 2003; Dimitrova et al. 2003) represents one possible way to improve Web search techniques. Web-based corpus linguistics may benefit from these techniques since they enable a better control of the kind of language material that is added to a collection. In this section we want to see which kind of correlation exists between the error rates observed in a document and its genre. After manual inspection of",
"cite_spans": [
{
"start": 35,
"end": 71,
"text": "(Kessler, Nunberg, and Sch\u00fctze 1997;",
"ref_id": "BIBREF20"
},
{
"start": 72,
"end": 97,
"text": "Finn and Kushmerick 2003;",
"ref_id": "BIBREF8"
},
{
"start": 98,
"end": 120,
"text": "Dimitrova et al. 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Rates and Document Genre",
"sec_num": "8."
},
{
"text": "Zipf curves with logarithmic frequencies for English (upper diagram, 1,175,894 entries) and German (lower diagram, 454,709 entries) ranked error lists. The diagrams respectively illustrate the frequency of particular orthographic errors in English and German Web pages from a 1.4-terabyte subcorpus of the Web.",
"cite_spans": [
{
"start": 53,
"end": 87,
"text": "(upper diagram, 1,175,894 entries)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 12",
"sec_num": null
},
{
"text": "hundreds of Web pages, we decided to use the following set of document genres for our investigations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 12",
"sec_num": null
},
{
"text": "r The class Prof contains all Web pages with professional texts from organizations, enterprises, and administrations. Also, scientific texts, professional literature, and fiction are added to this class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 12",
"sec_num": null
},
{
"text": "r The class Priv contains private homepages and texts written from a personal point of view. A clue term is the personal pronoun I. Texts of this form may dominate in a Web page run by an organization. In this case, the page was classified as Priv.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 12",
"sec_num": null
},
{
"text": "r The class Chat contains chat and related collections of private statements and contributions such as guest books, mailing lists, and so forth. r The class Junk contains documents where the language is \"polluted,\" for example, by large lists of erroneous keywords, lists of foreign language expressions, dominating subparts only consisting of program code, archaic language, and so forth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 12",
"sec_num": null
},
{
"text": "r The class Other contains all other documents. In practice we tried to assign to each document one of the above four classes, and most documents in the class Other are (almost) empty files.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 12",
"sec_num": null
},
{
"text": "Even with this small number of classes, separation issues were sometimes difficult to address. We did not introduce finer subclasses since we expected that the number of ambiguous and problematic cases would be multiplied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 12",
"sec_num": null
},
{
"text": "Our experiments on document genre were restricted to English corpora. We looked at the primary general English HTML corpus and on the English corpora for the domains Fish, Mushrooms, and Neurology. For each of the latter three domains, both the corpus obtained with the simple crawling strategy and the corpus retrieved with the refined crawl were taken into account. Hence, a total of 7 corpora were investigated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 12",
"sec_num": null
},
{
"text": "For each corpus, all documents in the classes Worst and Bad were manually classified, assigning one of the classes Prof, Priv, Chat, Junk, or Other to the document. From the classes Good and Best, 100 documents were randomly selected and classified in the same way. Table 24 presents the classification results for the primary English general HTML corpus. Not surprisingly, classes Chat and Junk dominate at the bad end of the quality spectrum, whereas class Prof dominates for good documents. The same tendency was found for all corpora, although the percentage of Prof documents in distinct quality classes was often larger. To add one further typical example, Table 25 presents the result for the corpus Fish E (1) retrieved with the simple crawling strategy. Note that even for the Bad class, 50.62% of the documents are of type Prof.",
"cite_spans": [],
"ref_spans": [
{
"start": 266,
"end": 274,
"text": "Table 24",
"ref_id": "TABREF2"
},
{
"start": 663,
"end": 671,
"text": "Table 25",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Genre Distribution of the Four Quality Classes",
"sec_num": "8.1"
},
{
"text": "Distribution of error rates for arbitrary orthographic errors in the 6 English HTML corpora: Fish E (1) and Fish E (2) (upper diagrams), Mushrooms E (1) and Mushrooms E (2) (middle), and Neurology E (1) and Neurology E (2) (bottom diagrams). Letters (1) (diagrams on the left-hand side) refer to corpora retrieved with the simple crawling strategy. Letters (2) (diagrams on the right-hand side) stand for the refined crawling strategy. From the refined crawl (right-hand sides) a large number of documents without any error hit is obtained. Corpora crawled with the refined strategy typically contain a large number of short documents (cf. Sections 2.2 and 7.3), and short documents of good quality often have an error rate 0. A comparison along the vertical dimension illuminates differences between the three thematic areas: corpora Fish E contain more errors than corpora Mushrooms E, which contain more errors than the corpora Neurology E. Mean error rates are 7.08/3.39 [Fish E (1)/Fish E (2)]; 4.10/2.58 [Mushrooms E (1)/Mushrooms E (2)]; and 2.05/1.77 [Neurology E (1)/Neurology E (2)]. In the diagrams, some documents with high error rates are omitted to simplify scaling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 14",
"sec_num": null
},
{
"text": "Distribution of error rates for spelling errors in the 6 German HTML corpora: Fish G (1) and Fish G (2) (upper diagrams), Mushrooms G (1) and Mushrooms G (2) (middle), and Neurology G (1) and Neurology G (2) (bottom diagrams). Letters (1) (diagrams on the left-hand side) refer to corpora retrieved with the simple crawling strategy. Letters (2) (diagrams on the right-hand side) stand for the refined crawling strategy. The latter strategy leads to a large number of short documents without any hits in the error dictionaries. See the discussion in Section 7.3. Similarly as for English HTML, corpora Fish G contain more errors than corpora Mushrooms G, which contain more errors than the corpora Neurology G. Mean error rates are 1.35/1.00 [Fish G (1)/Fish G (2)]; 0.78/0.76 [Mushrooms G (1)/Mushrooms G (2)]; and 0.51/0.47 [Neurology G (1)/Neurology G (2)]. In the diagrams, some documents with high error rates are omitted to simplify scaling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 15",
"sec_num": null
},
{
"text": "Genre distribution of the four quality classes for the primary general English HTML corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 24",
"sec_num": null
},
{
"text": "English HTML (1) Worst (%) Bad (%) Good (%) Best (%) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 24",
"sec_num": null
},
{
"text": "The analysis of genres presented in Table 26 illuminates an important difference between the thematic corpora retrieved with the simple and the refined crawling strategy:",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 44,
"text": "Table 26",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Genre Distribution: Simple Crawl versus Refined Crawl",
"sec_num": "8.2"
},
{
"text": "In the latter corpora, the percentage of documents of type Chat and Junk is lower; differences are significant. At the same time, corpora retrieved with the refined strategy contain more documents of type Prof. We conjecture that the open compounds that were used in the queries for the refined crawl (cf. Section 2.2) represent a kind of \"highlevel language expressions\" that are typically used in a professional or scientific context. With the above background, it is not surprising that the refined crawling strategy leads to better error rates. Table 27 presents estimates for the mean error rates obtained for the distinct document genres in the seven corpora. These numbers represent estimates since not all documents Table 26 Composition of corpora retrieved with the simple (1) and the refined (2) crawling strategies. The refined strategy (2) helps to avoid documents of type Chat and Junk, attracting documents of type Prof at the same time.",
"cite_spans": [],
"ref_spans": [
{
"start": 549,
"end": 557,
"text": "Table 27",
"ref_id": "TABREF17"
},
{
"start": 724,
"end": 732,
"text": "Table 26",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Genre Distribution: Simple Crawl versus Refined Crawl",
"sec_num": "8.2"
},
{
"text": "Crawls Fish E Fish E Mushr. E Mushr. E Neur. E Neur. E (1) (%) (2) (%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Rates for Genres",
"sec_num": "8.3"
},
{
"text": "( of the classes Good and Best were classified. In all corpora, the mean error rate for class Prof is better than the rate for class Priv, which is better than the rate for class Chat.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Rates for Genres",
"sec_num": "8.3"
},
{
"text": "The results indicate that the error rate of a document might be an interesting feature for genre classification: High error rates typically point to documents of the genres Junk and Chat; excellent error rates typically point to documents of type Prof. Results for the Neurology corpora indicate that \"scientific Chat/Junk\" may come with low error rates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Rates for Genres",
"sec_num": "8.3"
},
{
"text": "An obvious correlation exists between the genre of a document and its error rate. Error rates might be used as one feature for genre classification. The analysis of genres helps one to understand the differences between corpora retrieved with distinct crawling strategies and the error rates observed in the corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary So Far",
"sec_num": "8.4"
},
{
"text": "The figures seen in the previous sections show that corpora collected from the Web typically contain a non-negligible number of documents with an unacceptable number of orthographic errors. We now turn to the question of how to use error dictionaries for recognizing and filtering Web pages with a high percentage of errors, thus excluding them from the corpus construction process. The question of what should be considered as a \"high percentage\" has to be answered for each application. Generally speaking we would like to exclude at least those documents that are found at the right end of the diagrams presented in the previous sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Filtering Methods",
"sec_num": "9."
},
{
"text": "By a filter, we mean a pair F = D, \u03c1 consisting of an error dictionary, D, and a filter threshold, \u03c1. The filter rejects a text document (Web page) T iff the average number of entries of D that are found among 1,000 tokens of T exceeds \u03c1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
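The filter definition above can be sketched directly in code. The following is a minimal illustration, assuming a plain set of error strings and whitespace tokenization (both hypothetical simplifications; the paper uses curated error dictionaries and a more careful notion of tokens):

```python
def rejects(error_dict, rho, tokens):
    """Filter F = (D, rho): reject a document iff the number of tokens
    found in the error dictionary D, normalized to a rate per 1,000
    tokens, exceeds the threshold rho."""
    if not tokens:
        return False
    hits = sum(1 for t in tokens if t in error_dict)
    return 1000.0 * hits / len(tokens) > rho

# toy example: 2 dictionary hits in 8 tokens -> 250 errors per 1,000 tokens
doc = "teh fish swam accross the small blue pond".split()
print(rejects({"teh", "accross"}, 5.0, doc))  # True, since 250 > 5
```

The per-1,000-token normalization makes the threshold comparable across documents of different lengths, which is why the same rho can be applied to every page in a crawl.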
{
"text": "As a matter of fact, we may use the maximal error dictionaries for filtering. For some applications, small error dictionaries, which occupy less space and are easier to handle, may be advantageous. The results presented below show that when one uses a more rigid filter threshold \u03c1, the filtering effect achieved with \"small\" error dictionaries is very similar to the effect when using the maximal error dictionaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "With an obvious interpolation, this observation supports the assumption that the incompleteness of our maximal error dictionaries does not seriously reduce their filtering capacities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "Since error dictionaries are necessarily incomplete in the sense that not all possible errors can be covered, it is natural to ask if filters of the above-mentioned form can work. We shall see below that even filters with small error dictionaries are useful. The reason is that the frequency of orthographic errors in the Web follows a Zipf-like 7 distribution. Since a relatively small number of erroneous tokens already covers a substantial number of all error occurrences, it should not be surprising that even small error dictionaries help to identify pages with many errors. In Figure 12 , we show the logarithmic frequencies of errors in a 1.4-terabyte subcorpus retrieved from the Web in 1999 (\"Web-in-a-box\"). The upper diagram shows the distribution of all errors from the maximal English error dictionary, D err (English,all), in English Web pages. Only errors with at least two occurrences are covered. Similarly the lower diagram shows the distribution of errors from D err (German,all) in German Web pages.",
"cite_spans": [],
"ref_spans": [
{
"start": 583,
"end": 592,
"text": "Figure 12",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Distribution of Error Frequencies",
"sec_num": "9.1"
},
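Because error frequencies are Zipf-like, a ranked error list of the kind used in Figure 12 is easy to build: count occurrences of each error-dictionary entry over a corpus, drop entries with fewer than two occurrences, and sort by descending frequency. A minimal sketch with hypothetical toy data (the paper ranks entries of the maximal error dictionaries over the 1.4-terabyte Web-in-a-box corpus):

```python
from collections import Counter

def ranked_error_list(docs, error_dict, min_freq=2):
    """Count error-dictionary hits over all documents and return
    (error, frequency) pairs with at least min_freq occurrences,
    ordered by descending frequency, ties broken alphabetically."""
    counts = Counter(t for doc in docs for t in doc if t in error_dict)
    pairs = [(e, f) for e, f in counts.items() if f >= min_freq]
    return sorted(pairs, key=lambda p: (-p[1], p[0]))

docs = [["teh", "cat"], ["teh", "dog", "accross"],
        ["teh", "accross"], ["recieve"]]
print(ranked_error_list(docs, {"teh", "accross", "recieve"}))
# [('teh', 3), ('accross', 2)] -- 'recieve' occurs once and is dropped
```

The head of such a list covers a substantial fraction of all error occurrences, which is the observation that makes small filter dictionaries viable.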
{
"text": "Suppose we are given a collection of Web pages, C. We may fix a user-defined threshold \u00b5 in terms of the average number of errors per 1,000 tokens that we are willing to accept in a document to be added to our corpus. A document where the average number of errors per 1,000 tokens does not exceed \u00b5 is called acceptable, other documents are called unacceptable. In practice, since we cannot count real errors, a token is considered erroneous if and only if it occurs in one of our error dictionaries. In Section 5, we have seen that the number of entries of the error dictionary found in a text yields a lower approximation for the real number of errors. In terms of information retrieval, acceptable documents can be considered as relevant documents that we would like to retrieve for \"query\" \u00b5. To extend this analogy, we define the answer set of a filter F w.r.t. C as the set of all documents in C that are passed by F. With these notions we may now define the parameters' precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Filter Scenario",
"sec_num": "9.2"
},
{
"text": "Let \u00b5, C, and F as above. The precision of F with respect to \u00b5 and C is the percentage of acceptable documents in the answer set of F. The recall of F with respect to \u00b5 and C is the number of acceptable documents in the answer set of F divided by the number of all acceptable documents in C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
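Under these definitions, precision and recall reduce to simple set arithmetic over the answer set. A hedged sketch, reusing the per-1,000-token error rate from the filter definition; the two-dictionary split (a full dictionary to judge acceptability, a smaller filter dictionary) mirrors the setup of Section 9, while the function names and toy thresholds are our own:

```python
def error_rate(tokens, d):
    """Entries of dictionary d per 1,000 tokens."""
    return 1000.0 * sum(t in d for t in tokens) / max(len(tokens), 1)

def precision_recall(corpus, full_dict, filter_dict, rho, mu):
    """Acceptability is judged against the full error dictionary and
    threshold mu; the filter uses its own dictionary and threshold rho."""
    answer = [i for i, doc in enumerate(corpus)
              if error_rate(doc, filter_dict) <= rho]
    acceptable = [i for i, doc in enumerate(corpus)
                  if error_rate(doc, full_dict) <= mu]
    both = set(answer) & set(acceptable)
    precision = len(both) / len(answer) if answer else 1.0
    recall = len(both) / len(acceptable) if acceptable else 1.0
    return precision, recall
```

A precision loss occurs exactly when the filter dictionary misses the errors of an unacceptable document, so the document slips into the answer set; this is the failure mode that a tighter rho compensates for.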
{
"text": "In the remainder of the section, we define and evaluate filters for the English and German general HTML corpora, which are denoted C E and C G , respectively. We consider three user-defined thresholds: \u00b5 = 10, \u00b5 = 5, and \u00b5 = 1. The first bound is meant to exclude a small number of documents with an extraordinary number of orthographic errors. The second bound is more ambitious. The third bound might be used in situations where high accuracy is needed and we want to retrieve only documents with a negligible number of orthographic errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "We define a hierarchy of filters",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Filter Construction",
"sec_num": "9.3"
},
{
"text": "F 1 = D 1 , \u03c1 1 , F 2 = D 2 , \u03c1 2 , F 3 = D 3 , \u03c1 3 , . . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Filter Construction",
"sec_num": "9.3"
},
{
"text": "Filters F k with higher index k generally lead to better results. On the negative side, they are more complex in terms of the number of entries of D k . In the following description we generally assume that a user-defined threshold \u00b5 has been fixed. For simplicity, we refer to the construction of filters for the English corpus, C E . The same construction was used, mutatis mutandis, for C G . All filters are computed automatically on the basis of training data. For training, two inputs were used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Filter Construction",
"sec_num": "9.3"
},
{
"text": "Ranked error list. We computed a list of all entries of the maximal English error dictionary, D err (English,all) , that occur at least twice in the corpus Web-in-a-box (cf. Section 9.1). The list was ordered by descending frequency of occurrence, as in Figure 12 . The resulting ranked error list contains 1, 175, 894 entries.",
"cite_spans": [
{
"start": 100,
"end": 113,
"text": "(English,all)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 254,
"end": 263,
"text": "Figure 12",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "2. Training corpus. The corpus C E was randomly split into a training subcorpus (427 documents) and a test subcorpus (407 documents). 8 From the training corpus, all documents were excluded that did not contain at least five distinct errors from the ranked error list, leaving 384 documents.",
"cite_spans": [
{
"start": 134,
"end": 135,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "The error dictionary D k for filter F k was defined as the minimal initial segment S of the ranked error list such that each unacceptable document in the training corpus contains at least k distinct entries of the segment S. The threshold \u03c1 k was defined as the minimal average number of occurrences of entries of D k per 1,000 tokens in an unacceptable document of the training corpus. These entries need not be distinct. Clearly, with the given threshold we achieve a precision of 100% on the training corpus. The philosophy behind this selection of a threshold is simple: We do not want to add any unacceptable document to the corpus to be built. Precision is much more important than recall, as long as a substantial number of documents is retrieved. As a matter of fact, we cannot expect a 100% precision on the test corpus. However, our results show that the loss of precision is not significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Filters.",
"sec_num": null
},
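The training procedure just described can be sketched directly: scan the ranked error list until every unacceptable training document is covered by at least k distinct entries, then set the threshold to the lowest per-1,000-token hit rate among the unacceptable documents. A minimal sketch under toy-data assumptions (the real construction uses the 1,175,894-entry ranked list and documents with at least five distinct ranked errors):

```python
def build_filter(ranked_errors, unacceptable_docs, k):
    """Construct (D_k, rho_k): D_k is the minimal initial segment of the
    ranked error list such that each unacceptable training document
    contains at least k distinct entries of the segment; rho_k is the
    minimal hit rate per 1,000 tokens over the unacceptable documents."""
    d_k = set()
    covered = [0] * len(unacceptable_docs)  # distinct hits per document
    for err in ranked_errors:
        if all(c >= k for c in covered):
            break  # every unacceptable document is covered
        d_k.add(err)
        for i, doc in enumerate(unacceptable_docs):
            if err in doc:
                covered[i] += 1
    # occurrences need not be distinct when computing the rate
    rho_k = min(1000.0 * sum(t in d_k for t in doc) / len(doc)
                for doc in unacceptable_docs)
    return d_k, rho_k
```

Choosing rho_k as a minimum over unacceptable documents guarantees 100% precision on the training corpus by construction: every unacceptable training document reaches at least that hit rate and is therefore rejected.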
{
"text": "In what follows we consider the three user-defined thresholds \u00b5 = 10, \u00b5 = 5, and \u00b5 = 1. For each of the filters F 1 = (D 1 , \u03c1 1 ), . . . , F 5 = (D 5 , \u03c1 5 ), as defined earlier, Table 28 shows Table 28 Evaluation of filters F k , 1 \u2264 k \u2264 5, for English general HTML corpus, user-defined threshold \u00b5 = 10 (top), \u00b5 = 5 (middle), and \u00b5 = 1 (bottom). the size of the filter dictionary D k (second column), the filter threshold \u03c1 k (third column), and the precision and recall values achieved with the filter on the training and test corpora (columns 4, 5, 6, 7).",
"cite_spans": [],
"ref_spans": [
{
"start": 180,
"end": 188,
"text": "Table 28",
"ref_id": null
},
{
"start": 195,
"end": 203,
"text": "Table 28",
"ref_id": null
}
],
"eq_spans": [],
"section": "Filtering Results for English General HTML Corpus",
"sec_num": "9.4"
},
{
"text": "|D k | \u03c1 k P Train (%) R Train (%) P Test (%) R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Filtering Results for English General HTML Corpus",
"sec_num": "9.4"
},
{
"text": "Baselines. When treating the complete test corpus as a \"naive\" answer set (recall 100%), we obtain 1. for \u00b5 = 10, a precision of 94.76%, corresponding to 380 acceptable and 21 unacceptable documents, 2. for \u00b5 = 5, a precision of 87.28%, corresponding to 350 acceptable and 51 unacceptable documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Filtering Results for English General HTML Corpus",
"sec_num": "9.4"
},
{
"text": "3. for \u00b5 = 1, a precision of 57.10%, corresponding to 229 acceptable and 172 unacceptable documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Filtering Results for English General HTML Corpus",
"sec_num": "9.4"
},
{
"text": "For \u00b5 = 10, with a precision (recall) of 99.40% (87.63%) on the test corpus, the filter F 3 represents a good compromise between size and quality. Precision is almost optimal. The answer set for the filter contains only one unacceptable document with an error rate of 13.24, which is very close to the threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Filtering Results for English General HTML Corpus",
"sec_num": "9.4"
},
{
"text": "For \u00b5 = 5, using the filter F 3 we obtain a precision (recall) of 97.47% (97.42%). An inspection of the nine unacceptable documents in the answer set of the filter shows that they come very close to the bound \u00b5 = 5. Note that error dictionaries D 1 , D 2 , and D 3 are larger than the corresponding dictionaries for the threshold k = 10 due to the larger number of unacceptable documents in the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Filtering Results for English General HTML Corpus",
"sec_num": "9.4"
},
{
"text": "For \u00b5 = 1, using the filter F 3 we obtain a precision (recall) of 97.02% (85.58%). There are six unacceptable documents in the answer set, all with an error rate below 2. The numbers in Table 28 show how a more rigid (smaller) filter threshold compensates for the reduced size of error dictionaries essentially without sacrificing precision and with a modest loss of recall. To illustrate the effect of filtering, yet from another perspective, Figure 13 presents the distribution of error rates (number of entries from the maximal English error dictionary D err (English,all) per 1,000 tokens) in the answer set and in the set of documents rejected by the filter F 3 constructed for the user-defined threshold \u00b5 = 5. The filter was evaluated on the test subcorpus. The figure shows that almost all documents passed (rejected) by the filter have an error frequency below (beyond) 5 errors per 1,000 tokens.",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 194,
"text": "Table 28",
"ref_id": null
},
{
"start": 444,
"end": 453,
"text": "Figure 13",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Filtering Results for English General HTML Corpus",
"sec_num": "9.4"
},
{
"text": "For computing the ranked error list, a list with the frequencies of 18, 624, 436 tokens in German Web pages was used. Via intersection with the list of all entries of the maximal German error dictionary, D err (German,all), we obtained a ranked error list with 454, 709 entries. The training and test corpora contain 314 and 308 documents, respectively, from the German general HTML corpus. Since the results are similar to the English case, we only point to some differences. Frequencies decrease more rapidly in the German ranked error list, as may be seen in Figure 12 . In the German list, the top-ranked part is dominated by e/ -transformation errors and errors where the letter \u00df is replaced by ss. The 10 top-ranked entries and their frequencies are shown in Table 29 . This special class of frequent errors leads to small filter dictionaries. For example, the filter dictionary for \u00b5 = 10, k = 5 has 16,277 entries, and the dictionary for \u00b5 = 5, k = 5 has 127,023 entries. On the other hand, the recall values achieved with the dictionaries in general are lower than in the English case.",
"cite_spans": [],
"ref_spans": [
{
"start": 562,
"end": 571,
"text": "Figure 12",
"ref_id": "FIGREF0"
},
{
"start": 766,
"end": 774,
"text": "Table 29",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Filtering Results for the German General HTML Corpus",
"sec_num": "9.5"
},
{
"text": "Obviously, the methods described above are very useful for all corpus tools that visually present linguistic data from Web pages (words, n-grams, concordances, phrases, sentences, aligned bilingual material, etc.) to the user. Filters help to exclude inappropriate pages. In the remaining data, tokens that represent entries of the error dictionaries can be marked. Depending on the application, the system may then decide to suppress this material or to add a warning when presenting it. In the remainder of this section, two case studies are presented that demonstrate the usefulness of filtering techniques and error dictionaries in distinct applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Applications",
"sec_num": "10."
},
{
"text": "It has often been observed that fixed handcrafted dictionaries only have a modest coverage when applied to new texts and corpora. 9 Still, for various text processing tasks, dictionaries with high coverage are needed. The generation of crawled dictionaries that collect the vocabulary of appropriate Web pages is one way to obtain a better coverage. As a matter of fact, the quality of these dictionaries suffers from orthographic errors in the analyzed pages. Using the above filters helps to reduce the number of errors that are imported. In order to further improve a crawled dictionary, we may either eliminate all words that represent entries of the error dictionaries, or we may mark these words for a manual inspection. In what follows we report on an experiment in the area of lexical text correction where these techniques improved:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Correction with Crawled Dictionaries",
"sec_num": "10.1"
},
{
"text": "1. the quality of crawled dictionaries by avoiding erroneous entries, 2. the accuracy of lexical text correction achieved with these dictionaries, using a high-level text correction system (Strohmaier et al. 2003a (Strohmaier et al. , 2003b .",
"cite_spans": [
{
"start": 189,
"end": 213,
"text": "(Strohmaier et al. 2003a",
"ref_id": "BIBREF34"
},
{
"start": 214,
"end": 240,
"text": "(Strohmaier et al. , 2003b",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Correction with Crawled Dictionaries",
"sec_num": "10.1"
},
{
"text": "Correction Strategy. Ignoring details, we used the following correction strategy 10 : For each token 11 of the input text, the most similar words are retrieved from the dictionary as a set of correction candidates. In many cases the token will be found in the dictionary and represents a correction candidate with optimal similarity. Based on (1) the similarity between text token and correction candidate and (2) the frequency of the correction candidate in a corpus, each candidate receives a score. If the score of the best candidate exceeds a given threshold \u03c4, the token is replaced by this candidate. In the other case, the token is left unmodified. A good balance between similarity and frequency information in the score is obtained via training. The threshold, which is also optimized via training, guarantees that the input token is only replaced if additional confidence is available that the best correction candidate in fact represents the corrected version of the token. In the experiment described below, the system was trained on a corpus for the domain Mushrooms. The evaluation corpus is from the domain Fish. Hence, the two corpora are disjoint and cover distinct thematic areas. More details on the correction system can be found in Strohmaier et al. (2003b) .",
"cite_spans": [
{
"start": 1253,
"end": 1278,
"text": "Strohmaier et al. (2003b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Correction with Crawled Dictionaries",
"sec_num": "10.1"
},
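The correction strategy above can be sketched roughly as follows. This is a hedged reconstruction: the paper does not specify its similarity measure or scoring function, so the linear similarity/frequency combination, the use of difflib for candidate retrieval, and the parameter values alpha and tau are illustrative assumptions (in the paper, the balance and the threshold \u03c4 are optimized via training).

```python
# Sketch of the lexical correction strategy: retrieve similar dictionary
# words as candidates, score each by a weighted combination of similarity
# and corpus frequency, and replace the token only if the best score
# exceeds a threshold. The linear score and parameters are assumptions.
import difflib
import math

def correct_token(token, dictionary, freq, alpha=0.7, tau=0.8):
    """dictionary: iterable of words; freq: word -> corpus frequency."""
    candidates = difflib.get_close_matches(token, dictionary, n=5, cutoff=0.6)
    if not candidates:
        return token  # nothing similar enough: leave the token unmodified
    max_log_f = max(math.log1p(freq.get(w, 0)) for w in candidates) or 1.0
    best, best_score = token, -1.0
    for w in candidates:
        sim = difflib.SequenceMatcher(None, token, w).ratio()
        f = math.log1p(freq.get(w, 0)) / max_log_f   # normalized frequency
        score = alpha * sim + (1 - alpha) * f        # balanced score
        if score > best_score:
            best, best_score = w, score
    # Replace only with sufficient confidence; otherwise keep the input.
    return best if best_score > tau else token
```

A token found in the dictionary is its own candidate with similarity 1 and is therefore normally kept, matching the behavior described above.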
{
"text": "Garbled Input Text for Correction. We collected 10 texts from the domain Fish, all containing a nontrivial number of errors. Texts were retrieved from the Web, using queries to Google with spelling errors, such as fish anglers infomation realy. We checked that the texts do not contain paragraphs that are also found in the documents of the corpora Fish E introduced in Section 2.2. The concatenation of the 10 texts was used as input to the text correction system. For the evaluation, a corrected version of the full text was manually created. The full text contains 17,697 tokens of which 418 (2.36%) were found to be erroneous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Correction with Crawled Dictionaries",
"sec_num": "10.1"
},
{
"text": "Background Dictionaries for Correction. As a baseline, a first crawled dictionary D crawl with 505,652 entries was built, collecting all words from the documents in the corpus Fish E (1). A second dictionary D +F crawl used only those pages that were not rejected by the filter for threshold \u00b5 = 2, based on the maximal English error dictionary D err (English,all ) ; see below. With an extended filtered crawl, even better coverage and accuracy results would probably be possible.",
"cite_spans": [
{
"start": 351,
"end": 363,
"text": "(English,all",
"ref_id": null
}
],
"ref_spans": [
{
"start": 364,
"end": 365,
"text": ")",
"ref_id": null
}
],
"eq_spans": [],
"section": "Text Correction with Crawled Dictionaries",
"sec_num": "10.1"
},
{
"text": "Evaluation Results. We then compared the lexical coverage (percentage of tokens of the correct version of the input text found in the dictionary) and correction accuracy (percentage of correct tokens after automated correction) for each of the three dictionaries. The results are presented in Table 30 . The accuracy of the input text is 97.64%. The fifth column gives the improvement in accuracy, taking the input text as a baseline. The last column mentions the number of erroneous tokens in the text that are found in the respective error dictionary.",
"cite_spans": [],
"ref_spans": [
{
"start": 293,
"end": 301,
"text": "Table 30",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Text Correction with Crawled Dictionaries",
"sec_num": "10.1"
},
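The two measures compared in Table 30, lexical coverage and correction accuracy as defined in the paragraph above, amount to simple token-level ratios. A minimal sketch, with hypothetical function names:

```python
# Lexical coverage: percentage of tokens of the correct reference text
# found in the dictionary. Correction accuracy: percentage of output
# tokens that match the manually corrected reference.

def coverage(reference_tokens, dictionary):
    hits = sum(t in dictionary for t in reference_tokens)
    return 100.0 * hits / len(reference_tokens)

def accuracy(output_tokens, reference_tokens):
    correct = sum(o == r for o, r in zip(output_tokens, reference_tokens))
    return 100.0 * correct / len(reference_tokens)
```

With the input text itself as output, 418 erroneous among 17,697 tokens gives the 97.64% baseline accuracy reported above.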
{
"text": "Note that the use of the filtered corpus leads to a measurable improvement in correction accuracy. The second step in which we eliminate all entries of the error dictionaries in the correction dictionary leads to an additional gain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Correction with Crawled Dictionaries",
"sec_num": "10.1"
},
{
"text": "Overproduction and Underproduction of the Error Dictionary. As mentioned above, 418 tokens of the input text represented proper errors. From these, 254 (60.77%) turned out to be entries of the maximal English error dictionary D err (English,all) . Note that this value for underproduction is very compatible with our estimates in Section 5. Remarkably, only seven of the correct tokens of the input text occurred in the error dictionary.",
"cite_spans": [
{
"start": 232,
"end": 245,
"text": "(English,all)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Correction with Crawled Dictionaries",
"sec_num": "10.1"
},
{
"text": "Analyzing the Effect of Using Filters and Error Dictionaries. The most important error source in the correction process are erroneous tokens of the text that-by accidentrepresent entries of the crawled dictionaries. Using the above strategy, these false friends are only replaced by another word w of the correction dictionary if overwhelming frequency information is available that leads to a preference of w after computing the balanced score for similarity and frequency. The dictionary D crawl contains 262 of the 418 erroneous tokens of the text. The dictionary D +F crawl , which collects the vocabulary of filtered pages, contains only 92 erroneous tokens. After eliminating all entries of the maximal error dictionary, the new dictionary D +F+ED crawl contains only 49 false friends. Note that the latter tokens represent errors not contained in the error dictionary. A very interesting additional number is the following: when eliminating in D crawl all (English,all) , the resulting dictionary contains 105 erroneous tokens of the text. This shows that the filtering step eliminates 56 (= 105 \u2212 49) erroneous tokens of the text that are not found in the error dictionary and proves that a two-step procedure-first using filters for crawling pages, then eliminating entries of error dictionaries afterwards-leads to optimal results.",
"cite_spans": [
{
"start": 963,
"end": 976,
"text": "(English,all)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Correction with Crawled Dictionaries",
"sec_num": "10.1"
},
{
"text": "Parallel texts represent an important resource for automatic acquisition of bilingual dictionaries. Since only a small number of large parallel corpora are available, which are moreover specialized both with respect to form and contents, the Web represents an important archive for mining parallel texts (Resnik and Smith 2003) . When building up bilingual dictionaries for machine translation, or when presenting parallel phrases to users, correctness is an important issue. Hence, it is interesting to see how error dictionaries help to reduce errors in parallel corpora. Our methods can be applied to any kind of parallel corpus. For our experiments we used the freely available Europarl corpus. 13 The corpus covers the proceedings of the European Parliament 1996-2001 in 11 official languages of the European Union. We only analyzed the English and German versions of the parallel texts. The 488 documents in the corpus are of an excellent quality. Our goal was to find English and German texts with a nontrivial number of errors (if any) and to detect these errors. Since the overproduction of error dictionaries in very accurate texts is high, the problem is challenging. The maximal error dictionaries for the two languages were used to determine the error rate of each document. Table 31 shows the twenty documents with the highest error rates for both the English and the German subcollection of the corpora. Columns 4 and 5 describe the number of tokens that represent entries of the respective error dictionary and the number of real errors among these hits. The results show that when analyzing very accurate texts, the error rate is not always a safe indicator for a corresponding number of real errors. Still, the experiment isolates 246 real errors, only looking at 40 documents. When collecting translation correspondences, we may simply discard all phrases/sentences with a hit in an error dictionary, together with their aligned counterparts. 
Many translation pairs with errors will be avoided. Given the length of the documents, the number of hits of the error dictionaries is small, hence the loss of recall is not essential. In this way our methods may help to improve the generation of translation data even from collections of very accurate parallel texts.",
"cite_spans": [
{
"start": 304,
"end": 327,
"text": "(Resnik and Smith 2003)",
"ref_id": "BIBREF30"
},
{
"start": 699,
"end": 701,
"text": "13",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1288,
"end": 1296,
"text": "Table 31",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Generating Translation Data from Parallel Corpora",
"sec_num": "10.2"
},
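The discarding strategy described above (drop any aligned phrase/sentence pair with a hit in an error dictionary, together with its counterpart) can be sketched as follows; the whitespace tokenization and function names are simplifying assumptions, not the authors' code.

```python
# Sketch of cleaning a parallel corpus with per-language error
# dictionaries: a pair is kept only if neither side contains a token
# that is an entry of the corresponding error dictionary.

def clean_parallel(pairs, err_dict_src, err_dict_tgt):
    """pairs: list of (src_sentence, tgt_sentence) strings."""
    kept = []
    for src, tgt in pairs:
        src_hit = any(t.lower() in err_dict_src for t in src.split())
        tgt_hit = any(t.lower() in err_dict_tgt for t in tgt.split())
        if not (src_hit or tgt_hit):
            kept.append((src, tgt))
    return kept
```

Since hits are rare in accurate collections such as Europarl, discarding whole pairs costs little recall, as argued above.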
{
"text": "In this article we investigated the distribution of orthographic errors of distinct types in the English and German Web. Experiments based on a variety of very large error dictionaries showed that Web corpora typically contain a non-negligible number of pages with an unacceptable number of orthographic errors. Typing errors represent the most important subclass. In German Web pages, errors resulting from character encoding problems represent another important category. In our experiments, PDF documents were found to contain less orthographic errors than HTML documents, and corpora covering specific thematic areas were found to contain more errors than collections of \"general\" pages without such a focus. Some differences were remarkable; in particular, our corpora for special thematic areas related to hobbies contain many pages with a high number of orthographic errors. We also found that mean error rates are influenced by the collection strategy. Specific crawling strategies help to avoid chat and junk while attracting professional documents. Since document genre and error rates are correlated, refined crawling strategies may help to reduce mean error rates. Error dictionaries, even subdictionaries of modest size, can be used as filters that help to detect and eliminate pages with many orthographic errors. Filters with userdefined thresholds work well for both languages. Obviously, the possibility of deleting pages with many orthographic errors and of marking all entries of error dictionaries in the remaining documents opens a wide range of interesting applications in distinct areas of corpus linguistics. To exemplify possible applications we showed how to improve the quality of Web-crawled dictionaries for text correction. With these filtered dictionaries, higher values for correction accuracy were obtained than with those directly obtained from Web crawls. 
In a second experiment, we showed how error dictionaries may be used to improve the automated collection of translation correspondences, avoiding translation pairs with orthographic errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "11."
},
{
"text": "Going beyond corpus linguistics, it might be interesting to design (special modes of) Web search engines where the error rate of a given document is used as one parameter in the ranking of answers. In many search scenarios, answer documents with a large number of orthographic errors appear to be less reliable, and the user might wish to concentrate on \"professional\" or carefully edited Web pages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "11."
},
{
"text": "In our practical work we found that the collection and analysis of very large Web corpora is difficult for many reasons. For example, it is not clear how to treat pages with artificial vocabulary that is only introduced to obtain a better ranking. We learned that often these junk lists are intensionally enriched with many orthographic errors to obtain a better ranking, in particular for erroneous queries. In our experiments, some of these pages were found immediately, looking at error rates, and excluded. Later, when inspecting documents for genre classification, other less eye-catching examples were found. Some portions of text occurred in several documents. The conversion of Web pages into ASCII represents a potential source for new errors. In particular the conversion of German PDF documents to ASCII turned out to be very error prone. Nonstandard vocabulary (special names, foreign language expressions, archaic language, programming code, slang, etc.) is another source that makes various pages inappropriate for corpus construction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "11."
},
{
"text": "One step for future work is the development of special dictionaries for frequent foreign language expressions, archaic language, programming code, and slang. Special dictionaries for these expressions would not only help to detect and exclude pages with a high amount of nonstandard vocabulary, but they could also be used as additional filters in the construction of error dictionaries. The results in Section 5.2 indicate that the overproduction of our error dictionaries could be reduced in a significant way by eliminating entries that represent expressions of the earlier-mentioned type. As a matter of fact, new types of spelling errors were found during the experiments described earlier. It might be interesting to enlarge the error dictionaries for spelling errors, taking the new patterns into account.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "11."
},
{
"text": "We also found that enlarged error dictionaries that store with each garbled entry the correct word from which it was derived are very useful for error correction. In contrast to our first intuitions, the number of ambiguities arising from this correction strategy is small, and the predictive power of enlarged error dictionaries is high. More details on text correction with error dictionaries will be given in a forthcoming paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "11."
},
{
"text": "Note that we do not capture false friends, that is, garbled strings that accidentally represent correct words of the dictionary. Detection of false friends is known to be notoriously difficult and outside the scope of this article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A phenomenon often discussed in the literature; see, for example, Kukich (1992), page 388f. 4 It is well-known that the number of Google hits for a phrase can vary from one day to the next.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This explains the special effect seen inFigures 14 and 15where the refined crawl produces many short documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Recall that the error type of a garbled token may be ambiguous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Zipf's law describes the frequency of words in large corpora. It states that the i-th most frequent word appears as many times as the most frequent one divided by i \u03b8 , for some constant \u03b8 \u2265 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
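The footnote's formulation of Zipf's law, f(i) = f(1) / i^\u03b8, can be checked numerically with a one-line helper; the frequency values used below are hypothetical.

```python
# Zipf's law as stated in the footnote: the i-th most frequent word
# occurs f(1) / i**theta times, for some constant theta >= 1.

def zipf_frequency(rank, f1, theta=1.0):
    return f1 / rank ** theta

# E.g., with f1 = 60000 and theta = 1, ranks 2 and 3 are predicted
# to occur 30000 and 20000 times, respectively.
```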
{
"text": "The distinct sizes of both corpora seem to indicate that the random selection was not perfectly balanced. We ignored this problem, which does not influence the construction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Kukich (1992) describes an experiment byWalker and Amsler (1986): \"Nearly two thirds (61%) of the words in the Merriam-Webster Seventh Collegiate Dictionary did not appear in an eight million word corpus of New York Times news wire text, and, conversely, almost two-thirds (64%) of the words in the text were not in the dictionary.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To simplify evaluation, a fully automated variant of text correction was considered. 11 In what follows, by a token, we always mean a token composed of standard letters only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Other filter thresholds for \u00b5 = 1, 0.5, and 0 were also tested and led to very similar accuracy values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The corpus, which was also used byKoehn, Och, and Marcu (2003), is available at http://www.isi.edu/koehn/europarl/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank the anonymous referees of Computational Linguistics. Their remarks and suggestions helped to improve the contents and presentation of the article. Special thanks to Annette Gotscharek and Uli Reffle for all their help.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Efficient error-correcting viterbi parsing",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Amengual",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Carlos",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vidal",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Transactions on PAMI",
"volume": "20",
"issue": "10",
"pages": "1--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amengual, Juan Carlos and Enrique Vidal. 1998. Efficient error-correcting viterbi parsing. IEEE Transactions on PAMI, 20(10):1-109.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BootCaT: Bootstrapping corpora and terms from the web",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Bernardini",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of LREC 2004",
"volume": "",
"issue": "",
"pages": "1313--1316",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baroni, Marco and Silvia Bernardini. 2004. BootCaT: Bootstrapping corpora and terms from the web. In Proceedings of LREC 2004, pages 1313-1316, Lisbon.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Generating translation lexica from multilingual texts",
"authors": [
{
"first": "Sotiris",
"middle": [],
"last": "Boutsis",
"suffix": ""
},
{
"first": "Stelious",
"middle": [],
"last": "Piperidis",
"suffix": ""
},
{
"first": "Iason",
"middle": [],
"last": "Demiros",
"suffix": ""
}
],
"year": 1999,
"venue": "Applied Artificial Intelligence",
"volume": "13",
"issue": "6",
"pages": "583--606",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boutsis, Sotiris, Stelious Piperidis, and Iason Demiros. 1999. Generating translation lexica from multilingual texts. Applied Artificial Intelligence, 13(6):583-606.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Brin",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Page",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "30",
"issue": "",
"pages": "107--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brin, Sergey and Lawrence Page. 1998. The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30:107-117.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Retrieval of authentic documents for reader-specific lexical practice",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the InSTIL/ICALL2004 Symposium on Computer Aided Language Learning",
"volume": "",
"issue": "",
"pages": "25--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, Jonathan and Maxine Eskenazi. 2004. Retrieval of authentic documents for reader-specific lexical practice. In Proceedings of the InSTIL/ICALL2004 Symposium on Computer Aided Language Learning, pages 25-28, Venice.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Introduction to the special issue on computational linguistics using large corpora",
"authors": [
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "Frederick",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": ";",
"middle": [
"W"
],
"last": "",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of Sixth European Conference on Speech Communication and Technology (EUROSPEECH'99)",
"volume": "19",
"issue": "",
"pages": "1--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelba, Ciprian and Frederick Jelinek. 2002. Recognition performance of a structured language model. In Proceedings of Sixth European Conference on Speech Communication and Technology (EUROSPEECH'99), pages 1567-1570, Budapest. Church, Kenneth W. and Robert L. Mercer. 1993. Introduction to the special issue on computational linguistics using large corpora. Computational Linguistics, 19(1):1-24.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "User assessment of a visual Web genre classifier",
"authors": [
{
"first": "Maya",
"middle": [],
"last": "Dimitrova",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Kushmerick",
"suffix": ""
},
{
"first": "Petia",
"middle": [],
"last": "Radeva",
"suffix": ""
},
{
"first": "Joan",
"middle": [
"Jose"
],
"last": "Villanueva",
"suffix": ""
}
],
"year": 2003,
"venue": "Third International Conference on Visualization, Imaging, and Image Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitrova, Maya, Nicholas Kushmerick, Petia Radeva, and Joan Jose Villanueva. 2003. User assessment of a visual Web genre classifier. In Third International Conference on Visualization, Imaging, and Image Processing, Malaga.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Accurate models for the statistics of surprise and coincidence",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "61--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dunning, Ted. 1993. Accurate models for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61-74.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning to classify documents according to genre",
"authors": [
{
"first": "Aidan",
"middle": [],
"last": "Finn",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Kushmerick",
"suffix": ""
}
],
"year": 2003,
"venue": "IJCAI-03 Workshop on Computational Approaches to Text Style and Synthesis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finn, Aidan and Nicholas Kushmerick. 2003. Learning to classify documents according to genre. In IJCAI-03 Workshop on Computational Approaches to Text Style and Synthesis, Acapulco. Journal of the American Society for Information Science and Technology (in press).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Facilitating the compilation and dissemination of ad-hoc web corpora",
"authors": [
{
"first": "William",
"middle": [
"H"
],
"last": "Fletcher",
"suffix": ""
}
],
"year": 2004,
"venue": "Corpora and Language Learners, number 17 in Studies in Corpus Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fletcher, William H. 2004a. Facilitating the compilation and dissemination of ad-hoc web corpora. In Guy Aston, Silvia Bernardini, and Dominic Stewart, editors, Corpora and Language Learners, number 17 in Studies in Corpus Linguistics. John Benjamins Publishing Company, Amsterdam.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Making the web more useful as a source for linguistic corpora",
"authors": [
{
"first": "William",
"middle": [
"H"
],
"last": "Fletcher",
"suffix": ""
}
],
"year": 2002,
"venue": "Corpus Linguistics in North America",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fletcher, William H. 2004b. Making the web more useful as a source for linguistic corpora. In U. Connor and T. Upton, editors, Corpus Linguistics in North America 2002. Rodopi, Amsterdam.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Term recognition in biological science journal articles",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Demetriou",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Humphreys",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Workshop on Computational Terminology for Medical and Biological Applications, 2nd International Conference on Natural Language Processing (NLP-2000)",
"volume": "",
"issue": "",
"pages": "37--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaizauskas, Robert, George Demetriou, and Kevin Humphreys. 2000. Term recognition in biological science journal articles. In Proceedings of the Workshop on Computational Terminology for Medical and Biological Applications, 2nd International Conference on Natural Language Processing (NLP-2000), pages 37-44, Patras.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Identifying word correspondences in parallel texts",
"authors": [
{
"first": "William",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of Fourth DARPA Workshop on Speech and Natural Language",
"volume": "",
"issue": "",
"pages": "152--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gale, William A. and Kenneth W. Church. 1991. Identifying word correspondences in parallel texts. In Proceedings of Fourth DARPA Workshop on Speech and Natural Language, pages 152-157, Pacific Grove, CA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Extraktion von semantischer Information aus Layout-orientierten Daten. Master's thesis",
"authors": [
{
"first": "Hans-J\u00fcrgen",
"middle": [],
"last": "Gartner",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gartner, Hans-J\u00fcrgen. 2003. Extraktion von semantischer Information aus Layout-orientierten Daten. Master's thesis, Technical University of Graz.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Use of syntactic context to produce term association lists for text retrieval",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "89--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grefenstette, Gregory. 1992. Use of syntactic context to produce term association lists for text retrieval. In Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 89-97, Copenhagen.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The WWW as a resource for example-based MT tasks",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grefenstette, Gregory. 1999. The WWW as a resource for example-based MT tasks. Paper presented at ASLIB \"Translating and the Computer\" conference, London.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Very large lexicons",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics in the Netherlands 2000: Selected Papers from the Eleventh CLIN Meeting, Language and Computers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grefenstette, Gregory. 2001. Very large lexicons. In Computational Linguistics in the Netherlands 2000: Selected Papers from the Eleventh CLIN Meeting, Language and Computers, Amsterdam.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Electronic lexica and corpora research at CIS",
"authors": [
{
"first": "Franz",
"middle": [],
"last": "Guenthner",
"suffix": ""
}
],
"year": 1996,
"venue": "International Journal of Corpus Linguistics",
"volume": "1",
"issue": "2",
"pages": "287--301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guenthner, Franz. 1996. Electronic lexica and corpora research at CIS. International Journal of Corpus Linguistics, 1(2):287-301.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Statistical Methods for Speech Recognition",
"authors": [
{
"first": "Frederick",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek, Frederick. 1997. Statistical Methods for Speech Recognition. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "WebCorp: Applying the web to linguistics and linguistics to the Web",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Kehoe",
"suffix": ""
},
{
"first": "Antoinette",
"middle": [],
"last": "Renouf",
"suffix": ""
}
],
"year": 2002,
"venue": "Poster Proceedings of the 11th International World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kehoe, Andrew and Antoinette Renouf. 2002. WebCorp: Applying the web to linguistics and linguistics to the Web. In Poster Proceedings of the 11th International World Wide Web Conference, WWW02, Honolulu.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Automated detection of text genre",
"authors": [
{
"first": "Brett",
"middle": [],
"last": "Kessler",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Nunberg",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and the 8th Meeting of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "32--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kessler, Brett, Geoffrey Nunberg, and Hinrich Sch\u00fctze. 1997. Automated detection of text genre. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and the 8th Meeting of the European Chapter of the Association for Computational Linguistics, pages 32-38, Madrid.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference (HLT/NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, Philipp, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference (HLT/NAACL), Edmonton. Kukich, Karen. 1992. Techniques for automatically correcting words in texts. ACM Computing Surveys, 24(4):377-439.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Building a MT dictionary from parallel texts based on linguistic and statistical information",
"authors": [
{
"first": "Akira",
"middle": [],
"last": "Kumano",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Hirakawa",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th International Conference on Computational Linguistics (COLING'94)",
"volume": "",
"issue": "",
"pages": "76--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumano, Akira and Hideki Hirakawa. 1994. Building a MT dictionary from parallel texts based on linguistic and statistical information. In Proceedings of the 15th International Conference on Computational Linguistics (COLING'94), pages 76-81, Kyoto.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "An algorithm for finding noun phrase correspondences in bilingual corpora",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Kupiec",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics (ACL'93)",
"volume": "",
"issue": "",
"pages": "17--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kupiec, Julian. 1993. An algorithm for finding noun phrase correspondences in bilingual corpora. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics (ACL'93), pages 17-22, Columbus, OH.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Extracting classification knowledge of internet documents with mining term associations: A semantic approach",
"authors": [
{
"first": "Shian-Hua",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Chi-Sheng",
"middle": [],
"last": "Shih",
"suffix": ""
},
{
"first": "Meng",
"middle": [
"Chang"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Jan-Ming",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Ming-Tat",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Yueh-Ming",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 21st International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "241--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, Shian-Hua, Chi-Sheng Shih, Meng Chang Chen, Jan-Ming Ho, Ming-Tat Ko, and Yueh-Ming Huang. 1998. Extracting classification knowledge of internet documents with mining term associations: A semantic approach. In Proceedings of the 21st International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 241-249, Melbourne, Australia.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Lexikon und automatische Lemmatisierung",
"authors": [
{
"first": "Petra",
"middle": [],
"last": "Maier-Meyer",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maier-Meyer, Petra. 1995. Lexikon und automatische Lemmatisierung. Ph.D. thesis, CIS, University of Munich.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Linguistic research with the XML/RDF aware WebCorp tool",
"authors": [
{
"first": "Barry",
"middle": [],
"last": "Morley",
"suffix": ""
},
{
"first": "Antoinette",
"middle": [],
"last": "Renouf",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Kehoe",
"suffix": ""
}
],
"year": 2003,
"venue": "Poster Proceedings of the 12th International World Wide Web Conference, WWW03",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morley, Barry, Antoinette Renouf, and Andrew Kehoe. 2003. Linguistic research with the XML/RDF aware WebCorp tool. In Poster Proceedings of the 12th International World Wide Web Conference, WWW03, Budapest.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Stochastic language generation for spoken dialogue systems",
"authors": [
{
"first": "Alice",
"middle": [
"H"
],
"last": "Oh",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"I"
],
"last": "Rudnicky",
"suffix": ""
}
],
"year": 2000,
"venue": "ANLP/NAACL 2000 Workshop on Conversational Systems",
"volume": "",
"issue": "",
"pages": "27--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oh, Alice H. and Alexander I. Rudnicky. 2000. Stochastic language generation for spoken dialogue systems. In ANLP/NAACL 2000 Workshop on Conversational Systems, pages 27-32, Seattle.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "From HMMs to segment models: A unified view of stochastic modeling for speech recognition",
"authors": [
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
},
{
"first": "Vassilios",
"middle": [
"V"
],
"last": "Digalakis",
"suffix": ""
},
{
"first": "Owen",
"middle": [
"A"
],
"last": "Kimball",
"suffix": ""
}
],
"year": 1996,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "4",
"issue": "5",
"pages": "360--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ostendorf, Mari, Vassilios V. Digalakis, and Owen A. Kimball. 1996. From HMMs to segment models: A unified view of stochastic modeling for speech recognition. IEEE Transactions on Speech and Audio Processing, 4(5):360-378.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The web as a parallel corpus",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics, Special Issue on the Web as Corpus",
"volume": "29",
"issue": "3",
"pages": "349--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Resnik, Philip and Noah A. Smith. 2003. The web as a parallel corpus. Computational Linguistics, Special Issue on the Web as Corpus, 29(3):349-380.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "OCR-Korrektur und Bestimmung von Levenshtein-Gewichten. Master's thesis",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Ringlstetter",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ringlstetter, Christoph. 2003. OCR- Korrektur und Bestimmung von Levenshtein-Gewichten. Master's thesis, LMU, University of Munich.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Dynamic language learning tools",
"authors": [
{
"first": "Lee",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Takako",
"middle": [],
"last": "Aikawa",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Pahud",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the InSTIL/ICALL2004 Symposium on Computer Aided Language Learning",
"volume": "",
"issue": "",
"pages": "107--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schwartz, Lee, Takako Aikawa, and Michel Pahud. 2004. Dynamic language learning tools. In Proceedings of the InSTIL/ICALL2004 Symposium on Computer Aided Language Learning, pages 107-110, Venice.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Automatically extracting and representing collocations for language generation",
"authors": [
{
"first": "Frank",
"middle": [
"A"
],
"last": "Smadja",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "McKeown",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "252--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smadja, Frank A. and Kathleen R. McKeown. 1990. Automatically extracting and representing collocations for language generation. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, pages 252-259, Pittsburgh, PA. Sornlertlamvanich, Virach and Hozumi Tanaka. 1996. The automatic extraction of open compounds from text corpora. In Proceedings of the 16th Conference on Computational Linguistics, pages 1143-1146, Copenhagen.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Lexical postcorrection of OCR-results: The web as a dynamic secondary dictionary?",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Strohmaier",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Ringlstetter",
"suffix": ""
},
{
"first": "Klaus",
"middle": [
"U"
],
"last": "Schulz",
"suffix": ""
},
{
"first": "Stoyan",
"middle": [],
"last": "Mihov",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh International Conference on Document Analysis and Recognition (ICDAR 03)",
"volume": "",
"issue": "",
"pages": "1133--1137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strohmaier, Christian, Christoph Ringlstetter, Klaus U. Schulz, and Stoyan Mihov. 2003a. Lexical postcorrection of OCR-results: The web as a dynamic secondary dictionary? In Proceedings of the Seventh International Conference on Document Analysis and Recognition (ICDAR 03), pages 1133-1137, Edinburgh.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Recognizing acronyms and their definitions",
"authors": [
{
"first": "Kazem",
"middle": [],
"last": "Taghva",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Gilbreth",
"suffix": ""
}
],
"year": 1999,
"venue": "International Journal on Document Analysis and Recognition",
"volume": "1",
"issue": "4",
"pages": "191--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "2003b. A visual and interactive tool for optimizing lexical postcorrection of OCR results. In Proceedings of the IEEE Workshop on Document Image Analysis and Recognition, DIAR'03, Madison, WI. Taghva, Kazem and Jeff Gilbreth. 1999. Recognizing acronyms and their definitions. International Journal on Document Analysis and Recognition, 1(4):191-198.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "The use of machine-readable dictionaries in sublanguage analysis",
"authors": [
{
"first": "Donald",
"middle": [
"E"
],
"last": "Walker",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"A"
],
"last": "Amsler",
"suffix": ""
}
],
"year": 1986,
"venue": "Analyzing Language in Restricted Domains: Sublanguage Description and Processing",
"volume": "",
"issue": "",
"pages": "69--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Walker, Donald E. and Robert A. Amsler. 1986. The use of machine-readable dictionaries in sublanguage analysis. In Analyzing Language in Restricted Domains: Sublanguage Description and Processing. Lawrence Erlbaum, Hillsdale, NJ, pages 69-83.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "wEBMT: Developing and validating an example-based machine translation system using the World Wide Web",
"authors": [
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
},
{
"first": "Nano",
"middle": [],
"last": "Gough",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics, Special Issue on the Web as Corpus",
"volume": "29",
"issue": "3",
"pages": "421--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Way, Andy and Nano Gough. 2003. wEBMT: Developing and validating an example-based machine translation system using the World Wide Web. Computational Linguistics, Special Issue on the Web as Corpus, 29(3):421-458.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Using compression to identify acronyms in text",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Yeates",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bainbridge",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Conference on Data Compression",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yeates, Stuart, David Bainbridge, and Ian H. Witten. 2000. Using compression to identify acronyms in text. In Proceedings of the Conference on Data Compression, page 582, Snowbird, UT.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Percentage of documents in the four quality classes for the primary (left-hand side) and secondary (right-hand side) English corpora. The four quality classes cover distinct error rates for orthographic errors.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Percentage of documents in the four quality classes for the primary (left-hand side) and secondary (right-hand side) German corpora. The four quality classes cover distinct error rates for orthographic errors.",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Distribution of error rates for spelling errors in the primary English (left diagram, mean error rate 0.39) and German (right diagram, mean error rate 0.45) general HTML corpora. In the left (right) diagram, one document with error rate 14.95 (11.31) is omitted.",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Distribution of error rates for OCR errors in the primary English (left diagram, mean error rate 0.06) and the German (right diagram, mean error rate 0.13) general HTML corpora.",
"num": null
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": "Distribution of error rates for e-transformation in the primary German general HTML corpus. Mean: 0.62. Here 7 documents with error rates ranging from 13.16 to 34.10 are omitted.",
"num": null
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"text": "Distribution of error rates for -transformation in the primary German general HTML corpus. Mean 0.19.",
"num": null
},
"FIGREF6": {
"type_str": "figure",
"uris": null,
"text": "Distribution of error rates in documents (passed/rejected) by the filter F3 for threshold \u00b5 = 5 (English test corpus). The left (right) diagram describes the distribution of documents passed (rejected) by the filter. The average error rate for accepted (rejected) documents is 1.08 (16.81).",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>Dictionary</td><td>Number of entries</td></tr><tr><td>D(English)</td><td>315,300</td></tr><tr><td>D(German)</td><td>2,235,136</td></tr><tr><td>D(French)</td><td>85,895</td></tr><tr><td>D(Spanish)</td><td>69,634</td></tr><tr><td>D(Geos)</td><td>195,700</td></tr><tr><td>D(Names)</td><td>372,628</td></tr><tr><td>D(Abbreviations)</td><td>2,375</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Size of background dictionaries."
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>Word</td><td>Google hits</td><td>Transformation</td><td>Misspelled variant</td><td>Google hits</td></tr><tr><td>accommodate</td><td>5,800,000</td><td>mm \u2192 m</td><td>accomodate</td><td>559,000</td></tr><tr><td>category</td><td>109,000,000</td><td>teg \u2192 tag</td><td>catagory</td><td>525,000</td></tr><tr><td>definitely</td><td>10,800,000</td><td>itely \u2192 ately</td><td>definately</td><td>1,270,000</td></tr><tr><td>independent</td><td>25,700,000</td><td>dent \u2192 dant</td><td>independant</td><td>523,000</td></tr><tr><td>millennium</td><td>10,500,000</td><td>nn \u2192 n</td><td>millenium</td><td>2,540,000</td></tr><tr><td>occurrence</td><td>4,640,000</td><td>rr \u2192 r</td><td>occurence</td><td>279,000</td></tr><tr><td>receive</td><td>57,000,000</td><td>ie \u2192 ei</td><td>recieve</td><td>1,260,000</td></tr><tr><td>recommend</td><td>31,400,000</td><td>mm \u2192 m</td><td>recomend</td><td>707,000</td></tr><tr><td>separate</td><td>26,300,000</td><td>ara \u2192 era</td><td>seperate</td><td>1,340,000</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Some frequently misspelled English words and the number of Google hits of the correct and misspelled forms."
},
"TABREF3": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Deletion of doubled consonants</td><td/><td/></tr><tr><td>cc</td><td>\u2192 c</td><td>occasionally</td><td>\u2192 ocasionally</td></tr><tr><td>nn</td><td>\u2192 n</td><td>drunkenness</td><td>\u2192 drunkeness</td></tr><tr><td colspan=\"2\">Deletion of consecutive consonants</td><td/><td/></tr><tr><td>mn</td><td>\u2192 m</td><td>column</td><td>\u2192 colum</td></tr><tr><td>rh</td><td>\u2192 r</td><td>rhythm</td><td>\u2192 rythm</td></tr><tr><td colspan=\"2\">Deletion of doubled vowels</td><td/><td/></tr><tr><td>ee</td><td>\u2192 e</td><td>exceed</td><td>\u2192 exced</td></tr><tr><td>uu</td><td>\u2192 u</td><td>vacuum</td><td>\u2192 vacum</td></tr><tr><td colspan=\"2\">Deletion in vowel pair</td><td/><td/></tr><tr><td>aison</td><td>\u2192 ason</td><td>liaison</td><td>\u2192 liason</td></tr><tr><td>ou</td><td>\u2192 o</td><td>mischievous</td><td>\u2192 mischievos</td></tr><tr><td>ievous</td><td>\u2192 evious</td><td>mischievous</td><td>\u2192 mischevious</td></tr><tr><td colspan=\"2\">Deletion of silent vowels</td><td/><td/></tr><tr><td>?ed</td><td>\u2192 ?d</td><td>maintained</td><td>\u2192 maintaind</td></tr><tr><td colspan=\"2\">Substitution of consonants</td><td/><td/></tr><tr><td>sede</td><td>\u2194 cede</td><td>supersede</td><td>\u2192 supercede</td></tr><tr><td>dent</td><td>\u2194 dant</td><td>independent</td><td>\u2192 independant</td></tr><tr><td colspan=\"2\">Substitution of vowels</td><td/><td/></tr><tr><td>itely</td><td>\u2192 ately</td><td>definitely</td><td>\u2192 definately</td></tr><tr><td>teg</td><td>\u2192 tag</td><td>category</td><td>\u2192 catagory</td></tr><tr><td colspan=\"2\">Insertion/reduplication of consonants</td><td/><td/></tr><tr><td colspan=\"2\">\u03ba \u2208 {c,d,f,l,n,m,p,r,s,t} \u2192 \u03ba\u03ba</td><td>always</td><td>\u2192 allways</td></tr><tr><td colspan=\"2\">Transposition of consonants</td><td/><td/></tr><tr><td>ght</td><td>\u2192 gth</td><td>right</td><td>\u2192 rigth</td></tr><tr><td colspan=\"2\">Transposition of vowels</td><td/><td/></tr><tr><td>ie</td><td>\u2194 ei</td><td>believe</td><td>\u2192 beleive</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Rule set (incomplete) for the generation of English spelling errors with examples for each transformation class."
},
"TABREF4": {
"html": null,
"content": "<table><tr><td>Word</td><td>Google hits</td><td>Transformation</td><td>Misspelled version</td><td>Google hits</td></tr><tr><td>Weihnachten</td><td>5,450,000</td><td>ih \u2192 i</td><td>Weinachten</td><td>99,600</td></tr><tr><td>Adresse</td><td>8,040,000</td><td>d \u2192 dd</td><td>Addresse</td><td>676,000</td></tr><tr><td>Videothek</td><td>581,000</td><td>th \u2192 t</td><td>Videotek</td><td>18,300</td></tr><tr><td>Kamera</td><td>10,900,000</td><td>m \u2192 mm</td><td>Kammera</td><td>14,200</td></tr><tr><td>deshalb</td><td>8,330,000</td><td>s \u2192 ss</td><td>desshalb</td><td>33,900</td></tr><tr><td>ziemlich</td><td>2,970,000</td><td>i \u2192 ih</td><td>ziehmlich</td><td>48,900</td></tr><tr><td>ekelig</td><td>20,600</td><td>lig \u2192 lich</td><td>ekelich</td><td>17,200</td></tr><tr><td>n\u00e4mlich</td><td>1,620,000</td><td>\u00e4 \u2192 \u00e4h</td><td>n\u00e4hmlich</td><td>53,800</td></tr><tr><td>Maschine</td><td>1,840,000</td><td>i \u2192 ie</td><td>Maschiene</td><td>28,300</td></tr><tr><td>direkt</td><td>18,200,000</td><td>ek \u2192 eck</td><td>direckt</td><td>20,600</td></tr><tr><td>danach</td><td>5,100,000</td><td>n \u2192 nn</td><td>dannach</td><td>46,200</td></tr><tr><td>voraus</td><td>1,960,000</td><td>r \u2192 rr</td><td>vorraus</td><td>214,000</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Some frequently misspelled German words and the number of Google hits of the misspelled version."
},
"TABREF5": {
"html": null,
"content": "<table><tr><td>Word</td><td>Transformation</td><td>Garbled result</td><td>Google hits</td></tr><tr><td>company</td><td>m \u2192 rn</td><td>cornpany</td><td>1,220</td></tr><tr><td>from</td><td>m \u2192 rn</td><td>frorn</td><td>5,310</td></tr><tr><td>government</td><td>m \u2192 rn</td><td>governrnent</td><td>705</td></tr><tr><td>many</td><td>m \u2192 rn</td><td>rnany</td><td>541</td></tr><tr><td>market</td><td>m \u2192 rn</td><td>rnarket</td><td>282</td></tr><tr><td>more</td><td>m \u2192 rn</td><td>rnore</td><td>707</td></tr><tr><td>most</td><td>m \u2192 rn</td><td>rnost</td><td>1,540</td></tr><tr><td>only</td><td>y \u2192 v</td><td>onlv</td><td>4,080</td></tr><tr><td>said</td><td>d \u2192 cl</td><td>saicl</td><td>172</td></tr><tr><td>system</td><td>m \u2192 rn</td><td>systern</td><td>2,060</td></tr><tr><td>time</td><td>m \u2192 rn</td><td>tirne</td><td>2,090</td></tr><tr><td>will</td><td>ll \u2192 11</td><td>wi11</td><td>3,570</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Some members of the top 1,000 most frequent English words transformed by typical OCR error transformations and the number of Google hits of a garbled version."
},
"TABREF6": {
"html": null,
"content": "<table><tr><td>Word</td><td>Transformation</td><td>Garbled result</td><td>Google hits</td></tr><tr><td>Dipl-Ing</td><td>l \u2192 i</td><td>Dipi-Ing</td><td>213</td></tr><tr><td>\u00fcber</td><td>\u00fc \u2192 ii</td><td>iiber</td><td>2,360</td></tr><tr><td>vorne</td><td>rn \u2192 m</td><td>vome</td><td>1,110</td></tr><tr><td>davon</td><td>o \u2192 p</td><td>davpn</td><td>96</td></tr><tr><td>lager</td><td>g \u2192 q</td><td>laqer</td><td>164</td></tr><tr><td>ferner</td><td>rn \u2192 m</td><td>femer</td><td>841</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Some members of the top 1,000 most frequent German words transformed by typical OCR error transformations and the number of Google hits of a garbled version."
},
"TABREF7": {
"html": null,
"content": "<table><tr><td>Word</td><td>Norm.</td><td>Transf.</td><td>Percentage</td><td>PDF norm.</td><td>PDF transf.</td><td>Percentage</td></tr><tr><td>f\u00fcr</td><td>19,000,000</td><td>5,140,000</td><td>27.05</td><td>4,050,000</td><td>30,900</td><td>0.76</td></tr><tr><td>\u00fcber</td><td>17,800,000</td><td>2,330,000</td><td>13.08</td><td>3,610,000</td><td>16,000</td><td>0.44</td></tr><tr><td>k\u00f6nnen</td><td>14,500,000</td><td>290,000</td><td>2.00</td><td>1,790,000</td><td>3,960</td><td>0.22</td></tr><tr><td>m\u00fcssen</td><td>7,420,000</td><td>177,000</td><td>2.38</td><td>1,090,000</td><td>2,060</td><td>0.18</td></tr><tr><td>w\u00e4re</td><td>3,500,000</td><td>173,000</td><td>4.94</td><td>590,000</td><td>631</td><td>0.11</td></tr><tr><td>f\u00fcnf</td><td>2,470,000</td><td>291,000</td><td>11.78</td><td>541,000</td><td>570</td><td>0.10</td></tr><tr><td>k\u00f6nnte</td><td>2,900,000</td><td>165,000</td><td>5.69</td><td>570,000</td><td>618</td><td>0.11</td></tr><tr><td>h\u00e4tten</td><td>815,000</td><td>43,100</td><td>5.28</td><td>234,000</td><td>315</td><td>0.13</td></tr><tr><td>daf\u00fcr</td><td>3,580,800</td><td>124,000</td><td>3.46</td><td>814,000</td><td>865</td><td>0.11</td></tr><tr><td>w\u00fcrde</td><td>3,770,000</td><td>162,000</td><td>4.30</td><td>601,000</td><td>693</td><td>0.11</td></tr><tr><td colspan=\"7\">Table 12: Size of error dictionaries.</td></tr><tr><td>Error dictionary</td><td>Entries</td><td>Error dictionary</td><td>Entries</td></tr><tr><td>D err (English,typing)</td><td>9,427,051</td><td>D err (German,typing)</td><td>13,656,866</td></tr><tr><td>D err (English,spell)</td><td>1,202,997</td><td>D err (German,spell)</td><td>18,970,716</td></tr><tr><td>D err (English,ocr)</td><td>1,532,741</td><td>D err (German,ocr)</td><td>10,608,635</td></tr><tr><td/><td/><td>D err (German,enc-e)</td><td>432,987</td></tr><tr><td/><td/><td>D err (German,enc-)</td><td>407,013</td></tr><tr><td/><td/><td>D err (German,enc-s)</td><td>42,340</td></tr><tr><td>D err (English,all)</td><td>11,884,284</td><td>D err (German,all)</td><td>43,688,771</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Most frequent German words with the vowels \u00e4, \u00f6, \u00fc; frequencies of the correct spelling and frequencies after applying the e-transformation. Frequencies are counted in arbitrary Web pages (left part of the table) and in PDF documents on the Web."
},
"TABREF8": {
"html": null,
"content": "<table><tr><td>Document class</td><td>Documents</td><td>Errors found</td><td>Entries of error dict.</td><td>%</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Underproduction of the maximal error dictionary in the primary English general HTML corpus."
},
"TABREF9": {
"html": null,
"content": "<table><tr><td>Document class</td><td>Best</td><td>Good</td><td>Bad</td><td>Worst</td></tr><tr><td>Hits</td><td colspan=\"3\">1,000 1,000 1,000</td><td>1,000</td></tr><tr><td>Percentage proper errors</td><td>72</td><td>86</td><td>89</td><td>95</td></tr><tr><td>Proper errors</td><td>722</td><td>856</td><td>894</td><td>952</td></tr><tr><td>Standard words</td><td>206</td><td>31</td><td>21</td><td>5</td></tr><tr><td>Personal names and geographic entities</td><td>23</td><td>35</td><td>24</td><td>27</td></tr><tr><td>Foreign language expressions</td><td>32</td><td>42</td><td>36</td><td>12</td></tr><tr><td>Archaic and literary word forms</td><td>9</td><td>28</td><td>1</td><td>1</td></tr><tr><td>Abbreviations</td><td>8</td><td>6</td><td>24</td><td>2</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Overproduction of the maximal error dictionary in the English general HTML corpus."
},
"TABREF10": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Document class</td><td colspan=\"2\">Best Good</td><td colspan=\"2\">Bad Worst</td></tr><tr><td>Hits</td><td/><td>1,000</td><td colspan=\"2\">1,000 1,000</td><td>1,000</td></tr><tr><td colspan=\"2\">Percentage proper errors</td><td>61</td><td>62</td><td>56</td><td>88</td></tr><tr><td colspan=\"2\">Proper errors</td><td>615</td><td>624</td><td>564</td><td>884</td></tr><tr><td colspan=\"2\">Standard words</td><td>126</td><td>123</td><td>47</td><td>3</td></tr><tr><td colspan=\"2\">Names and geos</td><td>201</td><td>147</td><td>193</td><td>49</td></tr><tr><td colspan=\"2\">Foreign language expressions</td><td>31</td><td>46</td><td>103</td><td>37</td></tr><tr><td colspan=\"2\">Archaic and literary word forms</td><td>18</td><td>44</td><td>82</td><td>24</td></tr><tr><td colspan=\"2\">Abbreviations</td><td>9</td><td>16</td><td>11</td><td>3</td></tr><tr><td>English</td><td/><td colspan=\"2\">German</td><td/></tr><tr><td>Best</td><td>0.72/0.5029 = 1.43</td><td>Best</td><td colspan=\"3\">0.61/0.4833 = 1.26</td></tr><tr><td>Good</td><td>0.86/0.6221 = 1.38</td><td>Good</td><td colspan=\"3\">0.62/0.5221 = 1.19</td></tr><tr><td>Bad</td><td>0.89/0.6753 = 1.32</td><td>Bad</td><td colspan=\"3\">0.56/0.6084 = 0.92</td></tr><tr><td>Worst</td><td>0.95/0.6693 = 1.42</td><td>Worst</td><td colspan=\"3\">0.88/0.7892 = 1.12</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Naive estimates of the ratio between the real number of errors and the number of hits of the error dictionaries for distinct quality classes."
},
"TABREF11": {
"html": null,
"content": "<table><tr><td colspan=\"7\">Document class Best Good Bad Worst Best 80% Best 90%</td><td>All</td></tr><tr><td>E (1)</td><td>0.30</td><td>2.31</td><td>8.83</td><td>23.23</td><td>0.67</td><td>1.06</td><td>2.47</td></tr><tr><td>E (2)</td><td>0.27</td><td>2.19</td><td>6.77</td><td>21.61</td><td>0.68</td><td>1.03</td><td>2.24</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Mean error rate for arbitrary orthographic errors in various document classes; results for the general English HTML corpus. Distribution"
},
"TABREF12": {
"html": null,
"content": "<table><tr><td colspan=\"7\">Document class Best Good Bad Worst Best 80% Best 90%</td><td>All</td></tr><tr><td>G (1)</td><td>0.41</td><td>2.61</td><td>7.30</td><td>15.15</td><td>1.89</td><td>2.58</td><td>3.86</td></tr><tr><td>G (2)</td><td>0.48</td><td>2.57</td><td>7.21</td><td>24.38</td><td>2.40</td><td>3.09</td><td>5.40</td></tr><tr><td>Table 20</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"8\">Mean of error rates for all error types in primary and secondary general HTML corpora.</td></tr><tr><td>Error type</td><td colspan=\"7\">Mean error rate Mean error rate Mean error rate Mean error rate</td></tr><tr><td/><td colspan=\"3\">English corpus</td><td colspan=\"4\">English corpus German corpus German corpus</td></tr><tr><td/><td colspan=\"2\">HTML (1)</td><td/><td>HTML (2)</td><td/><td>HTML (1)</td><td>HTML (2)</td></tr><tr><td>arbitrary</td><td/><td>2.47</td><td/><td>2.24</td><td/><td>3.86</td><td>5.40</td></tr><tr><td>typographic</td><td/><td>2.31</td><td/><td>2.03</td><td/><td>2.15</td><td>2.79</td></tr><tr><td>spelling</td><td/><td>0.39</td><td/><td>0.38</td><td/><td>0.45</td><td>0.58</td></tr><tr><td>OCR</td><td/><td>0.06</td><td/><td>0.07</td><td/><td>0.13</td><td>0.18</td></tr><tr><td>e-transformation</td><td/><td>0.003</td><td/><td>0.004</td><td/><td>0.62</td><td>1.40</td></tr><tr><td>-transformation</td><td/><td>0.02</td><td/><td>0.01</td><td/><td>0.19</td><td>0.24</td></tr><tr><td>s-transformation</td><td/><td>0.00003</td><td/><td>0.00</td><td/><td>0.76</td><td>0.96</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Mean error rate for arbitrary orthographic errors in various document classes; results for the general German HTML corpus."
},
"TABREF13": {
"html": null,
"content": "<table><tr><td>German</td><td colspan=\"2\">Orthographic errors Spelling errors</td></tr><tr><td>General PDF</td><td>3.95 (2.31)</td><td>0.41 (0.06)</td></tr><tr><td>Neurology G (1) (HTML)</td><td>6.94 (4.48)</td><td>0.51 (0.26)</td></tr><tr><td>General HTML (1)</td><td>3.86 (1.81)</td><td>0.45 (0.16)</td></tr><tr><td>Holocaust G (HTML)</td><td>4.97 (3.03)</td><td>0.50 (0.27)</td></tr><tr><td>Mushrooms G (1) (HTML)</td><td>7.91 (3.69)</td><td>0.78 (0.32)</td></tr><tr><td>Middle Ages G (HTML)</td><td>7.84 (4.30)</td><td>0.96 (0.38)</td></tr><tr><td>Fish G (1) (HTML)</td><td>9.34 (4.47)</td><td>1.35 (0.52)</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Mean error rates for orthographic errors and spelling errors in thematic German corpora."
},
"TABREF14": {
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">Orthographic errors</td><td colspan=\"2\">Spelling errors</td></tr><tr><td/><td>(1)</td><td>(2)</td><td>(1)</td><td>(2)</td></tr><tr><td>English</td><td colspan=\"4\">Simple crawl Refined crawl Simple crawl Refined crawl</td></tr><tr><td>Fish E</td><td>7.08 (2.72)</td><td>3.39 (0.35)</td><td>0.98 (0.27)</td><td>0.47 (0)</td></tr><tr><td>Mushrooms E</td><td>4.10 (1.49)</td><td>2.58 (0.32)</td><td>0.52 (0.13)</td><td>0.50 (0)</td></tr><tr><td>Neurology E</td><td>2.05 (0.79)</td><td>1.77 (0.25)</td><td>0.30 (0.05)</td><td>0.26 (0)</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Dependency of mean error rates on the crawling strategy for distinct English thematic corpora."
},
"TABREF15": {
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">Orthographic errors</td><td colspan=\"2\">Spelling errors</td></tr><tr><td/><td>(1)</td><td>(2)</td><td>(1)</td><td>(2)</td></tr><tr><td>German</td><td colspan=\"4\">Simple crawl Refined crawl Simple crawl Refined crawl</td></tr><tr><td>Fish G</td><td>9.34 (4.67)</td><td>7.71 (3.31)</td><td>1.35 (0.52)</td><td>1.00 (0.17)</td></tr><tr><td>Mushrooms G</td><td>7.91 (3.69)</td><td>8.51 (3.50)</td><td>0.78 (0.32)</td><td>0.76 (0.08)</td></tr><tr><td>Neurology G</td><td>6.94 (4.48)</td><td>7.08 (2.86)</td><td>0.51 (0.26)</td><td>0.47 (0.00)</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Dependency of mean error rates on the crawling strategy for distinct German thematic corpora."
},
"TABREF17": {
"html": null,
"content": "<table><tr><td>(%)</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Mean error rates (estimates) for distinct document genres in seven corpora."
},
"TABREF19": {
"html": null,
"content": "<table><tr><td>Entry of error list</td><td>Correct word</td><td>Error frequency</td></tr><tr><td>Universitaet</td><td>Universit\u00e4t</td><td>131,494</td></tr><tr><td>grossen</td><td>gro\u00dfen</td><td>107,904</td></tr><tr><td>koennen</td><td>k\u00f6nnen</td><td>107,730</td></tr><tr><td>knnen</td><td>k\u00f6nnen (kennen?)</td><td>87,167</td></tr><tr><td>heisst</td><td>hei\u00dft</td><td>76,667</td></tr><tr><td colspan=\"2\">andern\u00e4ndern (anderen?)</td><td>73,972</td></tr><tr><td>Gruss</td><td>Gru\u00df</td><td>51,721</td></tr><tr><td>ausser</td><td>au\u00dfer</td><td>42,410</td></tr><tr><td>waere</td><td>w\u00e4re</td><td>37,071</td></tr><tr><td>muessen</td><td>m\u00fcssen</td><td>35,864</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Top-ranked errors in German ranked error list and their frequencies."
},
"TABREF20": {
"html": null,
"content": "<table><tr><td/><td>crawl</td><td>with 269,079 entries was</td></tr><tr><td>computed.</td><td/></tr><tr><td>Note that we did not extend D +F crawl and D +F+ED crawl</td><td colspan=\"2\">by analyzing an additional set of</td></tr><tr><td colspan=\"3\">filtered Web pages. Hence, D +F crawl is in fact a subdictionary of D crawl , and similarly for D +F+ED crawl and D +F crawl . This explains why the coverage of D +F crawl (D +F+ED crawl ) is smaller than</td></tr><tr><td>the coverage of D crawl (D +F crawl</td><td/></tr></table>",
"type_str": "table",
"num": null,
"text": ").12 In this case, 324 documents passed the filter, whereas 186 were rejected. In this case we obtained 291,065 entries. Deleting in D +F crawl all words that represent entries of D err (English,all), a third dictionary D +F+ED"
},
"TABREF21": {
"html": null,
"content": "<table><tr><td colspan=\"6\">Dictionary Entries Coverage (%) Accuracy (%) \u00b1 (%) False friends</td></tr><tr><td>D crawl</td><td>505,652</td><td>99.08</td><td>98.45</td><td>0.81</td><td>262</td></tr><tr><td>D +F crawl</td><td>291,065</td><td>98.77</td><td>98.61</td><td>0.97</td><td>92</td></tr><tr><td>D +F+ED crawl</td><td>269,079</td><td>98.75</td><td>98.74</td><td>1.10</td><td>49</td></tr><tr><td colspan=\"3\">the entries that are found in D err</td><td/><td/><td/></tr></table>",
"type_str": "table",
"num": null,
"text": "Measuring the quality of distinct dictionaries for text correction. D crawl is produced by an unfiltered crawl, D +F crawl by a filtered crawl. For D +F+ED crawl , a filtered crawl is used and remaining entries of error dictionaries are eliminated."
},
"TABREF22": {
"html": null,
"content": "<table><tr><td>Documents</td><td colspan=\"5\">Tokens Error rate Hits Real errors Percentage</td></tr><tr><td>ep-96-09-20.txt (E)</td><td>9,945</td><td>1.31</td><td>13</td><td>2</td><td>15.38</td></tr><tr><td>ep-97-04-24.txt (E)</td><td>8,074</td><td>0.99</td><td>8</td><td>8</td><td>100.00</td></tr><tr><td>ep-97-09-19.txt (E)</td><td>3,230</td><td>0.93</td><td>3</td><td>0</td><td>0.00</td></tr><tr><td>ep-97-02-21.txt (E)</td><td>5,830</td><td>0.86</td><td>5</td><td>5</td><td>100.00</td></tr><tr><td>ep-99-01-28.txt (E)</td><td>5,347</td><td>0.75</td><td>4</td><td>0</td><td>0.00</td></tr><tr><td>ep-97-06-25.txt (E)</td><td>20,012</td><td>0.70</td><td>14</td><td>11</td><td>78.57</td></tr><tr><td>ep-96-07-19.txt (E)</td><td>4,383</td><td>0.68</td><td>3</td><td>3</td><td>100.00</td></tr><tr><td>ep-97-04-23.txt (E)</td><td>21,930</td><td>0.64</td><td>14</td><td>14</td><td>100.00</td></tr><tr><td>ep-97-12-04.txt (E)</td><td>9,463</td><td>0.63</td><td>6</td><td>6</td><td>100.00</td></tr><tr><td>ep-99-02-12.txt (E)</td><td>5,426</td><td>0.55</td><td>3</td><td>3</td><td>100.00</td></tr><tr><td>ep-00-03-29.txt (E)</td><td>22,252</td><td>0.54</td><td>12</td><td>12</td><td>100.00</td></tr><tr><td>ep-96-07-17.txt (E)</td><td>34,381</td><td>0.52</td><td>18</td><td>14</td><td>77.77</td></tr><tr><td>ep-99-03-10.txt (E)</td><td>31,509</td><td>0.51</td><td>16</td><td>0</td><td>0.00</td></tr><tr><td>ep-00-11-15.txt (E)</td><td>35,167</td><td>0.48</td><td>17</td><td>1</td><td>5.88</td></tr><tr><td>ep-97-04-10.txt (E)</td><td>16,653</td><td>0.48</td><td>8</td><td>6</td><td>75.00</td></tr><tr><td>ep-97-05-15.txt (E)</td><td>20,942</td><td>0.48</td><td>10</td><td>2</td><td>20.00</td></tr><tr><td>ep-97-10-20.txt (E)</td><td>8,601</td><td>0.46</td><td>4</td><td>4</td><td>100.00</td></tr><tr><td>ep-97-04-11.txt (E)</td><td>6,857</td><td>0.44</td><td>3</td><td>1</td><td>33.33</td></tr><tr><td>ep-99-01-15.txt 
(E)</td><td>9,193</td><td>0.43</td><td>4</td><td>0</td><td>0.00</td></tr><tr><td>ep-96-06-18.txt (E)</td><td>32,768</td><td>0.43</td><td>14</td><td>6</td><td>42.86</td></tr><tr><td colspan=\"2\">ep-03-01-13.txt (G) 15,926</td><td>2.57</td><td>41</td><td>2</td><td>4.89</td></tr><tr><td colspan=\"2\">ep-97-05-16.txt (G) 12,344</td><td>1.94</td><td>24</td><td>15</td><td>62.50</td></tr><tr><td colspan=\"2\">ep-02-09-02.txt (G) 14,845</td><td>1.62</td><td>24</td><td>1</td><td>4.16</td></tr><tr><td colspan=\"2\">ep-98-11-05.txt (G) 15,035</td><td>1.46</td><td>22</td><td>3</td><td>13.64</td></tr><tr><td>ep-99-01-28.txt (G)</td><td>6,798</td><td>1.32</td><td>9</td><td>0</td><td>0.00</td></tr><tr><td colspan=\"2\">ep-02-04-25.txt (G) 10,842</td><td>1.29</td><td>14</td><td>4</td><td>28.57</td></tr><tr><td colspan=\"2\">ep-97-10-02.txt (G) 13,650</td><td>1.25</td><td>17</td><td>9</td><td>52.94</td></tr><tr><td>ep-99-07-20.txt (G)</td><td>2,431</td><td>1.23</td><td>3</td><td>0</td><td>0.00</td></tr><tr><td colspan=\"2\">ep-00-03-15.txt (G) 34,904</td><td>1.20</td><td>42</td><td>31</td><td>73.81</td></tr><tr><td>ep-96-06-21.txt (G)</td><td>8,474</td><td>1.18</td><td>10</td><td>9</td><td>90.00</td></tr><tr><td>ep-96-06-17.txt (G)</td><td>9,408</td><td>1.17</td><td>11</td><td>2</td><td>18.18</td></tr><tr><td>ep-99-04-16.txt (G)</td><td>8,667</td><td>1.15</td><td>10</td><td>9</td><td>90.00</td></tr><tr><td>ep-96-04-19.txt (G)</td><td>8,694</td><td>1.15</td><td>10</td><td>2</td><td>20.00</td></tr><tr><td>ep-00-12-15.txt (G)</td><td>6,964</td><td>1.15</td><td>8</td><td>3</td><td>37.50</td></tr><tr><td>ep-00-09-08.txt (G)</td><td>4,374</td><td>1.14</td><td>5</td><td>0</td><td>0.00</td></tr><tr><td colspan=\"2\">ep-96-07-04.txt (G) 10,975</td><td>1.09</td><td>12</td><td>11</td><td>91.66</td></tr><tr><td colspan=\"2\">ep-01-04-05.txt (G) 26,941</td><td>1.08</td><td>29</td><td>20</td><td>68.96</td></tr><tr><td colspan=\"2\">ep-97-06-09.txt (G) 
11,152</td><td>1.08</td><td>12</td><td>12</td><td>100.00</td></tr><tr><td colspan=\"2\">ep-97-07-14.txt (G) 11,180</td><td>1.07</td><td>12</td><td>5</td><td>41.66</td></tr><tr><td colspan=\"2\">ep-97-07-18.txt (G) 10,392</td><td>1.06</td><td>11</td><td>10</td><td>90.90</td></tr></table>",
"type_str": "table",
"num": null,
"text": "English (E) and German (G) documents of the Europarl corpora, sizes, error rates w.r.t. maximal English and German error dictionaries, numbers of hits of the error dictionaries, and numbers of real errors among hits."
}
}
}
}