| { |
| "paper_id": "U15-1002", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T03:10:09.467084Z" |
| }, |
| "title": "Comparison of Visual and Logical Character Segmentation in Tesseract OCR Language Data for Indic Writing Scripts", |
| "authors": [ |
| { |
| "first": "Jennifer", |
| "middle": [], |
| "last": "Biggs", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Security & Intelligence", |
| "location": { |
| "country": "South Australia" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "Language data for the Tesseract OCR system currently supports recognition of a number of languages written in Indic writing scripts. An initial study is described to create comparable data for Tesseract training and evaluation based on two approaches to character segmentation of Indic scripts: logical vs. visual. Results indicate that further investigation of visually based character segmentation language data for Tesseract may be warranted.",
| "pdf_parse": { |
| "paper_id": "U15-1002", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "Language data for the Tesseract OCR system currently supports recognition of a number of languages written in Indic writing scripts. An initial study is described to create comparable data for Tesseract training and evaluation based on two approaches to character segmentation of Indic scripts: logical vs. visual. Results indicate that further investigation of visually based character segmentation language data for Tesseract may be warranted.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "The Tesseract Optical Character Recognition (OCR) engine originally developed by Hewlett-Packard between 1984 and 1994 was one of the top 3 engines in the 1995 UNLV Accuracy test as \"HP Labs OCR\" (Rice et al 1995) . Between 1995 and 2005 there was little activity in Tesseract, until it was open sourced by HP and UNLV. It was re-released to the open source community in August of 2006 by Google (Vincent, 2006) , hosted under Google Code and GitHub under the tesseract-ocr project. 1 More recent evaluations have found Tesseract to perform well in comparisons with other commercial and open source OCR systems (Dhiman and Singh. 2013; Chattopadhyay et al. 2011; Heli\u0144ski et al. 2012; Patel et al. 2012; Vijayarani and Sakila. 2015) . A wide range of external tools, wrappers and add-on projects are also available including Tesseract user interfaces, online services, training and training data preparation, and additional language data.",
| "cite_spans": [ |
| { |
| "start": 196, |
| "end": 213, |
| "text": "(Rice et al 1995)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 396, |
| "end": 411, |
| "text": "(Vincent, 2006)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 611, |
| "end": 635, |
| "text": "(Dhiman and Singh. 2013;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 636, |
| "end": 662, |
| "text": "Chattopadhyay et al. 2011;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 663, |
| "end": 684, |
| "text": "Heli\u0144ski et al. 2012;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 685, |
| "end": 703, |
| "text": "Patel et al. 2012;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 704, |
| "end": 732, |
| "text": "Vijayarani and Sakila. 2015)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Tesseract was originally developed for recognition of English text; Smith (2007) , Smith et al (2009) and Smith (2014) provide overviews of the Tesseract system during the process of development and internationalization. Currently, the Tesseract v3.02 release, v3.03 candidate release and v3.04 development versions are available, and the tesseract-ocr project supports recognition of over 60 languages.",
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 66, |
| "text": "Smith (2007)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 69, |
| "end": 87, |
| "text": "Smith et al (2009)", |
| "ref_id": null |
| }, |
| { |
| "start": 92, |
| "end": 104, |
| "text": "Smith (2014)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Languages that use Indic scripts are found throughout South Asia, Southeast Asia, and parts of Central and East Asia. Indic scripts descend from the Br\u0101hm\u012b script of ancient India, and are broadly divided into North and South. With some exceptions, South Indic scripts are very rounded, while North Indic scripts are less rounded. North Indic scripts typically incorporate a horizontal bar grouping letters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "This paper describes an initial study investigating alternate approaches to segmenting characters in preparing language data for Indic writing scripts for Tesseract: logical and visual segmentation. Algorithmic methods for character segmentation in image processing are outside the scope of this paper.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "As discussed in relation to several Indian languages by Govindaraju and Setlur (2009) , OCR of Indic scripts presents challenges which are different to those of Latin or Oriental scripts. Recently there has been significantly more progress, particularly in Indian languages (Krishnan et al 2014; Govindaraju and Setlur. 2009; Yadav et al. 2013) . Sok and Taing (2014) describe recent research in OCR system development for Khmer, Pujari and Majhi (2015) provide a survey of Odia character recognition, as do Nishad and Bindu (2013) for Malayalam.",
| "cite_spans": [ |
| { |
| "start": 56, |
| "end": 85, |
"text": "Govindaraju and Setlur (2009)",
| "ref_id": null |
| }, |
| { |
| "start": 274, |
| "end": 295, |
| "text": "(Krishnan et al 2014;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 296, |
| "end": 325, |
"text": "Govindaraju and Setlur. 2009;",
| "ref_id": null |
| }, |
| { |
| "start": 326, |
| "end": 344, |
| "text": "Yadav et al. 2013)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 347, |
| "end": 367, |
| "text": "Sok and Taing (2014)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 430, |
| "end": 453, |
| "text": "Pujari and Majhi (2015)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 508, |
| "end": 531, |
| "text": "Nishad and Bindu (2013)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Except in cases such as Krishnan et al. (2014) , where OCR systems are trained for whole word recognition in several Indian languages, character segmentation must accommodate inherent characteristics such as non-causal (bidirectional) dependencies when encoded in Unicode. 2", |
| "cite_spans": [ |
| { |
| "start": 24, |
| "end": 46, |
| "text": "Krishnan et al. (2014)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
"text": "Indic scripts are a family of abugida writing systems. Abugida, or alphasyllabary, writing systems are partly syllabic, partly alphabetic writing systems in which consonant-vowel sequences may be combined and written as a unit. Two general characteristics of most Indic scripts that are significant for the purposes of this study are that: • Diacritics and dependent signs might be added above, below, left, right, around, surrounding or within a base consonant. • Combination of consonants without intervening vowels in ligatures or noted by special marks, known as consonant clusters. The typical approach for Unicode encoding of Indic scripts is to encode the consonant followed by any vowels or dependent forms in a specified order. Consonant clusters are typically encoded by using a specific letter between two consonants, which might also then include further vowels or dependent signs. Therefore the visual order of graphemes may differ from the logical order of the character encoding. Exceptions to this are Thai, Lao (Unicode v1.0, 1991) and Tai Viet (Unicode v5.2, 2009) , which use visual instead of logical order. New Tai Lue has also been changed to a visual encoding model in Unicode v8.0 (2015, Chapter 16). Complex text rendering may also contextually shape characters or create ligatures. Therefore a Unicode character may not have a visual representation within a glyph, or may differ from its visual representation within another glyph.",
| "cite_spans": [ |
| { |
| "start": 1028, |
| "end": 1048, |
| "text": "(Unicode v1.0, 1991)", |
| "ref_id": null |
| }, |
| { |
| "start": 1062, |
| "end": 1082, |
| "text": "(Unicode v5.2, 2009)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Indic scripts and Unicode encoding", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "As noted by White (2013) , Tesseract has no internal representations for diacritic marks. A typical OCR approach for Tesseract is therefore to train for recognition of the combination of characters including diacritic marks. White (2013) also notes that diacritic marks are a common source of errors due to their small size and distance from the main character, and that training in a combined approach also greatly expands the OCR character set. This in turn may also increase the number of similar symbols, as each set of diacritic marks is applied to each consonant.",
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 24, |
| "text": "White (2013)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 225, |
| "end": 237, |
| "text": "White (2013)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tesseract", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "As described by Smith (2014) , lexical resources are utilised by Tesseract during two-pass classification, and de Does and Depuydt (2012) found that word recall was improved for a Dutch historical recognition task by simply substituting the default Dutch Tesseract v3.01 word list for a corpus-specific word list. As noted by White (2013), while language data was available from the tesseract-ocr project, the associated training files were not previously available. However, the Tesseract project now hosts related files from which training data may be created.",
| "cite_spans": [ |
| { |
| "start": 16, |
| "end": 28, |
| "text": "Smith (2014)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tesseract", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Tesseract is flexible and supports a large number of control parameters, which may be specified via a configuration file, by the command line interface, or within a language data file 3 . Although documentation of control parameters by the tesseract-ocr project is limited 4 , a full list of parameters for v3.02 is available 5 . White (2012) and Ibrahim (2014) describe effects of a limited number of control parameters.", |
| "cite_spans": [ |
| { |
| "start": 347, |
| "end": 361, |
| "text": "Ibrahim (2014)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tesseract", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "Training Tesseract has been described for a number of languages and purposes (White, 2013; Mishra et al. 2012; Ibrahim, 2014; Heli\u0144ski et al. 2012) . At the time of writing, we are aware of a number of publicly available sources for Tesseract language data supporting Indic scripts in addition to the tesseract-ocr project. These include Parichit 6 , BanglaOCR 7 (Hasnat et al. 2009a and 2009b; Omee et al. 2011) with training files released in 2013, tesseractindic 8 , and myaocr 9 . Their Tesseract version and recognition languages are summarised in Table 1 . These external projects also provide Tesseract training data in the form of TIFF image and associated coordinate 'box' files. For version 3.04, the tesseract-ocr project provides data from which Tesseract can generate training data.",
| "cite_spans": [ |
| { |
| "start": 77, |
| "end": 90, |
| "text": "(White, 2013;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 91, |
| "end": 110, |
| "text": "Mishra et al. 2012;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 111, |
| "end": 125, |
| "text": "Ibrahim, 2014;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 126, |
| "end": 147, |
| "text": "Heli\u0144ski et al. 2012)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 365, |
| "end": 389, |
| "text": "(Hasnat et al. 2009a and", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 390, |
| "end": 396, |
| "text": "2009b;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 397, |
| "end": 414, |
| "text": "Omee et al. 2011)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 555, |
| "end": 562, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tesseract and Indic scripts", |
| "sec_num": "2.2.1" |
| }, |
| { |
"text": "Sets of Tesseract language data for a given language may differ significantly in parameters including coverage of the writing script, fonts, number of training examples, or dictionary data. Smith (2014) 10 and Smith et al. (2009) 11 provide results for Tesseract for two Indic scripts: Hindi 12 and Thai. Table 2 compares these error rates to those found by Krishnan et al. (2014) 13 . Additionally, the Khmer OCR project reports initial accuracy rates of 50-60% for Khmer OS Battambang font, 26pt (Tan, 2014) , and the Khmer OCR project 14 beta website provides a Khmer OCR web service based on the Tesseract OCR system that incorporates user feedback training. Hasnat et al. (2009a; 2009b) ",
| "cite_spans": [ |
| { |
| "start": 354, |
| "end": 376, |
| "text": "Krishnan et al. (2014)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 377, |
| "end": 379, |
| "text": "13", |
| "ref_id": null |
| }, |
| { |
| "start": 494, |
| "end": 505, |
| "text": "(Tan, 2014)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 659, |
| "end": 680, |
| "text": "Hasnat et al. (2009a;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 681, |
| "end": 687, |
| "text": "2009b)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 301, |
| "end": 308, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tesseract and Indic scripts", |
| "sec_num": "2.2.1" |
| }, |
| { |
"text": "As noted by White (2013) , the approach of the tesseract-ocr project is to train Tesseract for recognition of combinations of characters including diacritics. For languages with Indic writing scripts, this approach may also include consonant-vowel combinations and consonant clusters with other dependent signs, and relies on character segmentation to occur in line with Unicode logical ordering segmentation points for a given segment of text. An advantage of this approach is that Unicode standard encoding is output by the OCR system. An alternate approach in developing a training set for Tesseract is to determine visual segmentation points within the writing script. This approach has been described and implemented in several external language data projects for Tesseract, including Parichit, BanglaOCR, and myaocr. Examples of logical segmentation and two possible approaches to visual segmentation for selected consonant groupings are shown in Figure 1 . A disadvantage of visual segmentation is that OCR text outputs may require re-ordering processing to output Unicode encoded text. Mishra et al. (2012) describe creating language data for Hindi written in Devanagari script that implemented a visual segmentation approach in which single touching conjunct characters are excluded from the training set. Therefore, Tesseract language data could be created that included only two or more touching conjunct characters, basic characters and isolated half characters. This had the effect of reducing the Tesseract training set 15 and language data size, and increasing recognition accuracy on a test set of 94 characters compared with the tesseract-ocr (Google) and Parichit language data as shown in Table 3 . The implementation also included language-specific image pre-processing to 'chop' the Shirorekha horizontal bar connecting characters within words. This was intended to increase the likelihood of Tesseract system segmentation occurring at these points. 
Examples of words including Shirorekha are shown in Figure 2 . An initial study was conducted to determine the potential of implementing a visual segmentation approach, compared to the logical segmentation approach in Tesseract for languages with Indic scripts. Languages written with Indic scripts that do not use the Shirorekha horizontal bar were 15 Defined in the Tesseract *.unicharset file within language data 16 It is not stated if text output re-ordering processing for Parichit recognition output was applied before accuracy was measured.",
| "cite_spans": [ |
| { |
| "start": 1079, |
| "end": 1099, |
| "text": "Mishra et al. (2012)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 2304, |
| "end": 2306, |
| "text": "15", |
| "ref_id": null |
| }, |
| { |
| "start": 2371, |
| "end": 2373, |
| "text": "16", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 938, |
| "end": 946, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 1693, |
| "end": 1701, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 2006, |
| "end": 2014, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Visual and logical character segmentation for Tesseract", |
| "sec_num": "2.2.2" |
| }, |
| { |
| "text": "considered. Re-ordering of OCR text outputs for visual segmentation methods is outside the scope of this study. The term glyph is used in this section to describe a symbol that represents an OCR recognition character, whether by logical or visual segmentation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Visual and logical character segmentation for Tesseract", |
| "sec_num": "2.2.2" |
| }, |
| { |
| "text": "This section describes ground truth and evaluation tools used, and the collection and preparation of glyph, Tesseract training, and OCR ground truth data. Three Indic languages were selected to estimate the potential for applying visual segmentation to further languages. Firstly, corpora were collected and analysed to compare glyphs found by each segmentation approach. Secondly, Tesseract recognition and layout accuracy was evaluated based on the coverage of those glyphs in the corpus. The accuracy of tesseract-ocr project v3.04 language data is also measured against the same ground truth data for a wider selection of Indic languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "In order to estimate the number and distribution of glyphs in selected Indic languages, language-specific corpora were sought. A web crawler was implemented using the crawler4j library 17 , which restricted the crawl domain to the seed URL. The boilerpipe library 18 was then used to extract textual content from each web page. For each language, a corpus was then collected by using the relevant Wikipedia local language top page as the seed for the crawler. The Lucene library 19 was used to index corpus documents. Language-specific processing was implemented supporting grouping of consonant-vowel combinations, consonant clusters and dependent signs into logical order glyphs. Additional processing to separate those groupings in line with the visual segmentation approach was also implemented.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Glyph data", |
| "sec_num": "3.1.1" |
| }, |
| { |
"text": "Letters affected by visual segmentation in each language are shown in Table 4 . In Khmer, there could theoretically be up to three coeng (U+17D2) in a syllable; two before and one after a vowel. Clusters with coeng after a vowel were not additionally segmented in this implementation. Similarly, in Malayalam, dependent vowels found between consonants in consonant ligatures were not segmented. The number of glyphs according to each segmentation approach was then extracted from the index for each language.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 70, |
| "end": 77, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Glyph data", |
| "sec_num": "3.1.1" |
| }, |
| { |
"text": "The size of each corpus and the number of glyphs according to logical segmentation are given in Table 5 .",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 92, |
| "end": 100, |
| "text": "Table 5", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Language Letters", |
| "sec_num": null |
| }, |
| { |
| "text": "\u17be \u17be \u17be \u17be \u17c2 \u17c3 [U+17BE -U+17C3,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language Letters", |
| "sec_num": null |
| }, |
| { |
| "text": "Text corpus ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language", |
| "sec_num": null |
| }, |
| { |
"text": "Tesseract training data was prepared for each language using the paired sets of glyph data described in section 3.1. An application was implemented to automatically create Tesseract training data from each glyph data set, with the ability to automatically delete the dotted consonant outlines displayed when a Unicode dependent letter or sign is rendered separately. The implemented application outputs multi-page TIFF format images and corresponding bounding box coordinates in the Tesseract training data format. 20 Tesseract training was completed using the most recent release, v3.02, according to the documented training process for Tesseract v3, excluding shapeclustering. The number of examples of each glyph, between 5 and 40 in each training set, was determined by relative frequency in the corpus. A limited set of punctuation and symbols were also added to each set of glyph data, equal to those included in tesseract-ocr project language data. However, training text was not representative as recommended in documentation, with glyphs and punctuation randomly sorted. 20 Description of the training format and requirements can be found at https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract",
| "cite_spans": [ |
| { |
| "start": 511, |
| "end": 513, |
| "text": "20", |
| "ref_id": null |
| }, |
| { |
| "start": 790, |
| "end": 792, |
| "text": "20", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tesseract training data", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "As dictionary data is utilised during Tesseract segmentation processing, word lists were prepared for each segmentation approach. As the separated character approach introduced a visual ordering to some consonant-vowel combinations and consonant clusters, word lists to be used in this approach were re-ordered, in line with the segmentation processing used for each language described in section 3.1. Word lists were extracted from the tesseract-ocr project v3.04 language data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dictionary data", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "OCR ground truth data was prepared in a single font size for each language in the PAGE XML format (Pletschacher and Antonacopoulos. 2010) using the application also described in section 3.1.2. The implementation segments text according to logical or visual ordering described in section 3.1.1, and uses the Java PAGE libraries 21 to output PAGE XML documents.", |
| "cite_spans": [ |
| { |
| "start": 98, |
| "end": 137, |
| "text": "(Pletschacher and Antonacopoulos. 2010)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ground truth data", |
| "sec_num": "3.1.4" |
| }, |
| { |
| "text": "Text was randomly selected from documents within the web corpora described in section 3.1. Text segments written in Latin script were removed. Paired ground truth data were then generated. For each document image, two corresponding ground truth PAGE XML files were created according to logical and visual segmentation methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ground truth data", |
| "sec_num": "3.1.4" |
| }, |
| { |
| "text": "Tesseract v3.04 was used via the Aletheia v3 tool for production of PAGE XML ground truth described by . Evaluation was completed using the layout evaluation framework for evaluating PAGE XML format OCR outputs and ground truth described by Clausner et al. (2011) . Output evaluations were completed using the described Layout Evaluation tool and stored in XML format.", |
| "cite_spans": [ |
| { |
| "start": 241, |
| "end": 263, |
| "text": "Clausner et al. (2011)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3.1.5" |
| }, |
| { |
| "text": "Results are presented in three sections; for tesseract-ocr language data, for web corpora glyph data per segmentation method, and for the comparable Tesseract language data per segmentation method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "Measured layout success is a region correspondence determination. Results are given for glyph-based, count-weighted and area-weighted arithmetic and harmonic mean layout success as calculated by the Layout Evaluation tool. Weighted area measures are based on the assumption that bigger regions are more important than smaller ones, while the weighted count only takes into account the error quantity.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "Recognition accuracy for selected tesseract-ocr project language data with Indic scripts is given in Table 6 . All glyphs are segmented in line with Unicode logical encoding standards (a logical segmentation approach), except for Thai and Lao, which are encoded in visual order in Unicode.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 101, |
| "end": 108, |
| "text": "Table 6", |
| "ref_id": "TABREF11" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tesseract-ocr language data", |
| "sec_num": "3.2.1" |
| }, |
| { |
"text": "Measured Thai recognition accuracy is in line with the 79.7% accuracy reported by Smith (2014) . While Hindi accuracy is far less than the 93.6% reported by Smith (2014) , it is higher than the 73.3% found by Krishnan et al. (2014) . Measured recognition accuracy for Telugu is also higher than the 67.1% found by Krishnan et al. (2014) , although this may be expected for higher quality evaluation images. Measured Khmer recognition accuracy is in line with the 50-60% reported in Tan (2014) . Bengali results are within the 70-93% range reported by Hasnat et al. (2009a) , but are not directly comparable given the training approach used in BanglaOCR.",
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 94, |
| "text": "Smith (2014)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 157, |
| "end": 169, |
| "text": "Smith (2014)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 209, |
| "end": 231, |
| "text": "Krishnan et al. (2014)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 314, |
| "end": 336, |
| "text": "Krishnan et al. (2014)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 482, |
| "end": 492, |
| "text": "Tan (2014)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 551, |
| "end": 572, |
| "text": "Hasnat et al. (2009a)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tesseract-ocr language data", |
| "sec_num": "3.2.1" |
| }, |
| { |
"text": "The number of glyphs and their occurrences in the collected language-specific Wikipedia corpora are shown in Figure 4 . These are compared to the number of glyphs in the tesseract-ocr project language data recognition character set 22 , and the number of glyphs when visual order segmentation processing is applied to that character set. Visual segmentation can be seen to significantly reduce the number of glyphs for the same language coverage in each case. The logical glyphs in common and unique to tesseract-ocr and corpus-based language data may be seen in Figure 3 . 22 Glyphs not within the local language Unicode range(s) are not included.",
| "cite_spans": [ |
| { |
| "start": 470, |
| "end": 472, |
| "text": "22", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 109, |
| "end": 117, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 639, |
| "end": 647, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Web corpora glyphs by logical and visual segmentation", |
| "sec_num": "3.2.2" |
| }, |
| { |
"text": "The total number of examples in the training data and the size of the resulting Tesseract language data file with each approach (without dictionary data) are given in Table 7 . The tesseract-ocr language data sizes are not directly comparable as the training sets and fonts differ.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 162, |
| "end": 169, |
| "text": "Table 7", |
| "ref_id": "TABREF12" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparable data for logical and visual segmentation", |
| "sec_num": "3.2.3" |
| }, |
| { |
| "text": "OCR recognition accuracy is given for each segmentation method in Table 7 . Recognition accuracy was found to be higher for visual segmentation in each language; by 3.5% for Khmer, 16.1% for Malayalam, and by 4.6% for Odia.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 66, |
| "end": 73, |
| "text": "Table 7", |
| "ref_id": "TABREF12" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparable data for logical and visual segmentation", |
| "sec_num": "3.2.3" |
| }, |
| { |
| "text": "Logical segmentation accuracy shown in Table 7 was measured against the same ground truth data reported in section 3.2.1. However, as illustrated in Figure 4 , the coverage of glyphs in each set of language data differed greatly. In each case, the number of glyphs found in the collected corpus was significantly greater than in the tesseract-ocr recognition set.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 149, |
| "end": 157, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparable data for logical and visual segmentation", |
| "sec_num": "3.2.3" |
| }, |
| { |
| "text": "Recognition accuracy for tesseract-ocr language data for Khmer and Malayalam was 12.2% and 13% higher respectively than for the corpus based logical segmentation language data when measured against the same ground truth. However the corpus based logical segmentation data for Odia achieved 12.2% higher recognition accuracy than tesseract-ocr language data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparable data for logical and visual segmentation", |
| "sec_num": "3.2.3" |
| }, |
| { |
| "text": "Dictionary data added to language data for each segmentation method was found to make no more than 0.5% difference to recognition or layout accuracy for either segmentation method. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparable data for logical and visual segmentation", |
| "sec_num": "3.2.3" |
| }, |
| { |
| "text": "Analysis of the collected glyph corpora and tesseract-ocr project language data has shown the visual segmentation significantly reduces the number of glyphs required for a Tesseract training set in each of the languages considered. When using comparative training and ground truth data, visual segmentation was also shown to reduce the size of Tesseract language data and increase recognition accuracy. The use of dictionary data was not found to significantly affect results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The implementation for visual segmentation of glyphs led to inconsistencies between similar visual components. For example, in Khmer it was observed that the visual representation of coeng (U+17D2) was commonly segmented by Tesseract as a separate glyph using tesseract-ocr and created language data, as illustrated for Khmer in Figure 5 . Further opportunities for visual segmentation were also not implemented, such as components of consonant clusters. A consistent and more sophisticated implementation of visual segmentation may further improve results. The Tesseract training data prepared from corpus based glyphs was intended to be comparable, but was not in line with recommendations for training Tesseract. Preparation of training data in line with recommendations may improve results. The effects of Tesseract configuration parameters were not investigated during this study and should also be explored per language. Further, while glyph recognition accuracy achieved for the visual segmentation language data for Khmer was lower than that of the tesseract-ocr project language data, the coverage of glyphs was far greater. A significant percentage of the glyphs in each training set were rare. Future work may examine the relationship between coverage of rare glyphs in language data and recognition accuracy.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 329, |
| "end": 337, |
| "text": "Figure 5", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "While effort was made to estimate coverage of modern glyphs for each segmentation approach in each language, the web corpora collected may not be representative. In preparing training data for the proposed segmentation method, care must be taken to determine that isolated or combined characters in the training sets are rendered in the predicted way when combined with other characters. A further consideration when creating multi-font training data is that characters may be rendered significantly differently between fonts. Further, some scripts have changed over time. For example, Malayalam has undergone formal revision in the 1970s, and informal changes with computer-aided typesetting in the 1980s, and Devanagari has also modified specific characters during the last three decades.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Developing high accuracy, multi-font language data for robust, end-to-end processing for Tesseract was not within the scope of this study. Rather, the aim was an initial investigation of alternate approaches for logical compared to visual character segmentation in a selection of Indic writing scripts. Results in the limited evaluation domain indicate that the proposed visual segmentation method improved results in three languages. The described technique may potentially be applied to further Indic writing scripts. While recognition accuracy achieved for the reported languages remains relatively low, outcomes indicate that effort to implement language specific training data preparation and OCR output reordering may be warranted.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The tesseract-ocr project repository was archived in August 2015. The main repository has moved from https://code.google.com/p/tesseract-ocr/ to https://github.com/tesseract-ocr", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Except in Thai, Lao, Tai Viet, and New Tai Lue", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Language data files are in the form <xxx>.traineddata 4 https://code.google.com/p/tesseractocr/wiki/ControlParams 5 http://www.sk-spell.sk.cx/tesseract-ocr-parameters-in-302version 6 https://code.google.com/p/Parichit/ 7 https://code.google.com/p/banglaocr/ 8 https://code.google.com/p/tesseractindic/ 9 https://code.google.com/p/myaocr/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/yasserg/crawler4j 18 https://github.com/kohlschutter/boilerpipe 19 https://lucene.apache.org/core/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The PAGE XML format and related tools have been developed by the PRImA Research Lab at the University of Salford, and are available from http://www.primaresearch.org/tools/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to acknowledge Google and the work of Mr. Ray Smith and contributors to the tesseract-ocr project.We would also like to acknowledge the PRImA Lab of the University of Salford for their work in developing the PAGE XML format and related software tools and applications.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Performance of document image OCR systems for recognizing video texts on embedded platform", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Chattopadhyay", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Sinha", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Biswas", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "International Conference on Computational Intelligence and Communication Networks (CICN)", |
| "volume": "", |
| "issue": "", |
| "pages": "606--610", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chattopadhyay, T., Sinha, P., and Biswas, P. Perfor- mance of document image OCR systems for recog- nizing video texts on embedded platform. Interna- tional Conference on Computational Intelligence and Communication Networks (CICN), 2011, pp. 606-610, Oct 2011", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Scenario Driven In-Depth Performance Evaluation of Document Layout Analysis Methods", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Clausner", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Pletschacher", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Antonacopoulos", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "proc. of the 11th International Conference on Document Analysis and Recognition (ICDAR2011)", |
| "volume": "", |
| "issue": "", |
| "pages": "1404--1408", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Clausner, C., Pletschacher, S. and Antonacopoulos, A. 2014. Scenario Driven In-Depth Performance Evaluation of Document Layout Analysis Methods. In proc. of the 11th International Conference on Document Analysis and Recognition (ICDAR2011), Beijing, China, September 2011, pp. 1404-1408", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Efficient OCR Training Data Generation with Aletheia", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Clausner", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Pletschacher", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Antonacopoulos", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Short paper booklet of the 11th International Association for Pattern Recognition Workshop on Document Analysis Systems (DAS2014)", |
| "volume": "", |
| "issue": "", |
| "pages": "19--20", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Clausner, C., Pletschacher, S. and Antonacopoulos, A. 2014. Efficient OCR Training Data Generation with Aletheia. Short paper booklet of the 11th In- ternational Association for Pattern Recognition Workshop on Document Analysis Systems (DAS2014), Tours, France, April 2014, pp. 19-20", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Lexiconsupported OCR of eighteenth century Dutch books: a case study", |
| "authors": [ |
| { |
| "first": "Jesse", |
| "middle": [], |
| "last": "De Does", |
| "suffix": "" |
| }, |
| { |
| "first": "Katrien", |
| "middle": [], |
| "last": "Depuydt", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "proc. SPIE 8658, Document Recognition and Retrieval XX, 86580L", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1117/12.2008423" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "de Does, Jesse. and Depuydt, Katrien. 2012. Lexicon- supported OCR of eighteenth century Dutch books: a case study. In proc. SPIE 8658, Document Recognition and Retrieval XX, 86580L (February 4, 2013); doi:10.1117/12.2008423", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Tesseract Vs Gocr A Comparative Study", |
| "authors": [ |
| { |
| "first": "Singh", |
| "middle": [], |
| "last": "Dhiman", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "International Journal of Recent Technology and Engineering (IJRTE)", |
| "volume": "2", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dhiman and Singh. 2013. Tesseract Vs Gocr A Com- parative Study. International Journal of Recent Technology and Engineering (IJRTE): Vol 2, Issue 4, September 2013", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Guide to OCR for Indic scripts: document recognition and retrieval", |
| "authors": [ |
| { |
| "first": "Venugopal", |
| "middle": [], |
| "last": "Govindaraju", |
| "suffix": "" |
| }, |
| { |
| "first": "Srirangaraj", |
| "middle": [], |
| "last": "Setlur", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Govindaraju, Venugopal, and Srirangaraj Setlur. 2009. Guide to OCR for Indic scripts: document recognition and retrieval. London: Springer.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "An open source Tesseract based Optical Character Recognizer for Bangla script", |
| "authors": [ |
| { |
| "first": "Abul", |
| "middle": [], |
| "last": "Hasnat", |
| "suffix": "" |
| }, |
| { |
| "first": "Muttakinur", |
| "middle": [], |
| "last": "Chowdhury", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Rahman", |
| "suffix": "" |
| }, |
| { |
| "first": "Mumit", |
| "middle": [], |
| "last": "Khan", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "proc. Tenth International Conference on Document Analysis and Recognition (ICDAR2009)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hasnat, Abul., Chowdhury, Muttakinur Rahman. and Khan, Mumit. 2009a. An open source Tesseract based Optical Character Recognizer for Bangla script. In proc. Tenth International Conference on Document Analysis and Recognition (ICDAR2009), Catalina, Spain, July 26-29, 2009", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Integrating Bangla script recognition support in Tesseract OCR", |
| "authors": [ |
| { |
| "first": "Abul", |
| "middle": [], |
| "last": "Hasnat", |
| "suffix": "" |
| }, |
| { |
| "first": "Muttakinur", |
| "middle": [], |
| "last": "Chowdhury", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Rahman", |
| "suffix": "" |
| }, |
| { |
| "first": "Mumit", |
| "middle": [], |
| "last": "Khan", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "proc. Conference on Language Technology 2009 (CLT09)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hasnat, Abul., Chowdhury, Muttakinur Rahman. and Khan, Mumit. 2009b. Integrating Bangla script recognition support in Tesseract OCR. In proc. Conference on Language Technology 2009 (CLT09), Lahore, Pakistan, January 22-24, 2009", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Report on the comparison of Tesseract and ABBYY FineReader OCR engines. IM-PACT Report", |
| "authors": [ |
| { |
| "first": "Marcin", |
| "middle": [], |
| "last": "Heli\u0144ski", |
| "suffix": "" |
| }, |
| { |
| "first": "Mi\u0142osz", |
| "middle": [], |
| "last": "Kmieciak", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomasz", |
| "middle": [], |
| "last": "Parko\u0142a", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Heli\u0144ski, Marcin., Kmieciak, Mi\u0142osz. and Parko\u0142a, Tomasz. 2012. Report on the comparison of Tes- seract and ABBYY FineReader OCR engines. IM- PACT Report.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Dhivehi OCR: Character Recognition of Thaana Script using Machine Generated Text and Tesseract OCR Engine", |
| "authors": [ |
| { |
| "first": "Ahmed", |
| "middle": [], |
| "last": "Ibrahim", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ibrahim, Ahmed. 2014. Dhivehi OCR: Character Recognition of Thaana Script using Machine Gen- erated Text and Tesseract OCR Engine, Edith Cowan University, Australia.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Towards a Robust OCR System for Indic Scripts, 11 th IAPR International Workshop on Document Analysis Systems (DAS2014)", |
| "authors": [ |
| { |
| "first": "Praveen", |
| "middle": [], |
| "last": "Krishnan", |
| "suffix": "" |
| }, |
| { |
| "first": "Naveen", |
| "middle": [], |
| "last": "Sankaran", |
| "suffix": "" |
| }, |
| { |
| "first": "Ajeet", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Krishnan, Praveen., Sankaran, Naveen., Singh, and Ajeet Kumar. 2014. Towards a Robust OCR Sys- tem for Indic Scripts, 11 th IAPR International Workshop on Document Analysis Systems (DAS2014), Tours, France, 7 th -10 th April 2014.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Shirorekha Chopping Integrated Tesseract OCR Engine for Enhanced Hindi Language Recognition", |
| "authors": [ |
| { |
| "first": "Nitin", |
| "middle": [ |
| ";" |
| ], |
| "last": "Mishra", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Patvardhan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lakshimi", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Vasantha", |
| "suffix": "" |
| }, |
| { |
| "first": "Sarika", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "International Journal of Computer Applications", |
| "volume": "39", |
| "issue": "6", |
| "pages": "19--23", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mishra, Nitin; Patvardhan, C.; Lakshimi, Vasantha C.; and Singh, Sarika. 2012. Shirorekha Chopping In- tegrated Tesseract OCR Engine for Enhanced Hin- di Language Recognition, International Journal of Computer Applications, Vol. 39, No. 6, February 2012, pp. 19-23", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Malayalam OCR Systems -A Survey", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Nishad", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Bindu", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "International Journal of Computer Technology and Electronics Engineering", |
| "volume": "3", |
| "issue": "6", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nishad, A; and Bindu, K. 2013. Malayalam OCR Sys- tems -A Survey. International Journal of Computer Technology and Electronics Engineering, Vol. 3, No. 6, December 2013", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Odia Characters Recognition by Training Tesseract OCR Engine", |
| "authors": [ |
| { |
| "first": "Mamata", |
| "middle": [], |
| "last": "Nayak", |
| "suffix": "" |
| }, |
| { |
| "first": "Ajit", |
| "middle": [], |
| "last": "Nayak", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "proc. International Conference in Distributed Computing and Internet Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nayak, Mamata. and Nayak, Ajit Kumar. 2014. Odia Characters Recognition by Training Tesseract OCR Engine. In proc. International Conference in Distributed Computing and Internet Technology 2014 (ICDCIT-2014)", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "A Complete Workflow for Development of Bangla OCR. International", |
| "authors": [ |
| { |
| "first": "Farjana", |
| "middle": [], |
| "last": "Omee", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Yeasmin", |
| "suffix": "" |
| }, |
| { |
| "first": "Shiam", |
| "middle": [], |
| "last": "Himel", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Shabbir", |
| "suffix": "" |
| }, |
| { |
| "first": "Md", |
| "middle": [], |
| "last": "Bikas", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Naser", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Computer Applications", |
| "volume": "21", |
| "issue": "9", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omee, Farjana Yeasmin., Himel, Shiam Shabbir. and Bikas, Md. Abu Naser. 2011. A Complete Work- flow for Development of Bangla OCR. Internation- al Journal of Computer Applications, Vol. 21, No. 9, May 2011", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Optical Character Recognition by Open Source OCR Tool Tesseract: A Case Study", |
| "authors": [ |
| { |
| "first": "Chirag", |
| "middle": [], |
| "last": "Patel", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Patel", |
| "suffix": "" |
| }, |
| { |
| "first": "Dharmendra", |
| "middle": [], |
| "last": "Patel", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "International Journal of Computer Applications", |
| "volume": "55", |
| "issue": "10", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patel, Chirag., Patel, Atul and Patel, Dharmendra. 2012. Optical Character Recognition by Open Source OCR Tool Tesseract: A Case Study, Inter- national Journal of Computer Applications, Vol. 55, No. 10", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "The PAGE (Page Analysis and Ground-Truth Elements) Format Framework", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Pletschacher", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Antonacopoulos", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "proc. of the 20th International Conference on Pattern Recognition (ICPR2010)", |
| "volume": "", |
| "issue": "", |
| "pages": "257--260", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pletschacher, S. and Antonacopoulos, A. 2010. The PAGE (Page Analysis and Ground-Truth Ele- ments) Format Framework. In proc. of the 20th In- ternational Conference on Pattern Recognition (ICPR2010), Istanbul, Turkey, August 23-26, 2010, IEEE-CS Press, pp. 257-260", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A Survey on Odia Character Recognition", |
| "authors": [ |
| { |
| "first": "Pushpalata", |
| "middle": [ |
| ";" |
| ], |
| "last": "Pujari", |
| "suffix": "" |
| }, |
| { |
| "first": "Babita", |
| "middle": [], |
| "last": "Majhi", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "International Journal of Emerging Science and Engineering (IJESE)", |
| "volume": "3", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pujari, Pushpalata; and Majhi, Babita. 2015. A Survey on Odia Character Recognition. International Journal of Emerging Science and Engineering (IJESE), Vol. 3, No. 4, February 2015", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Adapting the Tesseract Open Source OCR Engine for Multilingual OCR", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "V" |
| ], |
| "last": "Rice", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "R" |
| ], |
| "last": "Jenkins", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [ |
| "A. ; D" |
| ], |
| "last": "Nartker", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Antonova", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rice, S.V., Jenkins, F.R. and Nartker, T.A. 1995. The Fourth Annual Test of OCR Accuracy, Technical Report 95-03, Information Science Research Insti- tute, University of Nevada, Las Vegas Smith, Ray D. Antonova and D. Lee. 2009 Adapting the Tesseract Open Source OCR Engine for Multi- lingual OCR, in proc. International Workshop on Multilingual OCR 2009, Barcelona, Spain, July 25, 2009", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "An overview of the Tesseract OCR Engine", |
| "authors": [ |
| { |
| "first": "Ray", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "proc. Of the 9 th International Conference on Document Analysis and Recognition (ICR-DAR2007)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Smith, Ray. 2007. An overview of the Tesseract OCR Engine, in proc. Of the 9 th International Conference on Document Analysis and Recognition (ICR- DAR2007), Curitiba, Paran\u00e1, Brazil, 2007", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Everything you always wanted to know about Tesseract. 11 th IAPR International Workshop on Document Analysis Systems (DAS2014)", |
| "authors": [ |
| { |
| "first": "Ray", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Smith, Ray. 2014. Everything you always wanted to know about Tesseract. 11 th IAPR International Workshop on Document Analysis Systems (DAS2014), Tours, France, 7 th -10 th April 2014. Tutorial slides available from https://drive.google.com/file/d/0B7l10Bj_LprhbUlI", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "UFlCdGtDYkE/view?pli=1 Last visited", |
| "authors": [], |
| "year": 2015, |
| "venue": "", |
| "volume": "11", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "UFlCdGtDYkE/view?pli=1 Last visited 11/9/2015", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Support Vector Machine (SVM) Based Classifier For Khmer Printed Character-set Recognition", |
| "authors": [ |
| { |
| "first": "Pongsametry", |
| "middle": [], |
| "last": "Sok", |
| "suffix": "" |
| }, |
| { |
| "first": "Nguonly", |
| "middle": [], |
| "last": "Taing", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Annual Summit and Conference (APSIPA)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sok, Pongsametry. and Taing, Nguonly. 2014. Sup- port Vector Machine (SVM) Based Classifier For Khmer Printed Character-set Recognition, Asia Pacific Signal and Information Processing Associa- tion, 2014, Annual Summit and Conference (APSIPA), 9-12 December, 2014, Siem Reap, city of Ankor Wat, Cambodia", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Khmer OCR: Convert Hard-Copy Khmer Text To Digital", |
| "authors": [ |
| { |
| "first": "Germaine", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Geeks in Cambodia", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tan, Germaine. 2014. Khmer OCR: Convert Hard- Copy Khmer Text To Digital. Geeks in Cambodia, November 18, 2014. http://geeksincambodia.com/khmer-ocr-convert- hard-copy-khmer-text-to-digital/", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Performance Comparison of OCR Tools", |
| "authors": [ |
| { |
| "first": "Sakila", |
| "middle": [], |
| "last": "Vijayarani", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "International Journal of UbiComp (IJU)", |
| "volume": "6", |
| "issue": "3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vijayarani and Sakila. 2015. Performance Compari- son of OCR Tools. International Journal of UbiComp (IJU), Vol. 6, No. 3, July 2015", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Announcing Tesseract OCR", |
| "authors": [ |
| { |
| "first": "Luc", |
| "middle": [], |
| "last": "Vincent", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Google Code", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vincent, Luc. 2006. Announcing Tesseract OCR, Google Code, http://googlecode.blogspot.com.au/2006/08/announ cing-tesseract-ocr.html Last accessed 1/9/2015", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Training Tesseract for Ancient Greek OCR. The Eutypon, No. 28-29", |
| "authors": [ |
| { |
| "first": "Nick", |
| "middle": [], |
| "last": "White", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "White, Nick. 2013. Training Tesseract for Ancient Greek OCR. The Eutypon, No. 28-29, October 2013, pp. 1-11. http://ancientgreekocr.org/e29- a01.pdf Last visited 18/9/2015", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Optical Character Recognition for Hindi Language Using a Neural Network Approach", |
| "authors": [ |
| { |
| "first": "Divakar", |
| "middle": [ |
| ";" |
| ], |
| "last": "Yadav", |
| "suffix": "" |
| }, |
| { |
| "first": "Sonia", |
| "middle": [ |
| ";" |
| ], |
| "last": "Sanchez-Cuadrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jorge", |
| "middle": [], |
| "last": "Morato", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Journal of Information Processing Systems", |
| "volume": "9", |
| "issue": "10", |
| "pages": "117--140", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yadav, Divakar; Sanchez-Cuadrado, Sonia; and Mor- ato, Jorge. 2013. Optical Character Recognition for Hindi Language Using a Neural Network Ap- proach. Journal of Information Processing Sys- tems, Vol. 9, No. 10, March 2013, pp. 117-140", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Comparison of logical and two possible visual segmentation approaches for selected characters", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Figure 2: Examples of Shirorekha in Devanagari and Gurmukhi scripts", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Coverage of logical glyphs between tesseract-ocr and corpus based language data", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Visual glyphs for Khmer as implemented", |
| "uris": null |
| }, |
| "TABREF1": { |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "text": "", |
| "num": null |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td>Language</td><td colspan=\"2\">Ground truth</td><td colspan=\"2\">Error rate</td></tr><tr><td/><td colspan=\"2\">(million)</td><td/><td>(%)</td></tr><tr><td/><td>char</td><td colspan=\"2\">words char</td><td>word</td></tr><tr><td>Hindi *</td><td>-</td><td colspan=\"3\">0.39 26.67 42.53</td></tr><tr><td>Telugu *</td><td>-</td><td colspan=\"3\">0.2 32.95 72.11</td></tr><tr><td>Hindi **</td><td>2.1</td><td>0.41</td><td colspan=\"2\">6.43 28.62</td></tr><tr><td>Thai **</td><td>0.19</td><td colspan=\"3\">0.01 21.31 80.53</td></tr><tr><td>Hindi ***</td><td>1.4</td><td colspan=\"3\">0.33 15.41 69.44</td></tr></table>", |
| "html": null, |
| "type_str": "table", |
| "text": "report on development of Bengali language data for Bang-laOCR, with 70-93% accuracy depending on image type.Omee et al. (2011) report up to 98% accuracy in limited contexts for BanglaOCR.Nayak and Nayak (2014) report on development10 Tesseract v3.03 or v3.04 11 Tesseract v3.00 12 Hindi and Arabic language data for Tesseract v3.02 used a standard conventional neural network character classifier in a 'cube' model. Although,Smith (2014) states that this model achieves ~50% reduction in errors on Hindi when run together with Tesseract's word recognizer, the training code is unmaintained and unutilised, and will be removed from future tesseract-ocr versions.13 Tesseract v3.02 14 The Khmer OCR project led by Mr. Danh Hong begun in 2012 is described by Mr. Ly Sovannra inTan (2014) and at http://www.khmertype.org of Odia language data with 98-100% recognition accuracy for isolated characters.", |
| "num": null |
| }, |
| "TABREF3": { |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "text": "", |
| "num": null |
| }, |
| "TABREF5": { |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "text": "", |
| "num": null |
| }, |
| "TABREF7": { |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "text": "", |
| "num": null |
| }, |
| "TABREF9": { |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "text": "", |
| "num": null |
| }, |
| "TABREF11": { |
| "content": "<table><tr><td>Language</td><td>Seg-</td><td>Recogni-</td><td colspan=\"4\">Mean overall layout success (%) Ground</td><td>Recognition</td></tr><tr><td/><td>menta-</td><td>tion accu-</td><td>Area</td><td/><td>Count</td><td>truth</td><td>glyphs</td></tr><tr><td/><td>tion</td><td>racy (%)</td><td colspan=\"2\">weighted</td><td>weighted</td><td>glyphs</td></tr><tr><td/><td/><td/><td>Arith.</td><td>Har</td><td>Arith. Har.</td><td/></tr><tr><td/><td/><td/><td/><td>.</td><td/><td/></tr><tr><td>Khmer</td><td>Logical Visual</td><td>41.0 44.5</td><td colspan=\"2\">92.8 91.9 92.9 92.3</td><td>83.6 80.5 86.9 85.8</td><td>556 677</td><td>5205 3965</td></tr><tr><td>Malayalam</td><td>Logical Visual</td><td>54.2 70.3</td><td colspan=\"2\">90.2 88.4 90.8 89.7</td><td>80.4 74.3 80.5 77.6</td><td>552 851</td><td>4237 1171</td></tr><tr><td>Odia</td><td>Logical Visual</td><td>75.9 80.5</td><td colspan=\"2\">94.8 94.4 95.1 94.7</td><td>88.2 86.4 91.5 90.8</td><td>864 1130</td><td>2491 1387</td></tr></table>", |
| "html": null, |
| "type_str": "table", |
| "text": "", |
| "num": null |
| }, |
| "TABREF12": { |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "text": "", |
| "num": null |
| } |
| } |
| } |
| } |