{
"paper_id": "U17-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:11:26.010629Z"
},
"title": "OCR Post-Processing Text Correction using Simulated Annealing (OPTeCA)",
"authors": [
{
"first": "Gitansh",
"middle": [],
"last": "Khirbat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": "gitansh.khirbat@unimelb.edu.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the system details and results of team \"EOF\" from the University of Melbourne for the shared task of ALTA 2017, which addresses the problem of text correction for post-processed Optical Character Recognition (OCR) based systems. We developed a two stage system which first detects errors in the given OCR post-processed text with the help of a support vector machine trained using given training dataset, followed by rectifying the errors by employing a confidencebased mechanism using simulated annealing to obtain an optimal correction from a pool of candidate corrections. Our system achieved a F 1-score of 32.98% on the private leaderboard 1 , which is the best score among all the participating systems.",
"pdf_parse": {
"paper_id": "U17-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the system details and results of team \"EOF\" from the University of Melbourne for the shared task of ALTA 2017, which addresses the problem of text correction for post-processed Optical Character Recognition (OCR) based systems. We developed a two stage system which first detects errors in the given OCR post-processed text with the help of a support vector machine trained using given training dataset, followed by rectifying the errors by employing a confidencebased mechanism using simulated annealing to obtain an optimal correction from a pool of candidate corrections. Our system achieved a F 1-score of 32.98% on the private leaderboard 1 , which is the best score among all the participating systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The dawn of digital age on mankind has laid the foundation of connectivity, fostering access and exchange of information practically anywhere in the world. Information can be present in any form, the most common being textual and graphical documents. Capturing and curating documents such as magazines, newspapers, journals and scientific articles is the primary requirement for a digitized, inter-connected society. While most of the textual documents can be stored with a decipherable textual component, the story is not the same for graphical documents which can be a collection of images containing scans of a textual document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Optical Character Recognition (OCR) is the process of identifying typed, handwritten or printed textual characters within a document containing scanned images or photographs with the help of various image processing and pattern recognition techniques (Tappert et al., 1990) , (Gupta et al., 2007) . The text obtained by OCR systems often suffers from low accuracy owing to irregularities in images, poor scans or simply the nature of arrangement of letters in a word. For example, reading \"lwo\" instead of \"two\", \"ia\" instead of \"is\", \"m\" instead of \"rn\", to name a few. These erroneous characters severely hamper the quality and readability of a converted document. Identifying and rectifying such erroneous characters in every OCR-processed document manually is a tedious task due to the sheer volume of data. Consequently, a methodology is required to identify such OCR errors and rectify them in order to enforce standards of purity and quality of the archived data. This need has motivated the shared task of ALTA 2017 (Molla and Cassidy, 2017) . The task organizers have provided the original outputs of an OCR system together with their corrected version for scanned Australian publications from Trove database 2 . Using this data, the participants are asked to automatically identify and rectify the OCR errors for documents in a separate test dataset.",
"cite_spans": [
{
"start": 251,
"end": 273,
"text": "(Tappert et al., 1990)",
"ref_id": "BIBREF13"
},
{
"start": 276,
"end": 296,
"text": "(Gupta et al., 2007)",
"ref_id": "BIBREF1"
},
{
"start": 1024,
"end": 1049,
"text": "(Molla and Cassidy, 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Considerable research has been conducted previously to automatically correct text obtained by OCR systems using machine learning. (Lund and Ringger, 2009) (Lund et al., 2011) (William B. Lund, 2013) (Lund et al., 2014) introduced various techniques to select the most appropriate correction among a pool of candidates. (Jones and Eisner, 1992) , (Kukich, 1992) demonstrated that OCR-generated errors are more diverse than handwriting errors. (Taghva and Stofsky, 2001 ) used extensive feature engineering to facilitate a robust candidate selection using a probabilistic model. Motivated by (Mei et al., 2016) , we adopt a two stage approach to solve this task. First, our system detects errors in the given OCR post-processed text with the help of a support vector machine (SVM) trained using given training dataset. This is followed by identifying a set of candidate words as corrections for each of the errors guided by allowing a limited number of character modifications. Finally, we rank the candidates by employing a confidence based mechanism using simulated annealing to obtain an optimal correction from the set of candidate corrections.",
"cite_spans": [
{
"start": 130,
"end": 154,
"text": "(Lund and Ringger, 2009)",
"ref_id": "BIBREF6"
},
{
"start": 155,
"end": 174,
"text": "(Lund et al., 2011)",
"ref_id": "BIBREF7"
},
{
"start": 175,
"end": 198,
"text": "(William B. Lund, 2013)",
"ref_id": "BIBREF14"
},
{
"start": 199,
"end": 218,
"text": "(Lund et al., 2014)",
"ref_id": "BIBREF8"
},
{
"start": 319,
"end": 343,
"text": "(Jones and Eisner, 1992)",
"ref_id": "BIBREF2"
},
{
"start": 346,
"end": 360,
"text": "(Kukich, 1992)",
"ref_id": "BIBREF4"
},
{
"start": 442,
"end": 467,
"text": "(Taghva and Stofsky, 2001",
"ref_id": "BIBREF12"
},
{
"start": 590,
"end": 608,
"text": "(Mei et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. Section 2 describes the methodology in detail. Section 3 describes the experiments and results. Section 4 discusses the error analysis of the obtained results and Section 5 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "OCR post-processing text correction is a challenging and complex problem. The ever-growing vocabulary constrains it further. In order to solve this problem, we break it down into two sub-problems, namely, identification of erroneous terms from the post-processed OCR text, followed by rectification of the identified erroneous terms. The complete pipeline is shown in Figure 1 , with the explanation of each stage as follows.",
"cite_spans": [],
"ref_spans": [
{
"start": 368,
"end": 376,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "The first stage of our system is to detect the erroneous terms for a given document. It involves two components as described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error detection",
"sec_num": "2.1"
},
{
"text": "The pre-processing module consists of tokenization of a given textual document, i.e. the original output of the OCR. We defined regular expression patterns which split a textual document on delimiters such as full-stop (.), comma (,), semi-colon (;), single quotes (',') or double quotes (\",\"). The tokens are considered for further processing as-is, i.e. without undergoing lemmatization. The primary reason to abstain from lemmatization is to preserve the original OCR words in order to rule out the scope of any character-based discrepancy. Additionally, care is taken to preserve the token order. The order is important as it dictates one of the features as defined in Section 2.1.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "2.1.1"
},
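The tokenization described above can be sketched as follows; the delimiter set and regular expression are assumptions for illustration, since the paper does not list its exact patterns:

```python
import re

# Hypothetical sketch of the pre-processing step: split an OCR document
# on punctuation delimiters while preserving token order and avoiding
# lemmatization, so the original OCR character sequences survive intact.
DELIMITERS = "[.,;'\"]"  # assumed delimiter class

def tokenize(document: str) -> list:
    # Split on the delimiter class first, then on whitespace;
    # keep order, drop empty chunks.
    tokens = []
    for chunk in re.split(DELIMITERS, document):
        tokens.extend(chunk.split())
    return tokens

print(tokenize('The quick, brown fox; jumps "over" lwo dogs.'))
```

Splitting on delimiters first and whitespace second keeps each OCR token exactly as emitted, which the later character-based features rely on.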
{
"text": "The next step is to classify each token in the document as being erroneous or free of any error. We train a SVM with radial basis function (RBF) kernel and made use of the following features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Classification",
"sec_num": "2.1.2"
},
{
"text": "\u2022 Presence of non alpha-numeric text within a word is one the strongest indicators of an erroneous word. These mainly include special symbols like '$', '#', '%' and punctuation marks like '!', '?', ';', ':', etc. For example, \"th?\" and \"Mr. Pat?rsom\" contain a punctuation mark '?'; \"***n\", \"JM**shopB\" contain a special symbol '*'. We created a dictionary of such special symbols as observed from the given training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Classification",
"sec_num": "2.1.2"
},
{
"text": "\u2022 The bigram frequency of a word should be greater than a frequency threshold that varies with different word length. A common word is less likely to be an error word. We adopted this feature from (Mei et al., 2016) .",
"cite_spans": [
{
"start": 197,
"end": 215,
"text": "(Mei et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Classification",
"sec_num": "2.1.2"
},
{
"text": "\u2022 A word is likely to be correct if this word with its context occurs in other places. We use a sliding window similar to (Mei et al., 2016) to construct n-gram contexts for a word. The frequency of one of the context in n-gram corpus should be greater than a frequency threshold.",
"cite_spans": [
{
"start": 122,
"end": 140,
"text": "(Mei et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Classification",
"sec_num": "2.1.2"
},
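A minimal sketch of the three detection features, with toy frequency tables standing in for the corpus statistics (the real per-length thresholds and counts are not reported in the paper):

```python
from collections import Counter

# Toy frequency resources; the real ones would come from a large corpus.
BIGRAM_FREQ = Counter({"th": 50, "he": 40, "tw": 10, "wo": 12, "lw": 0})
CONTEXT_FREQ = Counter({("two", "dogs"): 5, ("the", "two"): 7})
SPECIAL = set("$#%!?;:*")  # assumed symbol dictionary

def has_special_symbol(word):
    # Feature 1: any special (non-alphanumeric) symbol inside the word.
    return any(ch in SPECIAL for ch in word)

def min_bigram_freq(word):
    # Feature 2: lowest character-bigram frequency found in the word;
    # a low value suggests an unusual, possibly erroneous, word.
    bigrams = [word[i:i + 2] for i in range(len(word) - 1)]
    return min((BIGRAM_FREQ[b] for b in bigrams), default=0)

def context_supported(prev_word, word, next_word, threshold=1):
    # Feature 3: does the word occur with some context elsewhere?
    return (CONTEXT_FREQ[(prev_word, word)] >= threshold
            or CONTEXT_FREQ[(word, next_word)] >= threshold)

print(has_special_symbol("Pat?rsom"), min_bigram_freq("lwo"),
      context_supported("the", "two", "dogs"))
```

These per-token feature values are what the binary SVM classifier consumes.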
{
"text": "Using these features, we train a binary SVM classifier (Pedregosa et al., 2011) which classifies a word being erroneous (1) or not (0). The experimental details are mentioned in Section 3.2.",
"cite_spans": [
{
"start": 55,
"end": 79,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction and Classification",
"sec_num": "2.1.2"
},
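A sketch of the classification step using scikit-learn's SVC with an RBF kernel, which the paper cites; the feature vectors and labels below are illustrative placeholders, not the paper's actual training data:

```python
from sklearn.svm import SVC

# Illustrative feature vectors per token:
# [has_special_symbol, min_bigram_freq, context_supported]
X_train = [
    [1, 0, 0],   # e.g. "Pat?rsom" -> erroneous
    [1, 2, 0],   # e.g. "***n"     -> erroneous
    [0, 40, 1],  # e.g. "the"      -> correct
    [0, 35, 1],  # e.g. "dogs"     -> correct
]
y_train = [1, 1, 0, 0]

clf = SVC(kernel="rbf")  # RBF kernel, as in the paper
clf.fit(X_train, y_train)

# Predict for two unseen tokens' feature vectors.
preds = clf.predict([[1, 0, 0], [0, 30, 1]])
print(list(preds))
```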
{
"text": "The second stage of our system solves the problem of rectifying the erroneous words identified in the first stage. It consists of two major components, namely candidate search and candidate ranking as described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error rectification",
"sec_num": "2.2"
},
{
"text": "In this module, for each erroneous word, a set of candidate corrections is recommended within a limited number of character modifications based on calculating minimum edit distance between the erroneous word and the candidate correction. We make use of Levenshtein's edit distance (Levenshtein, 1966) to calculate the minimum edit distance consisting of the standard three operations, namely, insertion, deletion or substitution. The threshold is chosen heuristically on the basis of experiments conducted.",
"cite_spans": [
{
"start": 281,
"end": 300,
"text": "(Levenshtein, 1966)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Search",
"sec_num": "2.2.1"
},
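The candidate search can be sketched with the standard dynamic-programming Levenshtein distance; the vocabulary and threshold below are placeholders:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic DP edit distance: insertion, deletion and substitution
    # each cost 1; only the previous row is kept in memory.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def candidates(error_word, vocabulary, threshold=2):
    # Keep only vocabulary words within the edit-distance threshold.
    return [w for w in vocabulary if levenshtein(error_word, w) <= threshold]

VOCAB = ["two", "twin", "woe", "of", "off", "often"]  # toy vocabulary
print(candidates("lwo", VOCAB))
```

With threshold 2, the OCR error "lwo" yields a small candidate pool for the ranking stage to score.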
{
"text": "This module makes use of the output of previous module, i.e. a set of candidate corrections (w ci , Figure 1 : System pipeline i 2 W) for each erroneous term (w e ), to assign a score to each candidate correction using simulated annealing (SA) algorithm (Kirkpatrick et al., 1983) . SA requires an aperiodic Markov chain defined on a certain state space, and a cooling schedule to iteratively push the solution towards the optimum. In this module, the state space is set of all candidate corrections. We calculate a similarity score for each of the candidate corrections (w ci ) on the basis of the following three factors: 1. Minimum edit distance d(w ci , w e ) as calculated by Levenshtein's edit distance.",
"cite_spans": [
{
"start": 254,
"end": 280,
"text": "(Kirkpatrick et al., 1983)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Candidate Ranking",
"sec_num": "2.2.2"
},
{
"text": "2. Normalized longest common subsequence (Allison and Dix, 1986) which takes into account the length of both the shorter and the longer string for normalization.",
"cite_spans": [
{
"start": 41,
"end": 64,
"text": "(Allison and Dix, 1986)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ranking",
"sec_num": "2.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "nlcs(w_{c_i}, w_e) = \\frac{2 \\times len(lcs(w_{c_i}, w_e))^2}{len(w_{c_i}) + len(w_e)}",
"eq_num": "(1)"
}
],
"section": "Candidate Ranking",
"sec_num": "2.2.2"
},
{
"text": "3. Normalized maximal consecutive longest common subsequence, which is a modification of aforementioned factor by limiting the common subsequences to be consecutive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ranking",
"sec_num": "2.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "nmnlcs(w_{c_i}, w_e) = \\frac{2 \\times len(mclcs(w_{c_i}, w_e))^2}{len(w_{c_i}) + len(w_e)}",
"eq_num": "(2)"
}
],
"section": "Candidate Ranking",
"sec_num": "2.2.2"
},
{
"text": "The final score is calculated as a weighted sum of these three factors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ranking",
"sec_num": "2.2.2"
},
{
"text": "score(w ci , w e ) = \u21b5 1 \u21e4 d(w ci , w e ) + \u21b5 2 \u21e4 nlcs(w ci , w e ) + \u21b5 3 \u21e4 nmnlcs(w ci , w e )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ranking",
"sec_num": "2.2.2"
},
{
"text": "where, \u21b5 1 , \u21b5 2 and \u21b5 3 are chosen heuristically. Next, we perturb the given candidate 26 times, i.e. for all the characters of English alphabet, in order to check which character returns the maximal score. This is followed by validating the presence of that candidate correction by Google Web n-gram corpus 3 . Finally, the candidates are ranked on the basis of this optimized score. The candidate having highest score is returned as the final suggested correction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ranking",
"sec_num": "2.2.2"
},
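The three similarity factors and the weighted score can be sketched as follows; the weights are placeholders (the paper's heuristic values are not reported), and the edit-distance term is weighted negatively here on the assumption that closer candidates should score higher:

```python
def levenshtein(a, b):
    # Standard DP edit distance (insertion, deletion, substitution).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def lcs_len(a, b):
    # Length of the (not necessarily consecutive) longest common subsequence.
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            curr.append(prev[j - 1] + 1 if ca == cb else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

def mclcs_len(a, b):
    # Maximal consecutive LCS, i.e. the longest common substring.
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            curr.append(prev[j - 1] + 1 if ca == cb else 0)
            best = max(best, curr[j])
        prev = curr
    return best

def nlcs(c, e):
    return 2 * lcs_len(c, e) ** 2 / (len(c) + len(e))

def nmnlcs(c, e):
    return 2 * mclcs_len(c, e) ** 2 / (len(c) + len(e))

def score(c, e, a1=-1.0, a2=1.0, a3=1.0):
    # Placeholder weights a1, a2, a3; a1 is negative so that candidates
    # closer in edit distance receive a higher overall score.
    return a1 * levenshtein(c, e) + a2 * nlcs(c, e) + a3 * nmnlcs(c, e)

print(score("two", "lwo"), score("twin", "lwo"))
```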
{
"text": "The ALTA shared task is to rectify textual errors in OCR post-processed documents. We first describe the given dataset briefly, followed by experimental setup and results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "3"
},
{
"text": "The shared task organizers obtained a corpus of approximately 8,000 Australian publications from Trove database. The corpus consists of original output of OCR system for each of the documents, along with their corrected versions. The organizers have provided 6,000 documents and their corrected versions as training dataset. 1,941 documents are provided as the test dataset, for which only the original output of the OCR system is provided. The details of data are given by (Molla and Cassidy, 2017) .",
"cite_spans": [
{
"start": 474,
"end": 499,
"text": "(Molla and Cassidy, 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "Stage 1 of our experiment pertains to error detection which classifies each word of the document to be either erroneous (1) or correct (0). In order to train a binary SVM classifier, we split the given training data into training and development datasets using 5-fold cross validation. The RBF kernel is employed to train the SVM. The test For the stage 2 subproblem of error rectification, first we select the threshold for the Levenshtein's edit distance by measuring the minimum edit distance between the words obtained from corrected and original documents provided in the training dataset. This helps in recommending the candidate corrections by allowing words for which minimum edit distance is less than or equal to the threshold. For candidate ranking, we initialize the score as 0 for each pair of (w ci , w e ) corresponding to an erroneous word. The value of temperature is initialized to 500 and cooling schedule is initialized to 0.8. Table 2 shows the five models that were used to render final results. The baseline model corresponds to ranking of candidate corrections on the basis of total score calculated in Section 2.2.2. SA 0.80 , SA 0.85 , SA 0.88 and SA 0.92 correspond to models trained using simulated annealing at the respective cooling schedules.",
"cite_spans": [],
"ref_spans": [
{
"start": 948,
"end": 955,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Setup and Results",
"sec_num": "3.2"
},
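The simulated-annealing ranking with the stated initial temperature of 500 and cooling schedule of 0.8 can be sketched as follows; the candidate scores are toy values standing in for the weighted similarity of Section 2.2.2:

```python
import math
import random

def anneal(candidates, score_fn, t0=500.0, cooling=0.8, t_min=1e-3, seed=0):
    # Simulated annealing over the candidate state space: a randomly
    # proposed candidate is always accepted if it scores better, and
    # accepted with probability exp(delta / T) if worse; the temperature
    # T decays by the cooling factor each step.
    rng = random.Random(seed)
    current = rng.choice(candidates)
    best = current
    t = t0
    while t > t_min:
        proposal = rng.choice(candidates)
        delta = score_fn(proposal) - score_fn(current)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = proposal
        if score_fn(current) > score_fn(best):
            best = current
        t *= cooling
    return best

# Toy candidate scores for an erroneous word such as "lwo".
scores = {"two": 1.7, "woe": 0.4, "tao": 0.1}
print(anneal(list(scores), scores.get))
```

With only a handful of candidates the search trivially finds the top-scoring one; the annealing machinery matters when the candidate pool and perturbation space are large.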
{
"text": "The trained model is used for predictions corresponding to the public leaderboard which contains 50% of the total data. Finally, at the end of the competition, the predictions are measured against the remaining 50% of data which corresponds to the private leaderboard. The results obtained by using the aforementioned features is shown in Table 2. Standard precision, recall and F1-score metrics are used to report the prediction results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup and Results",
"sec_num": "3.2"
},
{
"text": "Our system performs almost similarly on both public and private leaderboards, which indicates that the model is not overfitting. Table 1 indicates that a collective use of character-level features and contextual features leads to an increase in F1 score, even if it's a marginal increment. The recall of our error detection module is consistently low, which demonstrates the complexity of this sub-problem. Table 2 demonstrates that simulated annealing has proven to show an improvement of about 5% F 1 score over the baseline score-based model.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 1",
"ref_id": null
},
{
"start": 407,
"end": 414,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "What worked well: Our system was able to rectify some of the punctuation based errors like \"Collision ;\" ! \"Collision <next-word>\". We were also able to rectify certain typo-based errors like \"ofi\" ! \"of\". What did not work: Our system does not always return a correction when text containing a number is identified as an erroneous term. For example, in \"October 2fi\", the term \"2fi\" remains undetected. There are many other features which we could have tried like considering erroneous text location in the document, syntactic structure of sentences within the document and non-English text words, to name a few. However, given the limitation of time, it was not possible to incorporate these features. It would be interesting to expand this system by adding these features in future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "OCR post-processed text correction is an important and challenging problem that needs to be addressed to facilitate digitization. In this paper, we describe our participating system, which was based on a supervised classification method to detect erroneous words, followed by suggesting optimal corrections for each erroneous word with a confidence-based mechanism using simulated annealing. Our system was ranked the best with an F1-score of 32.98%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://www.kaggle.com/c/alta-2017challenge/leaderboard",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://trove.nla.gov.au Gitansh Khirbat. 2017. OCR Post-Processing Text Correction using Simulated Annealing (OPTeCA). In Proceedings of Australasian Language Technology Association Workshop, pages 119 123.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://catalog.ldc.upenn.edu/LDC2006T13",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A bit-string longest-common-subsequence algorithm. Information Processing Letters",
"authors": [
{
"first": "Lloyd",
"middle": [],
"last": "Allison",
"suffix": ""
},
{
"first": "Trevor",
"middle": [
"I"
],
"last": "Dix",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "23",
"issue": "",
"pages": "305--310",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lloyd Allison and Trevor I Dix. 1986. A bit-string longest-common-subsequence algorithm. Informa- tion Processing Letters, 23(5):305-310.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Ocr binarization and image preprocessing for searching historical documents",
"authors": [
{
"first": "Maya",
"middle": [
"R"
],
"last": "Gupta",
"suffix": ""
},
{
"first": "Nathaniel",
"middle": [
"P"
],
"last": "Jacobson",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"K"
],
"last": "Garcia",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "40",
"issue": "",
"pages": "389--397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maya R. Gupta, Nathaniel P. Jacobson, and Eric K. Garcia. 2007. Ocr binarization and image pre- processing for searching historical documents. Pat- tern Recogn., 40(2):389-397, February.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A probabilistic parser and its applications",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"M"
],
"last": "Jones",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 1992,
"venue": "AAAI Workshop on Statistically-Based NLP Techniques",
"volume": "",
"issue": "",
"pages": "20--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark A Jones and Jason M Eisner. 1992. A probabilis- tic parser and its applications. In AAAI Workshop on Statistically-Based NLP Techniques, pages 20-27.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Optimization by simulated annealing. science",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Kirkpatrick",
"suffix": ""
},
{
"first": "Mario",
"middle": [
"P"
],
"last": "Daniel Gelatt",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vecchi",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "220",
"issue": "",
"pages": "671--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Kirkpatrick, C Daniel Gelatt, Mario P Vecchi, et al. 1983. Optimization by simulated annealing. science, 220(4598):671-680.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Techniques for automatically correcting words in text",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Kukich",
"suffix": ""
}
],
"year": 1992,
"venue": "ACM Comput. Surv",
"volume": "24",
"issue": "4",
"pages": "377--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Kukich. 1992. Techniques for automatically correcting words in text. ACM Comput. Surv., 24(4):377-439, December.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Binary codes capable of correcting deletions, insertions, and reversals",
"authors": [
{
"first": "",
"middle": [],
"last": "Vladimir I Levenshtein",
"suffix": ""
}
],
"year": 1966,
"venue": "Soviet physics doklady",
"volume": "10",
"issue": "",
"pages": "707--710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet physics doklady, volume 10, pages 707-710.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Improving optical character recognition through efficient multiple system alignment",
"authors": [
{
"first": "B",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"K"
],
"last": "Lund",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ringger",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 9th ACM/IEEE-CS Joint Conference on Digital Libraries, JCDL '09",
"volume": "",
"issue": "",
"pages": "231--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B. Lund and Eric K. Ringger. 2009. Improv- ing optical character recognition through efficient multiple system alignment. In Proceedings of the 9th ACM/IEEE-CS Joint Conference on Digital Li- braries, JCDL '09, pages 231-240, New York, NY, USA. ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Progressive alignment and discriminative error correction for multiple ocr engines",
"authors": [
{
"first": "W",
"middle": [
"B"
],
"last": "Lund",
"suffix": ""
},
{
"first": "D",
"middle": [
"D"
],
"last": "Walker",
"suffix": ""
},
{
"first": "E",
"middle": [
"K"
],
"last": "Ringger",
"suffix": ""
}
],
"year": 2011,
"venue": "2011 International Conference on Document Analysis and Recognition",
"volume": "",
"issue": "",
"pages": "764--768",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. B. Lund, D. D. Walker, and E. K. Ringger. 2011. Progressive alignment and discriminative error cor- rection for multiple ocr engines. In 2011 Inter- national Conference on Document Analysis and Recognition, pages 764-768, Sept.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "How well does multiple ocr error correction generalize? Society of Photo-Optical Instrumentation Engineers",
"authors": [
{
"first": "Eric",
"middle": [
"K"
],
"last": "William B Lund",
"suffix": ""
},
{
"first": "Daniel D",
"middle": [],
"last": "Ringger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B Lund, Eric K Ringger, and Daniel D Walker. 2014. How well does multiple ocr error correction generalize? Society of Photo-Optical Instrumenta- tion Engineers.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Statistical learning for ocr text correction",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "Aminul",
"middle": [],
"last": "Islam",
"suffix": ""
},
{
"first": "Yajing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Abidalrahman",
"middle": [],
"last": "Moh'd",
"suffix": ""
},
{
"first": "Evangelos",
"middle": [
"E"
],
"last": "Milios",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.06950"
]
},
"num": null,
"urls": [],
"raw_text": "Jie Mei, Aminul Islam, Yajing Wu, Abidalrahman Moh'd, and Evangelos E Milios. 2016. Statisti- cal learning for ocr text correction. arXiv preprint arXiv:1611.06950.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overview of the 2017 alta shared task: Correcting ocr errors",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Molla",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Cassidy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of Australasian Language Technology Association Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego Molla and Steve Cassidy. 2017. Overview of the 2017 alta shared task: Correcting ocr errors. In Proceedings of Australasian Language Technology Association Workshop, Brisbane, Australia.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten- hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Pas- sos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learn- ing in Python. Journal of Machine Learning Re- search, 12:2825-2830.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Ocrspell: an interactive spelling correction system for ocr errors in text",
"authors": [
{
"first": "Kazem",
"middle": [],
"last": "Taghva",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Stofsky",
"suffix": ""
}
],
"year": 2001,
"venue": "International Journal on Document Analysis and Recognition",
"volume": "3",
"issue": "3",
"pages": "125--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazem Taghva and Eric Stofsky. 2001. Ocrspell: an interactive spelling correction system for ocr errors in text. International Journal on Document Analysis and Recognition, 3(3):125-137.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The state of the art in online handwriting recognition",
"authors": [
{
"first": "C",
"middle": [
"C"
],
"last": "Tappert",
"suffix": ""
},
{
"first": "C",
"middle": [
"Y"
],
"last": "Suen",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Wakahara",
"suffix": ""
}
],
"year": 1990,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "12",
"issue": "8",
"pages": "787--808",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. C. Tappert, C. Y. Suen, and T. Wakahara. 1990. The state of the art in online handwriting recogni- tion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(8):787-808, Aug.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Combining multiple thresholding binarization values to improve ocr output",
"authors": [
{
"first": "Eric",
"middle": [
"K Ringger"
],
"last": "William",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Lund",
"suffix": ""
},
{
"first": "Douglas",
"middle": [
"J"
],
"last": "Kennard",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric K. Ringger William B. Lund, Douglas J. Kennard. 2013. Combining multiple thresholding binarization values to improve ocr output.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"content": "<table><tr><td>: Stage 2 results -Error rectification</td></tr><tr><td>dataset remains unused since the correct labels for</td></tr><tr><td>erroneous words are unknown. Table 1 reports the</td></tr><tr><td>intermediate results obtained by adding the fea-</td></tr><tr><td>tures defined in Section 2.1.2 incrementally.</td></tr></table>",
"num": null,
"type_str": "table",
"text": ""
}
}
}
}