| { |
| "paper_id": "P13-1021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:37:17.491379Z" |
| }, |
| "title": "Unsupervised Transcription of Historical Documents", |
| "authors": [ |
| { |
| "first": "Taylor", |
| "middle": [], |
| "last": "Berg-Kirkpatrick", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of California at Berkeley", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Durrett", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of California at Berkeley", |
| "location": {} |
| }, |
| "email": "gdurrett@cs.berkeley.edu" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of California at Berkeley", |
| "location": {} |
| }, |
| "email": "klein@cs.berkeley.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present a generative probabilistic model, inspired by historical printing processes, for transcribing images of documents from the printing press era. By jointly modeling the text of the document and the noisy (but regular) process of rendering glyphs, our unsupervised system is able to decipher font structure and more accurately transcribe images into text. Overall, our system substantially outperforms state-of-the-art solutions for this task, achieving a 31% relative reduction in word error rate over the leading commercial system for historical transcription, and a 47% relative reduction over Tesseract, Google's open source OCR system.", |
| "pdf_parse": { |
| "paper_id": "P13-1021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present a generative probabilistic model, inspired by historical printing processes, for transcribing images of documents from the printing press era. By jointly modeling the text of the document and the noisy (but regular) process of rendering glyphs, our unsupervised system is able to decipher font structure and more accurately transcribe images into text. Overall, our system substantially outperforms state-of-the-art solutions for this task, achieving a 31% relative reduction in word error rate over the leading commercial system for historical transcription, and a 47% relative reduction over Tesseract, Google's open source OCR system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Standard techniques for transcribing modern documents do not work well on historical ones. For example, even state-of-the-art OCR systems produce word error rates of over 50% on the documents shown in Figure 1 . Unsurprisingly, such error rates are too high for many research projects (Arlitsch and Herbert, 2004; Shoemaker, 2005; Holley, 2010) . We present a new, generative model specialized to transcribing printing-press era documents. Our model is inspired by the underlying printing processes and is designed to capture the primary sources of variation and noise.", |
| "cite_spans": [ |
| { |
| "start": 285, |
| "end": 313, |
| "text": "(Arlitsch and Herbert, 2004;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 314, |
| "end": 330, |
| "text": "Shoemaker, 2005;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 331, |
| "end": 344, |
| "text": "Holley, 2010)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 201, |
| "end": 209, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "One key challenge is that the fonts used in historical documents are not standard (Shoemaker, 2005) . For example, consider Figure 1a . The fonts are not irregular like handwriting -each occurrence of a given character type, e.g. a, will use the same underlying glyph. However, the exact glyphs are unknown. Some differences between fonts are minor, reflecting small variations in font design. Others are more severe, like the presence of the archaic long s character before 1804. To address the general problem of unknown fonts, our model learns the font in an unsupervised fashion. Font shape and character segmentation are tightly coupled, and so they are modeled jointly.", |
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 99, |
| "text": "(Shoemaker, 2005)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 124, |
| "end": 133, |
| "text": "Figure 1a", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A second challenge with historical data is that the early typesetting process was noisy. Handcarved blocks were somewhat uneven and often failed to sit evenly on the mechanical baseline. Figure 1b shows an example of the text's baseline moving up and down, with varying gaps between characters. To deal with these phenomena, our model incorporates random variables that specifically describe variations in vertical offset and horizontal spacing.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 187, |
| "end": 196, |
| "text": "Figure 1b", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A third challenge is that the actual inking was also noisy. For example, in Figure 1c some characters are thick from over-inking while others are obscured by ink bleeds. To be robust to such rendering irregularities, our model captures both inking levels and pixel-level noise. Because the model is generative, we can also treat areas that are obscured by larger ink blotches as unobserved, and let the model predict the obscured text based on visual and linguistic context.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 76, |
| "end": 85, |
| "text": "Figure 1c", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our system, which we call Ocular, operates by fitting the model to each document in an unsupervised fashion. The system outperforms state-ofthe-art baselines, giving a 47% relative error reduction over Google's open source Tesseract system, and giving a 31% relative error reduction over ABBYY's commercial FineReader system, which has been used in large-scale historical transcription projects (Holley, 2010) . Over-inked It appeared that the Prisoner was very E :", |
| "cite_spans": [ |
| { |
| "start": 395, |
| "end": 409, |
| "text": "(Holley, 2010)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 412, |
| "end": 422, |
| "text": "Over-inked", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "X :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Wandering baseline Historical font Figure 2 : An example image from a historical document (X) and its transcription (E).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 35, |
| "end": 43, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Relatively little prior work has built models specifically for transcribing historical documents. Some of the challenges involved have been addressed (Ho and Nagy, 2000; Huang et al., 2006; Kae and Learned-Miller, 2009) , but not in a way targeted to documents from the printing press era. For example, some approaches have learned fonts in an unsupervised fashion but require pre-segmentation of the image into character or word regions (Ho and Nagy, 2000; Huang et al., 2006) , which is not feasible for noisy historical documents. Kae and Learned-Miller (2009) jointly learn the font and image segmentation but do not outperform modern baselines. Work that has directly addressed historical documents has done so using a pipelined approach, and without fully integrating a strong language model (Vamvakas et al., 2008; Kluzner et al., 2009; Kae et al., 2010; Kluzner et al., 2011) . The most comparable work is that of Kopec and Lomelin (1996) and Kopec et al. (2001) . They integrated typesetting models with language models, but did not model noise. In the NLP community, generative models have been developed specifically for correcting outputs of OCR systems (Kolak et al., 2003) , but these do not deal directly with images.", |
| "cite_spans": [ |
| { |
| "start": 150, |
| "end": 169, |
| "text": "(Ho and Nagy, 2000;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 170, |
| "end": 189, |
| "text": "Huang et al., 2006;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 190, |
| "end": 219, |
| "text": "Kae and Learned-Miller, 2009)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 438, |
| "end": 457, |
| "text": "(Ho and Nagy, 2000;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 458, |
| "end": 477, |
| "text": "Huang et al., 2006)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 534, |
| "end": 563, |
| "text": "Kae and Learned-Miller (2009)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 798, |
| "end": 821, |
| "text": "(Vamvakas et al., 2008;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 822, |
| "end": 843, |
| "text": "Kluzner et al., 2009;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 844, |
| "end": 861, |
| "text": "Kae et al., 2010;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 862, |
| "end": 883, |
| "text": "Kluzner et al., 2011)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 922, |
| "end": 946, |
| "text": "Kopec and Lomelin (1996)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 951, |
| "end": 970, |
| "text": "Kopec et al. (2001)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1166, |
| "end": 1186, |
| "text": "(Kolak et al., 2003)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A closely related area of work is automatic decipherment (Ravi and Knight, 2008; Snyder et al., 2010; Ravi and Knight, 2011; Berg-Kirkpatrick and Klein, 2011) . The fundamental problem is similar to our own: we are presented with a sequence of symbols, and we need to learn a correspondence between symbols and letters. Our approach is also similar in that we use a strong language model (in conjunction with the constraint that the correspondence be regular) to learn the correct mapping. However, the symbols are not noisy in decipherment problems and in our problem we face a grid of pixels for which the segmentation into symbols is unknown. In contrast, decipherment typically deals only with discrete symbols.", |
| "cite_spans": [ |
| { |
| "start": 57, |
| "end": 80, |
| "text": "(Ravi and Knight, 2008;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 81, |
| "end": 101, |
| "text": "Snyder et al., 2010;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 102, |
| "end": 124, |
| "text": "Ravi and Knight, 2011;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 125, |
| "end": 158, |
| "text": "Berg-Kirkpatrick and Klein, 2011)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Most historical documents have unknown fonts, noisy typesetting layouts, and inconsistent ink levels, usually simultaneously. For example, the portion of the document shown in Figure 2 has all three of these problems. Our model must handle them jointly.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 176, |
| "end": 184, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We take a generative modeling approach inspired by the overall structure of the historical printing process. Our model generates images of documents line by line; we present the generative process for the image of a single line. Our primary random variables are E (the text) and X (the pixels in an image of the line). Additionally, we have a random variable T that specifies the layout of the bounding boxes of the glyphs in the image, and a random variable R that specifies aspects of the inking and rendering process. The joint distribution is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "P (E, T, R, X) = P (E) [Language model] \u2022 P (T |E) [Typesetting model] \u2022 P (R) [Inking model] \u2022 P (X|E, T, R) [Noise model]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We let capital letters denote vectors of concatenated random variables, and we denote the individual random variables with lower-case letters. For example, E represents the entire sequence of text, while e i represents ith character in the sequence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our language model, P (E), is a Kneser-Ney smoothed character n-gram model (Kneser and Ney, 1995) . We generate printed lines of text (rather than sentences) independently, without generating an explicit stop character. This means that, formally, the model must separately generate the character length of each line. We choose not to bias the model towards longer or shorter character sequences and let the line length m be drawn uniformly at random from the positive integers less than some large constant M. 1 When i < 1, let e i denote a line-initial null character. We can now write: a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a", |
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 97, |
| "text": "(Kneser and Ney, 1995)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 588, |
| "end": 650, |
| "text": "a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Language Model P (E)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "P (E) = P (m) \u2022 m i=1 P (e i |e i\u22121 , . . . , e i\u2212n ) e i 1 e i+1 e i l i g i r i X RPAD i X LPAD i X GLYPH i P ( \u2022 | th) P ( \u2022 | th) a b c . . .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language Model P (E)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "P ( \u2022 | pe) Inking: \u2713 INK", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language Model P (E)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Figure 3: Character tokens ei are generated by the language model. For each token index i, a glyph bounding box width gi, left padding width li, and a right padding width ri, are generated. Finally, the pixels in each glyph bounding box X GLYPH i are generated conditioned on the corresponding character, while the pixels in left and right padding bounding boxes, X LPAD i and X RPAD i , are generated from a background distribution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inking params", |
| "sec_num": null |
| }, |
| { |
| "text": "Generally speaking, the process of typesetting produces a line of text by first tiling bounding boxes of various widths and then filling in the boxes with glyphs. Our generative model, which is depicted in Figure 3 , reflects this process. As a first step, our model generates the dimensions of character bounding boxes; for each character token index i we generate three bounding box widths: a glyph box width g i , a left padding box width l i , and a right padding box width r i , as shown in Figure 3 . We let the pixel height of all lines be fixed to h. Let T i = (l i , g i , r i ) so that T i specifies the dimensions of the character box for token index i; T is then the concatenation of all T i , denoting the full layout.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 206, |
| "end": 214, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 496, |
| "end": 504, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Typesetting Model P (T |E)", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Because the width of a glyph depends on its shape, and because of effects resulting from kerning and the use of ligatures, the components of each T i are drawn conditioned on the character token e i . This means that, as part of our param- . We can now express the typesetting layout portion of the model as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Typesetting Model P (T |E)", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "P (T |E) = m i=1 P (Ti|ei) = m i=1 P (li; \u03b8 LPAD e i ) \u2022 P (gi; \u03b8 GLYPH e i ) \u2022 P (ri; \u03b8 RPAD e i )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Typesetting Model P (T |E)", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Each character type c in our font has another set of parameters, a matrix \u03c6 c . These are weights that specify the shape of the character type's glyph, and are depicted in Figure 3 as part of the font parameters. \u03c6 c will come into play when we begin generating pixels in Section 3.3.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 172, |
| "end": 180, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Typesetting Model P (T |E)", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Before we start filling the character boxes with pixels, we need to specify some properties of the inking and rendering process, including the amount of ink used and vertical variation along the text baseline. Our model does this by generating, for each character token index i, a discrete value d i that specifies the overall inking level in the character's bounding box, and a discrete value v i that specifies the glyph's vertical offset. These variations in the inking and typesetting process are mostly independent of character type. Thus, in our model, their distributions are not characterspecific. There is one global set of multinomial parameters governing inking level (\u03b8 INK ), and another governing offset (\u03b8 VERT ); both are depicted on the left-hand side of Figure 3 . Let", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 772, |
| "end": 780, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Inking Model P (R)", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "R i = (d i , v i )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inking Model P (R)", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "and let R be the concatenation of all R i so that we can express the inking model as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inking Model P (R)", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "P (R) = m i=1 P (Ri) = m i=1 P (di; \u03b8 INK ) \u2022 P (vi; \u03b8 VERT )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inking Model P (R)", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "The d i and v i variables are suppressed in Figure 3 to reduce clutter but are expressed in Figure 4 , which depicts the process of rendering a glyph box.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 44, |
| "end": 52, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 92, |
| "end": 100, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Inking Model P (R)", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "Now that we have generated a typesetting layout T and an inking context R, we have to actually generate each of the pixels in each of the character boxes, left padding boxes, and right padding boxes; the matrices that these groups of pixels comprise are denoted X GLYPH We assume that pixels are binary valued and sample their values independently from Bernoulli distributions. 2 The probability of black (the Bernoulli parameter) depends on the type of pixel generated. All the pixels in a padding box have the same probability of black that depends only on the inking level of the box, d i . Since we have already generated this value and the widths l i and r i of each padding box, we have enough information to generate left and right padding pixel matrices", |
| "cite_spans": [ |
| { |
| "start": 378, |
| "end": 379, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noise Model P (X|E, T, R)", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "X LPAD i and X RPAD i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noise Model P (X|E, T, R)", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The Bernoulli parameter of a pixel inside a glyph bounding box depends on the pixel's location inside the box (as well as on d i and v i , but for simplicity of exposition, we temporarily suppress this dependence) and on the model parameters governing glyph shape (for each character type c, the parameter matrix \u03c6 c specifies the shape of the character's glyph.) The process by which glyph pixels are generated is depicted in Figure 4 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 427, |
| "end": 435, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Noise Model P (X|E, T, R)", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The dependence of glyph pixels on location complicates generation of the glyph pixel matrix X GLYPH i since the corresponding parameter matrix 2 We could generate real-valued pixels with a different choice of noise distribution. } } } a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a ", |
| "cite_spans": [ |
| { |
| "start": 143, |
| "end": 144, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 229, |
| "end": 336, |
| "text": "} } } a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Noise Model P (X|E, T, R)", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "g i d i v i ei \u2713 PIXEL (j, k, g i , d i , v i ; ei ) \u21e5 X GLYPH i \u21e4 jk \u21e0 Bernoulli Bernoulli parameters", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "} }", |
| "sec_num": null |
| }, |
| { |
| "text": "Choose inking Figure 4 : We generate the pixels for the character token ei by first sampling a glyph width gi, an inking level di, and a vertical offset vi. Then we interpolate the glyph weights \u03c6e i and apply the logistic function to produce a matrix of Bernoulli parameters of width gi, inking di, and offset vi. \u03b8 PIXEL (j, k, gi, di, vi; \u03c6e i ) is the Bernoulli parameter at row j and column k. Finally, we sample from each Bernoulli distribution to generate a matrix of pixel values,", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 14, |
| "end": 22, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Pixel values", |
| "sec_num": null |
| }, |
| { |
| "text": "X GLYPH i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pixel values", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03c6 e i has some type-level width w which may differ from the current token-level width g i . Introducing distinct parameters for each possible width would yield a model that can learn completely different glyph shapes for slightly different widths of the same character. We, instead, need a parameterization that ties the shapes for different widths together, and at the same time allows mobility in the parameter space during learning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pixel values", |
| "sec_num": null |
| }, |
| { |
| "text": "Our solution is to horizontally interpolate the weights of the shape parameter matrix \u03c6 e i down to a smaller set of columns matching the tokenlevel choice of glyph width g i . Thus, the typelevel matrix \u03c6 e i specifies the canonical shape of the glyph for character e i when it takes its maximum width w. After interpolating, we apply the logistic function to produce the individual Bernoulli parameters. If we let [X GLYPH i ] jk denote the value of the pixel at the jth row and kth column of the glyph pixel matrix X GLYPH i for token i, and let \u03b8 PIXEL (j, k, g i ; \u03c6 e i ) denote the token-level", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pixel values", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2713 PIXEL :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pixel values", |
| "sec_num": null |
| }, |
| { |
| "text": "Interpolate, apply logistic c :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pixel values", |
| "sec_num": null |
| }, |
| { |
| "text": "Glyph weights Bernoulli params \u00b5 Figure 5 : In order to produce Bernoulli parameter matrices \u03b8 PIXEL of variable width, we interpolate over columns of \u03c6c with vectors \u00b5, and apply the logistic function to each result.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 33, |
| "end": 41, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Pixel values", |
| "sec_num": null |
| }, |
| { |
| "text": "Bernoulli parameter for this pixel, we can write:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pixel values", |
| "sec_num": null |
| }, |
| { |
| "text": "[X GLYPH i ] jk \u223c Bernoulli \u03b8 PIXEL (j, k, gi; \u03c6e i )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pixel values", |
| "sec_num": null |
| }, |
| { |
| "text": "The interpolation process for a single row is depicted in Figure 5 . We define a constant interpolation vector \u00b5(g i , k) that is specific to the glyph box width g i and glyph box column k. Each \u00b5(g i , k) is shaped according to a Gaussian centered at the relative column position in \u03c6 e i . The glyph pixel Bernoulli parameters are defined as follows:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 58, |
| "end": 66, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Pixel values", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03b8 PIXEL (j, k,gi; \u03c6e i ) = logistic w k =1 \u00b5(gi, k) k \u2022 [\u03c6e i ] jk", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pixel values", |
| "sec_num": null |
| }, |
| { |
| "text": "The fact that the parameterization is log-linear will ensure that, during the unsupervised learning process, updating the shape parameters \u03c6 c is simple and feasible.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pixel values", |
| "sec_num": null |
| }, |
| { |
| "text": "By varying the magnitude of \u00b5 we can change the level of smoothing in the logistic model and cause it to permit areas that are over-inked. This is the effect that d i controls. By offsetting the rows of \u03c6 c that we interpolate weights from, we change the vertical offset of the glyph, which is controlled by v i . The full pixel generation process is diagrammed in Figure 4 , where the dependence of \u03b8 PIXEL on d i and v i is also represented.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 365, |
| "end": 373, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Pixel values", |
| "sec_num": null |
| }, |
| { |
| "text": "We use the EM algorithm (Dempster et al., 1977) to find the maximum-likelihood font parameters:", |
| "cite_spans": [ |
| { |
| "start": 24, |
| "end": 47, |
| "text": "(Dempster et al., 1977)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u03c6 c , \u03b8 LPAD c , \u03b8 GLYPH c", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4" |
| }, |
| { |
| "text": ", and \u03b8 RPAD c . The image X is the only observed random variable in our model. The identities of the characters E the typesetting layout T and the inking R will all be unobserved. We do not learn \u03b8 INK and \u03b8 VERT , which are set to the uniform distribution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4" |
| }, |
| { |
| "text": "During the E-step we compute expected counts for E and T , but maximize over R, for which we compute hard counts. Our model is an instance of a hidden semi-Markov model (HSMM), and therefore the computation of marginals is tractable with the semi-Markov forward-backward algorithm (Levinson, 1986) .", |
| "cite_spans": [ |
| { |
| "start": 281, |
| "end": 297, |
| "text": "(Levinson, 1986)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Expectation Maximization", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "During the M-step, we update the parameters \u03b8 LPAD c , \u03b8 RPAD c using the standard closed-form multinomial updates and use a specialized closedform update for \u03b8 GLYPH c that enforces unimodality of the glyph width distribution. 3 The glyph weights, \u03c6 c , do not have a closed-form update. The noise model that \u03c6 c parameterizes is a local log-linear model, so we follow the approach of Berg-Kirkpatrick et al. (2010) and use L-BFGS (Liu and Nocedal, 1989) to optimize the expected likelihood with respect to \u03c6 c .", |
| "cite_spans": [ |
| { |
| "start": 228, |
| "end": 229, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 386, |
| "end": 416, |
| "text": "Berg-Kirkpatrick et al. (2010)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 432, |
| "end": 455, |
| "text": "(Liu and Nocedal, 1989)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Expectation Maximization", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The number of states in the dynamic programming lattice grows exponentially with the order of the language model (Jelinek, 1998; Koehn, 2004) . As a result, inference can become slow when the language model order n is large. To remedy this, we take a coarse-to-fine approach to both learning and inference. On each iteration of EM, we perform two passes: a coarse pass using a low-order language model, and a fine pass using a high-order language model (Petrov et al., 2008; Zhang and Gildea, 2008) . We use the marginals 4 from the coarse pass to prune states from the dynamic program of the fine pass.", |
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 128, |
| "text": "(Jelinek, 1998;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 129, |
| "end": 141, |
| "text": "Koehn, 2004)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 453, |
| "end": 474, |
| "text": "(Petrov et al., 2008;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 475, |
| "end": 498, |
| "text": "Zhang and Gildea, 2008)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coarse-to-Fine Learning and Inference", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In the early iterations of EM, our font parameters are still inaccurate, and to prune heavily based on such parameters would rule out correct analyses. Therefore, we gradually increase the aggressiveness of pruning over the course of EM. To ensure that each iteration takes approximately the same amount of computation, we also gradually increase the order of the fine pass, only reaching the full order n on the last iteration. To produce a decoding of the image into text, on the final iteration we run a Viterbi pass using the pruned fine model. Figure 6: Portions of several documents from our test set representing a range of difficulties are displayed. On document (a), which exhibits noisy typesetting, our system achieves a word error rate (WER) of 25.2. Document (b) is cleaner in comparison, and on it we achieve a WER of 15.4. On document (c), which is also relatively clean, we achieve a WER of 12.5. On document (d), which is severely degraded, we achieve a WER of 70.0.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 549, |
| "end": 557, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Coarse-to-Fine Learning and Inference", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We perform experiments on two historical datasets consisting of images of documents printed between 1700 and 1900 in England and Australia. Examples from both datasets are displayed in Figure 6.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 185, |
| "end": 193, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The first dataset comes from a large set of images of the proceedings of the Old Bailey, a criminal court in London, England (Shoemaker, 2005). The Old Bailey curatorial effort, after deciding that current OCR systems do not adequately handle 18th century fonts, manually transcribed the documents into text. We will use these manual transcriptions to evaluate the output of our system. From the Old Bailey proceedings, we extracted a set of 20 images, each consisting of 30 lines of text, to use as our first test set. We picked 20 documents printed in consecutive decades. The first document is from 1715 and the last is from 1905. We chose the first document in each of the corresponding years, chose a random page in the document, and extracted an image of the first 30 consecutive lines of text consisting of full sentences. 5 The ten documents in the Old Bailey dataset that were printed before 1810 use the long s glyph, while the remaining ten do not.", |
| "cite_spans": [ |
| { |
| "start": 125, |
| "end": 142, |
| "text": "(Shoemaker, 2005)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 833, |
| "end": 834, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Old Bailey", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Our second dataset is taken from a collection of digitized Australian newspapers that were printed between the years of 1803 and 1954. This collection is called Trove, and is maintained by the National Library of Australia (Holley, 2010). We extracted ten images from this collection in the same way that we extracted images from Old Bailey, but starting from the year 1803. We manually produced our own gold annotations for these ten images. Only the first document of Trove uses the long s glyph.", |
| "cite_spans": [ |
| { |
| "start": 227, |
| "end": 241, |
| "text": "(Holley, 2010)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trove", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Many of the images in historical collections are bitonal (binary) as a result of how they were captured on microfilm for storage in the 1980s (Arlitsch and Herbert, 2004). This is part of the reason our model is designed to work directly with binarized images. For consistency, we binarized the images in our test sets that were not already binary by thresholding pixel values.", |
| "cite_spans": [ |
| { |
| "start": 142, |
| "end": 170, |
| "text": "(Arlitsch and Herbert, 2004)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pre-processing", |
| "sec_num": "5.3" |
| }, |
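Thresholding a grayscale image to a binary one is a one-liner; a minimal sketch, noting that the paper does not state the threshold value used, so 128 here is an assumption:

```python
import numpy as np

def binarize(gray_image, threshold=128):
    """Binarize a grayscale image by thresholding: pixels darker than the
    threshold become ink (1), the rest background (0). The threshold of
    128 is an illustrative assumption, not the paper's value."""
    return (np.asarray(gray_image) < threshold).astype(np.uint8)

# tiny 2x2 hypothetical grayscale patch
binary = binarize([[10, 200], [250, 90]])
```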
| { |
| "text": "Our model requires that the image be presegmented into lines of text. We automatically segment lines by training an HSMM over rows of pixels. After the lines are segmented, each line is resampled so that its vertical resolution is 30 pixels. The line extraction process also identifies pixels that are not located in central text regions but are part of large connected components of ink spanning multiple lines. The values of such pixels are treated as unobserved in the model since, more often than not, they are part of ink blotches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pre-processing", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We evaluate our system by comparing our text recognition accuracy to that of two state-of-the-art systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Our first baseline is Google's open source OCR system, Tesseract (Smith, 2007). Tesseract takes a pipelined approach to recognition. Before recognizing the text, the document is broken into lines, and each line is segmented into words. Then, Tesseract uses a classifier, aided by a word-unigram language model, to recognize whole words.", |
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 78, |
| "text": "(Smith, 2007)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Our second baseline, ABBYY FineReader 11 Professional Edition, 6 is a state-of-the-art commercial OCR system. It is the OCR system that the National Library of Australia used to recognize the historical documents in Trove (Holley, 2010).", |
| "cite_spans": [ |
| { |
| "start": 222, |
| "end": 236, |
| "text": "(Holley, 2010)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "We evaluate the output of our system and the baseline systems using two metrics: character error rate (CER) and word error rate (WER). Both these metrics are based on edit distance. CER is the edit distance between the predicted and gold transcriptions of the document, divided by the number of characters in the gold transcription. WER is the word-level edit distance (words, instead of characters, are treated as tokens) between predicted and gold transcriptions, divided by the number of words in the gold transcription. When computing WER, text is tokenized into words by splitting on whitespace.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "We ran experiments using two different language models. The first language model was trained on the initial one million sentences of the New York Times (NYT) portion of the Gigaword corpus (Graff et al., 2007), which contains about 36 million words. This language model is out of domain for our experimental documents. To investigate the effects of using an in-domain language model, we created a corpus composed of the manual annotations of all the documents in the Old Bailey proceedings, excluding those used in our test set. This corpus consists of approximately 32 million words. In all experiments we used a character n-gram order of six for the final Viterbi decoding pass and an order of three for all coarse passes. Table 1: We evaluate the predicted transcriptions in terms of both character error rate (CER) and word error rate (WER), and report macro-averages across documents. We compare with two baseline systems: Google's open source OCR system, Tesseract, and a state-of-the-art commercial system, ABBYY FineReader. We refer to our system as Ocular w/ NYT and Ocular w/ OB, depending on whether NYT or Old Bailey is used to train the language model.", |
| "cite_spans": [ |
| { |
| "start": 189, |
| "end": 209, |
| "text": "(Graff et al., 2007)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 717, |
| "end": 724, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Language Model", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "We used as a development set ten additional documents from the Old Bailey proceedings and five additional documents from Trove that were not part of our test set. On this data, we tuned the model's hyperparameters 7 and the parameters of the pruning schedule for our coarse-to-fine approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Initialization and Tuning", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "In experiments we initialized \u03b8 RPAD c and \u03b8 LPAD c to be uniform, and initialized \u03b8 GLYPH c and \u03c6 c based on the standard modern fonts included with the Ubuntu Linux 12.04 distribution. 8 For documents that use the long s glyph, we introduce a special character type for the non-word-final s, and initialize its parameters from a mixture of the modern f and | glyphs. 9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Initialization and Tuning", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "The results of our experiments are summarized in Table 1. We refer to our system as Ocular w/ NYT or Ocular w/ OB, depending on whether the language model was trained using NYT or Old Bailey, respectively. We compute macro-averages 7 One of the hyperparameters we tune is the exponent of the language model. This balances the contributions of the language model and the typesetting model to the posterior (Och and Ney, 2004).", |
| "cite_spans": [ |
| { |
| "start": 233, |
| "end": 234, |
| "text": "7", |
| "ref_id": null |
| }, |
| { |
| "start": 406, |
| "end": 424, |
| "text": "(Och and Ney, 2004", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 49, |
| "end": 56, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "8 http://www.ubuntu.com/ 9 Following Berg-Kirkpatrick et al. (2010), we use a regularization term in the optimization of the log-linear model parameters \u03c6c during the M-step. Instead of regularizing towards zero, we regularize towards the initializer. This slightly improves performance on our development set and can be thought of as placing a prior on the glyph shape parameters. across documents from all years. Our system, using the NYT language model, achieves an average WER of 28.1 on Old Bailey and an average WER of 33.0 on Trove. This represents a substantial error reduction compared to both baseline systems.", |
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 67, |
| "text": "Berg-Kirkpatrick et al. (2010)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "If we average over the documents in both Old Bailey and Trove, we find that Tesseract achieved an average WER of 56.3, ABBYY FineReader achieved an average WER of 43.1, and our system, using the NYT language model, achieved an average WER of 29.7. This means that while Tesseract incorrectly predicts more than half of the words in these documents, our system gets more than three-quarters of them right. Overall, we achieve a relative reduction in WER of 47% compared to Tesseract and 31% compared to ABBYY FineReader.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "7" |
| }, |
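The relative reductions quoted above follow directly from the macro-averaged WERs:

```python
def relative_reduction(baseline_wer, system_wer):
    """Relative error reduction: the fraction of the baseline's errors
    that the system removes."""
    return (baseline_wer - system_wer) / baseline_wer

# macro-averaged WERs over both test sets, as reported in the text:
# Tesseract 56.3, ABBYY FineReader 43.1, Ocular w/ NYT 29.7
vs_tesseract = relative_reduction(56.3, 29.7)
vs_abbyy = relative_reduction(43.1, 29.7)
```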
| { |
| "text": "The baseline systems do not have special provisions for the long s glyph. In order to make sure the comparison is fair, we separately computed average WER on only the documents from after 1810 (which do not use the long s glyph). We found that using this evaluation our system actually achieves a larger relative reduction in WER: 50% compared to Tesseract and 35% compared to ABBYY FineReader.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Finally, if we train the language model using the Old Bailey corpus instead of the NYT corpus, we see an average improvement of 4 WER on the Old Bailey test set. This means that the domain of the language model is important, but the results are not affected drastically even when using a language model based on modern corpora (NYT). Figure 7: For each portion of a test document, the first line shows the transcription predicted by our model, and the second line shows padding and glyph regions predicted by the model, where the grayscale glyphs represent the learned Bernoulli parameters for each pixel. The third line shows the input image. Figure 7a demonstrates a case where our model has effectively explained both the uneven baseline and over-inked glyphs by using the vertical offsets v i and inking variables d i. In Figure 7b the model has used glyph widths g i and vertical offsets to explain the thinning of glyphs and falling baseline that occurred near the binding of the book. In separate experiments on the Old Bailey test set, using the NYT language model, we found that removing the vertical offset variables from the model increased WER by 22, and removing the inking variables increased WER by 16. This indicates that it is very important to model both these aspects of printing press rendering. Figure 9: This Old Bailey document from 1719 has severe ink bleeding from the facing page. We annotated these blotches (in red) and treated the corresponding pixels as unobserved in the model. The layout shown is predicted by the model. Figure 7c shows the output of our system on a difficult document. Here, missing characters and ink blotches confuse the model, which picks something that is reasonable according to the language model, but incorrect.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 651, |
| "end": 660, |
| "text": "Figure 7a", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 834, |
| "end": 843, |
| "text": "Figure 7b", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 1324, |
| "end": 1332, |
| "text": "Figure 9", |
| "ref_id": null |
| }, |
| { |
| "start": 1562, |
| "end": 1571, |
| "text": "Figure 7c", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "7" |
| }, |
| { |
| "text": "It is interesting to look at the fonts learned by our system, and track how historical fonts changed over time. Figure 8 shows several grayscale images representing the Bernoulli pixel probabilities for the most likely width of the glyph for g under various conditions. At the center is the representation of the initial parameter values, and surrounding this are the learned parameters for documents from various years. The learned shapes are visibly different from the initializer, which is essentially an average of modern fonts, and also vary across decades.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 112, |
| "end": 120, |
| "text": "Figure 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Learned Fonts", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "We can ask to what extent learning the font structure actually improved our performance. If we turn off learning and just use the initial parameters to decode, WER increases by 8 on the Old Bailey test set when using the NYT language model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learned Fonts", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "As noted earlier, one strength of our generative model is that we can make the values of certain pixels unobserved in the model, and let inference fill them in. We conducted an additional experiment on a document from the Old Bailey proceedings that was printed in 1719. This document, a fragment of which is shown in Figure 9, has severe ink bleeding from the facing page. We manually annotated the ink blotches (shown in red), and made them unobserved in the model. The resulting typesetting layout learned by the model is also shown in Figure 9. The model correctly predicted most of the obscured words. Running the model with the manually specified unobserved pixels reduced the WER on this document from 58 to 19 when using the NYT language model.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 318, |
| "end": 326, |
| "text": "Figure 9", |
| "ref_id": null |
| }, |
| { |
| "start": 540, |
| "end": 548, |
| "text": "Figure 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Unobserved Ink Blotches", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "We performed error analysis on our development set by randomly choosing 100 word errors from the WER alignment and manually annotating them with relevant features. Specifically, for each word error we recorded whether or not the error contained punctuation (either in the predicted word or the gold word), whether the text in the corresponding portion of the original image was italicized, and whether the corresponding portion of the image exhibited over-inking, missing ink, or significant ink blotches. These last three feature types are subjective in nature but may still be informative. We found that 56% of errors were accompanied by over-inking, 50% of errors were accompanied by ink blotches, 42% of errors contained punctuation, 21% of errors showed missing ink, and 12% of errors contained text that was italicized in the original image.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Remaining Errors", |
| "sec_num": "7.4" |
| }, |
| { |
| "text": "Our own subjective assessment indicates that many of these error features are in fact causal. More often than not, italicized text is incorrectly transcribed. In cases of extreme ink blotching, or large areas of missing ink, the system usually makes an error.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Remaining Errors", |
| "sec_num": "7.4" |
| }, |
| { |
| "text": "We have demonstrated a model, based on the historical typesetting process, that effectively learns font structure in an unsupervised fashion to improve transcription of historical documents into text. The parameters of the learned fonts are interpretable, as are the predicted typesetting layouts. Our system achieves state-of-the-art results, significantly outperforming two state-of-the-art baseline systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "In particular, we do not use the kind of \"word bonus\" common to statistical machine translation models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We compute the weighted mean and weighted variance of the glyph width expected counts. We set \u03b8 GLYPH c to be proportional to a discretized Gaussian with the computed mean and variance. This update is approximate in the sense that it does not necessarily find the unimodal multinomial that maximizes expected log-likelihood, but it works well in practice. 4 In practice, we use max-marginals for pruning to ensure that there is still a valid path in the pruned lattice.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
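The approximate unimodal update described in footnote 3 can be sketched as follows; the epsilon guard against zero variance is an added assumption, not from the paper:

```python
import numpy as np

def unimodal_width_update(widths, expected_counts):
    """Approximate M-step for the glyph width multinomial: compute the
    weighted mean and variance of the expected width counts, then set the
    distribution proportional to a discretized Gaussian with those moments.
    The 1e-8 variance floor is an illustrative numerical guard."""
    w = np.asarray(widths, dtype=float)
    c = np.asarray(expected_counts, dtype=float)
    mean = np.average(w, weights=c)
    var = np.average((w - mean) ** 2, weights=c) + 1e-8
    density = np.exp(-((w - mean) ** 2) / (2.0 * var))
    return density / density.sum()

# hypothetical expected counts over widths 1..5
theta_glyph = unimodal_width_update([1, 2, 3, 4, 5], [1.0, 4.0, 6.0, 4.0, 1.0])
```

The result is unimodal by construction, which is the property the exact multinomial update cannot guarantee.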
| { |
| "text": "This ruled out portions of the document with extreme structural abnormalities, like title pages and lists. These might be interesting to model, but are not within the scope of this paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.abbyy.com", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Microfilm, paper, and OCR: Issues in newspaper digitization. the Utah digital newspapers program. Microform & Imaging Review", |
| "authors": [ |
| { |
| "first": "Kenning", |
| "middle": [], |
| "last": "Arlitsch", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Herbert", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenning Arlitsch and John Herbert. 2004. Microfilm, paper, and OCR: Issues in newspaper digitization. the Utah digital newspapers program. Microform & Imaging Review.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Simple effective decipherment via combinatorial optimization", |
| "authors": [ |
| { |
| "first": "Taylor", |
| "middle": [], |
| "last": "Berg", |
| "suffix": "" |
| }, |
| { |
| "first": "-", |
| "middle": [], |
| "last": "Kirkpatrick", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taylor Berg-Kirkpatrick and Dan Klein. 2011. Simple effective decipherment via combinatorial optimization. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Painless unsupervised learning with features", |
| "authors": [ |
| { |
| "first": "Taylor", |
| "middle": [], |
| "last": "Berg-Kirkpatrick", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Bouchard-C\u00f4t\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Denero", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taylor Berg-Kirkpatrick, Alexandre Bouchard-C\u00f4t\u00e9, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features. In Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Maximum likelihood from incomplete data via the EM algorithm", |
| "authors": [ |
| { |
| "first": "Arthur", |
| "middle": [], |
| "last": "Dempster", |
| "suffix": "" |
| }, |
| { |
| "first": "Nan", |
| "middle": [], |
| "last": "Laird", |
| "suffix": "" |
| }, |
| { |
| "first": "Donald", |
| "middle": [], |
| "last": "Rubin", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "Journal of the Royal Statistical Society", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arthur Dempster, Nan Laird, and Donald Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "English Gigaword third edition", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Graff", |
| "suffix": "" |
| }, |
| { |
| "first": "Junbo", |
| "middle": [], |
| "last": "Kong", |
| "suffix": "" |
| }, |
| { |
| "first": "Ke", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Kazuaki", |
| "middle": [], |
| "last": "Maeda", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Linguistic Data Consortium, Catalog Number LDC2007T07", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2007. English Gigaword third edition. Linguistic Data Consortium, Catalog Number LDC2007T07.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "OCR with no shape training", |
| "authors": [ |
| { |
| "first": "Kam", |
| "middle": [], |
| "last": "Tin", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Ho", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Nagy", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 15th International Conference on Pattern Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tin Kam Ho and George Nagy. 2000. OCR with no shape training. In Proceedings of the 15th International Conference on Pattern Recognition.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Trove: Innovation in access to information in", |
| "authors": [ |
| { |
| "first": "Rose", |
| "middle": [], |
| "last": "Holley", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rose Holley. 2010. Trove: Innovation in access to information in Australia. Ariadne.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Cryptogram decoding for optical character recognition", |
| "authors": [ |
| { |
| "first": "Gary", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Erik", |
| "middle": [ |
| "G" |
| ], |
| "last": "Learned-Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mc-Callum", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gary Huang, Erik G Learned-Miller, and Andrew McCallum. 2006. Cryptogram decoding for optical character recognition. University of Massachusetts-Amherst Technical Report.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Statistical methods for speech recognition", |
| "authors": [ |
| { |
| "first": "Fred", |
| "middle": [], |
| "last": "Jelinek", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fred Jelinek. 1998. Statistical methods for speech recognition. MIT press.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Learning on the fly: font-free approaches to difficult OCR problems", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Kae", |
| "suffix": "" |
| }, |
| { |
| "first": "Erik", |
| "middle": [], |
| "last": "Learned-Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2009 International Conference on Document Analysis and Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Kae and Erik Learned-Miller. 2009. Learning on the fly: font-free approaches to difficult OCR problems. In Proceedings of the 2009 International Conference on Document Analysis and Recognition.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Improving state-of-theart OCR through high-precision document-specific modeling", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Kae", |
| "suffix": "" |
| }, |
| { |
| "first": "Gary", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Carl", |
| "middle": [], |
| "last": "Doersch", |
| "suffix": "" |
| }, |
| { |
| "first": "Erik", |
| "middle": [], |
| "last": "Learned-Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Kae, Gary Huang, Carl Doersch, and Erik Learned-Miller. 2010. Improving state-of-the-art OCR through high-precision document-specific modeling. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Word-based adaptive OCR for historical books", |
| "authors": [ |
| { |
| "first": "Vladimir", |
| "middle": [], |
| "last": "Kluzner", |
| "suffix": "" |
| }, |
| { |
| "first": "Asaf", |
| "middle": [], |
| "last": "Tzadok", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuval", |
| "middle": [], |
| "last": "Shimony", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Walach", |
| "suffix": "" |
| }, |
| { |
| "first": "Apostolos", |
| "middle": [], |
| "last": "Antonacopoulos", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2009 International Conference on Document Analysis and Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vladimir Kluzner, Asaf Tzadok, Yuval Shimony, Eugene Walach, and Apostolos Antonacopoulos. 2009. Word-based adaptive OCR for historical books. In Proceedings of the 2009 International Conference on Document Analysis and Recognition.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Hybrid approach to adaptive OCR for historical books", |
| "authors": [ |
| { |
| "first": "Vladimir", |
| "middle": [], |
| "last": "Kluzner", |
| "suffix": "" |
| }, |
| { |
| "first": "Asaf", |
| "middle": [], |
| "last": "Tzadok", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Chevion", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Walach", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 2011 International Conference on Document Analysis and Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vladimir Kluzner, Asaf Tzadok, Dan Chevion, and Eugene Walach. 2011. Hybrid approach to adaptive OCR for historical books. In Proceedings of the 2011 International Conference on Document Analysis and Recognition.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Improved backing-off for m-gram language modeling", |
| "authors": [ |
| { |
| "first": "Reinhard", |
| "middle": [], |
| "last": "Kneser", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the International Conference on Acoustics, Speech, and Signal Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Pharaoh: a beam search decoder for phrase-based statistical machine translation models. Machine translation: From real users to research", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn. 2004. Pharaoh: a beam search de- coder for phrase-based statistical machine transla- tion models. Machine translation: From real users to research.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "A generative probabilistic OCR model for NLP applications", |
| "authors": [ |
| { |
| "first": "Okan", |
| "middle": [], |
| "last": "Kolak", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Byrne", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Okan Kolak, William Byrne, and Philip Resnik. 2003. A generative probabilistic OCR model for NLP ap- plications. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Documentspecific character template estimation", |
| "authors": [ |
| { |
| "first": "Gary", |
| "middle": [], |
| "last": "Kopec", |
| "suffix": "" |
| }, |
| { |
| "first": "Mauricio", |
| "middle": [], |
| "last": "Lomelin", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the International Society for Optics and Photonics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gary Kopec and Mauricio Lomelin. 1996. Document- specific character template estimation. In Proceed- ings of the International Society for Optics and Pho- tonics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Ngram language models for document image decoding", |
| "authors": [ |
| { |
| "first": "Gary", |
| "middle": [], |
| "last": "Kopec", |
| "suffix": "" |
| }, |
| { |
| "first": "Maya", |
| "middle": [], |
| "last": "Said", |
| "suffix": "" |
| }, |
| { |
| "first": "Kris", |
| "middle": [], |
| "last": "Popat", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of Society of Photographic Instrumentation Engineers", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gary Kopec, Maya Said, and Kris Popat. 2001. N- gram language models for document image decod- ing. In Proceedings of Society of Photographic In- strumentation Engineers.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Continuously variable duration hidden Markov models for automatic speech recognition", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Stephen Levinson", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Computer Speech & Language", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Levinson. 1986. Continuously variable du- ration hidden Markov models for automatic speech recognition. Computer Speech & Language.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "On the limited memory BFGS method for large scale optimization", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Dong", |
| "suffix": "" |
| }, |
| { |
| "first": "Jorge", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Nocedal", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dong C Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical programming.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "The alignment template approach to statistical machine translation", |
| "authors": [ |
| { |
| "first": "Franz", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franz Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Coarse-to-fine syntactic machine translation using language projections", |
| "authors": [ |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Aria", |
| "middle": [], |
| "last": "Haghighi", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Slav Petrov, Aria Haghighi, and Dan Klein. 2008. Coarse-to-fine syntactic machine translation using language projections. In Proceedings of the 2008 Conference on Empirical Methods in Natural Lan- guage Processing.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Attacking decipherment problems optimally with low-order ngram models", |
| "authors": [ |
| { |
| "first": "Sujith", |
| "middle": [], |
| "last": "Ravi", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sujith Ravi and Kevin Knight. 2008. Attacking de- cipherment problems optimally with low-order n- gram models. In Proceedings of the 2008 Confer- ence on Empirical Methods in Natural Language Processing.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Bayesian inference for Zodiac and other homophonic ciphers", |
| "authors": [ |
| { |
| "first": "Sujith", |
| "middle": [], |
| "last": "Ravi", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sujith Ravi and Kevin Knight. 2011. Bayesian infer- ence for Zodiac and other homophonic ciphers. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Digital London: Creating a searchable web of interlinked sources on eighteenth century London. Electronic Library and Information Systems", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Shoemaker", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert Shoemaker. 2005. Digital London: Creating a searchable web of interlinked sources on eighteenth century London. Electronic Library and Informa- tion Systems.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "An overview of the tesseract ocr engine", |
| "authors": [ |
| { |
| "first": "Ray", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Ninth International Conference on Document Analysis and Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ray Smith. 2007. An overview of the tesseract ocr engine. In Proceedings of the Ninth International Conference on Document Analysis and Recognition.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "A statistical model for lost language decipherment", |
| "authors": [ |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Snyder", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Benjamin Snyder, Regina Barzilay, and Kevin Knight. 2010. A statistical model for lost language decipher- ment. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "A complete optical character recognition methodology for historical documents", |
| "authors": [ |
| { |
| "first": "Georgios", |
| "middle": [], |
| "last": "Vamvakas", |
| "suffix": "" |
| }, |
| { |
| "first": "Basilios", |
| "middle": [], |
| "last": "Gatos", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "The Eighth IAPR International Workshop on Document Analysis Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Georgios Vamvakas, Basilios Gatos, Nikolaos Stam- atopoulos, and Stavros Perantonis. 2008. A com- plete optical character recognition methodology for historical documents. In The Eighth IAPR Interna- tional Workshop on Document Analysis Systems.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Efficient multipass decoding for synchronous context free grammars", |
| "authors": [ |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hao Zhang and Daniel Gildea. 2008. Efficient multi- pass decoding for synchronous context free gram- mars. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Process- ing.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Portions of historical documents with (a) unknown font, (b) uneven baseline, and (c) over-inking.", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF1": { |
| "text": "eterization of the font, for each character type c we have vectors of multinomial parameters \u03b8 LPAD c , \u03b8 GLYPH c , and \u03b8 RPAD c governing the distribution of the dimensions of character boxes of type c. These parameters are depicted on the right-hand side of", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF2": { |
| "text": "Figure 3. We can now express the typesetting layout portion of the model as:", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF3": { |
| "text": "are depicted at the bottom of Figure 3.", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF5": { |
| "text": "For each of these portions of test documents, the first line shows the transcription predicted by our model and the second line shows a representation of the learned typesetting layout. The grayscale glyphs show the Bernoulli pixel distributions learned by our model, while the padding regions are depicted in blue. The third line shows the input image.", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF6": { |
| "text": "shows a representation of the typesetting layout learned by our model for portions of several The central glyph is a representation of the initial model parameters for the glyph shape for g, and surrounding this are the learned parameters for documents from various years.", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| } |
| } |
| } |
| } |