| { |
| "paper_id": "J06-4003", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T03:00:57.001295Z" |
| }, |
| "title": "Unsupervised Multilingual Sentence Boundary Detection", |
| "authors": [ |
| { |
| "first": "Tibor", |
| "middle": [], |
| "last": "Kiss", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Strunk", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "strunk@linguistics.rub.de" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this article, we present a language-independent, unsupervised approach to sentence boundary detection. It is based on the assumption that a large number of ambiguities in the determination of sentence boundaries can be eliminated once abbreviations have been identified. Instead of relying on orthographic clues, the proposed system is able to detect abbreviations with high accuracy using three criteria that only require information about the candidate type itself and are independent of context: Abbreviations can be defined as a very tight collocation consisting of a truncated word and a final period, abbreviations are usually short, and abbreviations sometimes contain internal periods. We also show the potential of collocational evidence for two other important subtasks of sentence boundary disambiguation, namely, the detection of initials and ordinal numbers. The proposed system has been tested extensively on eleven different languages and on different text genres. It achieves good results without any further amendments or language-specific resources. We evaluate its performance against three different baselines and compare it to other systems for sentence boundary detection proposed in the literature.", |
| "pdf_parse": { |
| "paper_id": "J06-4003", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this article, we present a language-independent, unsupervised approach to sentence boundary detection. It is based on the assumption that a large number of ambiguities in the determination of sentence boundaries can be eliminated once abbreviations have been identified. Instead of relying on orthographic clues, the proposed system is able to detect abbreviations with high accuracy using three criteria that only require information about the candidate type itself and are independent of context: Abbreviations can be defined as a very tight collocation consisting of a truncated word and a final period, abbreviations are usually short, and abbreviations sometimes contain internal periods. We also show the potential of collocational evidence for two other important subtasks of sentence boundary disambiguation, namely, the detection of initials and ordinal numbers. The proposed system has been tested extensively on eleven different languages and on different text genres. It achieves good results without any further amendments or language-specific resources. We evaluate its performance against three different baselines and compare it to other systems for sentence boundary detection proposed in the literature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The sentence is a fundamental and relatively well understood unit in theoretical and computational linguistics. Many linguistic phenomena-such as collocations, idioms, and variable binding, to name a few-are constrained by the abstract concept 'sentence' in that they are confined by sentence boundaries. The successful determination of these boundaries is thus a prerequisite for proper sentence processing. Sentence boundary detection is not a trivial task, though. Graphemes often serve more than one purpose in writing systems. The period, which is employed as sentence boundary marker, is no exception. It is also used to mark abbreviations, initials, ordinal numbers, and ellipses. Moreover, a period can be used to mark an abbreviation and a sentence boundary at the same time. In such cases, the second period is haplologically omitted and only one period is used as end-of-sentence and abbreviation marker. 1 Sentence boundary detection thus has to be considered as an instance of ambiguity resolution. The ambiguity of the period is illustrated by example (1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Example 1 CELLULAR COMMUNICATIONS INC. sold 1,550,000 common shares at $21.75 each yesterday, according to lead underwriter L.F. Rothschild & Co. (cited from Wall Street Journal 05/29/1987) Periods that form part of an abbreviation but are taken to be end-of-sentence markers or vice versa do not only introduce errors in the determination of sentence boundaries. As has been reported in Walker et al. (2001) and Kiss and Strunk (2002b) , segmentation errors propagate into further components, which rely on accurate sentence segmentation, and subsequent analyses are affected negatively.", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 189, |
| "text": "Rothschild & Co. (cited from Wall Street Journal 05/29/1987)", |
| "ref_id": null |
| }, |
| { |
| "start": 388, |
| "end": 408, |
| "text": "Walker et al. (2001)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 413, |
| "end": 436, |
| "text": "Kiss and Strunk (2002b)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In this article, we present an approach to sentence boundary detection that builds on language-independent methods and determines sentence boundaries with high accuracy. It does not make use of additional annotations, part-of-speech tagging, or precompiled lists to support sentence boundary detection, but extracts all necessary data from the corpus to be segmented. Also, it does not use orthographic information as primary evidence and is thus suited to processing single-case text. It focuses on robustness and flexibility in that it can be applied with good results to a variety of languages without any further adjustments. At the same time, the modular structure of the system makes it possible to integrate language-specific methods and clues to further improve its accuracy. The basic algorithm has been determined experimentally on the basis of an unannotated development corpus of English. We have applied the resulting system to further corpora of English text as well as to corpora from ten other languages: Brazilian Portuguese, Dutch, Estonian, French, German, Italian, Norwegian, Spanish, Swedish, and Turkish. Without further additions or amendments to the system produced through experimentation on the development corpus, the mean accuracy of sentence boundary detection on newspaper corpora in the eleven languages is 98.74%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "We approach sentence boundary detection by first determining possible abbreviations in the text. Quantitatively, abbreviations are a major source of ambiguities in sentence boundary detection since they often constitute up to 30% of the possible candidates for sentence boundaries in running text; see Section 6.1. Abbreviations can be characterized by a set of robust as well as cross-linguistically valid properties. The same cannot be said of the concept 'sentence boundary'. The end of a sentence cannot easily be characterized as either appearing after a particular word, between two particular words, after a particular word class, or in between two particular word classes. But, as we will show, an abbreviation can be cross-linguistically characterized in such a way.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "It is our basic assumption that abbreviations are collocations of the truncated word and the following period, and hence, that methods for the detection of collocations can be successfully applied to abbreviation detection. Firth (1957, page 181) characterizes the collocations of a word as \"statements of the habitual or customary places of that word.\" In languages that mark abbreviations with a following period, one could say that the abbreviation is habitually made up of a truncated word (or sequence of words) and a following period. But this might even be too weak a formulation. While typical elements of a collocation can also appear together with other words, the abbreviation is strongly tied to the following period. Ideally, in the absence of homography and typing errors, an abbreviation should always end in a final period. 2 Hence, we characterize an abbreviation as a very strict collocation and use standard techniques for the detection of collocations. These techniques will be modified appropriately to account for the stricter tie between an abbreviated word and the following period. It should be clear from the outset that abbreviations cannot simply be handled by listing them because they form a productive and hence open word class; see also M\u00fcller, Amerl, and Natalis (1980, pages 52ff.) and Mikheev (2002, page 291) . We corroborate this fact with an experiment in Section 6.4.4.", |
| "cite_spans": [ |
| { |
| "start": 224, |
| "end": 246, |
| "text": "Firth (1957, page 181)", |
| "ref_id": null |
| }, |
| { |
| "start": 1269, |
| "end": 1315, |
| "text": "M\u00fcller, Amerl, and Natalis (1980, pages 52ff.)", |
| "ref_id": null |
| }, |
| { |
| "start": 1320, |
| "end": 1344, |
| "text": "Mikheev (2002, page 291)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "We offer a formal characterization of abbreviations in terms of three major properties, which only rely on the candidate word type itself and not on the local context in which an instance of the candidate type appears. First, as was already mentioned, an abbreviation looks like a very tight collocation in that the abbreviated word preceding the period and the period itself form a close bond. Second, abbreviations have the tendency to be rather short. This does not mean that we have to assume a fixed upper bound for the length of a possible abbreviation, but that the likelihood of being an abbreviation declines if candidates become longer. Using the length of a candidate as a counterbalance to the collocational bond between candidate and final period allows our method to identify quite long abbreviations, as long as the collocational bond between the candidate type and the period is very strong. As a third characteristic property, we have identified the occurrence of word-internal periods contained in many abbreviations. While we have determined the aforementioned properties experimentally, we believe that they indeed represent crucial traits of abbreviations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Using just these three characteristics, our system is able to detect abbreviations with a mean accuracy of 99.38% on newspaper corpora in eleven languages. The effectiveness of the three properties is further corroborated by an experiment we have carried out with a log-linear classifier; compare Section 6.4.7. The reported figure does not include initials and ordinal numbers because these subclasses of abbreviations cannot be discovered using these characteristics and have to be treated differently. The complete system with special heuristics for initials and ordinal numbers achieves an accuracy of 99.20% for the detection of abbreviations, initials, and ordinal numbers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The determination of abbreviation types already yields a large percentage of all sentence boundaries because all periods occurring after non-abbreviation types can be classified as end-of-sentence markers. Such a disambiguation on the type level, however, is insufficient by itself because it still has to be determined for every period following an abbreviation whether it serves as a sentence boundary marker at the same time. The detection of initials and of ordinal numbers, which are represented by digits followed by a period in several languages, also requires the application of token-based methods because these subclasses of abbreviations are problematic for type-based methods. These observations suggest a two-stage treatment of sentence boundary detection, which is both type and token based. We define a classifier as type based if it uses global evidence, for example, the distribution of a type in a corpus, to classify a type as a whole. In contrast, a token-based classifier determines a class for each individual token based on its local context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In the first stage, a resolution is performed on the type level to detect abbreviation types and ordinary word types. After this stage, the corpus receives an intermediate annotation where all instances of abbreviations detected by the first stage are marked as such with the tag <A> and all ellipses with the tag <E>. All periods following nonabbreviations are assumed to be sentence boundary markers and receive the annotation <S>. 3 The second, token-based stage employs additional heuristics on the basis of the intermediate annotation to refine and correct the output of the first classifier for each individual token. The token-based classifier is particularly suited to determine abbreviations and ellipses at the end of a sentence giving them the final annotation <A><S> or <E><S>. But it is also used to correct the intermediate annotation by detecting initials and ordinal numbers that cannot easily be recognized with type-based methods and thus often receive the wrong annotation from the first stage. The overall architecture of the present system, which we have baptized Punkt (German for period), is given in Figure 1 .", |
| "cite_spans": [ |
| { |
| "start": 434, |
| "end": 435, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1124, |
| "end": 1132, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The present article is structured as follows: Likelihood ratios can be considered the heart of the present proposal. Both the type-based and the token-based classifiers employ likelihood ratios to determine collocational bonds between a possible abbreviation and its final period, between the sentence boundary period and a word following it, and between words that surround a period. Section 2 introduces the concept of a likelihood ratio and discusses the specific properties of the likelihood ratios employed by Punkt. Section 3 describes the type-based classification stage, while Section 4 introduces the token-based reclassification methods. Section 5 gives a short account of how Punkt was developed and how we determined some necessary parameters. The experiments carried out with the present system are discussed in Section 6. In Section 7, we compare Punkt to other sentence boundary detection systems proposed in the literature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Punkt employs likelihood ratios to determine collocational ties in the type-based as well as in the token-based stage. The usefulness of likelihood ratios for collocation detection has been made explicit by Dunning (1993) and has been confirmed by an evaluation of various collocation detection methods carried out by Evert and Krenn (2001) . Kiss and Strunk (2002a, 2002b) characterize abbreviations as collocations and use Dunning's log-likelihood ratio (log \u03bb) to detect them on the type level. The present proposal differs from Kiss and Strunk's earlier suggestion in employing a highly modified log-likelihood ratio for abbreviation detection in the type-based stage. The reasons for this divergence will be discussed in Section 2.1. In the token-based stage, we employ Dunning's original log \u03bb, but add an additional constraint to make it one-sided. This version of log \u03bb will be described in Section 2.2.", |
| "cite_spans": [ |
| { |
| "start": 207, |
| "end": 221, |
| "text": "Dunning (1993)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 318, |
| "end": 340, |
| "text": "Evert and Krenn (2001)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 343, |
| "end": 351, |
| "text": "Kiss and", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 352, |
| "end": 373, |
| "text": "Strunk (2002a, 2002b)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood Ratios", |
| "sec_num": "2." |
| }, |
| { |
| "text": "The log-likelihood ratio by Dunning (1993) tests whether the probability of a word is dependent on the occurrence of the preceding word type. When applied to abbreviations, the following two hypotheses are compared in a Dunning-style log-likelihood ratio.", |
| "cite_spans": [ |
| { |
| "start": 28, |
| "end": 42, |
| "text": "Dunning (1993)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood Ratios in the Type-based Stage", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Null hypothesis H 0 :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood Ratios in the Type-based Stage", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P(\u2022|w) = p = P(\u2022|\u00acw)", |
| "eq_num": "( 1 )" |
| } |
| ], |
| "section": "Likelihood Ratios in the Type-based Stage", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Alternative hypothesis H A :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood Ratios in the Type-based Stage", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P(\u2022|w) = p 1 = p 2 = P(\u2022|\u00acw)", |
| "eq_num": "( 2 )" |
| } |
| ], |
| "section": "Likelihood Ratios in the Type-based Stage", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The null hypothesis in (1) states that the probability of occurrence of a period is not dependent on the preceding word. The alternative hypothesis in (2) assumes a dependency between the period and the preceding word. The log-likelihood ratio is calculated using the binomial distribution to estimate the likelihoods of the two hypotheses as in (3). As the probabilities p, p 1 , and p 2 are determined by maximum-likelihood estimation (MLE) and the null hypothesis includes fewer parameters than the alternative hypothesis, the ratio log \u03bb is asymptotically \u03c7 2 -distributed and can thus be used as a test statistic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood Ratios in the Type-based Stage", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "log \u03bb = \u22122 log P binom (H 0 ) P binom (H A )", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Likelihood Ratios in the Type-based Stage", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "However, we have decided to employ different hypotheses for abbreviation detection. While the revised null hypothesis given in (4) is quite close to (1), the revised alternative hypothesis in (5) differs sharply from the one suggested by Dunning. 4 Revised null hypothesis H 0 :", |
| "cite_spans": [ |
| { |
| "start": 238, |
| "end": 248, |
| "text": "Dunning. 4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood Ratios in the Type-based Stage", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P(\u2022|w) = P MLE (\u2022) = C(\u2022) N", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Likelihood Ratios in the Type-based Stage", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Revised alternative hypothesis H A :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood Ratios in the Type-based Stage", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P(\u2022|w) = 0.99", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Likelihood Ratios in the Type-based Stage", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The formulation of the alternative hypothesis in (5) reflects that we do not only require that a period occurs together with an abbreviated word more often than expected, but instead that a period almost always occurs after the truncated word. By choosing the value 0.99 instead of 1, we can provide for a certain probability that an abbreviation type sometimes erroneously occurs without a final period in a corpus. The hypothesis in (5) captures our intuitions about abbreviations better than the original version in (2) because it is no longer sufficient that a word type appears more often than average with a following period to yield a high log-likelihood score. Instead, the likelihood of a period appearing after a type should be almost 1 in order for it to get assigned a high log-likelihood score and thus a high likelihood of being classified as an abbreviation. Due to the revision of the hypotheses H 0 and H A , the log-likelihood ratio for abbreviation detection is no longer asymptotically \u03c7 2 -distributed. However, this is not a disadvantage since the resulting log-likelihood value expresses only one of three crucial properties of abbreviations. Since this value is counterbalanced by other factors and the resulting log-likelihood score thus scaled in various ways, the \u03c7 2 distribution could not be retained anyway, as will become clear in Section 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood Ratios in the Type-based Stage", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In the token-based classification stage, Dunning's log-likelihood ratio is used in two different heuristics. The collocation heuristic, described in Section 4.1.2, takes a pair of words w 1 and w 2 surrounding a period and tests whether a collocational tie exists between them. A positive answer to this question is used as evidence against an intervening sentence boundary. The frequent sentence starter heuristic, described in Section 4.1.3, makes use of the results of the type-based classifier and searches for word types that form a collocation with a preceding sentence boundary, that is, which occur particularly often after end-of-sentence periods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood Ratios in the Token-based Stage", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Dunning's formulation of the log-likelihood ratio is a two-tailed statistical test. For a pair of word types w 1 and w 2 , the null hypothesis H 0 : P(w 2 | w 1 ) = p = P(w 2 | \u00acw 1 ), and the alternative hypothesis H A : P(w 2 | w 1 ) = p 1 = p 2 = P(w 2 | \u00acw 1 ), the log \u03bb value is high if p 1 and p 2 significantly diverge from each other. But in fact, one should only consider those pairs of words as collocations for which p 1 is much higher than p 2 . Only the latter case means that w 2 occurs more often than expected after w 1 , whereas if p 1 is less than p 2 this means that w 2 occurs less often after w 1 than expected. Manning and Sch\u00fctze (1999, page 172) comment in their description that \"[w]e assume that p 1 >> p 2 if Hypothesis 2 [i.e., the alternative hypothesis, TK/JS] is true. The case p 1 << p 2 is rare, and we will ignore it here.\" To us, this step seems to be premature since it ignores that Dunning's log \u03bb is a two-tailed test, where the equation in (6) holds.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood Ratios in the Token-based Stage", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "log \u03bb = 0 iff C(w 2 ) N = C(w 1 , w 2 ) C(w 1 )", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Likelihood Ratios in the Token-based Stage", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "If the two sides of this equation diverge, log \u03bb will take a value greater than 0. In the usual case considered by Dunning (1993) and discussed by Manning and Sch\u00fctze (1999) , the right-hand side of the equation is larger than the left-hand side. A high log \u03bb value thus properly expresses the fact that w 1 and w 2 form a collocation because the occurrence of w 2 after w 1 is more likely than expected from the unconditional likelihood of w 2 . If, however, the left side of the equation is greater than the right side, we still get a log \u03bb value greater than 0, but this time, this indicates that w 2 occurs less often than expected after w 1 . This is obviously at odds with the general idea of a collocation. For collocations in general, the assumption made by Manning and Sch\u00fctze (1999) can indeed be considered safe, since the types w 1 and w 2 are likely to occur only rarely. For this reason, we do not expect a large negative deviation, where the right-hand side of equation 6is significantly smaller than the left-hand side. However, some of the types that we test for collocational ties in the token-based stage occur very often. The frequent sentence starter heuristic, for example, tests whether a given word w 2 forms a collocation with a preceding sentence boundary w 1 , that is, with the sentence boundary symbol <S> inserted by the type-based first stage of Punkt. The abstract type 'sentence boundary' (i.e., <S>) may be very frequent in many corpora, as can be witnessed from a sample from a German newspaper corpus, where C(<S>) = 35,775 and N = 847,206.", |
| "cite_spans": [ |
| { |
| "start": 115, |
| "end": 129, |
| "text": "Dunning (1993)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 147, |
| "end": 173, |
| "text": "Manning and Sch\u00fctze (1999)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood Ratios in the Token-based Stage", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In Table 1 , we have tested for four words in the German newspaper corpus whether they are frequent sentence starters or not. The first two words, ist (the third-person singular form of the verb sein 'to be') and zu (infinitive marker or preposition 'to') occur very often, while the latter two words do not occur so often. Neither ist nor zu should be considered frequent sentence starters, while both dennoch ('nevertheless') and erstens ('first') are true frequent sentence starters. Yet, all four words receive very high log \u03bb values, since in all cases p 1 diverges significantly from p 2 . However, it holds only for dennoch and erstens that they occur more often after <S> than expected, while ist and zu occur much less often than expected after <S>. In order to exclude cases like ist and zu from being detected as collocates of a preceding sentence boundary, we add Table 1 Correct and incorrect frequent sentence starters. the constraint in (7) to log \u03bb calculations for the frequent sentence starter heuristic and similarly apply it to the collocation heuristic.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 10, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 876, |
| "end": 883, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Likelihood Ratios in the Token-based Stage", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "w 1 w 2 C(w 1 ) C(w 2 ) C(w 1 ,", |
| "eq_num": "w 2 )" |
| } |
| ], |
| "section": "Likelihood Ratios in the Token-based Stage", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "w 1 , w 2 is a collocation if log \u03bb > threshold and C(w 1 ,w 2 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood Ratios in the Token-based Stage", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "C(w 1 ) > C(w 2 ) N", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Likelihood Ratios in the Token-based Stage", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In general, it seems to be a good idea to add the one-sidedness condition given in (7) to the log-likelihood ratio used for the purpose of collocation detection. The version of the likelihood ratio that is calculated for the type-based classifier in Section 3 is not affected by this problem since it does not follow a \u03c7 2 distribution and is in fact no longer two-tailed because of the form of the two hypotheses that are compared.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood Ratios in the Token-based Stage", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The type-based classification stage of Punkt employs three characteristic properties of abbreviations:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Type-based Classification", |
| "sec_num": "3." |
| }, |
| { |
| "text": "1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Type-based Classification", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Strong collocational dependency: Abbreviations always occur with a final period. 5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Type-based Classification", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Brevity: Abbreviations tend to be short.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "Internal periods: Many abbreviations contain additional internal periods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "As these three characteristics do not change for each individual instance of a type, we combine them in a type-based approach to abbreviation detection. The implementation of the first property makes use of a likelihood ratio with the revised hypotheses introduced in (4) and (5). The list of all types that ever occur with a following period in the corpus is sorted according to this likelihood ratio. The resulting value for a type expresses the assumption that this type is more likely to be an abbreviation than all types having lower values.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "The first four columns of Table 2 show a section of this sorted list, where nonabbreviations are indicated in italics. The figures of occurrence in Table 2 , which are drawn from an actual corpus sample comprising 351,529 tokens of Wall Street Journal text, also illustrate a potential problem, namely, that most of the candidate types are quite rare. As has been pointed out by Dunning (1993) , the calculation of log \u03bb assumes a binomial distribution. It is therefore better suited to deal with sparse data than statistics that are based on a normal distribution, such as the t test. This consideration carries over to the modified log-likelihood ratio employed here.", |
| "cite_spans": [ |
| { |
| "start": 379, |
| "end": 393, |
| "text": "Dunning (1993)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 26, |
| "end": 33, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 148, |
| "end": 155, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "Some true abbreviations in the left half of Table 2 are either ranked lower than nonabbreviations or receive the same log-likelihood value as non-abbreviations. According to the criterion of strong collocational dependency, ounces is a very good candidate for an abbreviation, as it never occurs without a final period in the corpus. The collocational criterion alone is thus not sufficient to detect abbreviations with high precision. Table 2 confirms that abbreviations tend to be rather short: Each non-abbreviation is longer than the longest abbreviation. We therefore use brevity as a further characteristic property to counterbalance the likelihood ratio. In contrast to other proposals such as the one by Mikheev (2002, page 299) , we refrain from using a fixed maximum length for abbreviations. Instead, we multiply the likelihood ratio for each candidate with an inversely exponential scaling factor derived from the length of that candidate. The length of the candidate is defined as the number of characters in front of the final period minus the number of internal periods, as illustrated in example (8).", |
| "cite_spans": [ |
| { |
| "start": 712, |
| "end": 736, |
| "text": "Mikheev (2002, page 299)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 44, |
| "end": 51, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 436, |
| "end": 443, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "length(u.s.a.) = 3 (8)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
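The length definition in example (8) is straightforward to implement. A minimal sketch (the function name is ours, not Punkt's):

```python
def candidate_length(candidate: str) -> int:
    """Length per example (8): the number of characters in front of the
    final period, minus the number of internal periods."""
    core = candidate[:-1] if candidate.endswith(".") else candidate
    return len(core) - core.count(".")
```

For instance, `candidate_length("u.s.a.")` yields 3, as in example (8), so a multi-letter abbreviation with internal periods is not penalized more heavily than a plain one such as "etc." (also length 3).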
| { |
| "text": "We exclude internal periods from the count because they are good evidence that a candidate should be classified as an abbreviation (see below). We thus prevent a counterintuitively higher penalty for candidates with internal periods. The exact form of the length factor is given in (9).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "F_length(w) = 1 / e^length(w) (9)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "We have chosen an inversely exponential scaling factor since it reflects well the likelihood that a type of a certain length is an abbreviation. Typically, short types are more likely to be abbreviations than longer types. The validity of this assumption can be seen in Figure 2. It shows that the ratio of the number of abbreviation types to the number of non-abbreviation types decreases with growing length in a Dutch newspaper corpus. While about 96% of all types of length one are abbreviations, this percentage drops rapidly to 59% for types of length two and to 15% for types of length three, and so on.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 277, |
| "end": 285, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "In addition to avoiding any higher penalty on the factor F_length caused by internal periods, we use another scaling factor F_periods as given in (10). This factor expresses the intuition that a higher number of internal periods increases the likelihood that a type is a true abbreviation. The scaling factor has been designed to leave unchanged the values of candidates that do not contain internal periods, while those candidates that contain internal periods receive an extra advantage.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "F_periods(w) = number of internal periods in w + 1", |
| "eq_num": "(10)" |
| } |
| ], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "Multiplying the log-likelihood ratio with these two scaling factors leads to a significantly better sorting of the candidate list. The scaled log-likelihood ratio does not exclude a candidate from being classified as an abbreviation just because it has occurred without a final period once or twice in the corpus if there is otherwise good evidence that it is a true abbreviation. For most languages, this increased robustness is unproblematic because almost all ordinary words occur without a period a sufficient number of times. However, for some languages, the scaled log-likelihood ratio is not restrictive enough. Verb-final languages, such as Turkish, where certain very common verbs appear at the end of a sentence most of the time, are one example. In such a case, the scaled log-likelihood ratio described so far runs into difficulties because it mistakes the occurrences of these verbs without a period as exceptions. To remedy this problem, the calculated log \u03bb values are multiplied by a third factor, which penalizes occurrences without a final period exponentially:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "F_penalty(w) = 1 / length(w)^C(w,\u00ac\u2022)", |
| "eq_num": "(11)" |
| } |
| ], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "It should be noted that we use the length of the candidate as a basis for the calculation because the likelihood that homographic non-abbreviations exist for a given abbreviation type depends on the length of the candidate. Homographic non-abbreviations occur particularly often with abbreviations of length 1, and accordingly, there will be no penalty at all. With a length of 2, the penalty factor is still moderate, but it increases with length to reflect that longer abbreviations are not very likely to have homographic non-abbreviations. Furthermore, the penalty factor is exponentially scaled. Hence, it will mostly affect candidate types with a high number of occurrences without a final period. A candidate type that occurs six times in all and one time without a final period is much less affected by (11) than a candidate that occurs 600 times in all and 100 times without a final period. This reflects our intuition that homographic non-abbreviations and typing errors can always occur, but that a high number of instances without a final period requires a strong penalty. As we can no longer rely on the asymptotic \u03c7 2 distribution of the log-likelihood ratio (cf. Section 2.1), we propose a new threshold value. All candidates above it will be considered abbreviations; all candidates below it will be classified as ordinary words. We use the classification function defined in (12) with a threshold value of 0.3, represented by a dashed line in Table 2. We have determined the threshold value by manual experimentation (cf. Section 5) and have used it throughout the evaluation in Section 6.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1467, |
| "end": 1474, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "For each candidate word type w:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "If log \u03bb(w) \u00d7 F_length(w) \u00d7 F_periods(w) \u00d7 F_penalty(w) \u2265 0.3 \u2192 w is an abbreviation. If log \u03bb(w) \u00d7 F_length(w) \u00d7 F_periods(w) \u00d7 F_penalty(w) < 0.3 \u2192 w is not an abbreviation.", |
| "eq_num": "(12)" |
| } |
| ], |
| "section": "3.", |
| "sec_num": null |
| }, |
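Read together, the three scaling factors (9)-(11) and the classification rule (12) amount to a few lines of arithmetic. The sketch below assumes the per-type value log λ from Section 2.1 has already been computed; the function and argument names are ours, not Punkt's:

```python
import math

def f_length(length: int) -> float:
    # (9): inversely exponential scaling by candidate length
    return math.exp(-length)

def f_periods(num_internal_periods: int) -> float:
    # (10): candidates without internal periods are left unchanged (factor 1)
    return num_internal_periods + 1

def f_penalty(length: int, count_without_period: int) -> float:
    # (11): exponential penalty for occurrences without a final period;
    # for length 1 the base is 1, so there is no penalty at all
    return length ** -count_without_period

def classify(log_lambda: float, length: int, internal_periods: int,
             count_without_period: int, threshold: float = 0.3) -> bool:
    # (12): a scaled score at or above the threshold -> abbreviation
    score = (log_lambda * f_length(length) * f_periods(internal_periods)
             * f_penalty(length, count_without_period))
    return score >= threshold
```

With a log λ of 30, a two-character candidate that never occurs without its period clears the 0.3 threshold, while an eight-character candidate seen twice without a final period does not.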
| { |
| "text": "The last two columns of Table 2 show the final sorting of the candidates after applying the three scaling factors to the log-likelihood value for each candidate. Multiplication with the three factors has led to a much cleaner separation of the true abbreviation types from the non-abbreviations.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 24, |
| "end": 31, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "The second classification stage of Punkt operates on the token level and improves the intermediate annotation of the corpus provided by the type-based classification stage. For every token with a final period, the system decides on the basis of its immediate right context whether the intermediate annotation has to be modified or corrected. For this classification step, the relevant tokens are separated into different classes (cf. Figure 1) . Each class triggers a reexamination with a different combination of a few basic heuristics. The most important classes are ordinary abbreviations and ellipses, which may appear at the end of a sentence. Moreover, there are two special classes of abbreviations, which are problematic for a type-based approach, namely, possible initials and possible ordinal numbers. Three basic heuristics are employed in the token-based classification stage: the orthographic heuristic, whose task is to test for orthographic clues for the detection of sentence boundaries after abbreviations and ellipses; the collocation heuristic, which determines whether two words surrounding a period form a collocation and interprets a positive answer to this question as evidence against an intervening sentence boundary; and finally the frequent sentence starter heuristic, which suggests a preceding sentence boundary if a word appearing after a period is found on a list of frequent sentence starters induced on the fly from the text that is to be segmented.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 434, |
| "end": 443, |
| "text": "Figure 1)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Token-based Classification", |
| "sec_num": "4." |
| }, |
| { |
| "text": "At first sight, it might seem reasonable to rely on orthographic conventions for the detection of sentence boundaries. For instance, a capitalized word usually indicates a preceding sentence boundary in mixed-case text. However, such a procedure is perilous for various reasons. To begin with, certain word classes are capitalized even if they occur sentence-internally as is the case with the majority of proper nouns in English and all nouns in German. Even a lowercase first letter does not guarantee that the word in question is not preceded by a sentence boundary. This is particularly evident for mathematical variables or names that are conventionally written without capitalization such as amnesty international; see also Nunberg (1990, pages 54ff.) . Finally, any method that relies solely on capitalization will not help at all with single-case text.", |
| "cite_spans": [ |
| { |
| "start": 730, |
| "end": 757, |
| "text": "Nunberg (1990, pages 54ff.)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics in the Token-based Stage 4.1.1 The Orthographic Heuristic.", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Still, we think that capitalization information, if used cautiously, can help to determine whether an abbreviation or ellipsis precedes a sentence boundary or not; see Section 6.4.5 for a discussion of the importance of orthographic information in comparison to other types of evidence. In order to make the usage of orthographic information safer and more robust, Punkt counts how often every word type occurs with an uppercase and lowercase first letter at the beginning of a sentence and sentence-internally in the corpus; see Table 3 for some example statistics. It bases its calculations on the sentence boundaries determined by the type-based classification stage. It does not count tokens occurring after an abbreviation or an ellipsis because we have not yet determined whether they start a new sentence or not. The algorithm also ignores tokens that occur after possible initials and numbers. Again, a reclassification is likely to happen; see Section 4.3. In sum, as the counts are based on imperfectly annotated data, we try to exclude most doubtful cases. Figure 3 gives a pseudocode description of the orthographic heuristic, which decides for a token following an abbreviation or an ellipsis on the basis of the orthographic statistics gathered for all word types whether it represents good evidence for a preceding sentence boundary or not.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 527, |
| "end": 534, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 1065, |
| "end": 1073, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Heuristics in the Token-based Stage 4.1.1 The Orthographic Heuristic.", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "If the word following an abbreviation or ellipsis is capitalized, the heuristic determines whether it occurs with a lowercase first letter in the text and whether it occurs with an uppercase first letter sentence-internally. Only if it occurs with a lowercase first letter at least once and never occurs with an uppercase first letter sentence-internally does the heuristic opt for a sentence boundary after the abbreviation or ellipsis. Otherwise, it returns undecided.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics in the Token-based Stage 4.1.1 The Orthographic Heuristic.", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Pseudo code of the orthographic heuristic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 3", |
| "sec_num": null |
| }, |
| { |
| "text": "If the token following the abbreviation or ellipsis has a lowercase first letter, the heuristic decides against a sentence boundary if that type also occurs with an uppercase first letter or if it never occurs with a lowercase first letter after a sentence boundary. In all other cases, the heuristic returns undecided.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 3", |
| "sec_num": null |
| }, |
| { |
| "text": "For example, if the input of the orthographic heuristic is the capitalized token A, the heuristic would return undecided given the data in Table 3 , because the type a occurs capitalized both at the beginning of and inside a sentence. If the input is the lowercase token a, it would return no sentence boundary. For the proper name Smith, the result would be undecided since the name never occurs with a lowercase first letter at all. For the input tokens Across, Actual, and Psychologists, the heuristic would decide in favor of a sentence boundary. If the same tokens were not capitalized, the decision would be no sentence boundary.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 139, |
| "end": 146, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 3", |
| "sec_num": null |
| }, |
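The two decision rules and the worked examples above can be sketched as a single tri-valued function. The `OrthoStats` record is our assumed encoding of the per-type counts described earlier; Punkt's actual bookkeeping may differ (True = sentence boundary, False = no boundary, None = undecided):

```python
from dataclasses import dataclass

@dataclass
class OrthoStats:
    # assumed per-type counts gathered from the intermediate annotation
    upper_internal: int = 0  # uppercase first letter, sentence-internal
    upper_initial: int = 0   # uppercase first letter, sentence-initial
    lower_internal: int = 0  # lowercase first letter, sentence-internal
    lower_initial: int = 0   # lowercase first letter after a boundary

def orthographic_heuristic(token: str, stats: OrthoStats):
    """Tri-valued decision: True (boundary), False (no boundary), None."""
    lower = stats.lower_internal + stats.lower_initial
    upper = stats.upper_internal + stats.upper_initial
    if token[:1].isupper():
        # boundary only if the type is seen lowercase somewhere but
        # never uppercase sentence-internally
        if lower > 0 and stats.upper_internal == 0:
            return True
        return None
    if token[:1].islower():
        # no boundary if the type is also seen uppercase, or is never
        # seen lowercase right after a sentence boundary
        if upper > 0 or stats.lower_initial == 0:
            return False
        return None
    return None
```

On counts mimicking Table 3, a capitalized "A" comes out undecided (the type also occurs capitalized sentence-internally), lowercase "a" as no boundary, "Smith" as undecided, and "Across" as a boundary.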
| { |
| "text": "In the worst-case scenario of an all-uppercase corpus, the orthographic heuristic would always return undecided. If all tokens in the corpus begin with a lowercase letter, it would either return undecided or no sentence boundary, which we think is the safest option since in this case, the heuristic cannot adduce any additional token-based evidence and most sentence boundaries have already been discovered by the type-based stage. The option to refuse a decision on the basis of capitalization information thus makes the orthographic heuristic very robust.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 3", |
| "sec_num": null |
| }, |
| { |
| "text": "The basic intuition behind the collocation heuristic is that sentence boundaries block collocational ties; compare, for example, Manning and Sch\u00fctze (1999, page 195) . If a period is surrounded by two words that form a collocation, we do not expect it to act as a sentence boundary marker. A period should therefore be interpreted as an abbreviation marker and not as a sentence boundary marker if the two tokens surrounding it can indeed be considered as a collocation according to Dunning's (1993) original log-likelihood ratio amended with the one-sidedness constraint introduced in Section 2.2. Following the asymptotic \u03c7 2 distribution of Dunning's original proposal, we require a log \u03bb value of at least 7.88, which represents a confidence of 99.5%. If this condition is met, we assume that the two words surrounding the potential sentence boundary form a collocation and hence represent evidence against an intervening sentence boundary.", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 165, |
| "text": "Manning and Sch\u00fctze (1999, page 195)", |
| "ref_id": null |
| }, |
| { |
| "start": 483, |
| "end": 499, |
| "text": "Dunning's (1993)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Collocation Heuristic.", |
| "sec_num": "4.1.2" |
| }, |
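The collocation test can be sketched with Dunning's standard bigram log-likelihood ratio. We assume here that the one-sidedness amendment of Section 2.2 amounts to requiring P(w2 | w1) > P(w2 | ¬w1); counts and names are ours:

```python
import math

def _log_l(k, n, x):
    # binomial log-likelihood of k successes in n trials, clamped
    x = min(max(x, 1e-12), 1 - 1e-12)
    return k * math.log(x) + (n - k) * math.log(1 - x)

def dunning_log_lambda(c1, c2, c12, n):
    """Dunning's (1993) log lambda for the bigram (w1, w2):
    c1 = count(w1), c2 = count(w2), c12 = count(w1 w2), n = corpus size."""
    p = c2 / n
    p1 = c12 / c1
    p2 = (c2 - c12) / (n - c1)
    if p1 <= p2:
        return 0.0  # assumed form of the one-sidedness condition
    return 2 * (_log_l(c12, c1, p1) + _log_l(c2 - c12, n - c1, p2)
                - _log_l(c12, c1, p) - _log_l(c2 - c12, n - c1, p))

def is_collocation(c1, c2, c12, n, threshold=7.88):
    # 7.88 is the chi-square critical value for 99.5% confidence
    return dunning_log_lambda(c1, c2, c12, n) >= threshold
```

A pair that co-occurs far more often than chance predicts easily clears 7.88, while a pair occurring exactly at its expected rate is rejected outright by the one-sidedness check.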
| { |
| "text": "We also employ Dunning's log \u03bb as an unsupervised method for the extraction of frequent sentence starters, that is, word types that occur particularly often after a sentence boundary. We take the viewpoint of collocation detection and define a frequent sentence starter as a word type that has a strong collocational tie to a preceding sentence boundary. The occurrence of such a frequent sentence starter after a period can thus be used to adduce further evidence that the preceding period marks a sentence boundary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Frequent Sentence Starter Heuristic.", |
| "sec_num": "4.1.3" |
| }, |
| { |
| "text": "In contrast to Mikheev (2002, page 297), we do not extract the list of frequent sentence starters from an additional corpus but use the test corpus itself. The basic idea is to build a list of frequent sentence starters on the fly by counting how often every word type occurs following a sure sentence boundary, as determined by the type-based first stage. Sure sentence boundaries are all single periods following words that have been classified as non-abbreviations and are not possibly initials or numbers, that is, single letters or a sequence of one or more digits. Once the problem has been formulated as a problem of collocation detection, we can use the amended version of Dunning's log \u03bb, as described in Section 2.2, to test the candidate word types for a collocational tie to a preceding sentence boundary. Since we are relying on uncertain information, namely, the intermediate annotation, we assume an exceptionally high threshold value of 30 for the classification. Only if this value is reached or exceeded do we put the candidate on the list of frequent sentence starters. The high cutoff value has been determined experimentally during the development of Punkt (cf. Section 5).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Frequent Sentence Starter Heuristic.", |
| "sec_num": "4.1.3" |
| }, |
| { |
| "text": "Finally, there exists an interaction between the frequent sentence starter heuristic and the collocation heuristic described in the preceding section in that the frequent sentence starter heuristic may help to counterbalance the collocation heuristic. Sometimes, the collocation heuristic will detect a collocation across a sentence boundary, particularly if the word preceding the boundary occurs quite often at the end of a sentence and the word following the sentence boundary is a frequent sentence starter. By determining the frequent sentence starters in the corpus and preventing the detection of collocations with these types as second elements, collocation detection is made safer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Frequent Sentence Starter Heuristic.", |
| "sec_num": "4.1.3" |
| }, |
| { |
| "text": "The main question for all tokens classified as abbreviations by the type-based first stage and all ellipses is whether they precede a sentence boundary or not. A sentence boundary after these two classes of candidate tokens is assumed by Punkt if the orthographic heuristic applied to the token following the abbreviation or ellipsis decides in favor of a sentence boundary, or the token following the abbreviation or ellipsis is a capitalized frequent sentence starter. However, only abbreviations that are longer than one letter and thus not possibly initials are reclassified in this way. Initials present special problems and are therefore reclassified differently, as will be discussed in the following section.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Token-based Reclassification of Abbreviations and Ellipses", |
| "sec_num": "4.2" |
| }, |
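The decision rule of this section reduces to a disjunction of two tests. A minimal sketch, where `ortho_decision` is the tri-valued output of the orthographic heuristic and `frequent_starters` is assumed to hold lowercased starter types (our bookkeeping convention, not necessarily Punkt's):

```python
def boundary_after_abbrev(next_token, ortho_decision, frequent_starters):
    """Section 4.2: assume a sentence boundary after an abbreviation of
    more than one letter, or after an ellipsis.  Initials are handled
    separately in Section 4.3."""
    if ortho_decision is True:  # orthographic heuristic says boundary
        return True
    # a capitalized frequent sentence starter also counts as evidence
    return next_token[:1].isupper() and next_token.lower() in frequent_starters
```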
| { |
| "text": "Initials are a subclass of abbreviations consisting of a single letter followed by a period. 6 As there are only about thirty different letters in the average Latin-derived alphabet, the likelihood of being a homograph of an ordinary word is very high for initials; consider, for example, the Portuguese definite articles o and a or the Swedish preposition i ('in'). Moreover, there are also various other uses for single letters: in formulas, enumerations, and so on. Initials are therefore often not detected by the type-based first stage of the Punkt system. For this reason, all single letters followed by a token-final period are treated as possible initials during the token-based reclassification-regardless of whether they have been classified as abbreviations or not by the type-based stage.", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 94, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Token-based Detection of Initials and Ordinal Numbers", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Luckily, initials are very often part of a complex name and can often be identified using collocational evidence. If a possible initial forms a collocation with the following token and the following token is not a frequent sentence starter, the period in between is reclassified as an abbreviation marker. Alternatively, if the orthographic heuristic decides against a sentence boundary on the basis of the token following the possible initial, the period is also reclassified as an abbreviation period. Last but not least, we employ a special heuristic for initials: If the orthographic heuristic returns undecided and the type following the possible initial always occurs with an uppercase first letter, it is assumed to be a proper name and the period between the two tokens is again classified as an abbreviation marker. The system never reclassifies a period following a possible initial as a sentence boundary marker because we assume that if a single letter is indeed not used as an abbreviation but, for example, as a mathematical symbol or if it is an ordinary word, there will usually be enough occurrences of this type without a final period so that the type-based stage will classify all periods following instances of the type in question as sentence boundary periods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Token-based Detection of Initials and Ordinal Numbers", |
| "sec_num": "4.3" |
| }, |
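The three reclassification rules for possible initials can be sketched as one function; the boolean inputs are assumed to come from the collocation, frequent-sentence-starter, and orthographic heuristics described above:

```python
def initial_period_is_abbrev(forms_collocation, is_frequent_starter,
                             ortho_decision, next_always_uppercase):
    """Section 4.3: decide whether the period after a possible initial
    is an abbreviation marker.  A False result leaves the intermediate
    annotation unchanged; the period is never turned into a sentence
    boundary marker here."""
    if forms_collocation and not is_frequent_starter:
        return True
    if ortho_decision is False:  # heuristic decided against a boundary
        return True
    # special initial heuristic: heuristic undecided, but the following
    # type always occurs uppercase -> assume a proper name
    if ortho_decision is None and next_always_uppercase:
        return True
    return False
```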
| { |
| "text": "In many languages, such as German, ordinal numbers written in digits are also marked by a token-final period; compare example (2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Token-based Detection of Initials and Ordinal Numbers", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Was sind die Konsequenzen der Abstimmung vom 12. Juni? What are the consequences of the vote of June 12th? (cited from NZZ 06/13/1994) As every numeric type can also be used as a cardinal number, it cannot be decided by a type-based algorithm whether a period after a number is an abbreviation marker or a sentence boundary marker. Numbers are therefore treated in the same way as initials. If the token following a number with a final period forms a collocation with the abstract type ##number## 7 and is not a frequent sentence starter, the period in between is classified as an abbreviation period. The same conclusion is reached if the orthographic heuristic decides against a sentence boundary on the basis of the following token.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 2", |
| "sec_num": null |
| }, |
| { |
| "text": "The three scaling factors used to improve on the initial sorting by the collocational criterion for abbreviation detection were obtained by manual experiments on a 10 MB development corpus of American English containing Wall Street Journal (WSJ) articles. This corpus is distinct from the portions of the WSJ we use for evaluation purposes in Section 6. From this development corpus, a candidate list of possible abbreviation types was extracted and sorted according to the log-likelihood ratio described in Section 2.1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Development of the Punkt System", |
| "sec_num": "5." |
| }, |
| { |
| "text": "We experimented with different factors and measured their impact in terms of precision and recall on the candidates above a given threshold value. Our goal was to maximize precision and recall for the top part of the list, that is, to get a clear separation of true abbreviations at the top of the list from non-abbreviations at the bottom of the list. The factors F_length (9) and F_periods (10) were conceived and tested solely on the basis of the WSJ development corpus. We have added the third factor F_penalty (11) to cope with the problem of very common ordinary words that precede a sentence boundary most of the time. We encountered this problem in the Turkish test corpus, but it would probably arise for other verb-final languages as well. After fixing the final form of the scaling factors, we also used the candidate list from the development corpus to determine the ideal threshold value for type-based abbreviation detection by manual inspection. The best combination of the different methods in the token-based stage and the threshold value 30 used for finding frequent sentence starters were also determined by manual experimentation on the development corpus. The same parameters and combinations of heuristics have been employed in all tests described in Section 6 unless otherwise noted.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Development of the Punkt System", |
| "sec_num": "5." |
| }, |
| { |
| "text": "We have tested our system extensively for a number of different languages and under different circumstances. We report the results that we obtained from our experiments in Section 6.4, after giving a short characterization of the test corpora on which we did our evaluation in Section 6.1, defining the performance measures we use in Section 6.2 and proposing three baselines as lower bounds and standards of comparison in Section 6.3. We compare our approach to other systems for sentence boundary detection proposed in the literature in Section 7.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "6." |
| }, |
| { |
| "text": "We have evaluated Punkt on corpora from eleven different languages: Brazilian Portuguese, Dutch, English, Estonian, French, German, Italian, Norwegian, Spanish, Swedish, and Turkish. For all of these languages, we have created test corpora containing newspaper text, the genre that is most often used to test sentence boundary detection systems; compare Section 7. Table 4 provides a short description for each one of these newspaper corpora. Five of them-Dutch, French, Italian, Spanish, and Swedish-are parts of corpora taken from the CD-ROM Multilingual Corpus 1 distributed by the European Corpus Initiative (ECI). For English, we have used sections 03-06 of the WSJ portion of the Penn Treebank (Marcus, Santorini, and Marcinkiewicz 1993) distributed by the Linguistic Data Consortium (LDC), which have frequently been used to evaluate sentence boundary detection systems before; compare Section 7. For the other languages, we have chosen newspaper corpora that were either available on the Internet (Brazilian Portuguese), distributed by the newspaper itself (German), or kindly provided to us by other research institutions (Estonian, Norwegian, and Turkish). While the Swedish corpus contains a small amount of literary fiction in addition to newspaper articles from several Swedish newspapers, the other corpora consist solely of newswire text.", |
| "cite_spans": [ |
| { |
| "start": 700, |
| "end": 743, |
| "text": "(Marcus, Santorini, and Marcinkiewicz 1993)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 365, |
| "end": 372, |
| "text": "Table 4", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Test Corpora", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "To determine whether Punkt is also suitable for different text genres, we have additionally evaluated it on a piece of American English literature (compare Table 5 ) obtained from Project Gutenberg (www.gutenberg.org). Last but not least, we have also tested it on the Brown corpus of American English (Francis and Kucera 1982) , which has often been used to evaluate other sentence boundary detection systems. This corpus contains a mixture of text genres including news, scientific articles, and literary fiction. For the evaluation, we have created annotated versions of the test corpora, in which all periods were disambiguated by hand and labeled with the correct tag from Table 6 . Table 7 illustrates some statistical properties of the test corpora. For each corpus, we provide the number of tokens it contains and the number of all tokens with a final period, that is, all periods that had to be classified by Punkt. As an indication of the difficulty of the sentence boundary detection task, we also give information on how many abbreviations each corpus contains and what percentage of all the tokens with a final period actually are abbreviations. Finally, the last column shows the number of different abbreviation types occurring in each test corpus.", |
| "cite_spans": [ |
| { |
| "start": 302, |
| "end": 327, |
| "text": "(Francis and Kucera 1982)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 156, |
| "end": 163, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 678, |
| "end": 685, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 688, |
| "end": 695, |
| "text": "Table 7", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Test Corpora", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Tags used in the evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Table 6", |
| "sec_num": null |
| }, |
| { |
| "text": "Sentence boundary <A> Abbreviation <E> Ellipsis <A><S> Abbreviation at the end of sentence <E><S> Ellipsis at the end of sentence ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "<S>", |
| "sec_num": null |
| }, |
| { |
| "text": "The most important measure we use is the error rate given in (13). It is defined as the ratio of the number of incorrectly classified candidates to the number of all candidates.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Measures", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "error rate = false positives + false negatives number of all candidates", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Measures", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "In addition, we use precision and recall to provide better information on what kinds of errors were made by Punkt. Precision is the ratio between the number of candidate tokens that have been correctly assigned to a class and the number of all candidates that have been assigned to this class. precision = true positives true positives + false positives", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Measures", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Recall is defined as the proportion of all candidates truly belonging to a certain class that have also been assigned to that class by the evaluated system. recall = true positives true positives + false negatives", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Measures", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Finally, the so-called F measure is the harmonic mean of precision and recall (van Rijsbergen 1979) .", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 99, |
| "text": "(van Rijsbergen 1979)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Measures", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "F measure = 2 \u00d7 precision \u00d7 recall precision + recall", |
| "eq_num": "(16)" |
| } |
| ], |
| "section": "Performance Measures", |
| "sec_num": "6.2" |
| }, |
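The four measures defined above can be written down directly. The following is an illustrative Python sketch of the error rate (equation (13)), precision, recall, and the F measure (equation (16)); it is not the authors' code, and the function names are ours.

```python
# Illustrative sketch (not the authors' code) of the performance
# measures defined in Section 6.2.

def error_rate(false_pos, false_neg, n_candidates):
    """Equation (13): misclassified candidates over all candidates."""
    return (false_pos + false_neg) / n_candidates

def precision(true_pos, false_pos):
    """Correct positive assignments over all positive assignments."""
    return true_pos / (true_pos + false_pos)

def recall(true_pos, false_neg):
    """Correct positive assignments over all actual positives."""
    return true_pos / (true_pos + false_neg)

def f_measure(p, r):
    """Equation (16): harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)
```

Because the F measure is a harmonic mean, it is dominated by the lower of precision and recall, which is why it is preferred over a simple average for skewed classification tasks like this one.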
| { |
| "text": "All the measures we use are based on counting true and false positives and true and false negatives. There are, however, three possibilities of what classifications could be regarded as a positive or a negative outcome because Punkt actually performs three classification tasks at the same time. The most important one is the decision whether a token-final period marks a sentence boundary or not. But the system also decides for each candidate token ending in a period whether it is an abbreviation or not and whether it is an ellipsis or not; compare Table 6 . The performance measures for sentence boundary detection, abbreviation detection, and ellipsis detection do not directly depend on each other because a candidate can be an abbreviation (or an ellipsis) and precede a sentence boundary at the same time. We thus calculate error rate, precision, recall, and F measure for the sentence boundary detection problem and for the abbreviation detection problem separately. For sentence boundary detection, we count the following tags as positives: <S>, <A><S>, and <E><S>. The tags <A> and <A><S> are considered positives for the abbreviation detection task. As the correct classification of ellipses is straightforward, we do not give figures for the detection of ellipses.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 553, |
| "end": 560, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Performance Measures", |
| "sec_num": "6.2" |
| }, |
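The tag-to-positive mapping described above can drive the counting directly. This is a minimal sketch assuming gold and predicted tags are given as parallel lists; the tag sets come from Table 6, while the function and variable names are ours.

```python
# Which tags count as positives for each of the two evaluated tasks
# (tag inventory from Table 6 of the paper).
SENT_POSITIVE = {"<S>", "<A><S>", "<E><S>"}   # token precedes a sentence boundary
ABBR_POSITIVE = {"<A>", "<A><S>"}             # token is an abbreviation

def confusion(gold_tags, pred_tags, positive):
    """Count (tp, fp, fn, tn) for one binary task over parallel tag lists."""
    tp = fp = fn = tn = 0
    for g, p in zip(gold_tags, pred_tags):
        g_pos, p_pos = g in positive, p in positive
        if g_pos and p_pos:
            tp += 1
        elif p_pos:
            fp += 1
        elif g_pos:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn
```

Running `confusion` once with `SENT_POSITIVE` and once with `ABBR_POSITIVE` yields the two independent evaluations reported in the paper.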
| { |
| "text": "In Sections 6.4.1 and 6.4.2, we evaluate Punkt against three different baseline algorithms. These baselines serve several purposes. First, they establish a lower bound for the task of sentence boundary detection. Any sentence boundary detection system should perform significantly better than these baseline algorithms. Second, although we compare Punkt to other systems proposed in the literature in Section 7, most previous work on sentence boundary detection considered at most three different languages so that no direct comparison is possible for many of the corpora and languages that we have used in our evaluation. A comparison with the performance of the three baselines can at least give an indication of how well our system did on these corpora. Third, there is still an assumption held in the field that simple algorithms such as the baselines presented here are sufficiently reliable to be used for sentence boundary detection. This opinion was, for example, held by a reviewer of Kiss and Strunk (2002a) . As will become clear in the following sections, a baseline algorithm may perform pretty well on one corpus, but this performance typically does not carry over to other languages or corpora. The baselines thus also serve to illustrate the complexity of the sentence boundary detection problem.", |
| "cite_spans": [ |
| { |
| "start": 994, |
| "end": 1017, |
| "text": "Kiss and Strunk (2002a)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "The absolute baseline (AbsBL) is the simplest approach to sentence boundary detection we can think of. It simply assumes that all token-final periods in a test corpus represent sentence boundaries. Consequently, all periods are tagged with <S>.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "The second baseline algorithm (TokBL) relies only on the local context of a period. It is a token-based approach that uses only orthographic information. All token-final periods (including those that form part of an ellipsis) that do not precede a token starting with a lowercase letter, a digit, or one of the following sentence internal punctuation marks [; : ,] are classified as sentence boundary markers and annotated with <S>. All other token-final periods are either classified as abbreviations (<A>) or ellipses (<E>).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "The third baseline algorithm (TypeBL) is based on a method described by Grefenstette (1999) and also by Mikheev (2002, page 299) . It is a type-based approach that decides for each candidate type whether it is an abbreviation or not. All instances of candidates that ever occur in an unambiguous position, that is, before a lowercase letter or a sentence-internal punctuation mark [; : ,] are classified as abbreviations, and a period following them is not considered as an end-of-sentence marker. All other ", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 91, |
| "text": "Grefenstette (1999)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 104, |
| "end": 128, |
| "text": "Mikheev (2002, page 299)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "6.3" |
| }, |
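Under the simplifying assumptions that the corpus is an already-tokenized list of strings and that ellipsis candidates are folded into the abbreviation class, the three baselines could be sketched as follows. This is our illustrative reconstruction, not the original implementation.

```python
# Illustrative reconstruction of the three baselines (AbsBL, TokBL, TypeBL).
# A "candidate" is any token ending in a period; ellipsis handling is
# folded into "<A>" for simplicity.

ORD_PUNCT = {";", ":", ","}  # sentence-internal punctuation from the text

def abs_baseline(tokens):
    """AbsBL: every token-final period marks a sentence boundary."""
    return {i: "<S>" for i, t in enumerate(tokens) if t.endswith(".")}

def tok_baseline(tokens):
    """TokBL: <S> unless the next token starts with a lowercase letter,
    a digit, or sentence-internal punctuation; otherwise <A>."""
    tags = {}
    for i, t in enumerate(tokens):
        if not t.endswith("."):
            continue
        nxt = tokens[i + 1] if i + 1 < len(tokens) else ""
        if nxt and (nxt[0].islower() or nxt[0].isdigit() or nxt[0] in ORD_PUNCT):
            tags[i] = "<A>"
        else:
            tags[i] = "<S>"
    return tags

def type_baseline(tokens):
    """TypeBL: a candidate type ever seen in an unambiguous position
    (before a lowercase letter or sentence-internal punctuation; note
    that digits do not count here) is an abbreviation type; all its
    periods become <A>, all other candidates <S>."""
    abbrev_types = set()
    for i, t in enumerate(tokens):
        if t.endswith(".") and i + 1 < len(tokens):
            nxt = tokens[i + 1]
            if nxt[0].islower() or nxt[0] in ORD_PUNCT:
                abbrev_types.add(t.lower())
    return {i: ("<A>" if t.lower() in abbrev_types else "<S>")
            for i, t in enumerate(tokens) if t.endswith(".")}
```

Note how TypeBL generalizes over the whole corpus (one decision per type) while TokBL decides each token locally; the paper's results show that neither strategy is robust across languages.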
| { |
| "text": "We have tested Punkt on the various corpora introduced in Section 6.1. In all cases, it was only provided with the unannotated test corpus as input and no further data whatsoever, most importantly, no lexicon and no list of abbreviations. Its main classification task was to decide for all token-final periods whether they indicated the end of a sentence or not. 8 In addition, it had to decide for all token-final periods whether they were used as an abbreviation marker or were part of an ellipsis. The results Punkt achieved on the newspaper corpora are presented in Section 6.4.1. Those obtained for the remaining corpora are given in Section 6.4.2. In Section 6.4.3, we provide the results of an experiment in which we evaluated our system on all-uppercase and all-lowercase corpora. As many competing systems require a list of abbreviations, we have carried out an experiment to determine the usefulness of abbreviation lists derived from general-purpose dictionaries. The results are reported in Section 6.4.4. Last but not least, we take a closer look at the architecture of Punkt in Section 6.4.5 by examining the contributions of its individual parts, look at remaining errors and problems in Section 6.4.6, and discuss the hypothesis that the methods and heuristics we use can be called language independent in Section 6.4.7. Table 8 shows the results that we obtained for the tasks of sentence boundary detection and abbreviation detection on the eleven newspaper corpora. We performed two test runs for each language: one with detection of ordinal numbers and one without a special treatment of numbers. For languages such as English, which do not usually mark ordinal numbers with a final period, it is obviously preferable not to try to detect them. In Table 8 , we only report the best result from the two test runs for each language. 
9 Those languages in which the period is not usually used to mark ordinal numbers and for which the test without special treatment of numbers achieved better results are italicized in the following tables. However, even if the special treatment of numbers was not turned off for such languages, the resulting increase in the error rate was not very high, at most 0.03%; see also Section 6.4.5. For the sentence boundary detection task, the error rates Punkt achieved on the eleven newspaper corpora range from 2.12% on the Estonian corpus to only 0.35% on the German corpus with an average error rate of 1.26%. The error rates for abbreviation detection are slightly lower, lying between 1.75% on the Estonian corpus and 0.26% on the German corpus with an average of 0.80% for all eleven corpora. Table 9 compares Punkt's performance to that of the three baseline algorithms. The error rates achieved by Punkt for the sentence boundary task are reduced by about 83% on average compared to the absolute baseline, by about 73% compared to the token-based baseline, and by almost 80% compared to the type-based baseline. The error rates for the abbreviation detection task have decreased even more considerably, namely, by approximately 89% in comparison to the absolute baseline, by about 83% in comparison to the token-based baseline, and by almost 86% in comparison to the type-based baseline. Table 9 also shows that whereas the good performance of our system is quite stable across the eleven corpora with a standard deviation of only 0.49% for sentence boundary detection and 0.46% for abbreviation detection, the performance of the baselines is not reliable at all. 
Although one of the baselines sometimes performed well on one corpus (such as TokBL on the Swedish corpus, the only case where Punkt was not better than all of the baselines, or TypeBL on the Brazilian Portuguese and Dutch corpora), the baselines exhibit a very large standard deviation in their error rates across the eleven corpora and sometimes seem to fail completely, such as TokBL on the English corpus and TypeBL on the Turkish corpus.", |
| "cite_spans": [ |
| { |
| "start": 1851, |
| "end": 1852, |
| "text": "9", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1337, |
| "end": 1344, |
| "text": "Table 8", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 1768, |
| "end": 1775, |
| "text": "Table 8", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 2650, |
| "end": 2657, |
| "text": "Table 9", |
| "ref_id": "TABREF6" |
| }, |
| { |
| "start": 3246, |
| "end": 3253, |
| "text": "Table 9", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "In order to show that Punkt is also suited to process text genres different from newspaper text and that its performance carries over to other text types, we have tested it on two additional corpora of American English-the entire Brown corpus and The Works of Edgar Allan Poe (volumes I-III). Table 10 provides the results Punkt achieved on the two additional corpora.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 293, |
| "end": 301, |
| "text": "Table 10", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results on the Other Corpora.", |
| "sec_num": "6.4.2" |
| }, |
| { |
| "text": "The error rate on the Brown corpus is 1.02% for the sentence boundary detection task and 0.82% for the abbreviation detection task. This represents a reduction of about 90% compared to the absolute baseline, a reduction of more than 85% compared to the token-based baseline, and a reduction of more than 70% compared to the typebased baseline; see Table 11 . The error rate on The Works of Edgar Allan Poe is 0.80% for sentence boundary detection and 0.46% for abbreviation detection, which corresponds to a reduction of about 85% in comparison to AbsBL, a reduction by more than 80% compared to TokBL, and by about 75% compared to TypeBL. These results achieved on the literary Poe corpus and the Brown corpus with its balanced content fall within the range of the error rates achieved on the newspaper corpora and thus indicate that Punkt is also well suited to deal with literary texts and corpora containing mixed content.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 348, |
| "end": 356, |
| "text": "Table 11", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results on the Other Corpora.", |
| "sec_num": "6.4.2" |
| }, |
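The relative reductions quoted above and in the preceding section are simple to reproduce from the raw error rates; a one-line sketch (the function name is ours):

```python
# Relative error-rate reduction of a system against a baseline,
# as reported throughout Section 6.4.

def relative_reduction(baseline_err, system_err):
    """Fraction by which the system reduces the baseline's error rate."""
    return (baseline_err - system_err) / baseline_err
```

For instance, going from a 10% baseline error rate down to 1% corresponds to a 90% reduction.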
| { |
| "text": "Corpora. We have also tested the applicability of Punkt to single-case text. The newspaper test corpora have been converted to all-uppercase and all-lowercase versions in order to determine how much Punkt is affected by the loss of capitalization information. In fact, one should keep in mind that single-case text does not only lack useful capitalization information, but actually contains information that is highly misleading for systems that rely primarily on capitalization. Table 12 shows the performance of our system on the single-case corpora. The left half of the table contains the error rates and F values for sentence boundary detection and abbreviation detection on the lowercase corpora. The right half gives the corresponding values for the tests on the uppercase corpora. The last two rows of the table compare these results with those Punkt achieved on the mixed-case (MC) versions of the newspaper corpora; compare Section 6.4.1. As can be seen in Table 12 , the performance of our system is only minimally affected by the loss of capitalization information, slightly more so on the all-lowercase corpora. The error rate our system produces for the task of sentence boundary detection is 0.41% higher on average on the lowercase corpora than on the mixed-case corpora. The increase in the error rate on the uppercase corpora is slightly lower: 0.29%. For the task of abbreviation detection, the increase in the error rates is even lower: 0.14% on the lowercase corpora and 0.13% on the uppercase corpora. This is expected because Punkt only uses capitalization information as evidence during the token-based correction and reclassification stage and not as primary evidence for the detection of abbreviations. The experiments on the single-case corpora show that Punkt is quite robust and well suited also to process single-case text.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 480, |
| "end": 488, |
| "text": "Table 12", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 967, |
| "end": 975, |
| "text": "Table 12", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments with Single-case", |
| "sec_num": "6.4.3" |
| }, |
| { |
| "text": "Punkt is able to dynamically detect abbreviations in the test corpus itself. It therefore does not depend on precompiled abbreviation lists like some of its competitors; compare Section 7. But even though an abbreviation list is not necessary for Punkt to perform well, such a list can easily be integrated into its architecture. The abbreviations read from such a list are simply added to those the system has detected in the test corpus after the type-based stage. Ideally, one would use a domain-specific abbreviation list if the domain of the test corpus is known beforehand. However, we wanted to determine the usefulness of general-purpose abbreviation lists derived from general-purpose dictionaries. We have therefore built such abbreviation lists by extracting by hand all abbreviations from a German spelling dictionary-the Rechtschreibduden (Dudenredaktion 2004)and all English abbreviations from a bilingual dictionary-the small Muret-Sanders English-German dictionary by Langenscheidt (Willmann and Messinger 1996) . This yielded a total number of 769 abbreviations for German and 1,537 for English; compare Table 13 . We then made three additional versions of these lists from which we deleted potentially harmful entries: one from which we removed all abbreviations that had obvious non-abbreviation homographs, one from which we removed all single-character abbreviations, and one from which we removed both. Table 13 gives the number of remaining abbreviations for these different versions. We produced these additional versions to test how much care is needed when preparing abbreviation lists for a system like Punkt.", |
| "cite_spans": [ |
| { |
| "start": 998, |
| "end": 1027, |
| "text": "(Willmann and Messinger 1996)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1121, |
| "end": 1129, |
| "text": "Table 13", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 1425, |
| "end": 1433, |
| "text": "Table 13", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments with Additional Abbreviation Lists.", |
| "sec_num": "6.4.4" |
| }, |
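The list preparation and integration described above can be sketched in a few lines. Here `wordlist` stands in for the ordinary-word entries of a dictionary and `detected` for the abbreviation types Punkt finds on the fly; both names, and the helper functions, are our assumptions rather than the paper's code.

```python
# Sketch of preparing a general-purpose abbreviation list and merging it
# into Punkt's inventory after the type-based stage (names are ours).

def filtered_list(abbrevs, wordlist, drop_homographs=True, drop_single=True):
    """Remove entries likely to do more harm than good: abbreviations that
    are homographs of ordinary words, and single-character abbreviations."""
    kept = set(abbrevs)
    if drop_homographs:
        kept = {a for a in kept if a.rstrip(".") not in wordlist}
    if drop_single:
        kept = {a for a in kept if len(a.rstrip(".")) > 1}
    return kept

def merge(detected, extra_list):
    """Abbreviations from the list are simply added to those the system
    has detected in the test corpus."""
    return set(detected) | set(extra_list)
```

The two filters correspond to the two kinds of potentially harmful entries removed from the additional list versions in Table 13.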
| { |
| "text": "We then carried out two experiments on the German and English newspaper corpora and the Brown corpus. In the first experiment, we tested how well Punkt performed when it was provided with the different abbreviation lists in addition to the abbreviations it was able to detect on the fly; see Table 14 for the results.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 292, |
| "end": 300, |
| "text": "Table 14", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments with Additional Abbreviation Lists.", |
| "sec_num": "6.4.4" |
| }, |
| { |
| "text": "The results show that Punkt can indeed benefit from additional abbreviation lists, but only if these are prepared with care. Providing such a carefully prepared abbreviation list reduced the error rate of our system on the WSJ corpus from 1.65% to 1.58%, the error rate on the Brown corpus from 1.02% to 0.92%, and the error rate on the German NZZ corpus from 0.35% to 0.32%. Additional general-purpose abbreviation lists thus do improve the performance of our system, but the decrease of the error rate is not very great. Table 14 also shows that abbreviation lists from which abbreviations homographic to ordinary words and single-character abbreviations have not been removed are not helpful at all and instead lead to an increased error rate on all of the three corpora.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 523, |
| "end": 531, |
| "text": "Table 14", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments with Additional Abbreviation Lists.", |
| "sec_num": "6.4.4" |
| }, |
| { |
| "text": "In the second experiment, Punkt could only use the abbreviations on the different lists and was not allowed to add any additional abbreviations on the fly. This experiment thus really tests the coverage of general-purpose abbreviation lists and also the productivity of abbreviation use in the test corpora. Table 15 contains the results from this experiment. The column On the fly gives the error rates that Punkt achieved in its normal configuration detecting abbreviations on the fly without being provided with an additional abbreviation list. The remaining columns show the results it produced when it could use a fixed list of abbreviations only. A comparison between the first column and the other columns makes clear that abbreviation use in the corpora is quite productive and that fixed general-purpose abbreviation lists are clearly not sufficient for sentence boundary detection. A versatile sentence boundary detection system should therefore always be able to detect unknown abbreviations on the fly.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 308, |
| "end": 316, |
| "text": "Table 15", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments with Additional Abbreviation Lists.", |
| "sec_num": "6.4.4" |
| }, |
| { |
| "text": "In this section, we take a look at the contributions of the individual parts of the system to its overall performance. First, we tried to determine the effectiveness of reclassifying the different candidate classes in the token-based stage separately using specific combinations of evidence, namely, ordinary abbreviations, ellipses, initials, and numbers. We therefore built four different versions of our system, which are described in Table 16 . The different configurations become cumulatively more specialized in their treatment of the different candidate classes from System 1 with no token-based reclassification at all to the complete System 4. Table 17 gives the results that the four different configurations achieved on the eleven newspaper corpora. It shows that the cumulatively more specialized treatment of the different candidate classes helps to improve on the error rate of the type-based stage considerably. Moreover, a separate reclassification of initials and numbers is quite effective for reducing the error rate on the newspaper corpora, often even more so than the detection of sentence boundaries after abbreviations and ellipses. The separate treatment of initials is quite beneficial for the English corpus, for example, reducing the error rate from 2.06% to 1.65% (System 2 vs. System 3), but also for Dutch, French, Italian, Norwegian and Spanish, while the detection of ordinal numbers is a very important factor for the German newspaper corpus, reducing the error rate from 2.25% to only 0.35% (System 3 vs. System 4), and also for the Estonian, Norwegian, and Turkish corpora. For all languages that use the period to mark ordinal numbers, the detection of ordinal numbers thus turns out to be a very important subtask of sentence boundary disambiguation. 
A comparison between System 3 and System 4 also shows that leaving the detection of ordinal numbers on for languages that do not mark them with a final period is not really harmful, resulting in a maximal increase in the error rate of 0.03%. In a second experiment, we have tested the usefulness of the different heuristics used during the token-based stage. Table 18 provides information on the five different configurations we have evaluated. We have again added heuristics cumulatively: first, the collocation heuristic; next the frequent sentence starter heuristic; then the orthographic heuristic; and finally, the special orthographic heuristic for initials. Table 19 contains the error rates produced by Systems A to E on the eleven newspaper corpora. It confirms that all heuristics contribute to the performance of the system, though to different degrees depending on the specific corpus. It also shows that the collocation heuristic is very effective in reducing the error rate on the different corpora, more effective in fact than the orthographic heuristic. This fact supports our argument that the importance of brittle orthographic evidence can be reduced and sentence boundary detection can be made more robust by relying more on collocational evidence. The collocation heuristic reduces the error rate from 7.37% to 2.94% on the Estonian corpus, for example, and is also very effective for German, Norwegian, and Spanish. The impact of the frequent sentence starter heuristic is somewhat smaller, but it still leads to a substantial decrease in the error rate from 2.61% to 1.96% for French and to smaller reductions for all other languages. Although Punkt does not rely so much on capitalization, the orthographic heuristic still reduces the error rate from 2.80% to 2.18% for Estonian, for example, and leads to smaller improvements for the other languages except for English, where it causes a small increase in the error rate. 
As most combinations of initials and a following proper name are already captured by the collocation heuristic, the special orthographic heuristic for initials is only applied to complex names that occur infrequently and thus does not result in a large reduction of the error rate. Still, it never has a negative effect and is able to reduce the error rate from 1.84% to 1.65% on the English corpus and from 1.78% to 1.54% on the French corpus. We conclude that our heuristics are well motivated in that they decrease the error rates on the newspaper corpora substantially and never have any severe detrimental effect. Moreover, as Section 6.4.3 shows, they work effectively even for single-case corpora.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 438, |
| "end": 446, |
| "text": "Table 16", |
| "ref_id": null |
| }, |
| { |
| "start": 653, |
| "end": 661, |
| "text": "Table 17", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 2148, |
| "end": 2156, |
| "text": "Table 18", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 2454, |
| "end": 2462, |
| "text": "Table 19", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Contributions of the Individual Parts of the System.", |
| "sec_num": "6.4.5" |
| }, |
| { |
| "text": "6.4.6 Remaining Errors and Problematic Cases. The preceding sections have shown that Punkt's type-based abbreviation detection stage by itself is already quite effective and that the token-based heuristics are successful at further reducing the error rates on the test corpora. However, there remain some problematic cases, which Punkt currently is not able to deal with well and which we have to leave for further work. The problems that resulted in errors on the test corpora can be grouped into a few major types: (1) homography, (2) inconsistent use of abbreviations, (3) data sparseness, (4) insufficient or contradicting orthographic evidence, and (5) problems with text structure. The first type of error results from the type-based nature of our approach to abbreviation detection. Abbreviations may not be recognized as such and wrong sentence boundaries may be introduced if an abbreviation is a homograph of an ordinary word or an acronym because the type-based abbreviation detection stage is not able to distinguish between homographic types. For example, the English abbreviation in. for inch might not be recognized because it coincides with the frequent preposition in. All occurrences of in with and without a final period will be added together and the type in will most probably be classified as a non-abbreviation. All instances of in followed by a period will then be marked as ordinary words preceding a sentence boundary. This type of error was common in the English, Italian, and Portuguese test corpora.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contributions of the Individual Parts of the System.", |
| "sec_num": "6.4.5" |
| }, |
| { |
| "text": "Inconsistent use of abbreviations within the same corpus represents a related problem. Sometimes, (certain) abbreviations are not obligatorily marked with a final period. For example, abbreviations for physical units such as m for meter or kg for kilogram are sometimes marked with a final period and sometimes not. If both usages occur within the same test corpus, Punkt may classify such a type as a non-abbreviation, which may lead to incorrectly introduced sentence boundaries if the final period of some instances of this type serves as abbreviation marker only. This problem was important for the Swedish test corpus, where many frequent abbreviation types, such as osv. 'and so on', were sometimes used with a final period and sometimes without one.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contributions of the Individual Parts of the System.", |
| "sec_num": "6.4.5" |
| }, |
| { |
| "text": "The type-based abbreviation detection stage is also affected by data sparseness. If an abbreviation type, especially a long one, occurs only very infrequently, there is simply not enough collocational evidence to recognize it as an abbreviation. Similarly, if an infrequent ordinary word by chance always occurs in front of a period, it might be mistaken for an abbreviation. This type of error occurred frequently in the Norwegian and Spanish corpora, in which quite a few long and rare abbreviations were used. The issue of data sparseness can also have an impact on the collocational and orthographic heuristics in the token-based stage.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contributions of the Individual Parts of the System.", |
| "sec_num": "6.4.5" |
| }, |
| { |
| "text": "Sentence boundaries after abbreviations and ellipses are sometimes not found if the token following the period belongs to a type that does not provide good orthographic evidence for assuming a preceding sentence boundary. This is the case if the type is always capitalized, for example, because it is a proper noun, or if it occurs with an uppercase first letter within a sentence. If the type is rare it may also occur only at the beginning of a sentence by chance and will thus only be encountered in capitalized form. This lack of orthographic evidence had a detrimental effect on the recognition of abbreviations at the end of a sentence in the English corpus and on the detection of sentence boundaries after ellipses, for example, in the French corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contributions of the Individual Parts of the System.", |
| "sec_num": "6.4.5" |
| }, |
| { |
| "text": "Last but not least, not all parts of a text can be easily divided into sentences. For example, headlines are usually not terminated with a period. If an abbreviation is the last word in a headline, Punkt often tags it with <A><S> because the following token starts a new sentence and may provide good evidence for a preceding sentence boundary. However, a human annotator will probably consider the period following the abbreviation as an abbreviation marker only and not as a sentence boundary period in analogy to other headlines. Another problem relating to text structure is the question of whether an ordinal number in an enumerated list belongs to the same sentence as the list item itself or whether there is a sentence boundary between them. List items often begin with a capitalized word; compare, for example, the enumeration in Section 3. Punkt therefore mostly assumes that there is a sentence boundary between the number and the list item. Our human annotators, however, have considered the ordinal number to be part of the following sentence. These problems suggest that the automatic recognition of text structure could be quite beneficial for sentence boundary detection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contributions of the Individual Parts of the System.", |
| "sec_num": "6.4.5" |
| }, |
| { |
| "text": "Punkt is conceived as a language and domain-independent corpus preprocessing tool that can be used out of the box for all languages that use an alphabetic writing system and employ the same symbol to mark abbreviations and the end of sentence. It is our hypothesis that the threshold values we use should not vary much from language to language and from corpus to corpus. We have therefore determined optimal thresholds once on an English development corpus and retained these values for all experiments described so far; compare Section 5. We have carried out two experiments to substantiate our hypothesis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language Independence and Optional Recalibration.", |
| "sec_num": "6.4.7" |
| }, |
| { |
| "text": "In the first experiment, we have tested different threshold values for the type-based abbreviation detection stage. Figure 4 shows that the threshold value of 0.3, which we have used so far, turns out to be the ideal value for three of the corpora and that the minimum error rate for all eleven newspaper corpora lies at or is very close to this threshold value. Table 20 gives the differences between the best error rate on each corpus and the error rate produced by our system with the chosen threshold value of 0.3. The", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 116, |
| "end": 124, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 363, |
| "end": 371, |
| "text": "Table 20", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Language Independence and Optional Recalibration.", |
| "sec_num": "6.4.7" |
| }, |
| { |
| "text": "Error rates for different classification threshold values in the type-based stage. maximal difference is 0.13% for Italian. However, the average difference is only 0.03%. Moreover, Table 20 also indicates that the threshold values that produced the best results for the different languages-specified in parentheses in the Lowest column-all lie very close to 0.3. The biggest deviation is the optimal threshold value 0.6 for Estonian. The outcome of this experiment shows that abbreviations in the eleven languages behave very similarly and that the crucial type-based stage of Punkt can indeed be called language independent. This is further corroborated by a second experiment in which we trained generalized linear models for the detection of abbreviation types on the eleven newspaper corpora using a logit link function and no intercept term. In this experiment, we used the following three factors, which correspond to the collocational factor, the length factor, and the internal periods factor used in the type-based stage of Punkt:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 181, |
| "end": 189, |
| "text": "Table 20", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 4", |
| "sec_num": null |
| }, |
| { |
| "text": "Ratio: The ratio of occurrences of a candidate with a final period to all occurrences of this candidate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "Length: The length of the candidate type (excluding periods).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "Periods: The number of internal periods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
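| { |
| "text": "These three factors can be computed directly from corpus counts for each candidate type; a minimal sketch (function and argument names are ours, not Punkt's):
```python
def candidate_factors(count_with_period, count_total, candidate):
    # Ratio: occurrences with a final period relative to all occurrences.
    ratio = count_with_period / count_total
    # Length: length of the candidate type, excluding periods.
    length = len(candidate.replace('.', ''))
    # Periods: number of internal (non-final) periods.
    periods = candidate.count('.') - (1 if candidate.endswith('.') else 0)
    return ratio, length, periods
```
For a hypothetical candidate like i.e. seen 40 times, always with a final period, this yields the factor triple (1.0, 2, 1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |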
| { |
| "text": "We then examined the resulting parameters of the trained models in order to determine whether the evidence we use in the type-based stage of Punkt is significant information for an effective model for abbreviation detection. All three factors always make a highly significant contribution and cannot be dropped from the models. The factor Periods is sometimes a little less important than the other two as there are some corpora in which most abbreviations do not contain internal periods. Figure 5 indicates the variation of the parameters that we obtained for the three factors (on the logit link scale). It is remarkably small. The coefficient for the factor Periods exhibits the most variability, but is still quite stable. This relatively small variation of the parameters further substantiates our claim that the evidence we use for abbreviation detection can be considered language independent.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 490, |
| "end": 498, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
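| { |
| "text": "The form of the model in this second experiment (three factors, logit link, no intercept) can be sketched as follows. The data below are invented, and the plain gradient-ascent fit stands in for a full GLM routine, so this illustrates the model setup rather than reproducing the experiment:
```python
import numpy as np

# Each row: (Ratio, Length, Periods) for one candidate type; labels: 1 = abbreviation.
# Feature values are synthetic, chosen only for illustration.
X = np.array([
    [0.99, 3, 1],   # abbreviation-like: almost always with a final period, short
    [0.95, 2, 0],
    [1.00, 4, 3],   # contains internal periods
    [0.05, 5, 0],   # ordinary words: rarely period-final, longer
    [0.02, 7, 0],
    [0.10, 4, 0],
])
y = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

# Logistic model with a logit link and no intercept term,
# fitted by gradient ascent on the average log-likelihood.
w = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w += 0.1 * X.T @ (y - p) / len(y)

scores = 1.0 / (1.0 + np.exp(-(X @ w)))
```
After the fit, the abbreviation-like rows receive higher scores than the ordinary-word rows, and the Length coefficient comes out negative, matching the intuition that longer types are less likely to be abbreviations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |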
| { |
| "text": "We know from further experiments that the threshold values 7.88 and 30, which we have chosen for the collocation heuristic and the frequent sentence starter heuristic, respectively, work well for our test corpora but are not always the optimal values for each individual corpus. Although Punkt is conceived as a flexible, unsupervised system, one can optionally recalibrate the threshold values by providing it with a handannotated training corpus. We have tested this possibility by annotating a second French Table 20 Difference between lowest error rate and error rate achieved with a threshold of 0.3. B. Port. 1.11% 1.10% (0.4) 0.01% Italian 1.13% 1.00% (0.2) 0.13% Dutch 0.97% 0.93% (0.2) 0.04% Norwegian 0.81% 0.81% (0.3) 0.00% English 1.65% 1.59% (0.2) 0.06% Spanish 1.06% 1.06% (0.3) 0.00% Estonian 2.12% 2.10% (0.6) 0.02% Swedish 1.76% 1.76% (0.3) 0.00% French 1.54% 1.51% (0.2) 0.03% Turkish 1.31% 1.25% (0.4) 0.06% German 0.35% 0.34% (0.2) 0.01% Average difference: 0.03%", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 511, |
| "end": 519, |
| "text": "Table 20", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "Variation of parameters in a log-linear model for type-based abbreviation detection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "corpus by hand. It again comprises articles from the newspaper Le Monde. It contains 371,526 tokens in all and 13,664 tokens ending in a final period. Punkt achieved an error rate for sentence boundary detection of 1.84% on this corpus. We used this second French corpus as training corpus to recalibrate the threshold values for the collocation heuristic and the frequent sentence starter heuristic. The optimal values determined on this training corpus were 17 for the collocation heuristic and 5 for the frequent sentence starter heuristic. We then used these values in a second test run on our original French test corpus. The resulting error rate for the task of sentence boundary detection was 1.44%, while the error rate for abbreviation detection was 0.71%. These results are a little better than the ones achieved without recalibration (1.54% and 0.72%); compare Section 6.4.1. The optimal threshold values determined on the original test corpus itself are 6 for the collocation heuristic and 5 for the frequent sentence starter heuristic. The large difference between the best threshold values for the collocation heuristic on the two French corpora shows that the ideal value can vary from corpus to corpus, even if two corpora contain text of the same language and the same genre: in this case newspaper articles from Le Monde. Nevertheless, the lower error rate obtained in this experiment shows that Punkt can optionally be recalibrated on a training corpus to further optimize its performance. It can thus benefit from supervised training data, such as abbreviation lists or a training corpus, but does not require such data in order to perform well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
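| { |
| "text": "Schematically, this optional recalibration is a small grid search over the two thresholds on a hand-annotated corpus; error_rate below is a hypothetical stand-in for running the full detector and scoring it against the annotation:
```python
def recalibrate(colloc_candidates, starter_candidates, error_rate):
    # Try every combination of the two thresholds and keep the pair that
    # minimizes the sentence boundary error rate on the annotated corpus.
    return min(
        ((c, s) for c in colloc_candidates for s in starter_candidates),
        key=lambda pair: error_rate(*pair),
    )

# Toy stand-in for the detector: an error surface minimized at (17, 5),
# the values found on the second French training corpus.
toy_error = lambda c, s: (c - 17) ** 2 + (s - 5) ** 2
best = recalibrate([6, 7.88, 17, 30], [5, 30], toy_error)
```
With the toy error surface above, the search returns the pair (17, 5).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |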
| { |
| "text": "The results we presented in the previous section show that Punkt is able to achieve low error rates on corpora from eleven different languages, that it is well-suited to process different text genres, and that it is robust enough to deal with single-case text. Moreover, it reliably outperforms the three baseline algorithms, and its performance is much more stable than that of the baselines.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison to Other Systems", |
| "sec_num": "7." |
| }, |
| { |
| "text": "In this section, we want to compare the performance of our system directly to that of competing systems and discuss advantages and disadvantages of the different approaches to sentence boundary detection. Unless otherwise indicated, the term error rate in this section always refers to the error rate for the task of sentence boundary detection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison to Other Systems", |
| "sec_num": "7." |
| }, |
| { |
| "text": "Rule-based systems make use of hard-coded rules and fixed lists of lexical items such as abbreviations in order to identify which punctuation marks signal sentence boundaries and which do not; that is, they neither learn from an annotated training corpus nor use the test corpus itself to induce the required knowledge but rather employ precompiled resources usually provided by a human expert. We will discuss one such system by Silla, Valle, and Kaestner (2003) and compare its performance directly with that of our system. Moreover, we will also refer to results by Grefenstette (1999) . Silla, Valle, and Kaestner (2003) . The RE system 10 scans the test corpus until it encounters a period. It then compares the one token preceding and the one token following the period with a database of regular expressions that describe exceptions such as Web addresses, decimal numbers, and, most importantly, abbreviations, in which the period does not indicate the end of a sentence. If the preceding token and/or the following token match a regular expression in the database, the RE system concludes that the period does not indicate a sentence boundary and searches for the next period. If no matching regular expression is found, the period is classified as a sentence boundary.", |
| "cite_spans": [ |
| { |
| "start": 430, |
| "end": 463, |
| "text": "Silla, Valle, and Kaestner (2003)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 569, |
| "end": 588, |
| "text": "Grefenstette (1999)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 591, |
| "end": 624, |
| "text": "Silla, Valle, and Kaestner (2003)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rule-based Systems", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "In Strunk, Silla, and Kaestner (2006) , we have compared Punkt's performance to that of the RE system on two test collections: on articles from the WSJ taken from the TIPSTER document collection (TREC reference number: WSJ-910130) and on the Brazilian Portuguese Lacio-Web Corpus (Aluisio et al. 2003 ). The RE system was specifically developed for English newspaper texts. In order to use it on the Portuguese Lacio-Web corpus, 240 new regular expressions, which match Portuguese abbreviations, had to be added. Silla and Kaestner (2004) describe the adaptation process as \"easy, although time consuming.\"", |
| "cite_spans": [ |
| { |
| "start": 3, |
| "end": 37, |
| "text": "Strunk, Silla, and Kaestner (2006)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 280, |
| "end": 300, |
| "text": "(Aluisio et al. 2003", |
| "ref_id": null |
| }, |
| { |
| "start": 513, |
| "end": 538, |
| "text": "Silla and Kaestner (2004)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The RE system by", |
| "sec_num": "7.1.1" |
| }, |
| { |
| "text": "On the English TIPSTER test corpus, Punkt achieved results that were only slightly worse than those of the RE system: The RE system reached a precision of 92.39% and a recall of 91.18%, which yielded an F measure of 91.78%, while Punkt achieved a slightly lower precision of 90.70% and a slightly higher recall of 92.34% resulting in an F measure of 91.51%. When comparing these results, it has to be kept in mind that the RE system employs a handcrafted list of more than 700 abbreviations and was specifically developed for English newspaper text, while Punkt was not given any information besides the test corpus itself. Punkt's performance on the English TIPSTER corpus is thus quite impressive. This is further corroborated by the fact that Punkt was able to outperform the RE system on the Portuguese Lacio-Web corpus, even though 240 Portuguese abbreviations had been collected for the database of the RE system by hand:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The RE system by", |
| "sec_num": "7.1.1" |
| }, |
| { |
| "text": "The RE system scored a precision of 91.80% and a recall of 88.02%, which resulted in an F measure of 89.87%, while Punkt achieved a precision of 97.58% and a recall of 96.87%, yielding a much better F measure of 97.22%. In sum, Punkt almost matched the performance of the RE system on the English test corpus and clearly outperformed it on the Portuguese test corpus. (1999) . This book chapter by Grefenstette-which is based on earlier work by Grefenstette and Tapanainen (1994) -discusses different approaches to sen-tence boundary disambiguation using different sources of information and evaluates them on the Brown corpus, on which Punkt achieved an error rate of 1.02%. The first approach described is a simple regular expressions approach that tries to recognize abbreviations by matching against the following patterns (Grefenstette 1999, page 127) : r a single capital followed by a period, such as \"A.\", \"B.,\" and \"C.\"; r a sequence of letter-period-letter-period's, such as \"U.S.\", \"i.e.,\"", |
| "cite_spans": [ |
| { |
| "start": 368, |
| "end": 374, |
| "text": "(1999)", |
| "ref_id": null |
| }, |
| { |
| "start": 445, |
| "end": 479, |
| "text": "Grefenstette and Tapanainen (1994)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 827, |
| "end": 856, |
| "text": "(Grefenstette 1999, page 127)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The RE system by", |
| "sec_num": "7.1.1" |
| }, |
| { |
| "text": "and \"m.p.h.\"; r a capital letter followed by a sequence of consonants followed by a period, such as \"Mr.\", \"St.,\" and \"Assn.\". This approach produces a high error rate of 2.34%. Moreover, it makes language-specific assumptions in that strings such as Mr might well be valid ordinary words in other languages. Grefenstette considers a second approach in which he additionally tries to identify abbreviations with a type-based heuristic, which inspired our type-based baseline (TypeBL) (Grefenstette 1999 , pages 128 and 129):", |
| "cite_spans": [ |
| { |
| "start": 484, |
| "end": 502, |
| "text": "(Grefenstette 1999", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grefenstette", |
| "sec_num": "7.1.2" |
| }, |
| { |
| "text": "Let us define as a likely abbreviation any string of letters terminated by a period and followed by either a comma or semi-colon, a question mark, a lower-case letter, or a number, or followed by a word beginning with a capital letter and ending in a period. [. . . ] We can apply the corpus itself as a filter by eliminating from the list of likely abbreviations those strings that appear without a terminal period in the corpus.", |
| "cite_spans": [ |
| { |
| "start": 259, |
| "end": 267, |
| "text": "[. . . ]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grefenstette", |
| "sec_num": "7.1.2" |
| }, |
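| { |
| "text": "Both heuristics can be sketched briefly. The regular expressions below are our own reconstruction of the three patterns, and likely_abbreviations is our rendering of the quoted token-based heuristic plus corpus filter, not Grefenstette's original code:
```python
import re

# Reconstruction of the three abbreviation patterns (Grefenstette 1999, page 127).
patterns = [
    re.compile(r'[A-Z][.]'),                         # single capital + period: A., B., C.
    re.compile(r'(?:[A-Za-z][.]){2,}'),              # letter-period sequences: U.S., i.e., m.p.h.
    re.compile(r'[A-Z][bcdfghjklmnpqrstvwxz]+[.]'),  # capital + consonants + period: Mr., St., Assn.
]

def matches_abbreviation_pattern(token):
    return any(p.fullmatch(token) for p in patterns)

def likely_abbreviations(tokens):
    candidates = set()
    for tok, nxt in zip(tokens, tokens[1:]):
        if not (tok.endswith('.') and tok[:-1].isalpha()):
            continue
        # Unambiguous right contexts from the quoted description: comma/semicolon,
        # question mark, lowercase word, number, or a capitalized period-final word.
        if (nxt[0] in ',;?' or nxt[0].islower() or nxt[0].isdigit()
                or (nxt[0].isupper() and nxt.endswith('.'))):
            candidates.add(tok)
    # Corpus filter: drop strings that also appear without a terminal period.
    bare_types = {t for t in tokens if not t.endswith('.')}
    return {c for c in candidates if c[:-1] not in bare_types}
```
On a toy token sequence such as ['etc.', ',', 'cf.', 'the', 'ran.', 'then', 'ran', 'fast', 'No.', '5'], the filter step removes the sentence-final word ran. because the bare type ran also occurs in the corpus, leaving only the abbreviation-like types.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grefenstette", |
| "sec_num": "7.1.2" |
| }, |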
| { |
| "text": "The token-based and type-based heuristics combined produce an error rate of 1.65%, which is still much higher than that of Punkt. An alternative approach evaluated by Grefenstette is to use a lexicon containing all ordinary words in the Brown corpus but no abbreviations or proper names in combination with the type-based heuristic for abbreviation detection. Even an approach with this massive amount of lexical knowledge still produces an error rate that is higher than that achieved by Punkt: 1.73% versus 1.02%. Only when he uses the complete lexicon and a list of common abbreviations, is Grefenstette (1999) able to attain a result that is slightly better than ours: His most resource intensive system achieves an error rate of 0.93%.", |
| "cite_spans": [ |
| { |
| "start": 594, |
| "end": 613, |
| "text": "Grefenstette (1999)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grefenstette", |
| "sec_num": "7.1.2" |
| }, |
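| { |
| "text": "The F measures quoted in the RE-system comparison above are the harmonic mean of precision and recall; the reported values can be checked directly:
```python
def f_measure(precision, recall):
    # Balanced F measure: harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Values from the TIPSTER comparison (RE system vs. Punkt).
re_f = f_measure(92.39, 91.18)      # ~91.78
punkt_f = f_measure(90.70, 92.34)   # ~91.51
```
Rounding to two decimals reproduces the reported 91.78% and 91.51%, and likewise 89.87% and 97.22% for the Lacio-Web comparison.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grefenstette", |
| "sec_num": "7.1.2" |
| }, |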
| { |
| "text": "These results indicate that Punkt can keep up well with rule-based systems even when tested on the specific language and text type that they have been developed for. Whereas rule-based systems either require extensive lexical resources or a large amount of manual labor, Punkt can be applied to new languages and corpora out of the box with no manual adaptation. The fact that it can recognize new abbreviations on the fly is especially a great advantage because the results of the RE system on the Lacio-Web corpus and our own experiments in Section 6.4.4 show that rule or list-based systems are often not sufficient to cover the productivity of abbreviation use in new corpora and languages; compare also Mikheev (2002, pages 298, 299, and 311) and Silla and Kaestner (2004) .", |
| "cite_spans": [ |
| { |
| "start": 708, |
| "end": 747, |
| "text": "Mikheev (2002, pages 298, 299, and 311)", |
| "ref_id": null |
| }, |
| { |
| "start": 752, |
| "end": 777, |
| "text": "Silla and Kaestner (2004)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grefenstette", |
| "sec_num": "7.1.2" |
| }, |
| { |
| "text": "In this section, we discuss several supervised machine-learning approaches to sentence boundary detection described in the literature and compare their results to those achieved by Punkt. We regard those sentence boundary systems as supervised that require a set of manually disambiguated instances as training data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised Machine-Learning Systems", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "Riley induces a decision tree for sentence boundary detection using the following features (Riley 1989, pages 351 and 352) : The resulting decision tree is able to classify the periods in the Brown corpus with a very low error rate of only 0.2%, which is 0.82% better than that achieved by Punkt. However, the impressive performance of Riley's approach also requires impressive amounts of training data: He calculated the probabilities that a certain word occurs before or after a sentence boundary from 25 million words of AP newswire text. Such a large training corpus is probably not available for many languages; see also the comments in Palmer and Hearst (1997, page 245) . Moreover, the last of Riley's features, namely, abbreviation class, requires quite specific lexical knowledge about abbreviation types that can only be taken from additional handcrafted resources. It is unclear how well his approach would do with a realistic amount of training data and without these specific lexical resources. Palmer and Hearst (1997) . The Satz system 11 uses estimates of the part-of-speech distribution of the words surrounding potential end-of-sentence punctuation marks as input to a machine-learning algorithm. The part-of-speech information is derived from a lexicon that contains part-of-speech frequency data. In case a word is not in the lexicon, a part-of-speech distribution is estimated by different guessing heuristics. In addition, Satz also uses an abbreviation list and capitalization information. After training the system on a small training and a small cross-validation corpus, which consist of documents with sentence boundaries annotated by hand, it can be used on new documents to detect sentence boundaries. The system can work with any kind of machine-learning approach in principle. Palmer and Hearst's original results were obtained using neural networks and decision trees.", |
| "cite_spans": [ |
| { |
| "start": 91, |
| "end": 122, |
| "text": "(Riley 1989, pages 351 and 352)", |
| "ref_id": null |
| }, |
| { |
| "start": 642, |
| "end": 676, |
| "text": "Palmer and Hearst (1997, page 245)", |
| "ref_id": null |
| }, |
| { |
| "start": 1008, |
| "end": 1032, |
| "text": "Palmer and Hearst (1997)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Riley (1989).", |
| "sec_num": "7.2.1" |
| }, |
| { |
| "text": "We have used the same portion of the Wall Street Journal for evaluation in the preceding sections as Palmer and Hearst (1997) so that a direct comparison between the systems is possible. Palmer and Hearst report an error rate of 1.5% for the initial version of their system, which uses neural nets as machine-learning algorithm, a lexicon of 30,000 words including an abbreviation list comprising 206 items, and was trained on a training set of 573 cases and a cross-validation set of 258 cases. This result is only slightly better than the error rate of 1.65% that we obtained with Punkt. Their best result was produced by a version of their system that used decision tree induction, a lexicon of 5,000 words including the 206 abbreviations, and was trained on a set of 6,373 hand-annotated items. This configuration achieved an error rate of 1.0% on the same test corpus. Palmer and Hearst have also evaluated their system on one French corpus and two German corpora. For the French corpus, they report an error rate of 0.4%. On the two German corpora, their system produced error rates of 1.3% and 0.5%, respectively. Whereas the results of the Satz system for French are better than those for our system, their results on the two German corpora are worse than ours. However, as we were not able to use the same test corpora in our evaluation, these results are not directly comparable. Strunk, Silla, and Kaestner (2006) give the results of a comparative evaluation of the present system against three other approaches, including Satz, on English and Brazilian Portuguese corpora. For both of these corpora, Satz achieved a slightly lower error rate than our system; compare Table 22 .", |
| "cite_spans": [ |
| { |
| "start": 101, |
| "end": 125, |
| "text": "Palmer and Hearst (1997)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1390, |
| "end": 1424, |
| "text": "Strunk, Silla, and Kaestner (2006)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1679, |
| "end": 1687, |
| "text": "Table 22", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Satz System by", |
| "sec_num": "7.2.2" |
| }, |
| { |
| "text": "While it is true that Satz performs somewhat better than Punkt in general, this is only the case if enough training data and additional resources are available. When it was not provided with a precompiled list of abbreviations, for example, it produced an error rate as high as 4.9% on the WSJ corpus (Palmer and Hearst 1997, page 255) . This result combined with the results from Section 6.4.4, which showed that generalpurpose abbreviation lists are not sufficient due to the productivity of abbreviation use, suggests that a reliable performance of the Satz system on new corpora can only be ensured if it is provided with an abbreviation list suitable for the domain in question and it is ideally trained on documents of the same genre as the corpus it will be tested on. Moreover, Palmer and Hearst report that Satz possesses a similar robustness with regard to single-case corpora as our system. However, this is again only the case if it has been retrained specifically on single-case corpora (Palmer and Hearst 1997, pages 255, 256, 259) . Reynar and Ratnaparkhi (1997) use maximum-entropy modeling to learn contextual features from a hand-annotated training corpus that can be used to identify sentence boundaries. Their system, called MxTerminator, 12 employs features such as the token preceding a potential sentence boundary, the token following it, capitalization information about these tokens, whether one or both of them are abbreviations or not, and so on in its most portable version. It also induces a list of abbreviations from the training corpus by considering as an abbreviation every token in the training corpus that contains a possible end-of-sentence symbol but does not indicate a sentence boundary. This portable version does not depend on any lexical resources such as the part-of-speech information required by Satz. 
Reynar and Ratnaparkhi also built a version specialized for English newspaper text, which makes use of additional handcrafted resources: a list of honorific abbreviations such as Ms. and Dr. and a list of corporate-designating abbreviations such as Corp. and S.p.A. This specialized system achieved an error rate of 1.2% on the English WSJ test corpus also used by Palmer and Hearst and in our work. While this result is clearly better than that of our system, which produced an error rate of 1.65%, MxTerminator had to be trained on 39,441 sentences of WSJ text and used hand-crafted lexical resources. For the more portable version of their system without language-specific abbreviation lists, Reynar and Ratnaparkhi report an error rate of 2.0% on the same text corpus, an error rate that is higher than that achieved by our system without training or lexical resources.", |
| "cite_spans": [ |
| { |
| "start": 301, |
| "end": 335, |
| "text": "(Palmer and Hearst 1997, page 255)", |
| "ref_id": null |
| }, |
| { |
| "start": 1000, |
| "end": 1045, |
| "text": "(Palmer and Hearst 1997, pages 255, 256, 259)", |
| "ref_id": null |
| }, |
| { |
| "start": 1048, |
| "end": 1077, |
| "text": "Reynar and Ratnaparkhi (1997)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Satz System by", |
| "sec_num": "7.2.2" |
| }, |
| { |
| "text": "We have evaluated the portable version of MxTerminator on our eleven newspaper corpora using ten-fold cross-validation with nine tenths of the corpus as training set and one tenth as test set. Table 21 gives the average number of training cases and the mean error rates for MxTerminator on each corpus and compares them to those achieved by Punkt. Although MxTerminator achieves a slightly lower error rate on two corpora, namely Brazilian Portuguese and English, it produces an average error rate of 1.77% on the newspaper corpora, which lies well above our system's mean error rate of 1.26%, even though the number of training instances was sometimes as high as 34,256 (for German) and never fell below 10,000. MxTerminator's performance was also worse than that of our system in a comparative evaluation on English and Brazilian Portuguese corpora described in Strunk, Silla, and Kaestner (2006) .", |
| "cite_spans": [ |
| { |
| "start": 864, |
| "end": 898, |
| "text": "Strunk, Silla, and Kaestner (2006)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 193, |
| "end": 201, |
| "text": "Table 21", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "MxTerminator.", |
| "sec_num": "7.2.3" |
| }, |
| { |
| "text": "MxTerminator also shows that one cannot in general expect that the performance of a machine-learning system carries over to new corpora written in the same language without retraining. When Reynar and Ratnaparkhi evaluated their system on the Brown corpus without retraining, it achieved a relatively high error rate of 2.1% (cf. Punkt's error rate of 1.02%). The authors themselves remark:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MxTerminator.", |
| "sec_num": "7.2.3" |
| }, |
| { |
| "text": "We present the Brown corpus performance to show the importance of training on the genre of text on which testing will be performed. This points to the general problem that supervised systems that are not able to dynamically incorporate new knowledge, for example, by discovering abbreviation types on the fly, cannot be expected to perform reliably on new corpora if the specific domain or genre is not known beforehand; compare also Mikheev (2002, pages 298, 314) . , and Kokkinakis (1999) . These authors apply a specifically adapted version of transformation-based learning (Brill 1995) to the problem of sentence boundary detection. For the initial-state annotation, they assume that every possible sentence-ending punctuation mark does indeed indicate a sentence boundary. In the first learning stage, their system uses characteristics such as identity, length, capital- ization of the token containing a possible end-of-sentence boundary marker and of the token immediately following it as possible triggering environments to learn rules that remove sentence boundaries. In the second stage, it learns rules that reinsert sentence boundaries using the same space of possible triggering environments. Stamatatos, Fakotakis, and Kokkinakis (1999) trained their system on a hand-annotated corpus of Greek newspaper articles that contained 9,136 candidate punctuation marks. They tested it on a corpus containing articles from the same newspaper with 10,977 candidate punctuation marks and a lower bound of 79.6%. Their system learned 312 rules in all and produced an error rate of 0.6% on all punctuation marks, including periods, exclamation and question marks, and ellipses. When only periods and ellipses were considered, it achieved an error rate of 0.57% with a set of 234 rules. Punkt achieved a respectable error rate of 1.50% on this corpus without any training at all. 
As Stamatatos, Fakotakis, and Kokkinakis (1999) use individual words and brittle features such as capitalization information as triggering environments, their system is probably not very robust in that it requires training on a corpus that is very close to the texts that the system is intended to be used on in order to guarantee a low error rate.", |
| "cite_spans": [ |
| { |
| "start": 434, |
| "end": 464, |
| "text": "Mikheev (2002, pages 298, 314)", |
| "ref_id": null |
| }, |
| { |
| "start": 467, |
| "end": 490, |
| "text": ", and Kokkinakis (1999)", |
| "ref_id": null |
| }, |
| { |
| "start": 577, |
| "end": 589, |
| "text": "(Brill 1995)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MxTerminator.", |
| "sec_num": "7.2.3" |
| }, |
| { |
| "text": "Unsupervised systems are systems that neither require specific hand-written rules or lexical resources nor have to be trained on hand-annotated training examples. Instead, they extract the required information from the test corpus itself and/or additional unannotated text. We also regard our own approach as an unsupervised system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unsupervised Machine-Learning Systems", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": ". Mikheev proposes to combine sentence boundary detection, proper name identification, and abbreviation detection in one system. He tackles the sentence boundary disambiguation task with a set of simple rules that can be applied after the tokens immediately to the left and to the right of a potential sentence boundary marker have been fully disambiguated. The most important questions are whether the token preceding a period is an abbreviation and whether the token following a period is a proper name. In order to answer these questions, he uses a combination of typebased abbreviation-guessing heuristics, some of which have already been discussed in Grefenstette (1999) , and what he calls the document-centered approach to abbreviation detection and proper name identification. This approach is based on the idea of classifying a candidate type as a whole as a proper name or an abbreviation based on instances of that type that occur in unambiguous contexts: For example, a type that always appears with a final period and occurs before a lowercase word is likely to be an abbreviation, while a type that occurs with an uppercase first letter in the middle of a sentence is likely to be a proper name. He enhances these methods with the ability to distinguish between homographs (between an abbreviation and an ordinary word and between a proper name and an ordinary word) by collecting common sequences of more than one token that contain the candidate. His system requires some additional resources, namely, a list of common words (i.e., not proper names), a list of common words that are frequent sentence starters, a list of frequent proper names that coincide with common words, and a domain-specific list of abbreviations, which can all be created from an unannotated corpus without human intervention. 
Mikheev evaluated his system on the WSJ corpus and the Brown corpus of American English, extracting the required additional resources from a 300,000-word corpus of New York Times articles. It achieved an error rate of 0.45% on the WSJ corpus and an error rate of 0.28% on the Brown corpus, while Punkt's error rates on these corpora were 1.65% and 1.02%, respectively. 13 Mikheev himself admits that the use of an additional domain-specific abbreviation list, which has to be recreated for every new domain, is not always possible, especially if the system is expected \"to handle documents from unknown origin\" (Mikheev 2002, pages 305-306). When his system was not equipped with an additional abbreviation list, the error rates rose to 1.41% on the WSJ corpus and 0.65% on the Brown corpus and were thus more comparable to those achieved by Punkt. Mikheev also tested his system in conjunction with a morphological analyzer on a corpus of BBC articles in Russian and obtained an error rate of 0.1% for sentence boundary detection.", |
| "cite_spans": [ |
| { |
| "start": 656, |
| "end": 675, |
| "text": "Grefenstette (1999)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 2186, |
| "end": 2188, |
| "text": "13", |
| "ref_id": null |
| }, |
| { |
| "start": 2428, |
| "end": 2457, |
| "text": "(Mikheev 2002, pages 305, 306", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mikheev (2002)", |
| "sec_num": "7.3.1" |
| }, |
| { |
| "text": "While Mikheev's system and Punkt are quite similar in spirit, his system uses advanced methods for the identification of proper names, whereas we have mostly concentrated on abbreviation detection. In fact, combining Mikheev's insights with our methods of abbreviation detection would most likely lead to a further performance increase: Mikheev's abbreviation detection methods achieved an error rate of 6.6% on the WSJ and an error rate of 8.9% on the Brown corpus for the task of abbreviation detection when no additional abbreviation list was used. Even when such a list was consulted, his error rates (0.8% on the WSJ and 1.2% on the Brown corpus) remained above those achieved by Punkt (0.71% and 0.82%, respectively).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mikheev (2002)", |
| "sec_num": "7.3.1" |
| }, |
| { |
| "text": "Abbreviation detection is also the area where we have spotted some critical issues in Mikheev's approach. Although Mikheev aims at a domain-independent system, he makes some decisions that could be harmful for domains other than news and literary fiction. He places an arbitrary length limit on possible abbreviations by applying his document-centered approach to abbreviation detection only to candidates that have a maximal length of four letters (Mikheev 2002, page 299) . This limit is already too strict for some abbreviations found in English newspaper corpora, such as Messrs., Calif., and Thurs. The abbreviation Calif.(ornia), for instance, occurs 88 times in the portion of the WSJ corpus we used in our evaluation. The abbreviation lists that we have extracted from a German dictionary and a bilingual English-German dictionary (cf. Section 6.4.4) show that such an arbitrary length limit can be quite problematic. In the German list, 28% of all abbreviation types have a length greater than four. In the English list, 26% are longer than four characters.", |
| "cite_spans": [ |
| { |
| "start": 449, |
| "end": 473, |
| "text": "(Mikheev 2002, page 299)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mikheev (2002)", |
| "sec_num": "7.3.1" |
| }, |
| { |
| "text": "Moreover, Mikheev's document-centered approach is mostly based on capitalization information and is therefore unable to uncover abbreviations that are always followed by a capitalized word, while our approach uses collocational information pertaining to the abbreviation itself and its final period and thus does not incur this problem. Mikheev's strong reliance on capitalization also renders his approach unsuitable for single-case text and leads to some problems for German, where all nouns, not only proper names, are capitalized (Mikheev 2002, page 315). Last but not least, Mikheev's system does not include a specialized treatment of ordinal numbers, which we have shown to be quite important for some languages; compare Section 6.4.5. Mikheev (2000, 2002) describes the combination of a trigram tagger with his document-centered approach to abbreviation detection and proper name identification. After bootstrapping the training process on 20,000 words of tagged text, he trained the tagger in unsupervised mode on the Brown corpus and evaluated it on the WSJ corpus and vice versa. When he used the tagger alone for the disambiguation of possible end-of-sentence marks, it achieved an error rate of 1.95% on the WSJ corpus (vs. 1.65% achieved by Punkt) and an error rate of 0.98% on the Brown corpus (vs. 1.02% achieved by our system). Enhancing the tagger with the heuristics described above for identifying proper names and abbreviations improved the error rates to 0.31% on the WSJ corpus and 0.20% on the Brown corpus. When the heuristics were applied only to the test corpora themselves and no additional abbreviation list was employed, the resulting error rates were 1.39% on the WSJ corpus and 0.65% on the Brown corpus. 
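In contrast to these capitalization- and tagger-based methods, the collocational evidence our system relies on can be sketched with a Dunning-style log-likelihood ratio over word/period counts. The function below is a simplified illustration with assumed counts and an illustrative name, not the exact scaled statistic Punkt uses:

```python
import math

def period_collocation_score(c_w, c_period, c_joint, n):
    """Dunning-style log-likelihood ratio for how strongly a word type
    (frequency c_w) collocates with a following period: c_period periods
    occur among n tokens overall, c_joint of them directly after this word.
    Simplified sketch; Punkt applies a scaled variant of such a test."""
    eps = 1e-12  # guard against log(0)

    def ll(k, m, p):
        # log-likelihood of k successes in m Bernoulli trials with prob. p
        return k * math.log(p + eps) + (m - k) * math.log(1 - p + eps)

    p0 = c_period / n                      # H0: period independent of the word
    p1 = c_joint / c_w                     # H1: period rate after this word
    p2 = (c_period - c_joint) / (n - c_w)  # H1: period rate after other words
    return 2 * (ll(c_joint, c_w, p1) + ll(c_period - c_joint, n - c_w, p2)
                - ll(c_joint, c_w, p0) - ll(c_period - c_joint, n - c_w, p0))
```

An abbreviation-like type that almost always precedes a period (e.g., 98 joint occurrences out of 100) scores orders of magnitude higher than a word whose period rate matches the corpus baseline, which is what makes the test usable without any capitalization cues.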
When comparing these part-of-speech-tagging results with those of our system, it must be kept in mind that although Mikheev trained his tagger in unsupervised mode, the technique normally still requires an extensive lexicon that contains the possible parts of speech for each lexical item, a resource that is not available for every language.", |
| "cite_spans": [ |
| { |
| "start": 539, |
| "end": 563, |
| "text": "(Mikheev 2002, page 315)", |
| "ref_id": null |
| }, |
| { |
| "start": 749, |
| "end": 762, |
| "text": "Mikheev (2000", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 763, |
| "end": 779, |
| "text": "Mikheev ( , 2002", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mikheev (2002)", |
| "sec_num": "7.3.1" |
| }, |
| { |
| "text": "We have presented an unsupervised multilingual sentence boundary detection system that does not depend on any additional resources besides the corpus it is supposed to segment into sentences. It uses collocational information as a new type of evidence for the detection of abbreviations, initials, and ordinal numbers and is therefore much less dependent on orthographic information than competing systems. This enables it to detect sentence boundaries accurately even in single-case corpora. The experiments that we have carried out for eleven languages show that it is an accurate method well suited to different languages and text genres. Although its performance is slightly inferior to the best results published in the literature on sentence boundary detection (cf. Table 22), it can keep up well with rule-based methods and more straightforward supervised systems. Moreover, we were able to show that abbreviation use is quite productive and that systems relying on resources such as abbreviation lists have to be adapted specifically to every new language and every new domain, while no manual work and no language-specific resources are needed to adapt Punkt to new corpora.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 778, |
| "end": 789, |
| "text": ". Table 22)", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8." |
| }, |
| { |
| "text": "There are several types of abbreviations that are usually not marked with a final period: acronyms such as NATO or unit abbreviations such as kg (kilogram). These do not present a problem for sentence boundary detection and are therefore not discussed further in this article. We will henceforth use the term abbreviation to refer only to classes of abbreviations that are normally marked with a final period.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "These tags are not intended as XML tags and there are thus no corresponding closing tags </A>, </E>, and </S>. The tags are attached to the right edge of all tokens that end in a final period.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "N represents the number of tokens in the corpus. C(\u2022) is the number of times a token-final period occurs in the corpus. C(...) always signifies the absolute frequency of some element or elements in the corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "If all or certain abbreviations do not occur with a final period in a language, the problem of deciding between a sentence boundary and an abbreviation marker does not arise in that language or for those abbreviations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Sometimes, two or more initials are not separated from each other by spaces; compare L.F. Rothschild in example (1). Although such cases are usually still regarded as initials, our system does not treat them as such but as ordinary abbreviations, because it cannot distinguish abbreviations with internal periods from such run-on combinations of initials. This is, however, not harmful, because combinations like L.F. are normally short, contain internal periods, and will probably not occur without a following period, so that there is a high likelihood that they will be recognized as abbreviations in the type-based stage.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "As specific numbers often occur very infrequently, we fold all numeric types into one abstract type ##number## for the purposes of collocation detection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Only periods were classified. We did not include the less ambiguous exclamation and question marks in the evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "One exception is the French corpus. It contains rankings from sports events in which ranks are indicated using digits and a following period. Using the special detection of ordinal numbers on this corpus results in a lower error rate of 1.33% for the task of sentence boundary detection and 0.50% for abbreviation detection. However, as ordinal numbers in French are not usually indicated with final periods, we have given the results of the system without special treatment of ordinal numbers for French in Table 8.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "This is the name given to it in Silla and Kaestner (2004). RE stands for regular expressions. It is available from http://www.ppgia.pucpr.br/\u223csilla/softwares/yasd.zip.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Available from: http://elib.cs.berkeley.edu/src/satz/.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Available from: ftp://ftp.cis.upenn.edu/pub/adwait/jmx/jmx.tar.gz.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "However, Mikheev seems to have tested on the whole WSJ portion of the Penn Treebank, while we have used only Sections 03-06.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank Antti Arppe, J\u00f8rg Asmussen, Marti Hearst, Knut Hofland, Heiki-Jaan Kaalep, Celso Kaestner, Cristina Mota, Umut\u00d6zge, Bilge Say, Carlos Silla Jr., Efstathios Stamatatos, and particularly Katja Ke\u00dfelmeier, Anneli von K\u00f6nemann, and Kays Mutlu for their invaluable assistance in the construction of our test corpora. We are much obliged to the Gesellschaft der Freunde der Ruhr-Universit\u00e4t e.V. for financial support. We are also grateful to three anonymous reviewers who gave us very helpful comments and suggestions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The Lacio-Web Project: Overview and issues in Brazilian Portuguese corpora creation", |
| "authors": [ |
| { |
| "first": "Stella", |
| "middle": [ |
| "E O" |
| ], |
| "last": "Nunes", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Tagnin", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of Corpus Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "14--21", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nunes, and Stella E. O. Tagnin. 2003. The Lacio-Web Project: Overview and issues in Brazilian Portuguese corpora creation. In Proceedings of Corpus Linguistics, pages 14-21, Lancaster, UK.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Computational Linguistics", |
| "volume": "21", |
| "issue": "4", |
| "pages": "543--565", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brill, Eric. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543-565.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Duden. Die deutsche Rechtschreibung", |
| "authors": [], |
| "year": 2004, |
| "venue": "", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dudenredaktion, editor. 2004. Duden. Die deutsche Rechtschreibung (23rd ed.), volume 1 of Der Duden in zw\u00f6lf B\u00e4nden. Dudenverlag, Mannheim.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Accurate methods for the statistics of surprise and coincidence", |
| "authors": [ |
| { |
| "first": "Ted", |
| "middle": [], |
| "last": "Dunning", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "1", |
| "pages": "61--74", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dunning, Ted. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61-74.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Methods for the qualitative evaluation of lexical association measures", |
| "authors": [ |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Evert", |
| "suffix": "" |
| }, |
| { |
| "first": "Brigitte", |
| "middle": [], |
| "last": "Krenn", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "188--195", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Evert, Stefan and Brigitte Krenn. 2001. Methods for the qualitative evaluation of lexical association measures. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pages 188-195, Toulouse, France.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A synopsis of linguistic theory 1930-55", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "R" |
| ], |
| "last": "Firth", |
| "suffix": "" |
| } |
| ], |
| "year": 1957, |
| "venue": "Studies in Linguistic Analysis (special volume of the Philological Society). The Philological Society", |
| "volume": "", |
| "issue": "", |
| "pages": "1--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Firth, John R. 1957. A synopsis of linguistic theory 1930-55. In Studies in Linguistic Analysis (special volume of the Philological Society). The Philological Society, Oxford, pages 1-32.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Frequency Analysis of English Usage: Lexicon and Grammar", |
| "authors": [ |
| { |
| "first": "Nelson", |
| "middle": [ |
| "W" |
| ], |
| "last": "Francis", |
| "suffix": "" |
| }, |
| { |
| "first": "Henry", |
| "middle": [], |
| "last": "Kucera", |
| "suffix": "" |
| } |
| ], |
| "year": 1982, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Francis, Nelson W. and Henry Kucera. 1982. Frequency Analysis of English Usage: Lexicon and Grammar. Houghton Mifflin, New York.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Tokenization", |
| "authors": [ |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Grefenstette", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Syntactic Wordclass Tagging", |
| "volume": "", |
| "issue": "", |
| "pages": "117--133", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Grefenstette, Gregory. 1999. Tokenization. In Hans van Halteren, editor, Syntactic Wordclass Tagging. Kluwer Academic Publishers, Dordrecht, pages 117-133.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "What is a word, what is a sentence? Problems of tokenization", |
| "authors": [ |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Grefenstette", |
| "suffix": "" |
| }, |
| { |
| "first": "Pasi", |
| "middle": [], |
| "last": "Tapanainen", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 3rd International Conference on Computational Lexicography", |
| "volume": "", |
| "issue": "", |
| "pages": "79--87", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Grefenstette, Gregory and Pasi Tapanainen. 1994. What is a word, what is a sentence? Problems of tokenization. In Proceedings of the 3rd International Conference on Computational Lexicography, pages 79-87, Budapest, Hungary.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Scaled log likelihood ratios for the detection of abbreviations in text corpora", |
| "authors": [ |
| { |
| "first": "Tibor", |
| "middle": [], |
| "last": "Kiss", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Strunk", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of COLING 2002", |
| "volume": "", |
| "issue": "", |
| "pages": "1228--1232", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kiss, Tibor and Jan Strunk. 2002a. Scaled log likelihood ratios for the detection of abbreviations in text corpora. In Proceedings of COLING 2002, pages 1228-1232, Taipei, Taiwan. Kiss, Tibor and Jan Strunk. 2002b. Viewing sentence boundary detection as collocation identification. In Proceedings of KONVENS 2002, pages 75-82, Saarbr\u00fccken, Germany. Manning, Christopher D. and Hinrich Sch\u00fctze. 1999. Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Building a large annotated corpus of English: The Penn Treebank", |
| "authors": [ |
| { |
| "first": "Mitchell", |
| "middle": [ |
| "P" |
| ], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Beatrice", |
| "middle": [], |
| "last": "Santorini", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [ |
| "Ann" |
| ], |
| "last": "Marcinkiewicz", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "313--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marcus, Mitchell P., Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Tagging sentence boundaries", |
| "authors": [ |
| { |
| "first": "Andrei", |
| "middle": [], |
| "last": "Mikheev", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of ANLP-NAACL 2000", |
| "volume": "", |
| "issue": "", |
| "pages": "264--271", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mikheev, Andrei. 2000. Tagging sentence boundaries. In Proceedings of ANLP-NAACL 2000, pages 264-271, Seattle, Washington.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Periods, capitalized words, etc", |
| "authors": [ |
| { |
| "first": "Andrei", |
| "middle": [], |
| "last": "Mikheev", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Computational Linguistics", |
| "volume": "28", |
| "issue": "3", |
| "pages": "289--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mikheev, Andrei. 2002. Periods, capitalized words, etc. Computational Linguistics, 28(3):289-318.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Worterkennungsverfahren als Grundlage einer Universalmethode zur automatischen Segmentierung von Texten in S\u00e4tze. Ein Verfahren zur maschinellen Satzgrenzenbestimmung im Englischen", |
| "authors": [ |
| { |
| "first": "Hans", |
| "middle": [], |
| "last": "M\u00fcller", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Amerl", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Natalis", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "Sprache und Datenverarbeitung", |
| "volume": "4", |
| "issue": "1", |
| "pages": "46--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M\u00fcller, Hans, V. Amerl, and G. Natalis. 1980. Worterkennungsverfahren als Grundlage einer Universalmethode zur automatischen Segmentierung von Texten in S\u00e4tze. Ein Verfahren zur maschinellen Satzgrenzenbestimmung im Englischen. Sprache und Datenverarbeitung, 4(1):46-64.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "The Linguistics of Punctuation", |
| "authors": [ |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Nunberg", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "CSLI Lecture Notes. CSLI Publications", |
| "volume": "18", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nunberg, Geoffrey. 1990. The Linguistics of Punctuation, volume 18 of CSLI Lecture Notes. CSLI Publications, Stanford, CA.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Adaptive multilingual sentence boundary disambiguation", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "D" |
| ], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Marti", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hearst", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Computational Linguistics", |
| "volume": "23", |
| "issue": "2", |
| "pages": "241--267", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Palmer, David D. and Marti A. Hearst. 1997. Adaptive multilingual sentence boundary disambiguation. Computational Linguistics, 23(2):241-267.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A maximum entropy approach to identifying sentence boundaries", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [ |
| "C" |
| ], |
| "last": "Reynar", |
| "suffix": "" |
| }, |
| { |
| "first": "Adwait", |
| "middle": [], |
| "last": "Ratnaparkhi", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the Fifth ACL Conference on Applied Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "16--19", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Reynar, Jeffrey C. and Adwait Ratnaparkhi. 1997. A maximum entropy approach to identifying sentence boundaries. In Proceedings of the Fifth ACL Conference on Applied Natural Language Processing, pages 16-19, Washington, DC.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Some applications of tree-based modeling to speech and language indexing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [ |
| "D" |
| ], |
| "last": "Riley", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Proceedings of the DARPA Speech and Natural Language Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "339--352", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Riley, Michael D. 1989. Some applications of tree-based modeling to speech and language indexing. In Proceedings of the DARPA Speech and Natural Language Workshop, pages 339-352, Cape Cod, MA.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "An analysis of sentence boundary detection systems for English and Portuguese documents", |
| "authors": [ |
| { |
| "first": "Carlos", |
| "middle": [ |
| "N" |
| ], |
| "last": "Silla", |
| "suffix": "Jr." |
| }, |
| { |
| "first": "Celso", |
| "middle": [ |
| "A", |
| "A" |
| ], |
| "last": "Kaestner", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of CICLing 2004", |
| "volume": "", |
| "issue": "", |
| "pages": "135--141", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Silla Jr., Carlos N. and Celso A. A. Kaestner. 2004. An analysis of sentence boundary detection systems for English and Portuguese documents. In Proceedings of CICLing 2004, pages 135-141, Seoul, Korea. Silla Jr., Carlos N., Jaime Dalla Valle Jr., and Celso A. A. Kaestner. 2003. Detec\u00e7\u00e3o autom\u00e1tica de senten\u00e7as com o uso de express\u00f5es regulares. In Proceedings of CBComp 2003, pages 548-560, Itaja\u00ed, Brazil. Stamatatos, Efstathios, Nikos Fakotakis, and George K. Kokkinakis. 1999. Automatic extraction of rules for sentence boundary disambiguation. In Proceedings of the Workshop on Machine Learning in Human Language Technology, pages 88-92, Chania, Greece.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "A comparative evaluation of a new unsupervised sentence boundary detection approach on documents in English and Portuguese", |
| "authors": [ |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Strunk", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [ |
| "N" |
| ], |
| "last": "Silla", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "A" |
| ], |
| "last": "Celso", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kaestner", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of CICLing 2006", |
| "volume": "", |
| "issue": "", |
| "pages": "132--143", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Strunk, Jan, Carlos N. Silla Jr., and Celso A. A. Kaestner. 2006. A comparative evaluation of a new unsupervised sentence boundary detection approach on documents in English and Portuguese. In Proceedings of CICLing 2006, pages 132-143, Mexico City, Mexico. van Rijsbergen, Cornelis J. 1979. Information Retrieval. Butterworths, London.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Sentence boundary detection: A comparison of paradigms for improving MT quality", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [ |
| "J" |
| ], |
| "last": "Walker", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "E" |
| ], |
| "last": "Clements", |
| "suffix": "" |
| }, |
| { |
| "first": "Maki", |
| "middle": [], |
| "last": "Darwin", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [ |
| "W" |
| ], |
| "last": "Amtrup", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the MT Summit VIII", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Walker, Daniel J., David E. Clements, Maki Darwin, and Jan W. Amtrup. 2001. Sentence boundary detection: A comparison of paradigms for improving MT quality. In Proceedings of the MT Summit VIII, Santiago de Compostela, Spain. Willmann, Helmut and Heinz Messinger, editors. 1996. Langenscheidts Gro\u00dfw\u00f6rterbuch Englisch. Der Kleine Muret- Sanders (7th ed.), volume I: English- German. Langenscheidt, Berlin/Munich.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "num": null, |
| "text": "Architecture of the Punkt System.", |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "num": null, |
| "text": "Percentage of types of different lengths that are abbreviations in a Dutch corpus.", |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "num": null, |
| "text": "Prob[word with \".\" occurs at end of sentence] r Prob[word after \".\" occurs at beginning of sentence] r Length of word with \".\" r Length of word after \".\" r Case of word with \".\": Upper, Lower, Cap, Numbers r Case of word after \".\": Upper, Lower, Cap, Numbers r Punctuation after \".\" (if any) r Abbreviation class of word with \".\" -e.g., month name, unit-of-measure, title, address name, etc.", |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "text": "Candidate list from an English test corpus, with the counts C(w, \u2022) and C(w, \u00ac\u2022), the revised log \u03bb values, and the final sorting by scaled log \u03bb.", |
| "content": "<table><tr><td>Candidate type</td><td>C(w, \u2022)</td><td>C(w, \u00ac\u2022)</td><td>Revised log \u03bb</td><td>Final sorting</td><td>Scaled log \u03bb</td></tr><tr><td>n.h</td><td>5</td><td>0</td><td>28.08</td><td>n.h</td><td>7.60</td></tr><tr><td>u.s.a</td><td>5</td><td>0</td><td>28.08</td><td>a.g</td><td>6.08</td></tr><tr><td>alex</td><td>8</td><td>2</td><td>26.75</td><td>m.j</td><td>4.56</td></tr><tr><td>ounces</td><td>4</td><td>0</td><td>22.46</td><td>u.n</td><td>4.56</td></tr><tr><td>a.g</td><td>4</td><td>0</td><td>22.46</td><td>u.s.a</td><td>4.19</td></tr><tr><td>ga</td><td>4</td><td>0</td><td>22.46</td><td>ga</td><td>3.04</td></tr><tr><td>vt</td><td>4</td><td>0</td><td>22.46</td><td>vt</td><td>3.04</td></tr><tr><td>ore</td><td>5</td><td>1</td><td>18.99</td><td>ore</td><td>0.32</td></tr><tr><td>1990s</td><td>5</td><td>1</td><td>18.99</td><td>reps</td><td>0.31</td></tr><tr><td>mo</td><td>8</td><td>3</td><td>17.67</td><td>mo</td><td>0.30</td></tr><tr><td>m.j</td><td>3</td><td>0</td><td>16.85</td><td>1990s</td><td>0.26</td></tr><tr><td>depositor</td><td>3</td><td>0</td><td>16.85</td><td>ounces</td><td>0.06</td></tr><tr><td>reps</td><td>3</td><td>0</td><td>16.85</td><td>alex</td><td>0.03</td></tr><tr><td>u.n</td><td>3</td><td>0</td><td>16.85</td><td>depositor</td><td>0.00</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF2": { |
| "text": "Example data for the orthographic heuristic.", |
| "content": "<table><tr><td>Type</td><td colspan=\"6\">Uppercase Lowercase Uppercase Lowercase Uppercase Lowercase</td></tr><tr><td/><td>all</td><td>all</td><td>after sure</td><td>after sure</td><td>clearly</td><td>clearly</td></tr><tr><td/><td/><td/><td>sentence</td><td>sentence</td><td>sentence</td><td>sentence</td></tr><tr><td/><td/><td/><td>boundary</td><td>boundary</td><td>internally</td><td>internally</td></tr><tr><td>a</td><td>2,229</td><td>34,483</td><td>720</td><td>0</td><td>654</td><td>34,466</td></tr><tr><td>across</td><td>2</td><td>129</td><td>1</td><td>0</td><td>0</td><td>129</td></tr><tr><td>actual</td><td>3</td><td>52</td><td>2</td><td>0</td><td>0</td><td>52</td></tr><tr><td>ask</td><td>9</td><td>140</td><td>1</td><td>0</td><td>7</td><td>140</td></tr><tr><td>psychologists</td><td>2</td><td>4</td><td>1</td><td>0</td><td>0</td><td>4</td></tr><tr><td>smith</td><td>348</td><td>0</td><td>6</td><td>0</td><td>218</td><td>0</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF3": { |
| "text": "Newspaper corpora used in the evaluation.", |
| "content": "<table><tr><td>Language</td><td/><td>Origin</td><td>Content</td></tr><tr><td colspan=\"3\">Brazilian Portuguese CETENFolha corpus (Linguateca)</td><td>Folha de S. Paulo</td></tr><tr><td>Dutch</td><td colspan=\"2\">Multilingual Corpus 1 (ECI)</td><td>De Limburger</td></tr><tr><td>English</td><td colspan=\"2\">Penn Treebank (LDC)</td><td>Wall Street Journal</td></tr><tr><td>Estonian</td><td colspan=\"2\">By courtesy of the University of Tartu</td><td>Eesti Ekspress</td></tr><tr><td>French</td><td colspan=\"2\">Multilingual Corpus 1 (ECI)</td><td>Le Monde</td></tr><tr><td>German</td><td colspan=\"2\">Neue Z\u00fcrcher Zeitung AG CD-ROM</td><td>Neue Z\u00fcrcher Zeitung</td></tr><tr><td>Italian</td><td colspan=\"2\">Multilingual Corpus 1 (ECI)</td><td>La Stampa, Il Mattino</td></tr><tr><td>Norwegian</td><td colspan=\"2\">By courtesy of the Centre for Humanities</td><td>Bergens Tidende (Bokm\u00e5l</td></tr><tr><td/><td colspan=\"2\">Information Technologies, Bergen</td><td>and Nynorsk)</td></tr><tr><td>Spanish</td><td colspan=\"2\">Multilingual Corpus 1 (ECI)</td><td>Sur</td></tr><tr><td>Swedish</td><td colspan=\"2\">Multilingual Corpus 1 (ECI)</td><td>Dagens Nyheter (and others)</td></tr><tr><td>Turkish</td><td colspan=\"2\">METU Turkish Corpus (T\u00fcrk\u00e7e Derlem</td><td>Milliyet</td></tr><tr><td/><td colspan=\"2\">Projesi), by courtesy of the University</td></tr><tr><td/><td>of Ankara</td><td/></tr><tr><td>Table 5</td><td/><td/></tr><tr><td colspan=\"3\">Other corpora used in the evaluation.</td></tr><tr><td>Language</td><td>Origin</td><td colspan=\"2\">Content</td></tr><tr><td>English</td><td>Brown corpus</td><td colspan=\"2\">Balanced corpus of American English, different text genres</td></tr><tr><td>English</td><td>Project Gutenberg</td><td colspan=\"2\">The Works of Edgar Allan Poe in Five Volumes, Volumes I-III,</td></tr><tr><td/><td/><td>literary fiction</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF4": { |
| "text": "Statistical properties of the test corpora.", |
| "content": "<table><tr><td>Corpus</td><td>Tokens</td><td>Tokens with</td><td>Abbr.</td><td>Abbr.</td><td>Abbr.</td></tr><tr><td/><td/><td colspan=\"4\">final periods tokens tokens (%) types</td></tr><tr><td>B. Portuguese</td><td>321,032</td><td>15,250</td><td>481</td><td>3.15</td><td>102</td></tr><tr><td>Dutch</td><td>340,238</td><td>20,075</td><td>1,270</td><td>6.33</td><td>141</td></tr><tr><td>English -WSJ</td><td>469,396</td><td>26,980</td><td>7,297</td><td>27.05</td><td>196</td></tr><tr><td colspan=\"2\">English -Brown 1,105,348</td><td>54,722</td><td>5,586</td><td>10.21</td><td>213</td></tr><tr><td>English -Poe</td><td>324,247</td><td>11,247</td><td>600</td><td>5.33</td><td>59</td></tr><tr><td>Estonian</td><td>358,894</td><td>25,825</td><td>2,517</td><td>9.75</td><td>248</td></tr><tr><td>French</td><td>369,506</td><td>12,890</td><td>375</td><td>2.91</td><td>91</td></tr><tr><td>German</td><td>847,207</td><td>38,062</td><td>3,603</td><td>9.47</td><td>139</td></tr><tr><td>Italian</td><td>312,398</td><td>11,561</td><td>442</td><td>3.82</td><td>156</td></tr><tr><td>Norwegian</td><td>479,225</td><td>28,368</td><td>1,882</td><td>6.63</td><td>242</td></tr><tr><td>Spanish</td><td>352,773</td><td>13,015</td><td>570</td><td>4.38</td><td>84</td></tr><tr><td>Swedish</td><td>338,948</td><td>19,724</td><td>769</td><td>3.90</td><td>100</td></tr><tr><td>Turkish</td><td>333,451</td><td>21,047</td><td>598</td><td>2.84</td><td>103</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF5": { |
| "text": "Results of classification-newspaper corpora (mixed case).", |
| "content": "<table><tr><td>Corpus</td><td>Error</td><td>Prec.</td><td>Recall</td><td>F</td><td>Error</td><td>Prec.</td><td>Recall</td><td>F</td></tr><tr><td/><td colspan=\"8\">(<S>) (<S>) (<S>) (<S>) (<A>) (<A>) (<A>) (<A>)</td></tr><tr><td/><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td></tr><tr><td>B. Port.</td><td>1.11</td><td>99.14</td><td>99.72</td><td>99.43</td><td>0.99</td><td>96.88</td><td>70.89</td><td>81.87</td></tr><tr><td>Dutch</td><td>0.97</td><td>99.25</td><td>99.72</td><td>99.48</td><td>0.66</td><td>99.31</td><td>90.24</td><td>94.55</td></tr><tr><td>English</td><td>1.65</td><td>99.13</td><td>98.64</td><td>98.89</td><td>0.71</td><td>99.86</td><td>97.52</td><td>98.68</td></tr><tr><td>Estonian</td><td>2.12</td><td>98.58</td><td>99.07</td><td>98.83</td><td>1.75</td><td>98.22</td><td>83.51</td><td>90.27</td></tr><tr><td>French</td><td>1.54</td><td>99.31</td><td>99.08</td><td>99.19</td><td>0.72</td><td>95.19</td><td>79.20</td><td>86.46</td></tr><tr><td>German</td><td>0.35</td><td>99.69</td><td>99.93</td><td>99.81</td><td>0.26</td><td>99.91</td><td>97.34</td><td>98.61</td></tr><tr><td>Italian</td><td>1.13</td><td>99.32</td><td>99.49</td><td>99.41</td><td>0.74</td><td>96.60</td><td>83.48</td><td>89.56</td></tr><tr><td>Norw.</td><td>0.81</td><td>99.45</td><td>99.68</td><td>99.56</td><td>0.72</td><td>98.16</td><td>90.81</td><td>94.34</td></tr><tr><td>Spanish</td><td>1.06</td><td>99.66</td><td>99.23</td><td>99.45</td><td>0.35</td><td>98.70</td><td>93.33</td><td>95.94</td></tr><tr><td>Swedish</td><td>1.76</td><td>98.82</td><td>99.36</td><td>99.09</td><td>1.48</td><td>94.10</td><td>66.32</td><td>77.80</td></tr><tr><td>Turkish</td><td>1.31</td><td>99.40</td><td>99.24</td><td>99.32</td><td>0.43</td><td>95.35</td><td>89.13</td><td>92.13</td></tr><tr><td>Mean</td><td>1.26</td><td>99.25</td><td>99.38</td><td>99.31</td><td>0.80</td><td>97.48</td><td>85.62</td><td>90.93</td></tr><tr><td>SD</td><td>0.49</td><td>0.33</td><td>0.38</td><td>0.29</td><td>0.46</td><td>1.99</td><td>10.19</td><td>6.69</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF6": { |
| "text": "Comparison with the baselines-newspaper corpora (mixed case).", |
| "content": "<table><tr><td/><td/><td colspan=\"2\">Error <S></td><td/><td/><td colspan=\"2\">Error <A></td><td/></tr><tr><td>Corpus</td><td colspan=\"8\">Punkt AbsBL TokBL TypeBL Punkt AbsBL TokBL TypeBL</td></tr><tr><td/><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td></tr><tr><td>B. Port.</td><td>1.11</td><td>3.17</td><td>2.01</td><td>1.74</td><td>0.99</td><td>3.15</td><td>2.11</td><td>1.51</td></tr><tr><td>Dutch</td><td>0.97</td><td>6.50</td><td>5.66</td><td>1.63</td><td>0.66</td><td>6.33</td><td>5.59</td><td>1.36</td></tr><tr><td>English</td><td>1.65</td><td>25.60</td><td>13.37</td><td>7.14</td><td>0.71</td><td>27.05</td><td>14.96</td><td>5.40</td></tr><tr><td>Estonian</td><td>2.12</td><td>10.03</td><td>4.86</td><td>7.45</td><td>1.75</td><td>9.75</td><td>4.94</td><td>6.53</td></tr><tr><td>French</td><td>1.54</td><td>4.20</td><td>3.02</td><td>2.87</td><td>0.72</td><td>2.91</td><td>2.20</td><td>1.19</td></tr><tr><td>German</td><td>0.35</td><td>9.50</td><td>6.23</td><td>8.74</td><td>0.26</td><td>9.47</td><td>6.22</td><td>8.60</td></tr><tr><td>Italian</td><td>1.13</td><td>4.45</td><td>3.40</td><td>3.14</td><td>0.74</td><td>3.82</td><td>3.11</td><td>2.53</td></tr><tr><td>Norw.</td><td>0.81</td><td>6.57</td><td>2.98</td><td>5.44</td><td>0.72</td><td>6.63</td><td>3.09</td><td>5.10</td></tr><tr><td>Spanish</td><td>1.06</td><td>4.23</td><td>3.17</td><td>2.61</td><td>0.35</td><td>4.38</td><td>3.40</td><td>1.75</td></tr><tr><td>Swedish</td><td>1.76</td><td>4.02</td><td>1.68</td><td>2.58</td><td>1.48</td><td>3.90</td><td>1.83</td><td>1.79</td></tr><tr><td>Turkish</td><td>1.31</td><td>3.47</td><td>5.25</td><td>26.40</td><td>0.43</td><td>2.84</td><td>4.66</td><td>25.44</td></tr><tr><td>Mean</td><td>1.26</td><td>7.43</td><td>4.69</td><td>6.23</td><td>0.80</td><td>7.29</td><td>4.74</td><td>5.56</td></tr><tr><td>SD</td><td>0.49</td><td>6.46</td><td>3.24</td><td>7.48</td><td>0.46</td><td>7.01</td><td>3.69</td><td>7.05</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF7": { |
| "text": "Results of classification-other corpora (mixed case).", |
| "content": "<table><tr><td>Corpus</td><td>Error</td><td>Prec.</td><td>Recall</td><td>F</td><td>Error</td><td>Prec.</td><td>Recall</td><td>F</td></tr><tr><td/><td colspan=\"8\">(<S>) (<S>) (<S>) (<S>) (<A>) (<A>) (<A>) (<A>)</td></tr><tr><td/><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td></tr><tr><td>Brown</td><td>1.02</td><td>99.14</td><td>99.75</td><td>99.44</td><td>0.82</td><td>98.92</td><td>92.17</td><td>95.43</td></tr><tr><td>Poe</td><td>0.80</td><td>99.71</td><td>99.45</td><td>99.58</td><td>0.46</td><td>95.36</td><td>96.00</td><td>95.68</td></tr><tr><td>Table 11</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"7\">Comparison with the baselines-other corpora (mixed case).</td><td/><td/></tr><tr><td/><td/><td colspan=\"2\">Error <S></td><td/><td/><td colspan=\"2\">Error <A></td><td/></tr><tr><td colspan=\"9\">Corpus Punkt AbsBL TokBL TypeBL Punkt AbsBL TokBL TypeBL</td></tr><tr><td/><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td></tr><tr><td>Brown</td><td>1.02</td><td>9.83</td><td>7.17</td><td>3.59</td><td>0.82</td><td>10.21</td><td>7.55</td><td>3.27</td></tr><tr><td>Poe</td><td>0.80</td><td>5.03</td><td>4.12</td><td>3.12</td><td>0.46</td><td>5.33</td><td>4.42</td><td>2.76</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF8": { |
| "text": "", |
| "content": "<table><tr><td colspan=\"6\">Results of classification-newspaper corpora (single case).</td><td/><td/><td/></tr><tr><td/><td/><td colspan=\"3\">All-lowercase corpora</td><td/><td colspan=\"3\">All-uppercase corpora</td></tr><tr><td>Corpus</td><td>Error</td><td>F</td><td>Error</td><td>F</td><td>Error</td><td>F</td><td>Error</td><td>F</td></tr><tr><td/><td colspan=\"8\">(<S>) (<S>) (<A>) (<A>) (<S>) (<S>) (<A>) (<A>)</td></tr><tr><td/><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td></tr><tr><td>B. Port.</td><td>1.25</td><td>99.36</td><td>1.11</td><td>82.68</td><td>1.19</td><td>99.39</td><td>1.04</td><td>81.28</td></tr><tr><td>Dutch</td><td>1.00</td><td>99.47</td><td>0.72</td><td>94.12</td><td>0.85</td><td>99.55</td><td>0.57</td><td>95.34</td></tr><tr><td>English</td><td>2.30</td><td>98.44</td><td>0.71</td><td>98.66</td><td>2.04</td><td>98.63</td><td>0.72</td><td>98.65</td></tr><tr><td>Estonian</td><td>2.57</td><td>98.57</td><td>1.66</td><td>91.01</td><td>2.80</td><td>98.45</td><td>2.10</td><td>88.13</td></tr><tr><td>French</td><td>2.56</td><td>98.65</td><td>0.85</td><td>84.36</td><td>1.99</td><td>98.96</td><td>0.85</td><td>84.38</td></tr><tr><td>German</td><td>0.40</td><td>99.78</td><td>0.26</td><td>98.62</td><td>0.47</td><td>99.74</td><td>0.30</td><td>98.38</td></tr><tr><td>Italian</td><td>1.48</td><td>99.23</td><td>0.83</td><td>88.38</td><td>1.37</td><td>99.29</td><td>0.80</td><td>88.97</td></tr><tr><td>Norwegian</td><td>1.55</td><td>99.17</td><td>1.32</td><td>89.73</td><td>1.41</td><td>99.25</td><td>1.18</td><td>90.54</td></tr><tr><td>Spanish</td><td>1.31</td><td>99.31</td><td>0.45</td><td>94.69</td><td>1.12</td><td>99.41</td><td>0.32</td><td>96.24</td></tr><tr><td>Swedish</td><td>2.39</td><td>98.76</td><td>1.86</td><td>72.88</td><td>2.28</td><td>98.82</td><td>1.74</td><td>74.25</td></tr><tr><td>Turkish</td><td>1.53</td><td>99.20</td><td>0.57</td><td>89.74</td><td>1.54</td><td>99.20</td><td>0.58</td><td>89.30</td></tr><tr><td>Mean</td><td>1.67</td><td>99.09</td><td>0.94</td><td>89.53</td><td>1.55</td><td>99.15</td><td>0.93</td><td>89.59</td></tr><tr><td>SD</td><td>0.70</td><td>0.42</td><td>0.50</td><td>7.54</td><td>0.67</td><td>0.40</td><td>0.56</td><td>7.56</td></tr><tr><td>Mean (MC)</td><td>1.26</td><td>99.31</td><td>0.80</td><td>90.93</td><td>1.26</td><td>99.31</td><td>0.80</td><td>90.93</td></tr><tr><td>Difference</td><td>0.41</td><td>\u22120.22</td><td>0.14</td><td>\u22121.40</td><td>0.29</td><td>\u22120.16</td><td>0.13</td><td>\u22121.34</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF9": { |
| "text": "Abbreviation lists.", |
| "content": "<table><tr><td>List</td><td colspan=\"2\">English German</td></tr><tr><td>All abbreviations</td><td>1537</td><td>769</td></tr><tr><td>No homographs</td><td>1115</td><td>729</td></tr><tr><td>No single characters</td><td>1513</td><td>742</td></tr><tr><td>No homographs and no single characters</td><td>1112</td><td>703</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF10": { |
| "text": "Results of classification-using an additional abbreviation list.", |
| "content": "<table><tr><td/><td/><td/><td>Error <S></td><td/><td/></tr><tr><td colspan=\"6\">Corpus No list (%) All (%) No homogr. (%) No single chars. (%) Neither (%)</td></tr><tr><td>WSJ</td><td>1.65</td><td>1.97</td><td>1.58</td><td>1.96</td><td>1.58</td></tr><tr><td>Brown</td><td>1.02</td><td>1.75</td><td>0.93</td><td>1.72</td><td>0.92</td></tr><tr><td>NZZ</td><td>0.35</td><td>0.37</td><td>0.32</td><td>0.37</td><td>0.32</td></tr><tr><td/><td/><td/><td>Error <A></td><td/><td/></tr><tr><td colspan=\"6\">Corpus No list (%) All (%) No homogr. (%) No single chars. (%) Neither (%)</td></tr><tr><td>WSJ</td><td>0.71</td><td>0.89</td><td>0.63</td><td>0.88</td><td>0.63</td></tr><tr><td>Brown</td><td>0.82</td><td>1.44</td><td>0.69</td><td>1.42</td><td>0.70</td></tr><tr><td>NZZ</td><td>0.26</td><td>0.28</td><td>0.23</td><td>0.28</td><td>0.23</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF11": { |
| "text": "Results of classification-using only a fixed abbreviation list. Configurations for testing the effectiveness of separate reclassification in the token-based stage.", |
| "content": "<table><tr><td/><td/><td/><td>Error <S></td><td/><td/></tr><tr><td colspan=\"6\">Corpus On the fly (%) All (%) No homogr. (%) No single chars. (%) Neither (%)</td></tr><tr><td>WSJ</td><td>1.65</td><td>5.33</td><td>16.62</td><td>5.35</td><td>16.65</td></tr><tr><td>Brown</td><td>1.02</td><td>2.53</td><td>5.24</td><td>2.60</td><td>5.26</td></tr><tr><td>NZZ</td><td>0.35</td><td>2.55</td><td>2.64</td><td>2.59</td><td>2.67</td></tr><tr><td/><td/><td/><td>Error <A></td><td/><td/></tr><tr><td colspan=\"6\">Corpus On the fly (%) All (%) No homogr. (%) No single chars. (%) Neither (%)</td></tr><tr><td>WSJ</td><td>0.71</td><td>4.57</td><td>16.70</td><td>4.58</td><td>16.73</td></tr><tr><td>Brown</td><td>0.82</td><td>2.32</td><td>5.32</td><td>2.43</td><td>5.36</td></tr><tr><td>NZZ</td><td>0.26</td><td>2.48</td><td>2.56</td><td>2.51</td><td>2.60</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF12": { |
| "text": "Contributions of separate reclassifications-newspaper corpora (mixed case). Configurations for testing the effectiveness of the heuristics in the token-based stage.", |
| "content": "<table><tr><td/><td/><td>Error <S></td><td/><td/></tr><tr><td>Corpus</td><td colspan=\"4\">System 1 (%) System 2 (%) System 3 (%) System 4 (%)</td></tr><tr><td>B. Portuguese</td><td>1.37</td><td>1.27</td><td>1.11</td><td>1.12</td></tr><tr><td>Dutch</td><td>1.96</td><td>2.73</td><td>1.27</td><td>0.97</td></tr><tr><td>English</td><td>2.71</td><td>2.06</td><td>1.65</td><td>1.68</td></tr><tr><td>Estonian</td><td>7.37</td><td>6.88</td><td>6.46</td><td>2.12</td></tr><tr><td>French</td><td>2.94</td><td>2.04</td><td>1.54</td><td>1.33</td></tr><tr><td>German</td><td>2.38</td><td>2.32</td><td>2.25</td><td>0.35</td></tr><tr><td>Italian</td><td>2.18</td><td>1.90</td><td>1.13</td><td>1.14</td></tr><tr><td>Norwegian</td><td>4.09</td><td>3.92</td><td>3.38</td><td>0.81</td></tr><tr><td>Spanish</td><td>1.81</td><td>1.67</td><td>1.06</td><td>1.08</td></tr><tr><td>Swedish</td><td>2.42</td><td>2.16</td><td>1.95</td><td>1.76</td></tr><tr><td>Turkish</td><td>1.89</td><td>1.72</td><td>1.70</td><td>1.31</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF13": { |
| "text": "Contributions of the heuristics-newspaper corpora (mixed case).", |
| "content": "<table><tr><td/><td/><td/><td>Error <S></td><td/><td/></tr><tr><td>Corpus</td><td colspan=\"5\">System A (%) System B (%) System C (%) System D (%) System E (%)</td></tr><tr><td>B. Portuguese</td><td>1.37</td><td>1.28</td><td>1.13</td><td>1.12</td><td>1.11</td></tr><tr><td>Dutch</td><td>1.96</td><td>1.17</td><td>1.13</td><td>1.05</td><td>0.97</td></tr><tr><td>English</td><td>2.71</td><td>2.15</td><td>1.80</td><td>1.84</td><td>1.65</td></tr><tr><td>Estonian</td><td>7.37</td><td>2.94</td><td>2.80</td><td>2.18</td><td>2.12</td></tr><tr><td>French</td><td>2.94</td><td>2.61</td><td>1.96</td><td>1.78</td><td>1.54</td></tr><tr><td>German</td><td>2.38</td><td>0.47</td><td>0.42</td><td>0.36</td><td>0.35</td></tr><tr><td>Italian</td><td>2.18</td><td>1.60</td><td>1.39</td><td>1.23</td><td>1.13</td></tr><tr><td>Norwegian</td><td>4.09</td><td>1.44</td><td>1.34</td><td>0.87</td><td>0.81</td></tr><tr><td>Spanish</td><td>1.81</td><td>1.39</td><td>1.25</td><td>1.08</td><td>1.06</td></tr><tr><td>Swedish</td><td>2.42</td><td>2.13</td><td>1.94</td><td>1.76</td><td>1.76</td></tr><tr><td>Turkish</td><td>1.89</td><td>1.58</td><td>1.54</td><td>1.31</td><td>1.31</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF14": { |
| "text": "Comparison between MxTerminator (Reynar and Ratnaparkhi 1997) and the Punkt System.", |
| "content": "<table><tr><td/><td/><td/><td colspan=\"2\">Error <S></td><td/><td/><td/></tr><tr><td>Corpus</td><td colspan=\"3\">Cases MxTerm. Punkt</td><td>Corpus</td><td colspan=\"2\">Cases MxTerm.</td><td>Punkt</td></tr><tr><td/><td/><td>(%)</td><td>(%)</td><td/><td/><td>(%)</td><td>(%)</td></tr><tr><td>B. Port.</td><td>13,725</td><td>1.10</td><td>1.11</td><td>Italian</td><td>10,405</td><td>2.45</td><td>1.13</td></tr><tr><td>Dutch</td><td>18,068</td><td>1.13</td><td>0.97</td><td>Norwegian</td><td>25,531</td><td>1.34</td><td>0.81</td></tr><tr><td>English</td><td>24,282</td><td>1.53</td><td>1.65</td><td>Spanish</td><td>11,714</td><td>1.60</td><td>1.06</td></tr><tr><td colspan=\"2\">Estonian 23,243</td><td>2.79</td><td>2.12</td><td>Swedish</td><td>17,752</td><td>2.39</td><td>1.76</td></tr><tr><td>French</td><td>11,601</td><td>2.66</td><td>1.54</td><td>Turkish</td><td>18,942</td><td>1.77</td><td>1.31</td></tr><tr><td>German</td><td>34,256</td><td>0.63</td><td>0.35</td><td colspan=\"4\">Mean Error MxTerm.: 1.76%, Punkt: 1.26%</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| }, |
| "TABREF15": { |
| "text": "Direct comparison between Punkt and other systems for sentence boundary detection.", |
| "content": "<table><tr><td/><td>F Measure</td><td/><td colspan=\"2\">Error <S></td></tr><tr><td>System</td><td colspan=\"4\">Tipster-WSJ (%) Lacio Web (%) Brown (%) WSJ (%)</td></tr><tr><td>Punkt</td><td>91.51</td><td>97.22</td><td>1.02</td><td>1.65</td></tr><tr><td>RE</td><td>91.78</td><td>89.87</td><td>-</td><td>-</td></tr><tr><td>Grefenstette & Tapanainen</td><td>-</td><td>-</td><td>0.93</td><td>-</td></tr><tr><td>Riley</td><td>-</td><td>-</td><td>0.20</td><td>-</td></tr><tr><td>Satz</td><td>91.88</td><td>99.16</td><td>-</td><td>1.00</td></tr><tr><td>MxTerminator</td><td>-</td><td>-</td><td>2.10</td><td>1.20</td></tr><tr><td>MxTerminator (portable)</td><td>91.22</td><td>96.46</td><td>2.50</td><td>2.00</td></tr><tr><td>Mikheev</td><td>-</td><td>-</td><td>0.28</td><td>0.45</td></tr><tr><td>Mikheev (tagger only)</td><td>-</td><td>-</td><td>0.98</td><td>1.95</td></tr><tr><td>Mikheev (+ tagger)</td><td>-</td><td>-</td><td>0.20</td><td>0.31</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| } |
| } |
| } |
| } |