{
"paper_id": "W96-0201",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:59:05.090313Z"
},
"title": "A Geometric Approach to Mapping Bitext Correspondence",
"authors": [
{
"first": "I",
"middle": [
"Dan"
],
"last": "Melamed",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania Philadelphia",
"location": {
"postCode": "19104",
"region": "PA",
"country": "U.S.A"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The first step in most corpus-based multilingual NLP work is to construct a detailed map of the correspondence between a text and its translation. Several automatic methods for this task have been proposed in recent years. \"Yet even the best of these methods can err by several typeset pages. The Smooth Injective Map Recognizer (SIMR) is a new bitext mapping algorithm. SIMR's errors are smaller than those of the previous front-runner by more than a factor of 4. Its robustness has enabled new commercial-quality applications. The greedy nature of the algorithm makes it independent of memory resources. Unlike other bitext mapping algorithms, SIMR allows crossing correspondences to account for word order differences. Its output can be converted quickly and easily into a sentence alignment. SIMR's output has been used to align more than 200 megabytes of the Canadian Hansards for publication by the Linguistic Data Consortium.",
"pdf_parse": {
"paper_id": "W96-0201",
"_pdf_hash": "",
"abstract": [
{
"text": "The first step in most corpus-based multilingual NLP work is to construct a detailed map of the correspondence between a text and its translation. Several automatic methods for this task have been proposed in recent years. \"Yet even the best of these methods can err by several typeset pages. The Smooth Injective Map Recognizer (SIMR) is a new bitext mapping algorithm. SIMR's errors are smaller than those of the previous front-runner by more than a factor of 4. Its robustness has enabled new commercial-quality applications. The greedy nature of the algorithm makes it independent of memory resources. Unlike other bitext mapping algorithms, SIMR allows crossing correspondences to account for word order differences. Its output can be converted quickly and easily into a sentence alignment. SIMR's output has been used to align more than 200 megabytes of the Canadian Hansards for publication by the Linguistic Data Consortium.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The first step in most corpus-based multilingual NLP work is to construct a detailed map of the correspondence between a text and its translation (a bitext map).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Several automatic methods have been proposed for this task in recent years. However, most of these methods address only the sub-problem of alignment (Catizone et al. 1989 , Brown et al. 1991 , Debili & Sammouda 1992 , Simard et al. 1992 , Kay & RSscheisen 1993 , Wu 1994 .",
"cite_spans": [
{
"start": 149,
"end": 170,
"text": "(Catizone et al. 1989",
"ref_id": null
},
{
"start": 171,
"end": 190,
"text": ", Brown et al. 1991",
"ref_id": "BIBREF0"
},
{
"start": 191,
"end": 215,
"text": ", Debili & Sammouda 1992",
"ref_id": "BIBREF1"
},
{
"start": 216,
"end": 236,
"text": ", Simard et al. 1992",
"ref_id": null
},
{
"start": 237,
"end": 260,
"text": ", Kay & RSscheisen 1993",
"ref_id": null
},
{
"start": 261,
"end": 270,
"text": ", Wu 1994",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Alignment algorithms assume the availability of text unit boundary information and their output has less expressive power than a general bitext map. The only published solution to the more difficult general bitext mapping problem (Church 1993) can err by several typeset pages. Such frailty can expose lexicographers and ter-minologists to spurious concordances, feed noisy training data into statistical translation models, and degrade the performance of corpus-based machine translation. Some multilingual NLP tasks, such as automatic validation of terminological consistency (Macklovitch 1995) and automatic detection of omissions in translations (implemented for the first time in (Melamed 1996)), have been technologically impossible until now, because they are highly sensitive to large errors in the bitext map.",
"cite_spans": [
{
"start": 578,
"end": 596,
"text": "(Macklovitch 1995)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The Smooth Injective Map Recognizer (SIMR) is a greedy algorithm for mapping bitext correspondence. SIMR borrows several insights from previous work. Like and Brown et al. (1991) , SIMR relies on the high correlation between the lengths of mutual translations. Like char_.align (Church 1993) , SIMR infers bitext maps from likely points of correspondence between the two texts, points that are plotted in a two-dimensional space of possibilities. Unlike previous methods, SIMR searches for only a handful of points of correspondence at a time.",
"cite_spans": [
{
"start": 159,
"end": 178,
"text": "Brown et al. (1991)",
"ref_id": "BIBREF0"
},
{
"start": 278,
"end": 291,
"text": "(Church 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Each set of correspondence points is found in two steps. First, SIMR generates a number of possible points of correspondence between the two texts, as described in Section 3.1. Second, SIMR selects those points whose geometric arrangement most resembles the typical arrangement of true points of correspondence. This selection involves localized pattern recognition heuristics, which Section 3.2 refers to collectively as the chain recognition heuristic. SIMR then interpolates between successive selected points to produce a bitext map, as described in Section 3.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Several key terms will help to explain SIMR. First, a bitext (Harris 1988) comprises two versions of a text, such as a text in two different languages. Translators create a bitext each time they translate a text. Second, each bitext defines a rectangular bitext space, such as Figure 1 . The width and height of the rectangle are the lengths of the two component texts, in characters. The lower left corner of the rectangle is the origin of the bitext space and represents the two texts' beginnings. The upper right corner is the terminus and represents the texts' ends. The line between the origin and the terminus is the main diagonal. The slope of the main diagonal is the bitext slope.",
"cite_spans": [],
"ref_spans": [
{
"start": 277,
"end": 285,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2."
},
{
"text": "Each bitext space contains a number of true points of correspondence (TPCs), other than the origin and the terminus. For example, if a token at position p on the x-axis and a token at position q on the y-axis are translations of each other, then the coordinate (p, q) in the bitext space is a TPC 1. TPCs also exist at corresponding boundaries of text units such as sentences, paragraphs, and sections. Groups of TPCs with a roughly linear arrangement in the bitext space are called chains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2."
},
{
"text": "Bitext maps are bijective functions in bitext spaces. For each bitext, the true bitext map (TBM) is the shortest bitext map that runs through all the TPCs. The purpose of a bitext mapping algorithm is to produce bitext maps that are the best possible approximations of each bitext's TBM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2."
},
{
"text": "Most of SIMR's effort is spent searching for TPCs, one short chain at a time. The search for each chain begins in a small rectangular region of the bitext space, whose dimensions are proportional to those of the whole bitext space. Within this search 1Since distances in the bitext space are measured in characters, the position of a token is defined to be the mean position of its characters. rectangle, the search alternates between a generation phase and a recognition phase, which are described in more detail in Sections 3.1 and 3.2. In the generation phase, SIMR generates all the points of correspondence that satisfy the supplied matching predicate (explained below). In the recognition phase, SIMR calls the chain recognition heuristic to search for suitable chains among the generated points. If no suitable chains are found, the search rectangle is proportionally expanded up and to the right and the generationrecognition cycle is repeated. The rectangle keeps expanding until at least one acceptable chain is found. If more than one chain is found, SIMR accepts the chain whose points are least dispersed around its least-squares line. Then, SIMR selects another region of the bitext space to search for the next chain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SIMR",
"sec_num": "3."
},
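{
"text": "The search strategy just described fits in a few lines. The following Python sketch is an editorial illustration rather than SIMR's published implementation: the callbacks generate_points (the matching predicate sweep) and find_chains (the chain recognition heuristic) are assumed interfaces, and find_chains is assumed to return (dispersal, chain) pairs so that the least-dispersed chain can be kept.\n\ndef map_bitext(x_len, y_len, generate_points, find_chains, init_w=100):\n    # Illustrative sketch of the expanding-rectangle search; the\n    # callbacks and the growth factor are assumptions, not SIMR's code.\n    slope = y_len / x_len              # the bitext slope\n    chains = []\n    anchor = (0, 0)                    # the origin is always a TPC\n    while anchor[0] < x_len and anchor[1] < y_len:\n        w, best = init_w, None\n        while best is None and anchor[0] + w <= x_len:\n            # rectangle anchored at 'anchor', with its diagonal kept\n            # parallel to the main diagonal\n            rect = (anchor, (anchor[0] + w, anchor[1] + w * slope))\n            points = generate_points(rect)        # generation phase\n            scored = find_chains(points, slope)   # recognition phase\n            if scored:\n                best = min(scored)[1]  # least point dispersal wins\n            else:\n                w = int(w * 1.5)       # expand up and to the right\n        if best is None:\n            break                      # no more chains to be found\n        chains.append(best)\n        anchor = max(best)             # top right corner of the chain\n    return chains",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SIMR",
"sec_num": "3."
},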
{
"text": "SIMR employs a simple heuristic to select regions of the bitext space to search. To a first approximation, TBMs are monotonically increasing functions. This means that if SIMR accepts a chain, it should look for others either above and to the right or below and to the left of the one it has just located. All SIMR needs is a place to start the trace, and a good place to start is at the beginning. The origin of the bitext space is always a TPC. So, the first search rectangle is anchored at the origin. Subsequent search rectangles are anchored at the top right corner of the previously found chain, as shown in Figure 2 . The expanding-rectangle search strategy makes SIMR robust in the face of TBM discontinuities. Figure 2 shows a segment of the TBM trace that contains a vertical gap (an omission in the text on the x-axis). As the search rectangle grows, it will eventually pick up the TBM's trail, even if the discontinuity is quite large (Melamed 1996) . Section 3.8 explains why SIMR will not be led astray by false points of correspondence.",
"cite_spans": [
{
"start": 947,
"end": 961,
"text": "(Melamed 1996)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 614,
"end": 622,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 719,
"end": 727,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "SIMR",
"sec_num": "3."
},
{
"text": "A matching predicate is a heuristic for guessing whether a given point in the bitext space is a TPC. I have considered only token-based matching predicates, which can only return TRUE for a point (x, y) if x is the position of a token e on the x-axis and y is the position of a token f on the yaxis. For each such point, the matching predicate must decide whether the e and f are likely to be mutual translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Point Generation",
"sec_num": "3.1"
},
{
"text": "Various knowledge sources can be brought to bear on the decision. The most universal knowledge source is a translation lexicon. Translation lexicons can be extracted from machinereadable bilingual dictionaries (MRBDs), in the rare cases where MRBDs are available. In other cases, they can be induced automatically using any of several existing methods (Dagan et al. 1993 , Fung ~ Church 1991 , Melamed 1995 . Since the matching predicate does not require perfect accuracy, the induced lexicons need not be perfect. When a large translation lexicon is not available, a small hand-constructed translation lexicon for the key terms in a given bitext may suffice to produce a rough map for that bitext.",
"cite_spans": [
{
"start": 352,
"end": 370,
"text": "(Dagan et al. 1993",
"ref_id": "BIBREF0"
},
{
"start": 371,
"end": 391,
"text": ", Fung ~ Church 1991",
"ref_id": "BIBREF2"
},
{
"start": 392,
"end": 406,
"text": ", Melamed 1995",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Point Generation",
"sec_num": "3.1"
},
{
"text": "If the languages involved have similar alphabets, then it may be possible to construct a matching predicate with very little effort, using the method of cognates. Cognates are words with a common etymology and a similar meaning in different languages. The etymological similarity is often reflected in the words' orthography and/or pronunciation. Languages that are closely related will often share a large number of cognates. For example, in the non-technical Canadian Hansards (parliamentary debate transcripts available in English and French), cognates can be found for roughly one quarter of all text tokens (Melamed 1995). A cognate-based matching predicate will generate more points for more similar language pairs, and for text genres where more word borrowing occurs, such as technical texts. For English and French, such a matching predicate can generate enough points in the bitext space to obviate the need for a translation lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Point Generation",
"sec_num": "3.1"
},
{
"text": "Phonetic cognates can be used to map between language pairs with dissimilar alphabets, even when the languages are not closely related. When language L1 borrows a word from language L2, the word is usually written in L1 similarly to the way it sounds in L2. Thus, French and Russian /p~rtmone/ are cognates, as are English /sIstom/and Japanese/~isutemu/. For many lan- guages, it is not difficult to construct an approximate mapping from the orthography to its underlying phonological form. Given such a mapping for L1 and L2, it is possible to identify cognates despite incomparable orthographies. SIMR was tested on French and English with two different matching predicates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Point Generation",
"sec_num": "3.1"
},
{
"text": "The first matching predicate relies on orthographic cognates and a stop-list of closed-class words for both languages. SIMR judges the cognateness of each token pair by their Longest Common Subsequence Ratio (LCSR). The LCSR of a token pair is the number of characters that appear in the same order in both tokens divided by the length of the longer token (Melamed 1995). The common characters need not be contiguous. The matching predicate considers a token pair cognates if their LCSR exceeds a certain threshold. The LCSR threshold was optimized together with SIMR's other parameters, as described in Section 3.7. The stop-list of closed-class words made the matching predicate more accurate, because closed-class words are unlikely to have cognates. On the contrary, they often produce spurious matches. Examples for French and English include a, an, on and par.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Point Generation",
"sec_num": "3.1"
},
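{
"text": "As a concrete illustration of the LCSR test, here is a short Python sketch; this is an editorial addition, not the paper's code, and the 0.7 threshold and empty stop-list are placeholders, since the real threshold is optimized as described in Section 3.7.\n\ndef lcs_length(s, t):\n    # classic dynamic program for the length of the longest common\n    # subsequence; the common characters need not be contiguous\n    prev = [0] * (len(t) + 1)\n    for a in s:\n        cur = [0]\n        for j, b in enumerate(t, 1):\n            cur.append(prev[j - 1] + 1 if a == b else max(prev[j], cur[j - 1]))\n        prev = cur\n    return prev[-1]\n\ndef lcsr(s, t):\n    # LCS length divided by the length of the longer token\n    return lcs_length(s, t) / max(len(s), len(t))\n\ndef is_cognate(s, t, threshold=0.7, stop_list=frozenset()):\n    # closed-class words rarely have cognates and often produce\n    # spurious matches, so they are rejected outright\n    if s in stop_list or t in stop_list:\n        return False\n    return lcsr(s, t) >= threshold\n\n# e.g. is_cognate('gouvernement', 'government') is True: the LCS keeps\n# 10 of the longer token's 12 characters, so the LCSR is about 0.83",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Point Generation",
"sec_num": "3.1"
},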
{
"text": "The second matching predicate was just like the first, except that it also evaluated to TRUE whenever the input token pair appeared as an entry in a translation lexicon. The translation lexicon was automatically extracted from an MRBD (Cousin et al. 1991).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Point Generation",
"sec_num": "3.1"
},
{
"text": "As illustrated in Figure 3 , even short sequences of TPCs form characteristic patterns. In particular, TPCs have the following properties:",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Point Selection",
"sec_num": "3.2"
},
{
"text": "\u2022 Linearity: TPCs tend to line up straight. Sets of points with a roughly linear arrangement are called chains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Point Selection",
"sec_num": "3.2"
},
{
"text": "\u2022 Constant Slope: The slope of a TPC chain is rarely much different from the bitext slope.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Point Selection",
"sec_num": "3.2"
},
{
"text": "\u2022 Injectivity: No two points in a chain of TPCs can have the same x-or y-co-ordinates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Point Selection",
"sec_num": "3.2"
},
{
"text": "SIMR exploits these properties to decide which chains in the scatterplot might be TPC chains. The chain recognition heuristic involves two threshold parameters: maximum point dispersal and maximum angle deviation. Each threshold is used to filter candidate chains. First, the linearity of each chain is judged by measuring the root mean squared distance of the chain's points from the chain's least-squares line. If this distance exceeds the maximum point dispersal threshold, the chain is rejected. Second, the angle of each chain's least-squares line is compared to the arctangent of the bitext slope. If the difference exceeds the maximum angle deviation threshold, the chain is rejected. Lastly, chains that lack the injectivity property are rejected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Point Selection",
"sec_num": "3.2"
},
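{
"text": "In code, the three filters can be sketched as follows; this is an editorial illustration, and the threshold values shown are placeholders for the optimized ones of Section 3.7.\n\nimport math\n\ndef fit_line(chain):\n    # least-squares line y = m*x + b through the chain's points;\n    # assumes at least two distinct x coordinates\n    n = len(chain)\n    sx = sum(x for x, _ in chain)\n    sy = sum(y for _, y in chain)\n    sxx = sum(x * x for x, _ in chain)\n    sxy = sum(x * y for x, y in chain)\n    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)\n    return m, (sy - m * sx) / n\n\ndef accept_chain(chain, slope, max_dispersal=10.0, max_angle=0.1):\n    m, b = fit_line(chain)\n    # Linearity: RMS perpendicular distance from the least-squares line\n    rms = math.sqrt(sum(((y - m * x - b) ** 2) / (1 + m * m)\n                        for x, y in chain) / len(chain))\n    if rms > max_dispersal:\n        return False\n    # Constant Slope: the line's angle vs. arctangent of the bitext slope\n    if abs(math.atan(m) - math.atan(slope)) > max_angle:\n        return False\n    # Injectivity: no two points share an x- or y-coordinate\n    xs = [x for x, _ in chain]\n    ys = [y for _, y in chain]\n    return len(set(xs)) == len(xs) and len(set(ys)) == len(ys)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Point Selection",
"sec_num": "3.2"
},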
{
"text": "In a region of the scatterplot containing n points, there are 2 n possible chains --too many to search by brute force. The.properties of TPCs listed above provide two ways to constrain the search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the Search Space",
"sec_num": "3.3"
},
{
"text": "The Linearity property leads to a constraint on the chain size. Chains of only a few points are unreliable, because they often line up straight by coincidence. Chains that are too big will span too long a segment of the TBM to be well approximated by a line. SIMR chooses a fixed chain size k, 6 < k < 9. Fixing the chain size at k reduces the number of candidate chains to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the Search Space",
"sec_num": "3.3"
},
{
"text": "k (n Fortypicalvaluesofnandk, ( n ) k can still",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the Search Space",
"sec_num": "3.3"
},
{
"text": "reach into the millions. The Constant Slope property suggests another constraint: SIMR should consider only chains that are roughly parallel to the main diagonal. Two lines are parallel if the perpendicular displacement between them is constant. So, if we want to find chains that are roughly parallel to the main diagonal, we should look for chains whose points all have roughly the same displacement 2 from the main diagonal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the Search Space",
"sec_num": "3.3"
},
{
"text": "Points with similar displacement can be grouped together by sorting, as illustrated in Figure 4 . Then, chains that are most parallel to the main 2Displacement can be negative.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 95,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reducing the Search Space",
"sec_num": "3.3"
},
{
"text": "Figure 4: The points of correspondence are numbered according to their displacement from the main diagonal. The chain most parallel to the main diagonal is always one of the contiguous subsequences of this ordering. For a fixed chain size of 6, there are 13 - 6 + 1 = 8 contiguous subsequences in this region of 13 points. Of these 8, subsequence 5 is the best chain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "diagonal will be contiguous subsequences of the sorted point sequence. In a region of the scatterplot containing n points, there will be only n-k+l such subsequences of length k. Sorting the points by their displacement is the most computationally expensive step in the recognition process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
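{
"text": "In code, the reduction is a sort followed by a window scan; the following minimal sketch is an editorial illustration with assumed names.\n\nimport math\n\ndef displacement(point, slope):\n    # signed perpendicular distance from the main diagonal y = slope*x\n    x, y = point\n    return (y - slope * x) / math.sqrt(1 + slope * slope)\n\ndef candidate_chains(points, slope, k):\n    # after sorting by displacement, the chains most parallel to the\n    # main diagonal are the n-k+1 contiguous windows of size k\n    pts = sorted(points, key=lambda p: displacement(p, slope))\n    return [pts[i:i + k] for i in range(len(pts) - k + 1)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the Search Space",
"sec_num": "3.3"
},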
{
"text": "SIMR's chain recognition heuristic accepts non-monotonic chains. This is a desirable property, because even languages with similar syntax, like French and English, have well-known differences in word order. For example, English (adjective, noun) pairs usually correspond to French (noun, adjective) pairs. Such inversions result in chains that contain a pattern like points 5 and 9 in Figure 4 . SIMR has no problem accepting the inverted points, unlike bitext mapping algorithms that try to minimize the distance between TPCs. To my knowledge, no other bitext mapping algorithm allows non-monotonic map segments.",
"cite_spans": [],
"ref_spans": [
{
"start": 385,
"end": 393,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "You may wonder how SIMR will fare with languages that are less closely related, which have even more word order variation. This is an open question, but there is reason to be optimistic. To accommodate language pairs with vastly different word order, it may suffice for SIMR to increase the maximum point dispersal threshold, relaxing the linearity constraint on TPC chains. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "The Injectivity property also leads to a heuristic which reduces the number of candidate chains, although the chief aim of this heuristic is to increase the signal-to-noise ratio in the scatterplot.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing Noise",
"sec_num": "3.4"
},
{
"text": "The heuristic was introduced after inspection of several scatterplots in bitext spaces revealed a recurring noise pattern. This noise pattern is illustrated in Figure 5 . It consists of correspondence points that line up in rows or columns associated with frequent token types. Token types like the English article \"a\" can produce one or more correspondence points for almost every sentence in the opposite text. Since only one of these correspondence points can be correct, all but one of the points in each row and column are noise. It's difficult to measure exactly how much noise is generated by frequent tokens, and of course the proportion is different for every bitext. Visual inspection of some scatterplots indicated that frequent tokens are often responsible for the lion's share of the noise. Reducing this source of noise makes it much easier for SIMR to stay on track.",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Reducing Noise",
"sec_num": "3.4"
},
{
"text": "Other bitext mapping algorithms mitigate this source of noise either by assigning lower weights to correspondence points associated with frequent token types (Church 1993) or by simply deleting frequent token types from the bitext (Dagan et al. 1993 ). However, a frequent token type can be rare in some parts of the text. In those parts, the token type can provide valuable clues to correspondence. On the other hand, many tokens of a relatively rare type can be concentrated in a short segment of the text, resulting in many false correspondence points. The varying concentration of identical tokens suggests that more localized noise filters would be more effective. SIMR's localized search strategy provides the perfect vehicle for a localized noise filter.",
"cite_spans": [
{
"start": 231,
"end": 249,
"text": "(Dagan et al. 1993",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing Noise",
"sec_num": "3.4"
},
{
"text": "The filter is based on another threshold parameter, the maximum point ambiguity level (MaxPAL). For each point p = (x, y), let X be the number of points in column x within the search rectangle, and let Y be the number of points in row y within the search rectangle. Then, ambiguity level of p = X + Y -2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing Noise",
"sec_num": "3.4"
},
{
"text": "Thus, if p is the only point in its row and column, its ambiguity level is zero. SIMR ignores points whose ambiguity level exceeds the MaxPAL threshold. What makes this a localized filter is that only points within the search rectangle count towards each other's ambiguity level. This means that the ambiguity level of a given point can increase as the search rectangle expands; the set of points that SIMR ignores can change dynamically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing Noise",
"sec_num": "3.4"
},
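{
"text": "The filter amounts to two frequency counts per search rectangle; a minimal editorial sketch:\n\nfrom collections import Counter\n\ndef filter_ambiguous(points, max_pal):\n    # counts are taken only over points inside the current search\n    # rectangle, so the filter is localized: the same point may pass\n    # here and fail after the rectangle expands\n    col = Counter(x for x, _ in points)  # X: points sharing column x\n    row = Counter(y for _, y in points)  # Y: points sharing row y\n    return [(x, y) for x, y in points if col[x] + row[y] - 2 <= max_pal]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing Noise",
"sec_num": "3.4"
},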
{
"text": "A bitext map can be derived from a set of correspondence points by linear interpolation. The only complication is that linear interpolation is not well-defined for non-monotonic sets of points. It would be incorrect to simply connect the dots left to right, because the resulting function may not be one-to-one. To interpolate injeetive bitext maps, non-monotonic segments must be encapsulated in Minimum Enclosing Rectangles (MERs), as shown in Figure 6 . A unique bitext map can be interpolated by using the lower left and upper right corners of the MER, instead of using the non-monotonic correspondence points.",
"cite_spans": [],
"ref_spans": [
{
"start": 446,
"end": 454,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Interpolation",
"sec_num": "3.5"
},
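{
"text": "A minimal editorial sketch of this idea, in which the way inverted points are grouped into one MER is an assumption rather than the paper's specification: walk the points left to right, absorb points into a rectangle while the trace moves backwards along the y-axis, and flush each rectangle as its lower-left and upper-right corners.\n\ndef mer_corners(run):\n    # a single point stands for itself; otherwise take the minimum\n    # enclosing rectangle's lower-left and upper-right corners\n    if len(run) == 1:\n        return run\n    xs = [x for x, _ in run]\n    ys = [y for _, y in run]\n    return [(min(xs), min(ys)), (max(xs), max(ys))]\n\ndef interpolation_points(points):\n    # assumes a non-empty, injective set of (x, y) points\n    pts = sorted(points)\n    out, run = [], [pts[0]]\n    for p in pts[1:]:\n        if p[1] >= max(y for _, y in run):\n            out.extend(mer_corners(run))  # monotonic again: flush\n            run = [p]\n        else:\n            run.append(p)                 # y went down: extend the MER\n    out.extend(mer_corners(run))\n    return out  # monotonic point list, safe for linear interpolation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpolation",
"sec_num": "3.5"
},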
{
"text": "There are many possible enhancements to the algorithm outlined above. The following subsections describe but two of the more interesting extensions in the current implementation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements",
"sec_num": "3.6"
},
{
"text": "Large Non-monotonic Segments SIMR has no problem with small non-monotonic segments inside chains. However, the expanding rectangle search strategy can miss larger non-monotonic segments, which cannot fit inside one chain. If a more precise map is desired, these larger non-monotonic segments can be easily recovered during a second sweep through the bitext space\u2022 Non-monotonic TBM segments result in a characteristic map pattern, as a consequence of the injectivity of bitext maps. In Figure 7 , the vertical range of segment j corresponds to a vertical gap in SIMR's first-pass map. The horizontal range of segment j corresponds to a horizontal gap in SIMR's first-pass map. Similarly, any non-monotonic segment of the TBM will occupy the intersection of a vertical gap and a horizontal gap in the monotonic first-pass map. Furthermore, switched segments are almost always adjacent and relatively short. Therefore, to recover non-monotonic segments of the TBM, SIMR needs only to search gap intersections that are close to 6 the first-pass map. There are usually very few such intersections that are also large enough to accommodate new chains, so the second-pass search requires only a small fraction of the computational effort of the first pass.",
"cite_spans": [],
"ref_spans": [
{
"start": 486,
"end": 494,
"text": "Figure 7",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Enhancements",
"sec_num": "3.6"
},
{
"text": "Local Slope Variation To ensure that SIMR rejects spurious chains, the maximum angle deviation threshold must be set low. However, like any heuristic filter, this one will reject some perfectly valid candidates. The injectivity of bitext maps enables a method for recovering some of the rejected valid chains. Valid chains that are rejected by the angle deviation filter sometimes occur between two accepted chains, as shown in Figure 8 . search space Figure 8 : Chain X is perfectly valid, even though it has a highly deviant slope. Such chains can be recovered by re-searching regions between accepted chains. The slope of the local main diagonal can be quite different from the slope of the global main diagonal.",
"cite_spans": [],
"ref_spans": [
{
"start": 428,
"end": 436,
"text": "Figure 8",
"ref_id": null
},
{
"start": 452,
"end": 460,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Enhancements",
"sec_num": "3.6"
},
{
"text": "slope of the TBM between the end of Chain C and the start of Chain D must be much closer to the slope of Chain X than to the slope of the main diagonal. Chain X should be accepted. When SIMR makes its second-pass search for non-monotonic segments, it also searches for sandwiched chains in any space between two accepted chains that is large enough to accommodate another chain. This subspace of the bitext space will have its own main diagonal. The slope of this local main diagonal can be quite different from the slope of the global main diagonal. Another source of local slope variation is \"non-linguistic\" text, such as white space or tables of numbers. Usually, such text is copied \"as is\" during translation, resulting in regions of bitext space where the slope of the TBM is exactly 1. The problem is that these regions can be large enough to severely skew the slope of the main diagonal. Thus, they can fool SIMR into searching the whole bitext space for TPC chains whose slope is close to 1, even though most of the bitext It should not be difficult to recognize bitext sections that consist of \"non-linguistic\" text. Then, SIMR will be better able to follow the variations in the slope of the TBM. This extension to SIMR is next in line.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhancements",
"sec_num": "3.6"
},
{
"text": "The standard method of evaluating bitext mapping algorithms is to compare their output to a hand-constructed reference set of TPCs. Michel Simard of CITI graciously provided me with several such reference sets for French-English bitexts, including the same \"easy\" and \"hard\" Hansard bitexts that have been used to evaluate other bitext mapping and alignment algorithms in the literature (Church 1993 , Simard et al. 1992 , Dagan et al. 1993 . A non-Hansard reference set was used for SIMR's development. All of SIMR's parameters, namely the thresholds for maximum point dispersal, maximum angle deviation, maximum point ambiguity, and the LCSR used in the matching predicate, as well as the fixed chain size, were simultaneously optimized on this data set using simulated annealing (Vidal 1993) . Different parameter settings considered by the optimization process resulted in different bitext maps for the development bitext. Each set of parameter values was scored according to the root mean squared error between the resulting bitext map and the reference set of TPCs. The best-scoring set of parameter values was used to evaluate SIMR. SIMR was evaluated on the \"easy\" and \"hard\" Hansard bitexts. Note that these bitexts are so named because one was easier than the other for the alignment algorithm that was first evaluated on them. There is no a priori reason to believe that one or the other will be easier for SIMR. Table 1 compares SIMR's error distribution on these bitexts with that of the previous front-runner, char._al:i.gn, as reported by Church 7 (1993). SIMR's RMS error is lower by more than a factor of 4. SIMR is also much more robust: it rarely errs by more than half the length of an average sentence. Such robustness has enabled at least one new commercial-quality application -automatic detection of omissions in translations (Melamed 1996) . This task was impossible until now, because it cannot tolerate even a few wild errors, such as those produced by an independent implementation of char_al:i.gn (Simard 1995) .",
"cite_spans": [
{
"start": 387,
"end": 399,
"text": "(Church 1993",
"ref_id": "BIBREF0"
},
{
"start": 400,
"end": 420,
"text": ", Simard et al. 1992",
"ref_id": null
},
{
"start": 421,
"end": 440,
"text": ", Dagan et al. 1993",
"ref_id": "BIBREF0"
},
{
"start": 782,
"end": 794,
"text": "(Vidal 1993)",
"ref_id": "BIBREF7"
},
{
"start": 1850,
"end": 1864,
"text": "(Melamed 1996)",
"ref_id": "BIBREF6"
},
{
"start": 2026,
"end": 2039,
"text": "(Simard 1995)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1424,
"end": 1431,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.7"
},
{
"text": "Note that the error between a bitext map and each reference point can be defined as the horizontal distance, the vertical distance, or the distance perpendicular to the main diagonal. The latter distance will always be shortest, on average. Church (1993) did not specify which metric he used. Of the three possibilities, Table 1 conservatively reports the highest error estimates for SIMR. The lowest estimates for SIMR without the translation lexicon are an RMS error of 6.1 for the \"easy\" bitext and 5.4 for the \"hard\" bitext. With the translation lexicon, the lowest error estimates drop to 6.0 for the \"easy\" bitext and 4.6 for the \"hard\" bitext.",
"cite_spans": [],
"ref_spans": [
{
"start": 321,
"end": 328,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.7"
},
{
"text": "One concern about greedy algorithms is that if they wander off track, they may not be able to find their way back. There is no guarantee that this will never happen with SIMR. However, there is evidence that it is extremely unlikely. First, SIMR can wander off the right track only if there is an alternative (wrong) track. The noise reduction heuristics mentioned in Section 3.5 ensure that very few points of correspondence can be generated away from the TBM trace. Those points that are generated are extremely unlikely to be sufficiently linear and to have the proper slope to fool the chain recognition heuristic. The fixed chain size parameter also plays a role. The longer the chain, the less probable it is that a set of false points of correspondence will take on a valid-looking arrangement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.8"
},
{
"text": "The development bitext used in the simulated annealing parameter optimization contained over 40000 words. During the optimization, SIMR occasionally veered off course when the fixed chain size was 5 or less. It rarely got lost with a fixed chain size of 6 and never with a fixed chain size of 7 or more. The optimal fixed chain size with respect to the RMS error metric was 9 when the translation lexicon was used, and 8 when it was not. The chances of 8 or 9 false points of correspondence satisfying the maximum point dispersal, maximum angle deviation, and maximum point ambiguity level thresholds are negligible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.8"
},
{
"text": "Finally, if SIMR does get lost, the resulting bitext map will contain telltale discontinuities. Such discontinuities can be automatically detected with high reliability (Melamed 1996) . With this sanity check in place, manual verification should never be necessary.",
"cite_spans": [
{
"start": 169,
"end": 183,
"text": "(Melamed 1996)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.8"
},
{
"text": "SIMR has no idea that words are often used to make sentences. It just outputs a series of corresponding token positions, leaving users free to draw their own conclusions about how the texts' larger units correspond. However, many existing translators' tools and machine translation strategies are based on aligned sentences. What can SIMR do for them?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment",
"sec_num": "4."
},
{
"text": "There are several papers in the literature about bitext alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment",
"sec_num": "4."
},
{
"text": "The algorithms that seem to work best rely on the high correlation between the lengths of corresponding sentences (Brown et al. 1991 . However, these algorithms can fumble in bitext sections that contain many sentences of very similar length, like this vote record: A set of points of correspondence leads to alignment more directly than a translation model or a translation lexicon, because points of corre-spondence are a relation between token instances, not between token types. Moreover, a set of correspondence points, supplemented with sentence boundary information, expresses sentence correspondence, which is a richer representation than sentence alignment. Figure 9 illustrates how sentence boundaries form a grid over the bitext space 3. Each cell in the grid represents the intersection of two sentences, one from each component text. A point of correspondence inside cell (X,y) indicates that some token in sentence X corresponds with some token in sentence y; i.e. sentences X and y correspond. Thus, Figure 9 indicates that sentence e corresponds with sentences G and H.",
"cite_spans": [
{
"start": 114,
"end": 132,
"text": "(Brown et al. 1991",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 667,
"end": 675,
"text": "Figure 9",
"ref_id": "FIGREF11"
},
{
"start": 1015,
"end": 1023,
"text": "Figure 9",
"ref_id": "FIGREF11"
}
],
"eq_spans": [],
"section": "Alignment",
"sec_num": "4."
},
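{
"text": "Reading sentence correspondence off the grid is a pair of binary searches per point; the following editorial sketch assumes each text's sentence boundaries are given as sorted character offsets.\n\nfrom bisect import bisect_right\n\ndef cell(point, x_ends, y_ends):\n    # x_ends and y_ends hold the character offset at which each\n    # sentence ends; a point's cell is the pair of sentences whose\n    # spans contain its coordinates\n    x, y = point\n    return bisect_right(x_ends, x), bisect_right(y_ends, y)\n\ndef sentence_correspondence(points, x_ends, y_ends):\n    # every occupied cell asserts that its two sentences correspond\n    return {cell(p, x_ends, y_ends) for p in points}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment",
"sec_num": "4."
},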
{
"text": "In contrast to a correspondence relation, \"an alignment is a segmentation of the two texts such that the nth segment of one text is the translation of the nth segment of the other.\" (Simard et al. 1992) For example, given the token correspondences in Figure 9 , the segment (G, H) should be aligned with the segment (e, f). If sentences (Xi,...,Xn) align with sentences (yl,...,y,~), then ((X1,...,X,~),(yl,...,y,~)) is an aligned block. In geometric terms, aligned blocks are rectangular regions of the bitext space, such that the sides of the rectangles coincide with sentence boundaries, and such that no two rectangles overlap either vertically or horizontally. The aligned blocks in Figure 9 are outlined with solid lines.",
"cite_spans": [
{
"start": 182,
"end": 202,
"text": "(Simard et al. 1992)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 251,
"end": 259,
"text": "Figure 9",
"ref_id": "FIGREF11"
},
{
"start": 688,
"end": 696,
"text": "Figure 9",
"ref_id": "FIGREF11"
}
],
"eq_spans": [],
"section": "Alignment",
"sec_num": "4."
},
{
"text": "SIMR's initial output has more expressive power than the alignment that can be derived from it. One illustration of this difference is that sentence correspondence can express inversions, but sentence alignment cannot. Inversions occur surprisingly often in real bitexts, even for sentencesize text units. Figure 9 provides another illustration. If, instead of the point in cell (I-I,e), there was a point in cell (G,f), the correct alignment for that region would still be ((G, g), (e, f)). If there were points of correspondence in both (HI,e) and (G,f), the correct alignment would still be the same. Yet, the three cases are clearly different. If a lexicographer wanted to see a word in sentence G in its bilingual context, it would be useful to know whether sentence f is relevant.",
"cite_spans": [],
"ref_spans": [
{
"start": 306,
"end": 314,
"text": "Figure 9",
"ref_id": "FIGREF11"
}
],
"eq_spans": [],
"section": "Alignment",
"sec_num": "4."
},
{
"text": "Converting from sentence correspondence to sentence alignment is of dubious practical value. Nevertheless, in order to facilitate comparison of the geometric approach with other alignment algorithms, I have designed the Geometric Sentence Alignment (GSA) algorithm to reduce 3The techniques presented in this section can be applied equally well to paragraphs, lists of items, or any other text units for which boundary information is available. I I I I I I I @ I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I sets of correspondence points to alignments. The algorithm's first step is to perform a transitive closure over the input correspondence relation. For instance, if the input contains (G,e), (H,e), and (H,f), then GSA adds the pairing (G,f). Next, GSA forces all segments to be contiguous: If sentence Y corresponds with sentences x and z, but not y, the pairing (Y,y) is added. In geometric terms, these two operations arrange all cells that contain points of correspondence into nonoverlapping rectangles, while adding as few cells as possible. The result is an alignment relation. A complete set of TPCs, together with appropriate boundary information, guarantees a perfect alignment. Alas, the points of correspondence postulated by SIMR are neither complete nor noisefree. Fortunately, the noise in SIMR's output causes alignment errors in very predictable ways. GSA employs a couple of backing-off heuristics to elimninate most of the errors. SIMR makes errors of omission and errors of commission. Typical errors of commission are stray points of correspondence like the one in cell (H, e) in Figure 9 . This point indicates that (G, H) and (e, f) should form a 2x2 aligned block, whereas the lengths of the component sentences suggest that a pair of lxl blocks is more likely. In a separate development bitext, I have found that SIMR is usually wrong in these cases. To combat such errors, GSA re-aligns any aligned block that is not lxl, using the Gale & Church lengthbased alignment algorithm , Simard 1995 . Whenever the component sentence lengths suggest a more fine-grained alignment, SIMR's output is not trusted. Figure 9 by the complete absence of correspondence points between sentences (B,C,D) and (b, c) . This block of sentences is sandwiched between aligned blocks. It is highly likely that at least some of these sentences are mutual translations, despite SIMR's failure to find any points of correspondence between them. Therefore, GSA treats all empty blocks just like aligned blocks. If an empty block is not lxl, GSA re-aligns it using a length-based algorithm, just like it would re-align any other many-to-many aligned block.",
"cite_spans": [
{
"start": 2112,
"end": 2125,
"text": ", Simard 1995",
"ref_id": null
},
{
"start": 2325,
"end": 2331,
"text": "(b, c)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 445,
"end": 489,
"text": "I I I I I I I @ I I I I I I I",
"ref_id": null
},
{
"start": 490,
"end": 609,
"text": "I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I I",
"ref_id": null
},
{
"start": 1709,
"end": 1717,
"text": "Figure 9",
"ref_id": "FIGREF11"
},
{
"start": 2237,
"end": 2245,
"text": "Figure 9",
"ref_id": "FIGREF11"
},
{
"start": 2313,
"end": 2320,
"text": "(B,C,D)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Alignment",
"sec_num": "4."
},
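{
"text": "A minimal editorial sketch of these first two steps, with sentences represented as integer indices; a production version would also re-merge any blocks whose filled-in ranges come to overlap.\n\ndef gsa_blocks(pairs):\n    # transitive closure: repeatedly merge any two blocks that share a\n    # sentence on either axis\n    blocks = [({x}, {y}) for x, y in pairs]\n    merged = True\n    while merged:\n        merged = False\n        for i in range(len(blocks)):\n            for j in range(i + 1, len(blocks)):\n                xi, yi = blocks[i]\n                xj, yj = blocks[j]\n                if xi & xj or yi & yj:\n                    blocks[i] = (xi | xj, yi | yj)\n                    del blocks[j]\n                    merged = True\n                    break\n            if merged:\n                break\n    # contiguity: add any skipped sentence indices inside each block\n    return [(set(range(min(xs), max(xs) + 1)),\n             set(range(min(ys), max(ys) + 1))) for xs, ys in blocks]\n\n# e.g. with G,H = 0,1 and e,f = 0,1, the input [(0, 0), (1, 0), (1, 1)]\n# yields the single 2x2 block ({0, 1}, {0, 1}), adding the pairing (G,f)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment",
"sec_num": "4."
},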
{
"text": "The most difficult problem occurs when an error of omission occurs next to an error of commission, like in blocks ((), (h)) and ((J, K), (i)). If the point in cell (J,i) should really be in cell (J,h), realignment inside the erroneous blocks would not solve the problem. A naive solution is to merge these blocks and then to re-align them using a length-based method. Unfortunately, this kind of alignment pattern, i.e. 0xl followed by 2xl, is surprisingly often correct. Length-based methods assign very low probabilities to such pattern sequences and usually get them wrong. Therefore, GSA also considers the confidence level with which the length-based alignment algorithm reports its re-alignment. If this confidence level is sufficiently high, GSA accepts the length-based re-alignment; otherwise, the alignment indicated by SIMR's points of correspondence is retained. The minimum confidence at which GSA trusts the length-based re-alignment is a GSA parameter, which has been optimized on a separate development bitext.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Typical errors of omission are illustrated in",
"sec_num": null
},
{
"text": "Due to the paucity of development resources at my disposal, GSA's backing-off heuristics are somewhat ad hoc. Even so, GSA performs at least as well as other alignment algorithms, and usually better. Table 2 compares SIMR's accuracy on the \"easy\" and \"hard\" reference bitexts with the accuracy of two other alignment algorithms, as reported by Simard et al. (1992) . The error metric counts one error for each aligned block in the reference alignment that is missing from the test alignment. The hard constraints correspond to paragraph boundaries.",
"cite_spans": [
{
"start": 344,
"end": 364,
"text": "Simard et al. (1992)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 200,
"end": 207,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Typical errors of omission are illustrated in",
"sec_num": null
},
{
"text": "More important than GSA's current performance is GSA's potential performance. With a bigger development bitext, more effective backingoff heuristics can be developed. More precise input would also make a big difference: GSA's performance will improve whenever SIMR's performance improves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Typical errors of omission are illustrated in",
"sec_num": null
},
{
"text": "Although GSA sometimes backs off to a quadratic-time alignment algorithm, in practice its running time is linear in the number of input sentences. The points of correspondence in SIMR's output are sufficiently dense and precise that GSA backs off only for very small aligned blocks. When the translation lexicon was used in SIMR's matching predicate, the largest aligned block that needed to be re-aligned in the \"easy\" and \"hard\" test bitexts was 5x5. Without the translation lexicon, the largest re-aligned block was 7x7. So, GSA's running time is O(kn), where n is the number of input sentences and k is a small constant proportional to the size of the largest realigned block.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Typical errors of omission are illustrated in",
"sec_num": null
},
{
"text": "Admittedly, GSA is only useful when a good bitext map is available. In such cases, there are three reasons to favor GSA over other options for alignment: One, it is simply more accurate. Two, its running time is linear in the number of sentences, faster than dynamic programming methods. Therefore, three, it is not necessary to manually segment the component texts into smaller units before input to GSA. GSA works almost as well without such \"hard constraints.\" Hard constraints are necessary for alignment algorithms that use dynamic programming, in order to maintain an acceptable running time on longer bitexts , Simard et al. 1992 . SIMR produced bitext maps for 200 megabytes of the Canadian Hansards. GSA converted these maps into alignments. The Linguistic Data Consortium plans to publish both the maps and the alignments in the near future. ]0",
"cite_spans": [
{
"start": 616,
"end": 636,
"text": ", Simard et al. 1992",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Typical errors of omission are illustrated in",
"sec_num": null
},
{
"text": "The Smooth Injective Map Recognizer (SIMR) has five advantages over previous bitext mapping algorithms. First, it lowers average errors by more than a factor of 4. Second, it avoids very large errors, improving robustness to a level that enables new commercial-quality applications. Third, it does not require large amounts of computer memory to run. Fourth, it accepts non-monotonic segments to account for inversions and word order differences. Fifth, its output can be converted quickly and easily into an accurate sentence alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "There are many possible extensions to this work. One interesting observation is that aligned sentences can be used to induce translation lexicons, and translation lexicons are an important information source for bitext mapping and alignment (Kay & RSscheisen 1993 , Chen 1993 . I plan to explore an interactive loop between SIMR, GSA and my algorithm for inducing translation lexicons (Melamed 1995) .",
"cite_spans": [
{
"start": 241,
"end": 263,
"text": "(Kay & RSscheisen 1993",
"ref_id": null
},
{
"start": 264,
"end": 275,
"text": ", Chen 1993",
"ref_id": "BIBREF0"
},
{
"start": 385,
"end": 399,
"text": "(Melamed 1995)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "It would also be interesting to experiment with SIMR and GSA on language pairs that are not as closely related as English and French. The only technique for mapping between more disparate languages that has been rigorously evaluated (Wu 1994 ) relies on length correlations sprinkled with some lexical information. From this point of view, Wu's technique is similar to the one used by Simard et al. (1992) . So, I am eager to see whether the geometric approach will compare as favorably to Wu's results on English and Chinese as it has to Simard et al.'s results on English and French.",
"cite_spans": [
{
"start": 233,
"end": 241,
"text": "(Wu 1994",
"ref_id": "BIBREF8"
},
{
"start": 385,
"end": 405,
"text": "Simard et al. (1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
}
],
"back_matter": [
{
"text": "This research began while I was a visitor at the Centre d'Innovation en Technologies de l'Information in Laval, Canada. I am indebted to Pierre Isabelle for informing me that the bitext mapping problem is far from being solved. This paper has benefited tremendously from the insights and comments of the following people: Mike Collins, Jason Eisner, George Foster, Pierre Isabelle, Elliott Macklovitch, Mitch Marcus, Adwait Ratnaparkhi, Michel Simard, Eero Simoncelli, Matthew Stone, Lyle Ungar and three anonymous reviewers. My work was partially funded by ARO grant DAAL03-89-C0031 PRIME and by ARPA grants N00014-90-J-1863 and N6600194C 6043.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Char_align: A Program for Aligning Parallel Texts at the Character Level",
"authors": [
{
"first": "[",
"middle": [],
"last": "References",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, available from the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References [Brown et al. 1991] P. F. Brown, J. C. Lai ~ R. L. Mercer, \"Aligning Sentences in Parallel Cor- pora,\" Proceedings of the 29th Annual Meet- ing of the Association for Computational Lin- guistics, Berkeley, CA, 1991. [Catizone et al. 1989] R. Catizone, G. Russell & S. Warwick \"Deriving Translation Data from Bilingual Texts,\" Proceedings of the First International Lexical Acquisition Workshop, Detroit, MI, 1993. [Chen 1993] S. Chen, \"Aligning Sentences in Bilingual Corpora Using Lexical Informa- tion,\" Proceedings of the 31st Annual Meet- ing of the Association for Computational Lin- guistics, Columbus, OH, 1993. [Church 1993] K. W. Church, \"Char_align: A Program for Aligning Parallel Texts at the Character Level,\" Proceedings of the 31st An- nual Meeting of the Association for Compu- tational Linguistics, Columbus, OH, 1993. [Cousin et al. 1991] P. g. Cousin, L. Sinclair, J. F. Allain & C. E. Love, The Collins Paper- back French Dictionary, Harper Collins Pub- lishers, Glasgow, 1991. [Dagan et al. 1993] I. Dagan, K. Church, & W. Gale, \"Robust Word Alignment for Machine Aided Translation,\" Proceedings of the Work- shop on Very Large Corpora: Academic and Industrial Perspectives, available from the ACL, 1993.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Appariement des Phrases de Textes Bilingues",
"authors": [
{
"first": ";",
"middle": [
"F"
],
"last": "Debili & Sammouda",
"suffix": ""
},
{
"first": "&",
"middle": [
"E"
],
"last": "Debili",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sammouda",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 14th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debili & Sammouda 1992] F. Debili & E. Sam- mouda \"Appariement des Phrases de Textes Bilingues,\" Proceedings of the 14th Interna- tional Conference on Computational Linguis- tics, Nantes, France, 1992.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Program for Aligning Sentences in Bilingual Corpora",
"authors": [
{
"first": ";",
"middle": [
"P"
],
"last": "Fung",
"suffix": ""
},
{
"first": "",
"middle": [
"W"
],
"last": "Fung & K",
"suffix": ""
},
{
"first": ";",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "",
"middle": [
"W"
],
"last": "Gale & K",
"suffix": ""
},
{
"first": ";",
"middle": [
"B"
],
"last": "Church",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics",
"volume": "54",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fung L: Church 1991] P. Fung & K. W. Church, \"K-vec: A New Approach for Aligning Par- allel Texts,\" Proceedings of the 15th Interna- tional Conference on Computational Linguis- tics, Kyoto, Japan, 1994. [Gale & Church 1991] W. Gale & K. W. Church, \"A Program for Aligning Sentences in Bilin- gual Corpora,\" Proceedings of the 29th An- nual Meeting of the Association for Compu- tational Linguistics, Berkeley, CA, 1991. [Harris 1988] B. Harris, \"Bi-Text, a New Concept in Translation Theory,\" Language Monthly #54, 1988.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Text-Translation Alignment",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kay & M. R6scheisen",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Kay & M. R6scheisen \"Text-Translation Alignment,\" Computational Linguistics 19:1, Boston, MA, 1995.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic Evaluation and Uniform Filter Cascades for Inducing N-best Translation Lexicons",
"authors": [
{
"first": "E",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Macklovitch ; I",
"middle": [
"D D"
],
"last": "Melamed ; I",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Fourth International Conference on Theorelical and Methodological Issues in Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Macklovitch 1995] E, Macklovitch, \"Peut-on ver- ifier automatiquement la coherence ter- minologique?\" Proceedings of the IV e~ Journdes scientifiques, Lexicommalique et Dictionnairiques, organized by AUPELF- UREF, Lyon, France, 1995. [Melamed 1995] I. D. Melamed \"Automatic Eval- uation and Uniform Filter Cascades for In- ducing N-best Translation Lexicons,\" Pro- ceedings of the Third Workshop on Very Large Corpora, Boston, MA, 1995. [Melamed 1996] I. D. Melamed \"Automatic De- tection of Omissions in Translations,\" Pro- ceedings of the 16th International Conference on Computational Linguistics, Copenhagen, Denmark, 1996. [Simard et al. 1992] M. Simard, G. F. Foster & P. Isabelle, \"Using Cognates to Align Sentences in Bilingual Corpora,\" in Proceedings of the Fourth International Conference on Theorel- ical and Methodological Issues in Machine Translation, Montreal, Canada, 1992. [Simard 1995] M. Simard, personal communica- tion, 1995.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Applied Simulated Annealing",
"authors": [
{
"first": ";",
"middle": [
"R V V"
],
"last": "Vidal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vidal",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vidal 1993] R. V. V. Vidal, Applied Simulated Annealing, Springer-Verlag, Heidelberg, Ger- many, 1993.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Aligning a Parallel English-Chinese Corpus Statistically with Lexical Criteria",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "1994] D. Wu, \"Aligning a Parallel English- Chinese Corpus Statistically with Lexical Cri- teria,\" Proceedings of the 32nd Annual Meet- ing of the Association for Computational Lin- guistics, Las Cruces, NM, 1994.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "a bi~ext space"
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "SIMR's \"expanding rectangle\" search strategy. The search reclangle is anchored at the top right corner of the previously found chain. Its diagonal remains parallel to the main diagonal."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Part of a typical scatterplot in bitext space, the true points of correspondence trace the true bitext map parallel to the main diagonal."
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Frequent token types cause false points of correspondence that line up in rows and columns."
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "........ Sentence A ............I"
},
"FIGREF6": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "......................... ; ............... ......... ................ .. i~s~.g.~i_ii_ii~~!.i i= i ............."
},
"FIGREF7": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Segments i and j switched placed during translation. If a more precise map is desired, these larger non-monotonic segments can be easily recovered during a second sweep through the bitext space. Any non-monotonic segment of the TBM will occupy the intersection of a vertical gap and a horizontal gap in the monotonic first-pass map."
},
"FIGREF8": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "If chains C and D are accepted as valid, then the e > maximum angle . mam. deviation threshold cllagonal/\u2022..ii ,,."
},
"FIGREF11": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Sentence boundaries form a grid over the bitext space. Each cell in the grid represents the product of two sentences, one from each component text. A point of correspondence inside cell (X, y) indicates that some token in sentence X corresponds with some token in sentence y; i.e. the sentences X and y correspond. So, for example, sentence E corresponds with sentence d. The aligned blocks are outlined with solid lines."
},
"TABREF1": {
"text": "Comparison of error distributions for SIMR and char_align, in characters.",
"num": null,
"content": "<table><tr><td/><td/><td>median</td><td>99th</td><td>root mean</td></tr><tr><td>bitext</td><td>algorithm</td><td>absolute error</td><td>percentile</td><td>squared error</td></tr><tr><td>\"easy\"</td><td>char_align</td><td>not reported</td><td>200</td><td>57</td></tr><tr><td>Hansard</td><td>SIMR</td><td>0.49</td><td>50</td><td>13</td></tr><tr><td>(7123 ref. pts.)</td><td>SIMR with MRBD</td><td>0.61</td><td>49</td><td>13</td></tr><tr><td>\"hard\"</td><td>char_align</td><td>18</td><td>200</td><td>46</td></tr><tr><td>Hansard</td><td>SIMR</td><td>0.48</td><td>55</td><td>9.8</td></tr><tr><td>(2693 ref. pts.)</td><td>SIMR with MRBD</td><td>0.60</td><td>44</td><td>8.6</td></tr><tr><td colspan=\"2\">map between \"linguistic\" parts of the bitext has a</td><td/><td/><td/></tr><tr><td colspan=\"2\">very different slope. Sometimes, the translation of</td><td/><td/><td/></tr><tr><td colspan=\"2\">non-linguistic text is completely erratic, especially</td><td/><td/><td/></tr><tr><td colspan=\"2\">where white space is concerned. Not surprisingly,</td><td/><td/><td/></tr><tr><td colspan=\"2\">SIMR cannot perform well on such text.</td><td/><td/><td/></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF3": {
"text": "Comparison of alignment algorithms. One error is counted for each aligned block in the reference alignment that is missing from the test alignment.",
"num": null,
"content": "<table><tr><td>errors, given</td><td>errors, not given</td></tr></table>",
"html": null,
"type_str": "table"
}
}
}
}