{
"paper_id": "E06-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:34:25.453660Z"
},
"title": "Recognizing Textual Parallelisms with edit distance and similarity degree",
"authors": [
{
"first": "Marie",
"middle": [],
"last": "Gu\u00e9gan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 de Paris-Sud",
"location": {
"country": "France"
}
},
"email": "guegan@aist.enst.fr"
},
{
"first": "Nicolas",
"middle": [],
"last": "Hernandez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 de Paris-Sud",
"location": {
"country": "France"
}
},
"email": "hernandez@limsi.fr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Detection of discourse structure is crucial in many text-based applications. This paper presents an original framework for describing textual parallelism which allows us to generalize various discourse phenomena and to propose a unique method to recognize them. With this prospect, we discuss several methods in order to identify the most appropriate one for the problem, and evaluate them based on a manually annotated corpus.",
"pdf_parse": {
"paper_id": "E06-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "Detection of discourse structure is crucial in many text-based applications. This paper presents an original framework for describing textual parallelism which allows us to generalize various discourse phenomena and to propose a unique method to recognize them. With this prospect, we discuss several methods in order to identify the most appropriate one for the problem, and evaluate them based on a manually annotated corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Detection of discourse structure is crucial in many text-based applications such as Information Retrieval, Question-Answering, Text Browsing, etc. Thanks to a discourse structure one can precisely point out an information, provide it a local context, situate it globally, link it to others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The context of our research is to improve automatic discourse analysis. A key feature of the most popular discourse theories (RST (Mann and Thompson, 1987) , SDRT (Asher, 1993) , etc.) is the distinction between two sorts of discourse relations or rhetorical functions: the subordinating and the coordinating relations (some parts of a text play a subordinate role relative to other parts, while some others have equal importance).",
"cite_spans": [
{
"start": 130,
"end": 155,
"text": "(Mann and Thompson, 1987)",
"ref_id": "BIBREF10"
},
{
"start": 163,
"end": 176,
"text": "(Asher, 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus our attention on a discourse feature we assume supporting coordination relations, namely the Textual Parallelism. Based on psycholinguistics studies (Dubey et al., 2005) , our intuition is that similarities concerning the surface, the content and the structure of textual units can be a way for authors to explicit their intention to consider these units with the same rhetorical importance.",
"cite_spans": [
{
"start": 173,
"end": 193,
"text": "(Dubey et al., 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Parallelism can be encountered in many specific discourse structures such as continuity in information structure (Kruijff-Korbayov\u00e1 and Kruijff, 1996) , frame structures (Charolles, 1997) , VP ellipses (Hobbs and Kehler, 1997) , headings (Summers, 1998) , enumerations (Luc et al., 1999) , etc. These phenomena are usually treated mostly independently within individual systems with ad-hoc resource developments.",
"cite_spans": [
{
"start": 113,
"end": 150,
"text": "(Kruijff-Korbayov\u00e1 and Kruijff, 1996)",
"ref_id": "BIBREF8"
},
{
"start": 170,
"end": 187,
"text": "(Charolles, 1997)",
"ref_id": "BIBREF2"
},
{
"start": 202,
"end": 226,
"text": "(Hobbs and Kehler, 1997)",
"ref_id": "BIBREF6"
},
{
"start": 238,
"end": 253,
"text": "(Summers, 1998)",
"ref_id": "BIBREF11"
},
{
"start": 269,
"end": 287,
"text": "(Luc et al., 1999)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we argue that, depending on description granularity we can proceed, computing syntagmatic (succession axis of linguistic units) and paradigmatic (substitution axis) similarities between units can allow us to generically handle such discourse structural phenomena. Section 2 introduces the discourse parallelism phenomenon. Section 3 develops three methods we implemented to detect it: a similarity degree measure, a string editing distance (Wagner and Fischer, 1974 ) and a tree editing distance 1 (Zhang and Shasha, 1989) . Section 4 discusses and evaluates these methods and their relevance. The final section reviews related work.",
"cite_spans": [
{
"start": 454,
"end": 479,
"text": "(Wagner and Fischer, 1974",
"ref_id": "BIBREF12"
},
{
"start": 512,
"end": 536,
"text": "(Zhang and Shasha, 1989)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our notion of parallelism is based on similarities between syntagmatic and paradigmatic representations of (constituents of) textual units. These similarities concern various dimensions from shallow to deeper description: layout, typography, morphology, lexicon, syntax, and semantics. This account is not limited to the semantic dimension as defined by (Hobbs and Kehler, 1997) who consider text fragments as parallel if the same predicate can be inferred from them with coreferential or similar pairs of arguments.",
"cite_spans": [
{
"start": 354,
"end": 378,
"text": "(Hobbs and Kehler, 1997)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Textual parallelism",
"sec_num": "2"
},
{
"text": "We observe parallelism at various structural levels of text: among heading structures, VP ellipses and others, enumerations of noun phrases in a sentence, enumerations with or without markers such as frame introducers (e.g. \"In France, . . . In Italy, . . . \") or typographical and layout markers. The underlying assumption is that parallelism between some textual units accounts for a rhetorical coordination relation. It means that these units can be regarded as equally important.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Textual parallelism",
"sec_num": "2"
},
{
"text": "By describing textual units in a two-tier framework composed of a paradigmatic level and syntagmatic level, we argue that, depending on the description granularity we consider (potentially at the character level for item numbering), we can detect a wide variety of parallelism phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Textual parallelism",
"sec_num": "2"
},
{
"text": "Among parallelism properties, we note that the parallelism of a given number of textual units is based on the parallelism of their constituents. We also note that certain semantic classes of constituents, such as item numbering, are more effective in marking parallelism than others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Textual parallelism",
"sec_num": "2"
},
{
"text": "The following example is extracted from our corpus (see section 4.1). In this case, we have an enumeration without explicit markers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example of parallelism",
"sec_num": "2.1"
},
{
"text": "For the purposes of chaining, each type of link between WordNet synsets is assigned a direction of up, down, or horizontal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example of parallelism",
"sec_num": "2.1"
},
{
"text": "Upward links correspond to generalization: for example, an upward link from apple to fruit indicates that fruit is more general than apple.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example of parallelism",
"sec_num": "2.1"
},
{
"text": "Downward links correspond to specialization: for example, a link from fruit to apple would have a downward direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example of parallelism",
"sec_num": "2.1"
},
{
"text": "Horizontal links are very specific specializations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example of parallelism",
"sec_num": "2.1"
},
{
"text": "The parallelism pattern of the first two items is described as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example of parallelism",
"sec_num": "2.1"
},
{
"text": "[JJ + suff =ward] links correspond to [NN + suff = alization] : for example , X link from Y to Z .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example of parallelism",
"sec_num": "2.1"
},
{
"text": "This pattern indicates that several item constituents can be concerned by parallelism and that similarities can be observed at the typographic, lexical and syntactic description levels. Tokens (words or punctuation marks) having identical shallow descriptions are written in italics. The X, Y and Z variables stand for matching any nonparallel text areas between contiguous parallel textual units. Some words are parallel based on their syntactic category (\"JJ\" / adjectives, \"NN\" / nouns) or suffix specifications (\"suff\" attribute). The third item is similar to the first two items but with a simpler pattern:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example of parallelism",
"sec_num": "2.1"
},
{
"text": "JJ links U [NN + suff =alization] W .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example of parallelism",
"sec_num": "2.1"
},
{
"text": "Parallelism is distinguished by these types of similarities between sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example of parallelism",
"sec_num": "2.1"
},
{
"text": "Three methods were used in this study. Given a pair of sentences, they all produce a score of similarity between these sentences. We first present the preprocessing to be performed on the texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "The texts were automatically cut into sentences. The first two steps hereinafter have been applied for all the methods. The last third was not applied for the tree editing distance (see 3.3). Punctuation marks and syntactic labels were henceforward considered as words. 1. Text homogenization: lemmatization together with a semantic standardization. Lexical chains are built using WordNet relations, then words are replaced by their most representative synonym:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prior processing applied on the texts",
"sec_num": "3.1"
},
{
"text": "Horizontal links are specific specializations. horizontal connection be specific specialization .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prior processing applied on the texts",
"sec_num": "3.1"
},
{
"text": "2. Syntactic analysis by (Charniak, 1997) 's parser:",
"cite_spans": [
{
"start": 25,
"end": 41,
"text": "(Charniak, 1997)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior processing applied on the texts",
"sec_num": "3.1"
},
{
"text": "( S1 ( S ( NP ( JJ Horizontal ) (NNS links ) ( VP ( AUX are) ( NP ( ADJP ( JJ specific ) ( NNS specializations ) ( SENT .))))))) 3. Syntactic structure flattening: S1 S NP JJ Horizontal NNS links VP AUX are NP ADJP JJ specific NNS specializations SENT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prior processing applied on the texts",
"sec_num": "3.1"
},
{
"text": "This method is based on Wagner & Fischer's string edit distance algorithm (Wagner and Fischer, 1974) , applied to sentences viewed as strings of words. It computes a sentence edit distance, using edit operations on these elementary entities.",
"cite_spans": [
{
"start": 74,
"end": 100,
"text": "(Wagner and Fischer, 1974)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Wagner & Fischer's string edit distance",
"sec_num": "3.2"
},
{
"text": "The idea is to use edit operations to transform sentence S 1 into S 2 . Similarly to (Wagner and Fischer, 1974) , we considered three edit operations:",
"cite_spans": [
{
"start": 85,
"end": 111,
"text": "(Wagner and Fischer, 1974)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Wagner & Fischer's string edit distance",
"sec_num": "3.2"
},
{
"text": "1. replacing word x \u2208 S 1 by y \u2208 S 2 : (x \u2192 y) 2. deleting word x \u2208 S 1 : (x \u2192 \u03bb) 3. inserting word y \u2208 S 2 into S 1 : (\u03bb \u2192 y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wagner & Fischer's string edit distance",
"sec_num": "3.2"
},
{
"text": "By definition, the cost of a sequence of edit operations is the sum of the costs 2 of the elementary operations, and the distance between S 1 and S 2 is the cost of the least cost transformation of S 1 into S 2 . Wagner & Fischer's method provides a simple and effective way (O(|S 1 ||S 2 |)) to compute it. To reduce size effects, we normalized by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wagner & Fischer's string edit distance",
"sec_num": "3.2"
},
{
"text": "|S 1 |+|S 2 | 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wagner & Fischer's string edit distance",
"sec_num": "3.2"
},
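The normalized distance just described can be sketched in a few lines (an illustrative Python rendering of Wagner & Fischer's dynamic program with unit costs, not the authors' implementation):

```python
def edit_distance(s1, s2):
    """Wagner & Fischer edit distance between two word sequences,
    with unit costs for replacement, deletion and insertion."""
    n, m = len(s1), len(s2)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i                      # delete every word of s1[:i]
    for j in range(1, m + 1):
        d[0][j] = j                      # insert every word of s2[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if s1[i - 1] == s2[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion  (x -> lambda)
                          d[i][j - 1] + 1,        # insertion (lambda -> y)
                          d[i - 1][j - 1] + sub)  # replacement (x -> y)
    return d[n][m]

def normalized_distance(s1, s2):
    # Normalization by the mean sentence length, to reduce size effects.
    return edit_distance(s1, s2) / ((len(s1) + len(s2)) / 2)
```

The quadratic table corresponds to the O(|S1||S2|) cost mentioned above.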
{
"text": "Zhang & Shasha's method (Zhang and Shasha, 1989; Dulucq and Tichit, 2003) generalizes Wagner & Fischer's edit distance to trees: given two trees T 1 and T 2 , it computes the least-cost sequence of edit operations that transforms T 1 into T 2 . Elementary operations have unitary costs and apply to nodes (labels and words in the syntactic trees). These operations are depicted below: substitution of node c by node g (top figure), insertion of node d (middle fig.) , and deletion of node d (bottom fig.) , each read from left to right.",
"cite_spans": [
{
"start": 24,
"end": 48,
"text": "(Zhang and Shasha, 1989;",
"ref_id": "BIBREF13"
},
{
"start": 49,
"end": 73,
"text": "Dulucq and Tichit, 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 460,
"end": 465,
"text": "fig.)",
"ref_id": null
},
{
"start": 499,
"end": 504,
"text": "fig.)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Zhang & Shasha's algorithm",
"sec_num": "3.3"
},
{
"text": "Tree edit distance d(T 1 , T 2 ) is determined after a series of intermediate calculations involving special subtrees of T 1 and T 2 , rooted in keyroots.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zhang & Shasha's algorithm",
"sec_num": "3.3"
},
{
"text": "Given a certain node x, L(x) denotes its leftmost leaf descendant. L is an equivalence relation over nodes and keyroots (KR) are by definition the equivalence relation representatives of highest postfix index. Special subtrees (SST) are the subtrees rooted in these keyroots. Consider a tree T postfix indexed (left figure hereinafter) and its three SSTs (right figure).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyroots, special subtrees and forests",
"sec_num": "3.3.1"
},
{
"text": "SST(k 1 ) rooted in k 1 is denoted: T [L(k 1 ), L(k 1 ) + 1, . . . , k 1 ]. E.g: SST(3) = T [1, 2, 3] is the subtree containing nodes a, b, d. A forest of SST(k 1 ) is defined as: T [L(k 1 ), L(k 1 ) + 1, . . . , x]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyroots, special subtrees and forests",
"sec_num": "3.3.1"
},
{
"text": ", where x is a node of SST(k 1 ). E.g: SST(3) has 3 forests :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyroots, special subtrees and forests",
"sec_num": "3.3.1"
},
{
"text": "T [1] (node a), T [1, 2]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyroots, special subtrees and forests",
"sec_num": "3.3.1"
},
{
"text": "(nodes a and b) and itself. Forests are ordered sequences of subtrees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyroots, special subtrees and forests",
"sec_num": "3.3.1"
},
{
"text": "The algorithm computes the distance between all pairs of SSTs taken in T 1 and T 2 , rooted in increasingly-indexed keyroots. In the end, the last SSTs being the full trees, we have d(T 1 , T 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An idea of how it works",
"sec_num": "3.3.2"
},
{
"text": "In the main routine, an N 1 \u00d7 N 2 array called TREEDIST is progressively filled with values TREEDIST(i, j) equal to the distance between the subtree rooted in T 1 's i th node and the subtree rooted in T 2 's j th node. The bottom right-hand cell of TREEDIST is therefore equal to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An idea of how it works",
"sec_num": "3.3.2"
},
{
"text": "d(T 1 , T 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An idea of how it works",
"sec_num": "3.3.2"
},
{
"text": "Each step of the algorithm determines the edit distance between two SSTs rooted in keyroots (k 1 , k 2 ) \u2208 (T 1 \u00d7 T 2 ). An array FDIST is initialized for this step and contains as many lines and columns as the two given SSTs have nodes. The array is progressively filled with the distances between increasing forests of these SSTs, similarly to Wagner & Fischer's method. The bottom right-hand value of FDIST contains the distance between the SSTs, which is then stored in TREEDIST in the appropriate cell. Calculations in FDIST and TREEDIST rely on the double recurrence formula depicted below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An idea of how it works",
"sec_num": "3.3.2"
},
{
"text": "The first formula is used to compute the distance between two forests (a white one and a black one), each of which is composed of several trees. The small circles stand for the nodes of highest postfix index. Distance between two forests is defined as the minimum cost operation between three possibilities: replacing the rightmost white tree by the rightmost black tree, deleting the white node, or inserting the black node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An idea of how it works",
"sec_num": "3.3.2"
},
{
"text": "The second formula is analogous to the first one, in the special case where the forests are reduced to a single tree. The distance is defined as the minimum cost operation between: replacing the white node with the black node, deleting the white node, or inserting the black node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An idea of how it works",
"sec_num": "3.3.2"
},
{
"text": "It is important to notice that the first formula takes the left context of the considered subtrees into account 3 : ancestor and left sibling orders are preserved. It is not possible to replace the white node with the black node directly, the whole subtree rooted in the white node has to be replaced. The good thing is, the cost of this operation has already been computed and stored in TREEDIST.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An idea of how it works",
"sec_num": "3.3.2"
},
{
"text": "Let us see why all the computations required at a given step of the recurrence formula have already been calculated. Let two SSTs of T1 and T2 be rooted in pos1 and pos2. Given the symmetry of the problem, let us only consider what happens with T1. When filling FDIST(pos1, pos2), all nodes belonging to SST(pos1) are run through, by increasing postfix index. Consider x \u2208 T[L(pos1), . . . , pos1]. If L(x) = L(pos1), then x belongs to the leftmost branch of T[L(pos1), . . . , pos1] and the forest T[L(pos1), . . . , x] is reduced to a single tree; by construction, all FDIST(T[L(pos1), . . . , y], \u2212) for y \u2264 x \u2212 1 have already been computed. Otherwise, the recurrence needs TREEDIST for the subtree rooted in x: \u2022 If x is a keyroot, since the algorithm runs through keyroots in increasing order, TREEDIST(T[L(x), . . . , x], \u2212) has already been computed. \u2022 If x is not a keyroot, then there exists a node z such that x < z < pos1, z is a keyroot and L(z) = L(x). Therefore x belongs to the leftmost branch of T[L(z), . . . , z], which means TREEDIST(T[L(z), . . . , x], \u2212) has already been computed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An idea of how it works",
"sec_num": "3.3.2"
},
{
"text": "The complexity of this algorithm is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An idea of how it works",
"sec_num": "3.3.2"
},
{
"text": "O(|T1| \u00d7 |T2| \u00d7 min(p(T1), f (T1)) \u00d7 min(p(T2), f (T2)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An idea of how it works",
"sec_num": "3.3.2"
},
{
"text": "where d(T i ) is the depth T i and f (T i ) is the number of terminal nodes of T i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An idea of how it works",
"sec_num": "3.3.2"
},
{
"text": "3 The 2 nd formula does too, since left context is empty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An idea of how it works",
"sec_num": "3.3.2"
},
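The whole procedure of section 3.3 (postfix indexing, leftmost leaves, keyroots, FDIST and TREEDIST) can be condensed into a short sketch. This is a generic Python rendering of Zhang & Shasha's published algorithm with unit costs, with trees given as (label, children) pairs; it is an illustration, not the authors' code:

```python
def annotate(tree):
    """Postorder-index a tree given as (label, [children]); return the
    list of labels and, for each node, the index of its leftmost leaf."""
    labels, lml = [], []
    def walk(node):
        label, children = node
        first = None
        for child in children:
            idx = walk(child)
            if first is None:
                first = lml[idx]
        labels.append(label)
        lml.append(first if first is not None else len(labels) - 1)
        return len(labels) - 1
    walk(tree)
    return labels, lml

def tree_dist(t1, t2):
    """Zhang & Shasha tree edit distance with unit edit costs."""
    lab1, l1 = annotate(t1)
    lab2, l2 = annotate(t2)
    # Keyroots: for each leftmost-leaf class, the node of highest index.
    kr1 = sorted({l1[i]: i for i in range(len(lab1))}.values())
    kr2 = sorted({l2[j]: j for j in range(len(lab2))}.values())
    td = [[0] * len(lab2) for _ in range(len(lab1))]   # TREEDIST
    for i in kr1:
        for j in kr2:
            m, n = i - l1[i] + 2, j - l2[j] + 2
            fd = [[0] * n for _ in range(m)]           # FDIST for this step
            for x in range(1, m):
                fd[x][0] = fd[x - 1][0] + 1            # deletions
            for y in range(1, n):
                fd[0][y] = fd[0][y - 1] + 1            # insertions
            for x in range(1, m):
                for y in range(1, n):
                    xi, yj = l1[i] + x - 1, l2[j] + y - 1
                    if l1[xi] == l1[i] and l2[yj] == l2[j]:
                        # Both forests are single trees: plain recurrence.
                        cost = 0 if lab1[xi] == lab2[yj] else 1
                        fd[x][y] = min(fd[x - 1][y] + 1,
                                       fd[x][y - 1] + 1,
                                       fd[x - 1][y - 1] + cost)
                        td[xi][yj] = fd[x][y]          # store in TREEDIST
                    else:
                        # Replace the whole rightmost subtree: its cost is
                        # already available in TREEDIST.
                        p, q = l1[xi] - l1[i], l2[yj] - l2[j]
                        fd[x][y] = min(fd[x - 1][y] + 1,
                                       fd[x][y - 1] + 1,
                                       fd[p][q] + td[xi][yj])
    return td[-1][-1]
```

The two branches of the inner loop correspond to the two recurrence formulas discussed above.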
{
"text": "This final method computes a degree of similarity between two sentences, considered as lists of syntactic (labels) and lexical (words) constituents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our proposal: a degree of similarity",
"sec_num": "3.4"
},
{
"text": "Because some constituents are more likely to indicate parallelism than others (e.g: the list item marker is more pertinent than the determiner \"a\"), a crescent weight function p(x) \u2208 [0, 1] w.r.t. pertinence is assigned to all lexical and syntactic constituents x. A set of special subsentences is then generated: the greatest common divisor of S 1 and S 2 , gcd(S 1 , S 2 ), is defined as the longest list of words common to S 1 and S 2 . Then for each sentence S i , the set of special subsentences is computed using the words of gcd(S 1 , S 2 ) according to their order of appearance in S i . For example, if S 1 = cabcad and S 2 = acbae, gcd(S 1 , S 2 ) = {c, a, b, a}. The set of subsentences for S 1 is {caba, abca} and the set for S 2 is reduced to {acba}. Note that any generated subsentence is exactly the size of gcd(S 1 , S 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our proposal: a degree of similarity",
"sec_num": "3.4"
},
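The gcd and the subsentence sets just defined can be reproduced with a small sketch (illustrative Python, assuming the "gcd" is the multiset of words common to both sentences and that subsentences preserve each sentence's order of appearance):

```python
from collections import Counter
from itertools import combinations

def gcd_bag(s1, s2):
    """Multiset of words common to both sentences (the 'gcd' of 3.4)."""
    return Counter(s1) & Counter(s2)

def subsentences(s, bag):
    """All subsequences of s that use, for each word w, exactly bag[w]
    of its occurrences, taken in their order of appearance in s."""
    results = set()
    positions = {w: [i for i, x in enumerate(s) if x == w] for w in bag}
    def build(words, chosen):
        if not words:
            results.add(tuple(s[i] for i in sorted(chosen)))
            return
        w, rest = words[0], words[1:]
        # Pick which bag[w] occurrences of w to keep.
        for combo in combinations(positions[w], bag[w]):
            build(rest, chosen + list(combo))
    build(list(bag), [])
    return results
```

On the example above, gcd(cabcad, acbae) contains c, a, b, a and yields the subsentences {caba, abca} for S1 and {acba} for S2.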
{
"text": "For any two subsentences s 1 and s 2 , we define a degree of similarity D(s 1 , s 2 ), inspired from string edit distances: The further a constituent from s 1 is from its symmetric occurrence in s 2 , the more similar the compared subsentences are. Eventually, the degree of similarity between sentences S 1 and S 2 is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our proposal: a degree of similarity",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D(s1, s2) = n X i=1 \" dmax \u2212 d(xi) dmax \u00d7 p(xi) \u00ab 8 > > > > > > > < > > > > > > > :",
"eq_num": "n"
}
],
"section": "Our proposal: a degree of similarity",
"sec_num": "3.4"
},
{
"text": "D(S1, S2) = 2 |S1| + |S2| \u00d7 max s1,s2 D(s1, s2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our proposal: a degree of similarity",
"sec_num": "3.4"
},
{
"text": "Consider S 1 = cabcad and S 2 = acbae, along with their subsentences s 1 = caba and s 1 = abca for S 1 , and s 2 = acba for S 2 . The degrees of parallelism between s 1 and s 2 , and between s 1 and s 2 are computed. The mapping between the parallel constituents is shown below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": null
},
{
"text": "For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": null
},
{
"text": "D(s1, s2) = 4 X i=1 \" 3 \u2212 d(xi) 3 \u00d7 p(xi) \u00ab = 2/3p(c) + 2/3p(a) + p(b) + p(a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": null
},
{
"text": "Assume p(b) = p(c) = 1 2 and p(a) = 1. Then D(s 1 , s 2 ) = 2.5 and, similarly D(s 1 , s 2 ) 2.67. Therefore the normalized degree of parallelism is D(S 1 , S 2 ) = 2 5+6 \u00d7 2.67, which is about 0.48.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": null
},
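The computation can be sketched as follows. This is our reading of the example, under two assumptions inferred from its numbers (they are ours, not stated explicitly in the text): d(xi) is the absolute offset between a constituent's position in s1 and the position of its matching occurrence in s2, and dmax is the subsentence length minus one:

```python
def degree(sub1, sub2, p):
    """Degree of similarity between two equal-length subsentences; p maps
    each constituent to its pertinence weight in [0, 1]. The i-th
    occurrence of a symbol in sub1 is matched with its i-th occurrence
    in sub2 (every symbol of sub1 is assumed to occur in sub2)."""
    dmax = len(sub1) - 1
    pos2 = {}
    for j, w in enumerate(sub2):
        pos2.setdefault(w, []).append(j)
    seen, total = {}, 0.0
    for i, w in enumerate(sub1):
        k = seen.get(w, 0)
        seen[w] = k + 1
        d = abs(i - pos2[w][k])          # offset between matched occurrences
        total += (dmax - d) / dmax * p(w)
    return total

def parallelism(S1, S2, subs1, subs2, p):
    # Normalized degree of parallelism between the full sentences.
    best = max(degree(a, b, p) for a in subs1 for b in subs2)
    return 2 / (len(S1) + len(S2)) * best
```

With p(a) = 1 and p(b) = p(c) = 1/2, this reproduces the values 2.5, 2.67 and 0.48 of the example.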
{
"text": "This section describes the methodology employed to evaluate performances. Then, after a preliminary study of our corpus, results are presented successively for each method. Finally, the behavior of the methods is analyzed at sentence level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "Our parallelism detection is an unsupervised clustering application: given a set of pairs of sentences, it automatically classifies them into the class of the parallelisms and the remainders class. Pairs were extracted from 5 scientific articles written in English, each containing about The idea was to compute for each pair a parallelism score indicating the similarity between the sentences. Then the choice of a threshold determined which pairs showed a score high enough to be classified as parallel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "Evaluation was based on a manual annotation we proceeded over the texts. In order to reduce computational complexity, we only considered the parallelism occurring between consecutive sentences. For each sentence, we indicated the index of its parallel sentence. We assumed transitivity of parallelism : if S 1 //S 2 and S 2 //S 3 , then S 1 //S 3 . It was thus considered sufficient to indicate the index of S 1 for S 2 and the index of S 2 for S 3 to account for a parallelism between S 1 , S 2 and S 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "We annotated pairs of sentences where textual parallelism led us to rhetorically coordinate them. The decision was sometimes hard to make. Yet we annotated it each time to get more data and to study the behavior of the methods on these examples, possibly penalizing our applications. In the end, 103 pairs were annotated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "We used the notions of precision (correctness) and recall (completeness). Because efforts in im-proving one often result in degrading the other, the F-measure (harmonic mean) combines them into a unique parameter, which simplifies comparisons of results. Let P be the set of the annotated parallelisms and Q the set of the pairs automatically classified in the parallelisms after the use of a threshold. Then the associated precision p, recall r and F-measure f are defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "p = |P \u2229 Q| |Q| r = |P \u2229 Q| |P | f = 2 1/p + 1/q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "As we said, the unique task of the implemented methods was to assign parallelism scores to pairs of sentences, which are collected in a list. We manually applied various thresholds to the list and computed their corresponding F-measure. We kept as a performance indicator the best F-measure found. This was performed for each method and on each text, as well as on the texts all gathered together.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
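The evaluation loop just described (score every pair, sweep thresholds, keep the best F-measure) can be sketched as follows (illustrative Python with hypothetical data, not the authors' evaluation code):

```python
def f_measure(p, r):
    """Harmonic mean of precision and recall."""
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def best_f_measure(scores, gold):
    """scores: parallelism score of each pair; gold: True if the pair was
    manually annotated as a parallelism. Tries every observed score as a
    threshold and returns the best F-measure reached."""
    best = 0.0
    for t in sorted(set(scores)):
        predicted = [s >= t for s in scores]
        tp = sum(1 for pr, g in zip(predicted, gold) if pr and g)
        q, n_gold = sum(predicted), sum(gold)
        if q == 0 or n_gold == 0:
            continue
        best = max(best, f_measure(tp / q, tp / n_gold))
    return best
```

In the paper this sweep is applied per text and on the gathered texts, to study whether a single generic threshold is viable.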
{
"text": "This paragraph underlines some of the characteristics of the corpus, in particular the distribution of the annotated parallelisms in the texts for adjacent sentences. Green and Oakes show significantly more parallelisms than the other texts. Therefore, if we consider a lazy method that would put all pairs in the class of parallelisms, Green and Oakes will yield a priori better results. Precision is indeed directly related to the percentage of parallelisms in the text. In this case, it is exactly this percentage, and it gives us a minimum value of the F-measure our methods should at least reach: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminary corpus study",
"sec_num": "4.2"
},
{
"text": "We first present the results of a very simple and thus very fast method. This baseline counts the words sentences S 1 and S 2 have in common, and normalizes the result by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A baseline: counting words in common",
"sec_num": "4.3"
},
{
"text": "|S 1 |+|S 2 | 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A baseline: counting words in common",
"sec_num": "4.3"
},
{
"text": "in order to reduce size effects. No syntactic analysis nor lexical homogenization was performed on the texts. Results for this method are summarized in the following table. The last column shows the loss (%) in F-measure after applying a generic threshold (the optimal threshold found when all texts are gathered together) on each text. We first note that results are twice as good as with the lazy approach, with Green and Oakes far above the rest. Yet this is not sufficient for a real application. Furthermore, the optimal threshold is very different from one text to another, which makes the learning of a generic threshold able to detect parallelisms for any text impossible. The only advantage here is the simplicity of the method: no prior treatment was performed on the texts before the search, and the counting itself was very fast.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A baseline: counting words in common",
"sec_num": "4.3"
},
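The baseline fits in a few lines (an illustrative Python sketch; we assume the counting is a multiset intersection of the two word sequences, normalized by the mean sentence length as stated):

```python
from collections import Counter

def baseline_score(s1, s2):
    """Number of words the two sentences have in common (multiset
    intersection), normalized by the mean sentence length."""
    common = sum((Counter(s1) & Counter(s2)).values())
    return common / ((len(s1) + len(s2)) / 2)
```

A score of 1.0 means the sentences are permutations of each other; thresholding this score gives the baseline classifier.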
{
"text": "We present the results for the 1 st method below: Green and Oakes still yield the best results, but the other texts have almost doubled theirs. Results for Oakes are especially good: an F-measure of 82% guaranties high precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "String edit distance",
"sec_num": "4.4"
},
{
"text": "In addition, the use of a generic threshold on each text had little influence on the value of the F-measure. The greatest loss is for Sand and only corresponds to the adjunction of four pairs of sentences in the class of parallelisms. The selection of a unique generic threshold to predict parallelisms should therefore be possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "String edit distance",
"sec_num": "4.4"
},
{
"text": "The algorithm was applied using unitary edit costs. Since it did not seem natural to establish mappings between different levels of the sentence, edit operations between two constituents of different nature (e.g: substitution of a lexical by a syntactic element) were forbidden by a prohibitive cost (1000). However, this banning only improved the results shyly, unfortunately. As illustrated in the table above, results are comparable to those previously found. We note an especially good F-measure for Sand: 52%, against 47% for the string edit distance. Optimal thresholds were quite similar from one text to another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree edit distance",
"sec_num": "4.5"
},
{
"text": "Because of the high complexity of this method, a heuristic was applied. The generation of the subsentences is indeed in C k i n i , k i being the number of occurrences of the constituent x i in gcd, and n i the number of x i in the sentence. We chose to limit the generation to a fixed amount of subsentences. The constituents that have a great C k i n i bring too much complexity: we chose to eliminate their (n i \u2212 k i ) last occurrences and to keep their k i first occurrences only to generate subsequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Degree of similarity",
"sec_num": "4.6"
},
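The heuristic can be sketched as follows: keeping k_i of the n_i occurrences of a constituent yields C(n_i, k_i) combinations; when that exceeds the generation budget, only the k_i first occurrences are retained. The budget value of 30 comes from the experiment reported below; the function name is an illustrative assumption:

```python
from math import comb

MAX_SUBSENTENCES = 30  # generation budget found sufficient experimentally

def keep_occurrences(positions, k):
    # positions: where constituent x_i occurs in the sentence (n_i of them);
    # k: its number of occurrences in gcd (k_i).
    # If C(n_i, k_i) exceeds the budget, drop the (n_i - k_i) last
    # occurrences and keep only the k_i first ones.
    if comb(len(positions), k) > MAX_SUBSENTENCES:
        return positions[:k]
    return positions
```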
{
"text": "An experiment was conducted to determine the maximum number of subsentences that could be generated in a reasonable amount of time without significant performance loss; 30 proved sufficient. In another experiment, different parallelism weights were assigned to lexical constituents and syntactic labels, in order to understand their relative importance for parallelism detection. Results show that lexical constituents play a significant role, but conclusions are more difficult to draw for syntactic labels. We therefore decided that, from then on, the lexical weight would be given the maximum value, 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Degree of similarity",
"sec_num": "4.6"
},
{
"text": "Finally, we assigned different weights to the syntactic labels. Weights were chosen after counting the occurrences of the labels in the corpus: for each label, we counted the percentage of its occurrences appearing in the gcd of the parallelisms with respect to those appearing in the gcd of the other pairs. Percentages were then rescaled from 0 to 1 in order to emphasize differences between labels. The resulting parallelism weights measure the role each label plays in the detection of parallelism. Results for this experiment appear in the table below. The optimal F-measures were comparable to those obtained in 4.4, and the corresponding thresholds were similar from one text to another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Degree of similarity",
"sec_num": "4.6"
},
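The rescaling step maps the raw label percentages onto [0, 1]; a min-max transform (one plausible reading of "rescaled from 0 to 1") gives the least discriminative label weight 0 and the most discriminative weight 1. A minimal sketch with an assumed function name:

```python
def rescale(percentages):
    # Min-max rescale raw label percentages to [0, 1] to emphasize
    # differences between syntactic labels.
    lo, hi = min(percentages.values()), max(percentages.values())
    span = hi - lo
    if span == 0:
        return {label: 1.0 for label in percentages}
    return {label: (p - lo) / span for label, p in percentages.items()}
```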
{
"text": "This section showed that the three proposed methods all outperform the baseline and yield comparable results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Degree of similarity",
"sec_num": "4.6"
},
{
"text": "The next section presents the results at sentence level, together with a comparison of these three methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Degree of similarity",
"sec_num": "4.6"
},
{
"text": "The different methods often agreed but sometimes reacted quite differently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis at sentence level",
"sec_num": "4.7"
},
{
"text": "Some parallelisms were found by each method with no difficulty: they were given a high degree of parallelism by each method. Typically, such sentences presented a strong lexical and syntactic similarity, as in the example in section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Well retrieved parallelisms",
"sec_num": null
},
{
"text": "Other parallelisms received very low scores from each method. This happened when the annotated parallelism was lexically and syntactically poor and needed either contextual information or external semantic knowledge to find keywords (e.g., \"first\", \"second\", ...), paraphrases or patterns (e.g., \"X:Y\" in the following example (Kan)):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallelisms hard to find",
"sec_num": null
},
{
"text": "Rear: a paragraph in which a link just stopped occurring the paragraph before.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallelisms hard to find",
"sec_num": null
},
{
"text": "No link: any remaining paragraphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallelisms hard to find",
"sec_num": null
},
{
"text": "Finally, we present some parallelisms that obtained very different scores depending on the method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different methods, different results",
"sec_num": null
},
{
"text": "First, it seems that a different ordering of the parallel constituents in the sentences alters the performance of the edit distance algorithms (3.2; 3.3). The following example (Green) received a low score with both methods:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different methods, different results",
"sec_num": null
},
{
"text": "When we consider AnsV as our dependent variable, the model for the High Web group is still not significant, and there is still a high probability that the coefficient of LI is 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different methods, different results",
"sec_num": null
},
{
"text": "For our Low Web group, who followed significantly more intra-article links than the High Web group, the model that results is significant and has the following equation: <EQN/>. This is because neither algorithm allows the inversion of two constituents; both are thus unable to find all the links from the first sentence to the other. The parallelism measure, in contrast, is robust to inversion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different methods, different results",
"sec_num": null
},
{
"text": "Sometimes, the syntactic parser gave different analyses for the same expression, which made mapping between the sentences containing this expression more difficult, especially for the tree edit distance. The syntactic structure matters less for the other methods, which are thus less sensitive to an incorrect analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different methods, different results",
"sec_num": null
},
{
"text": "Finally, the parallelism measure seems better suited to a diffuse distribution of the parallel constituents in the sentences, whereas edit distances seem more appropriate when parallel constituents are concentrated in a certain part of the sentences, in similar syntactic structures. The following example (Green) obtained very high scores with the edit distances only:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different methods, different results",
"sec_num": null
},
{
"text": "Strong relations are also said to exist between words that have synsets connected by a single horizontal link or words that have synsets connected by a single IS-A or INCLUDES relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different methods, different results",
"sec_num": null
},
{
"text": "A regular relation is said to exist between two words when there is at least one allowable path between a synset containing the first word and a synset containing the second word in the Word-Net database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different methods, different results",
"sec_num": null
},
{
"text": "Experimental work in psycholinguistics has shown the importance of the parallelism effect in human language processing. Due to some kind of priming (syntactic, phonetic, lexical, etc.), the comprehension and the production of a parallel utterance are made faster (Dubey et al., 2005).",
"cite_spans": [
{
"start": 148,
"end": 184,
"text": "(syntactic, phonetic, lexical, etc.)",
"ref_id": null
},
{
"start": 263,
"end": 283,
"text": "(Dubey et al., 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "So far, most of the works were led in order to acquire resources and to build systems to retrieve specific parallelism phenomena. In the field of information structure theories, (Kruijff-Korbayov\u00e1 and Kruijff, 1996) implemented an ad-hoc system to identify thematic continuity (lexical relation between the subject parts of consecutive sentences). (Luc et al., 1999) described and classified markers (lexical clues, layout and typography) occurring in enumeration structures. (Summers, 1998) also described the markers required for retrieving heading structures. (Charolles, 1997) was involved in the description of frame introducers.",
"cite_spans": [
{
"start": 178,
"end": 215,
"text": "(Kruijff-Korbayov\u00e1 and Kruijff, 1996)",
"ref_id": "BIBREF8"
},
{
"start": 348,
"end": 366,
"text": "(Luc et al., 1999)",
"ref_id": "BIBREF9"
},
{
"start": 476,
"end": 491,
"text": "(Summers, 1998)",
"ref_id": "BIBREF11"
},
{
"start": 563,
"end": 580,
"text": "(Charolles, 1997)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "Integrating specialized resources dedicated to parallelism detection could improve our approach. Let us not forget that our final aim remains the detection of discourse structures: parallelism should be considered an additional feature which, among other discourse features (e.g. connectors), contributes to this detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "Regarding the use of parallelism, (Hernandez and Grau, 2005) proposed an algorithm to parse the discourse structure and to select pairs of sentences to compare.",
"cite_spans": [
{
"start": 34,
"end": 60,
"text": "(Hernandez and Grau, 2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "Confronted to the problem of determining textual entailment 4 (the fact that the meaning of one expression can be inferred from another) (Kouylekov and Magnini, 2005) applied the (Zhang and Shasha, 1989) 's algorithm on the dependency trees of pairs of sentences (they did not consider syntactic tags as nodes but only words). They encountered problems similar to ours due to pre-treatment limits. Indeed, the syntactic parser sometimes represents in a different way occurrences of similar expressions, making it harder to apply edit transformations. A drawback concerning the tree-edit distance approach is that it is not able to observe the whole tree, but only the subtree of the processed node.",
"cite_spans": [
{
"start": 137,
"end": 166,
"text": "(Kouylekov and Magnini, 2005)",
"ref_id": "BIBREF7"
},
{
"start": 179,
"end": 203,
"text": "(Zhang and Shasha, 1989)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "Textual parallelism plays an important role among discourse features when detecting discourse structures. So far, occurrences of this phenomenon have only been treated individually and often in an ad-hoc manner. Our contribution is a unifying framework which can be used for automatic processing with much less specific knowledge than dedicated techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In addition, we discussed and evaluated several methods to retrieve them generically. We showed that simple methods such as (Wagner and Fischer, 1974) can compete with more complex approaches, such as our degree of similarity and the (Zhang and Shasha, 1989) 's algorithm.",
"cite_spans": [
{
"start": 124,
"end": 150,
"text": "(Wagner and Fischer, 1974)",
"ref_id": "BIBREF12"
},
{
"start": 234,
"end": 258,
"text": "(Zhang and Shasha, 1989)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Among future works, variations such as the edit costs of the transformations for edit distance methods and the weights of parallel units (depending on their semantic and syntactic characteristics) could be tuned to enhance performance. Combining the methods also seems an interesting track to follow.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "For all measures, elementary units considered are syntactic tags and word tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used unitary costs in this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Compared to entailment, the parallelism relation is bidirectional and not restricted to semantic similarities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Reference to abstract objects in discourse",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Asher. 1993. Reference to abstract objects in discourse. Kluwer, Dordrecht.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Statistical parsing with a contextfree grammar and word statistics",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1997,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Charniak. 1997. Statistical parsing with a context- free grammar and word statistics. In AAAI.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "L'encadrement du discoursunivers, champs, domaines et espaces. Cahier de recherche linguistique",
"authors": [
{
"first": "M",
"middle": [],
"last": "Charolles",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Charolles. 1997. L'encadrement du discours - univers, champs, domaines et espaces. Cahier de recherche linguistique, 6.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Parallelism in coordination as an instance of syntactic priming: Evidence from corpus-based modeling",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Dubey",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Sturt",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2005,
"venue": "HLTC and CEMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Dubey, Patrick Sturt, and Frank Keller. 2005. Parallelism in coordination as an instance of syntac- tic priming: Evidence from corpus-based modeling. In HLTC and CEMNLP, Vancouver.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "RNA Secondary Structure Comparison: Exact Analysis of the Zhang-Shasha Tree Edit Algorithm",
"authors": [
{
"first": "S",
"middle": [],
"last": "Dulucq",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Tichit",
"suffix": ""
}
],
"year": 2003,
"venue": "Theoretical Computer Science",
"volume": "306",
"issue": "1-3",
"pages": "471--484",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Dulucq and L. Tichit. 2003. RNA Secondary Structure Comparison: Exact Analysis of the Zhang- Shasha Tree Edit Algorithm. Theoretical Computer Science, 306(1-3):471-484.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "D\u00e9tection automatique de structures fines du discours",
"authors": [
{
"first": "N",
"middle": [],
"last": "Hernandez",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Grau",
"suffix": ""
}
],
"year": 2005,
"venue": "TALN",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Hernandez and B. Grau. 2005. D\u00e9tection au- tomatique de structures fines du discours. In TALN, France.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A theory of parallelism and the case of vp ellipsis",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Hobbs",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kehler",
"suffix": ""
}
],
"year": 1997,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Hobbs and A. Kehler. 1997. A theory of paral- lelism and the case of vp ellipsis. In ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recognizing Textual Entailment with Tree Edit Distance Algorithms",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kouylekov",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2005,
"venue": "PASCAL Challenges on RTE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Kouylekov and B. Magnini. 2005. Recognizing Textual Entailment with Tree Edit Distance Algo- rithms. PASCAL Challenges on RTE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Identification of topic-focus chains",
"authors": [
{
"first": "I",
"middle": [],
"last": "Kruijff-Korbayov\u00e1",
"suffix": ""
},
{
"first": "G.-J",
"middle": [
"M"
],
"last": "Kruijff",
"suffix": ""
}
],
"year": 1996,
"venue": "In DAARC",
"volume": "8",
"issue": "",
"pages": "165--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Kruijff-Korbayov\u00e1 and G.-J. M. Kruijff. 1996. Iden- tification of topic-focus chains. In DAARC, vol- ume 8, pages 165-179. University of Lancaster, UK.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A linguistic approach to some parameters of layout: A study of enumerations",
"authors": [
{
"first": "C",
"middle": [],
"last": "Luc",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mojahid",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Virbel",
"suffix": ""
},
{
"first": "Cl",
"middle": [],
"last": "Garcia-Debanc",
"suffix": ""
},
{
"first": "M.-P",
"middle": [],
"last": "P\u00e9ry-Woodley",
"suffix": ""
}
],
"year": 1999,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Luc, M. Mojahid, J. Virbel, Cl. Garcia-Debanc, and M.-P. P\u00e9ry-Woodley. 1999. A linguistic approach to some parameters of layout: A study of enumera- tions. In AAAI, North Falmouth, Massachusets.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Rhetorical structure theory: A theory of text organisation",
"authors": [
{
"first": "W",
"middle": [
"C"
],
"last": "Mann",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. C. Mann and S. A. Thompson. 1987. Rhetori- cal structure theory: A theory of text organisation. Technical report isi/rs-87-190.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic Discovery of Logical Document Structure",
"authors": [
{
"first": "K",
"middle": [
"M"
],
"last": "Summers",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. M. Summers. 1998. Automatic Discovery of Logi- cal Document Structure. Ph.D. thesis, U. of Cornell.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The String-to-String Correction Problem",
"authors": [
{
"first": "R",
"middle": [
"A"
],
"last": "Wagner",
"suffix": ""
},
{
"first": "M",
"middle": [
"J"
],
"last": "Fischer",
"suffix": ""
}
],
"year": 1974,
"venue": "Journal of the ACM",
"volume": "21",
"issue": "1",
"pages": "168--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.A. Wagner and M.J. Fischer. 1974. The String-to- String Correction Problem. Journal of the ACM, 21(1):168-173.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Simple fast algorithms for the editing distance between trees and related problems",
"authors": [
{
"first": "K",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Shasha",
"suffix": ""
}
],
"year": 1989,
"venue": "SIAM Journal on Computing",
"volume": "18",
"issue": "6",
"pages": "1245--1262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Zhang and D. Shasha. 1989. Simple fast algo- rithms for the editing distance between trees and related problems. SIAM Journal on Computing, 18(6):1245-1262.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "have already been computed. If things are the same for the current node in SST(pos 2 ), then TREEDIST(T [L(pos 1 ), . . . , x], \u2212) can be calculated directly, using the appropriate FDIST values previously computed. If L(x) = L(pos 1 ), then x does not belong to the leftmost branch of T [L(pos 1 ), . . . , pos 1 ] and therefore x has a non-empty left context T [L(pos 1 ), . . . , L(x) \u2212 1]. Let's see why computing FDIST(T [L(pos 1 ), . . . , x], \u2212) requires values which have been previously obtained.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "size of all subsentences xi i th constituent of s1 dmax max possible dist. between any xi \u2208 s1 and its parallel constituent in s2, i.e. dmax = n \u2212 1 d(xi) distance between current constituent xi in s1 and its parallel constituent in s2 p(xi) parallelism weight of xi",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "200 sentences: Green (ACL'98), Kan (Kan et al. WVLC'98), Mitkov (Coling-ACL'98), Oakes (IRSG'99) and Sand (Sanderson et al. SIGIR'99).",
"num": null
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Parallelisms Nb of pairs</td></tr><tr><td>Green</td><td>39 (14.4 %)</td><td>270</td></tr><tr><td>Kan</td><td>12 (6 %)</td><td>200</td></tr><tr><td>Mitkov</td><td>13 (8.4 %)</td><td>168</td></tr><tr><td>Oakes</td><td>22 (13.7 %)</td><td>161</td></tr><tr><td>Sand</td><td>17 (7.7 %)</td><td>239</td></tr><tr><td colspan=\"2\">All gathered 103 (9.9 %)</td><td>1038</td></tr></table>",
"num": null,
"html": null,
"text": "The following table gives the percentage of parallelisms for each text:"
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td/><td/><td>Loss</td></tr><tr><td>Green</td><td>55</td><td>59</td><td>51</td><td>0.329</td><td>2</td></tr><tr><td>Kan</td><td>47</td><td>80</td><td>33</td><td>0.354</td><td>5</td></tr><tr><td>Mitkov</td><td>35</td><td>40</td><td>31</td><td>0.355</td><td>0</td></tr><tr><td>Oakes</td><td>76</td><td>80</td><td>73</td><td>0.324</td><td>4</td></tr><tr><td>Sand</td><td>29</td><td>20</td><td>59</td><td>0.271</td><td>0</td></tr><tr><td>All</td><td>50</td><td>59</td><td>43</td><td>0.335</td><td>-</td></tr></table>",
"num": null,
"html": null,
"text": "F-meas. Prec. Recall Thres."
}
}
}
}