{
"paper_id": "W13-0123",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:49:44.635833Z"
},
"title": "Subgraph-based Classification of Explicit and Implicit Discourse Relations",
"authors": [
{
"first": "Yannick",
"middle": [],
"last": "Versley",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of T\u00fcbingen",
"location": {}
},
"email": "versley@sfs.uni-tuebingen.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Current approaches to recognizing discourse relations rely on a combination of shallow, surface-based features (e.g., bigrams, word pairs), and rather specialized hand-crafted features. As a way to avoid both the shallowness of word-based representations and the lack of coverage of specialized linguistic features, we use a graph-based representation of discourse segments, which allows for a more abstract (and hence generalizable) notion of syntactic (and partially of semantic) structure. Empirical evaluation on a hand-annotated corpus of German discourse relations shows that our graph-based approach not only provides a suitable representation for the linguistic factors that are needed in disambiguating discourse relations, but also improves results over a strong state-of-the-art baseline by more accurately identifying Temporal, Comparison and Reporting discourse relations.",
"pdf_parse": {
"paper_id": "W13-0123",
"_pdf_hash": "",
"abstract": [
{
"text": "Current approaches to recognizing discourse relations rely on a combination of shallow, surface-based features (e.g., bigrams, word pairs), and rather specialized hand-crafted features. As a way to avoid both the shallowness of word-based representations and the lack of coverage of specialized linguistic features, we use a graph-based representation of discourse segments, which allows for a more abstract (and hence generalizable) notion of syntactic (and partially of semantic) structure. Empirical evaluation on a hand-annotated corpus of German discourse relations shows that our graph-based approach not only provides a suitable representation for the linguistic factors that are needed in disambiguating discourse relations, but also improves results over a strong state-of-the-art baseline by more accurately identifying Temporal, Comparison and Reporting discourse relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Discourse relations between textual spans capture essential structural and semantic/pragmatic aspects of text structure. Besides anaphora and referential structure, discourse relations are a key ingredient in understanding a text beyond single clauses or sentences. The automatic recognition of discourse relations is therefore an important task; approaches to the solution of this problem range from heuristic approaches that use reliable indicators (Marcu, 2000) to modern machine learning approaches such as Lin et al. (2009) that apply broad shallow features in cases without such indicators.",
"cite_spans": [
{
"start": 451,
"end": 464,
"text": "(Marcu, 2000)",
"ref_id": "BIBREF10"
},
{
"start": 511,
"end": 528,
"text": "Lin et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Especially for implicit discourse relations, where no discourse connective provides a reliable indication, broad, shallow features such as bigrams or word pairs conceivably lack the precision that would be needed to improve disambiguation results beyond a certain level. Conversely, hand-crafted linguistic features allow one to encode certain relevant aspects, but they often have limited coverage. Encoding detailed linguistic information in a structured representation, as in the work presented here, allows us to bridge this divide and potentially strike a balance between linguistic precision and broad applicability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a graph-based representation of discourse segments as a way to overcome both the shallowness of a word-based representation and the non-specificity or lack of coverage of specialized linguistic features. In the rest of the paper, section 2 discusses the current state of the art in discourse relation classification. Section 3 introduces feature graphs as a general representation and learning mechanism, and section 4 provides an overview of the corpus used, as well as feature-based and graph-based representations for discourse relations. Section 5 presents empirical evaluation results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most early work on recognizing discourse relations was tailored towards unambiguously marked, explicit discourse relations, such as those introduced by because (e.g. in \"[Peter despises Mary] because [she stole his yoghurt]\") since connectives unambiguously signal one particular relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "In other cases, a connective can be ambiguous, as in the case of German 'nachdem' (as/after/since). Nachdem can signal multiple types of discourse relations (e.g. purely temporal, or temporal and causal), as in example (1). Researchers concerned with classifying the explicit discourse relations signalled by ambiguous discourse connectives, such as Miltsakaki et al. (2005) , claim that a small number of linguistic indicators (e.g., tense or syntactic context) can be used for successful disambiguation of discourse connectives, while Versley (2011) claims that additional semantic and structural information can help improve the classification accuracy in such cases.",
"cite_spans": [
{
"start": 338,
"end": 362,
"text": "Miltsakaki et al. (2005)",
"ref_id": "BIBREF11"
},
{
"start": 528,
"end": 542,
"text": "Versley (2011)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "In the case of implicit discourse relations, the absence of overt clues suggests that a combination of weak linguistic indicators and world knowledge is needed for successful disambiguation. Sporleder and Lascarides (2008) use positional and morphological features, as well as subsequences of words, lemmas or POS tags, to disambiguate implicit relations in a reannotated subset of the RST Discourse Treebank (Carlson et al., 2003). Sporleder and Lascarides also show that (despite the corpus size of about 1000 examples) actual annotated relations are more useful than artificial examples derived from non-ambiguous explicit discourse relations.",
"cite_spans": [
{
"start": 191,
"end": 222,
"text": "Sporleder and Lascarides (2008)",
"ref_id": "BIBREF20"
},
{
"start": 408,
"end": 430,
"text": "(Carlson et al., 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "Research using the implicit discourse relations annotated in the second release of the Penn Discourse Treebank (Prasad et al., 2008) shows a focus on shallow features: word pairs have been found to be the most important feature for implicit discourse relations. Lin et al. (2009) identify production rules from the constituent parse, as well as word pairs, as the most important features in their system, with dependency triples not being useful as features, and information from surrounding (gold-standard) discourse relations having only a minimal impact.",
"cite_spans": [
{
"start": 111,
"end": 132,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF17"
},
{
"start": 267,
"end": 284,
"text": "Lin et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "Most recent research, such as Feng and Hirst (2012) , who classify a mixture of explicit and implicit discourse relations in the RST Discourse Treebank (Carlson et al., 2003) , or Park and Cardie (2012) , uses these shallow features as its mainstay, adding surrounding relations and either semantic similarity (Feng and Hirst) or verb classes (Park and Cardie) , leaving open the question of how to incorporate more general linguistic information.",
"cite_spans": [
{
"start": 30,
"end": 51,
"text": "Feng and Hirst (2012)",
"ref_id": "BIBREF2"
},
{
"start": 152,
"end": 174,
"text": "(Carlson et al., 2003)",
"ref_id": "BIBREF1"
},
{
"start": 180,
"end": 202,
"text": "Park and Cardie (2012)",
"ref_id": "BIBREF13"
},
{
"start": 311,
"end": 327,
"text": "(Feng and Hirst)",
"ref_id": null
},
{
"start": 344,
"end": 361,
"text": "(Park and Cardie)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "3 Feature-Node Graphs. Different information sources extract features that are relevant to subparts of an argument clause (e.g., the information status and semantic class of a noun phrase); extracting such features locally loses the information about which part they apply to. In contrast, we hope to retain the information contained in these local features by representing them in feature-node graphs. This formalism also allows us to take into account more structure than n-grams (which are limited to relatively shallow information) or dependency triples (which would be too sparse in the case of typical discourse corpora). 3 [Figure 1 : Example feature-node graph (i), its backbone (ii), and its expansion (iii).] Formally, a feature-node graph consists of a set V of vertices with labels",
"cite_spans": [
{
"start": 688,
"end": 689,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 433,
"end": 441,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "[Figure 1: (i) a feature-node graph with backbone edges X \u2192 Y and X \u2192a Z, features r, s on Y and u on X; (ii) its backbone; (iii) its expansion, with feature nodes r and s attached to Y and u attached to X.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "L_V : V \u2192 L, a set of edges E \u2286 V \u00d7 V with labels L_E : E \u2192 L,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "with the addition of a set F : V \u2192 P(L) that assigns to each vertex a set of feature labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "The backbone of a feature-node graph is simply the labeled directed graph",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "(V, L_V, E, L_E), without any features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "The expansion of a feature-node graph is the labeled directed graph ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "(V', L_V', E', L_E') built by expanding the set of nodes to V' = V \u222a {(v, l) \u2208 V \u00d7 L | l \u2208 F(v)}, with labels L_V'(v) = L_V(v) for all v \u2208 V and L_V'((v, l)) = l for all v \u2208 V, l \u2208 F(v), and the set of edges expanded to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "E' = E \u222a {(v, (v, l)) | l \u2208 F(v)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": ", with a special reserved symbol for the labels of newly introduced edges, i.e. L_E'((v, (v, l))) is set to a symbol that is used nowhere else. Figure 1 gives an example of a feature-node graph with the vertices X, Y and Z with F(X) = {u}, F(Y) = {r, s}, and",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 99,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "F(Z) = \u2205, edges E = {(X, Y), (X, Z)} and edge labels L_E((X, Y)) = \u03b5, L_E((X, Z)) = a.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
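The expansion construction defined above can be illustrated with a short sketch (not the authors' implementation; `expand` and `FEATURE_EDGE` are hypothetical names, and `"*"` stands in for the paper's reserved edge symbol). Each feature label l in F(v) becomes a fresh node (v, l) attached to v by an edge with the reserved label, exactly as in the Figure 1 example:

```python
# A minimal sketch (not the authors' code) of the feature-node graph
# expansion: every feature label l in F(v) becomes a fresh node (v, l)
# attached to v by an edge carrying a reserved label.
FEATURE_EDGE = "*"  # stand-in for the paper's special edge symbol

def expand(nodes, node_labels, edge_labels, features):
    """Return nodes, node labels, and edge labels of the expansion
    (V', L_V', E', L_E') of a feature-node graph."""
    exp_nodes = list(nodes)
    exp_node_labels = dict(node_labels)
    exp_edge_labels = dict(edge_labels)   # maps (u, v) -> label
    for v in nodes:
        for l in sorted(features.get(v, ())):
            fv = (v, l)                   # new feature node (v, l)
            exp_nodes.append(fv)
            exp_node_labels[fv] = l
            exp_edge_labels[(v, fv)] = FEATURE_EDGE
    return exp_nodes, exp_node_labels, exp_edge_labels

# The Figure 1 example: X, Y, Z with F(X) = {u}, F(Y) = {r, s},
# F(Z) = empty, and edges (X, Y) (empty label) and (X, Z) (label a).
nodes = ["X", "Y", "Z"]
node_labels = {v: v for v in nodes}
edge_labels = {("X", "Y"): "eps", ("X", "Z"): "a"}
features = {"X": {"u"}, "Y": {"r", "s"}}
V2, L2, E2 = expand(nodes, node_labels, edge_labels, features)
```

The expansion of the example thus has six nodes: the three backbone vertices plus one feature node for u and two for r and s.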
{
"text": "Representing desired information as features (instead of, e.g., using words or POS tags as the node labels in a dependency graph) is advantageous because two feature-node graphs with similar structures will have a common substructure as long as the backbone of that structure is identical. With words as node labels, any non-identical word would prevent the detection of the common substructure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "Machine Learning on Feature-Node Graphs. Using an attributed graph representation, we can apply general substructure mining and structured learning approaches to extract good candidates for informative substructures. In contrast to other fields where these approaches have been used (computational chemistry, computer vision), computational linguistics problems tend to have both larger data sets and larger structures. As a consequence, the na\u00efve application of these structure mining algorithms would suffer from combinatorial explosion. In particular, a star-shaped graph (i.e., the typical case of a node with a large number of features) has exponentially many substructures, which would lead to both efficiency and performance problems, while an explicit distinction between features and backbone nodes can help by explicitly or implicitly limiting the number of features that a substructure may have in order to be considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "In general, approaches to learning from structure fall into one of three groups: linearization approaches, which decompose a structure into parts that can be presented to a linear classifier as binary features; structure boosting approaches, which determine the set of included substructures as an integral part of the learning task; and kernel-based methods, which use dynamic programming for computing the dot product in an implied vector space of substructures. Kernel-based methods on trees have been used in the re-ranking of answers in a question answering system (Moschitti and Quarteroni, 2011) , whereas Kudo et al. (2004) use boosting of graphs for a sentiment task (classifying reviews into positive/negative instances). Arora et al. (2010) use subgraph features in a linearization-based approach to sentiment classification.",
"cite_spans": [
{
"start": 571,
"end": 603,
"text": "(Moschitti and Quarteroni, 2011)",
"ref_id": "BIBREF12"
},
{
"start": 614,
"end": 632,
"text": "Kudo et al. (2004)",
"ref_id": "BIBREF8"
},
{
"start": 733,
"end": 752,
"text": "Arora et al. (2010)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "For simplicity, we use a linearization-based approach based on subgraph mining. Candidate subgraphs are generated using a version of gSpan (Yan and Han, 2002) that we modified to distinguish between 'backbone' nodes and features, restricting the search space to subgraphs with at most three feature nodes by stopping the expansion of a subgraph pattern whenever it exceeds this limit. [Table notes: %implicit: proportion of relation instances that are implicit, rather than explicit. %rel: percentage of the given relation among all implicit ones. About 10% of the implicit instances have multiple labels (e.g. Result+Narration). Cf. Gastel et al. (2011) .]",
"cite_spans": [
{
"start": 152,
"end": 171,
"text": "(Yan and Han, 2002)",
"ref_id": "BIBREF23"
},
{
"start": 395,
"end": 415,
"text": "Gastel et al. (2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Discourse Relations",
"sec_num": "2"
},
{
"text": "In order to test our approach to discourse relation classification, we rely on two German data sets annotated with discourse relations: The first contains explicit discourse relations signalled by ambiguous temporal connectives (in particular nachdem, corresponding to English 'after/as/since', the most ambiguous connective in that dataset), with an annotation scheme that has been described by Simon et al. (2011) . The corpus contains 294 instances of nachdem, along with other, less ambiguous connectives. The second data set stems from a subcorpus that has received full annotation for all discourse relations, according to an annotation scheme described by Gastel et al. (2011) . This corpus contains 803 implicit discourse relations that are not marked by a connective (using the criteria set forth by Pasch et al., 2003) . As can be seen from tables 1 and 2, the two annotation schemes include overlapping groups of relations (Causal, Temporal and Comparison relations), but the implicit relations cover a broader set of relations, whereas the temporal connectives are annotated with a finer granularity. Among the most frequent unmarked relations are Restatement and Background from the Expansion/Elaboration group, which predominantly occur as implicit discourse relations, as well as Result and Explanation, which occur unmarked in about two thirds of the cases. In other cases, such as Consequence, Concession (which is limited to cases of contraexpectation) and ConcessionC (which also includes more pragmatic concession relations), only a minority of relation instances is implicit, whereas the majority is marked by an explicit connective.",
"cite_spans": [
{
"start": 398,
"end": 417,
"text": "Simon et al. (2011)",
"ref_id": "BIBREF19"
},
{
"start": 665,
"end": 685,
"text": "Gastel et al. (2011)",
"ref_id": "BIBREF3"
},
{
"start": 811,
"end": 830,
"text": "Pasch et al., 2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguating Discourse Relations",
"sec_num": "4"
},
{
"text": "Relations that are typically marked, such as Contrast -see example (3) -or Concession/ConcessionC -see example (4) -often contain weak indicators for the occurring discourse relation, such as the opposition policemen-demonstrators in the first case, or the negation of a reference to Arg1 (\"this wish will not be fulfilled soon\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguating Discourse Relations",
"sec_num": "4"
},
{
"text": "Improving the performance on explicit discourse relations beyond the easiest cases, especially in the case of the notoriously ambiguous temporal connectives, is only possible by exploiting weak indicators for a relation. Features exploiting these weak indicators are a key ingredient to successfully predicting both implicit discourse relations and the non-majority readings of explicit discourse relations with ambiguous temporal connectives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguating Discourse Relations",
"sec_num": "4"
},
{
"text": "We implemented a group of specialized linguistic features, inspired by those that were successfully used in the related literature (Sporleder and Lascarides, 2008; Versley, 2011) . As implicit discourse relations can occur intra- as well as inter-sententially, the topological relation between the arguments is classified by syntactic embedding (if one argument is in the pre- or post-field of the other), or as one preceding, succeeding or embedding the other.",
"cite_spans": [
{
"start": 137,
"end": 169,
"text": "(Sporleder and Lascarides, 2008;",
"ref_id": "BIBREF20"
},
{
"start": 170,
"end": 184,
"text": "Versley, 2011)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Features",
"sec_num": "4.1"
},
{
"text": "Several features reproduce simple morphosyntactic properties: One feature signals the presence of negation in either argument, whether as a negating adverb (English not), determiner (no), or pronoun (none). A negated Arg1 would be tagged 1N+, a non-negated one 1N-. Tense and mood of clauses in either argument are also incorporated as features (e.g. 1tense=t for an Arg1 in pas(t) tense). The head lemma(s) of each argument, which is normally the main verb, is also included as a feature (e.g. 1Lverletzen for the Arg1 of example 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Features",
"sec_num": "4.1"
},
{
"text": "We also mark the semantic type of adjuncts present in either relation argument, with categories for temporal, causal, or concessive adverbials, conjunctive focus adverbs (also, as well), and commentary adverbs (doubtlessly, actually, probably, ...). As an example, an Arg1 containing \"despite the cold\" would receive a feature 1adj concessive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Features",
"sec_num": "4.1"
},
{
"text": "We also detect cotaxonomic relations between words in the two arguments using the German wordnet GermaNet (Henrich and Hinrichs, 2010) . Pairs of contrasting lemmas, such as hot-cold or policeman-demonstrator, commonly indicate a parallel or contrast relation. If two words share a common hyperonym (excluding the uppermost three levels of the noun hierarchy, which are not informative enough), feature values indicating the least-common-subsumer synset (such as temperature adjective) and up to two hyperonyms are added.",
"cite_spans": [
{
"start": 105,
"end": 133,
"text": "(Henrich and Hinrichs, 2010)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Features",
"sec_num": "4.1"
},
{
"text": "A sentiment feature uses the lists of emotional words and of 'shifting' words (which invert the emotional value of the phrase) by Klenner et al. (2009) as well as the most reliable emotional words from Remus et al. (2010) . The combination of emotional words and shifting words into a single feature works as follows: according to the presence of positive- or negative-emotion words, each relation argument is tagged as POS, NEG or AMB. When a negator or shifting expression is present, a \"-NEG\" is added to the tag, yielding, e.g., \"1 pol NEG-NEG\" for an Arg1 phrase containing the words 'not bad'.",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "Klenner et al. (2009)",
"ref_id": "BIBREF7"
},
{
"start": 202,
"end": 221,
"text": "Remus et al. (2010)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Features",
"sec_num": "4.1"
},
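The polarity tagging described above can be sketched in a few lines. This is an illustration only: the word lists are toy stand-ins for the Klenner et al. (2009) and Remus et al. (2010) lexicons, and the function name and underscore-separated feature naming are hypothetical.

```python
# Illustrative sketch of the polarity feature: tag an argument POS,
# NEG or AMB from toy emotion-word lists (stand-ins for the real
# lexicons), and append "-NEG" when a shifter is present.
POS_WORDS = {"good", "great"}
NEG_WORDS = {"bad", "awful"}
SHIFTERS = {"not", "hardly"}

def polarity_tag(arg_tokens, arg_no):
    toks = {t.lower() for t in arg_tokens}
    has_pos, has_neg = bool(toks & POS_WORDS), bool(toks & NEG_WORDS)
    if has_pos and has_neg:
        tag = "AMB"
    elif has_pos:
        tag = "POS"
    elif has_neg:
        tag = "NEG"
    else:
        return None  # no emotion words: no polarity feature emitted
    if toks & SHIFTERS:
        tag += "-NEG"  # a shifter marks/inverts the phrase's polarity
    return "%d_pol_%s" % (arg_no, tag)

feat = polarity_tag(["not", "bad"], 1)  # 'not bad' in Arg1
```

For the 'not bad' example this yields a tag of the NEG-NEG form, matching the paper's "1 pol NEG-NEG" illustration.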
{
"text": "As mentioned in section 2, shallow lexical features empirically constitute a very important ingredient in the automatic classification of implicit (and ambiguous explicit) discourse relations, despite the fact that they lack most semantic or structural generalization capabilities. We implemented three groups of features that have been identified as important in the prior work of Sporleder and Lascarides (2008) and Lin et al. (2009) .",
"cite_spans": [
{
"start": 384,
"end": 415,
"text": "Sporleder and Lascarides (2008)",
"ref_id": "BIBREF20"
},
{
"start": 418,
"end": 435,
"text": "Lin et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow Features",
"sec_num": "4.2"
},
{
"text": "A first group of features captures (unigrams and) bigrams of words, lemmas, and part-of-speech tags. In this fashion, the bigram \"Zahlen \u00fcber\" from Arg2 of (3) would be represented by word forms 2w Zahlen \u00fcber, lemmas 2l Zahl \u00fcber and POS tags 2p NN APPR. 4 Word pairs, i.e., pairs consisting of one word from each of the discourse relation arguments, have been identified as a very useful feature for the classification of implicit discourse relations in the Penn Discourse Treebank (Lin et al., 2009; and others), and, quite surprisingly, also for smaller datasets such as the discourse relations in the RST Discourse Treebank targeted by Feng and Hirst (2012) or the ambiguous connective dataset used by Versley (2011). 5 Because of the morphological richness of German, we use lemma pairs across the arguments; in example (3), the lemma Polizist from Arg1 and the lemma DemonstrantIn from Arg2, among others, would be combined into a feature value wp Polizist DemonstrantIn.",
"cite_spans": [
{
"start": 253,
"end": 254,
"text": "4",
"ref_id": null
},
{
"start": 481,
"end": 499,
"text": "(Lin et al., 2009;",
"ref_id": "BIBREF9"
},
{
"start": 627,
"end": 648,
"text": "Feng and Hirst (2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow Features",
"sec_num": "4.2"
},
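The cross-argument lemma-pair features described above amount to a Cartesian product over the two arguments' lemma lists. A minimal sketch (the function name and the underscore-joined "wp_" naming are illustrative assumptions, not the authors' code):

```python
# Sketch of cross-argument lemma-pair features: one feature per
# (Arg1 lemma, Arg2 lemma) combination, named wp_<lemma1>_<lemma2>.
from itertools import product

def lemma_pair_features(arg1_lemmas, arg2_lemmas):
    return {"wp_%s_%s" % (l1, l2)
            for l1, l2 in product(arg1_lemmas, arg2_lemmas)}

# Toy version of example (3): Polizist from Arg1 pairs with
# DemonstrantIn from Arg2, among the other combinations.
feats = lemma_pair_features(["Polizist", "verletzen"],
                            ["DemonstrantIn", "Zahl"])
```

Every lemma of Arg1 pairs with every lemma of Arg2, so the feature count grows with the product of the argument lengths, which is why supervised feature selection (section 4.3) matters.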
{
"text": "Finally, CFG productions were used by Lin et al. (2009) to capture structural information, including parallelism. Context-free grammar expansions are extracted from the subtrees of the relation arguments and used as features by marking whether the corresponding rule type occurs only in one, or in both, arguments. In example (3), the CFG rule 'PX \u2192 APPR NX' for prepositional phrases occurs in both arguments, yielding a feature \"pr B PX=APPR-NX\", whereas the preterminal rule \"APPR \u2192 \u00fcber\" only occurs in Arg2 (yielding \"pr 2 APPR=\u00fcber\"). [Figure 2 : The complete graphs built from the implicit relation arguments \"Nun will ich endlich in Frieden leben.\" ('Now I finally want to live in peace.') and \"Dieser Wunsch Ahmet Zeki Okcuoglus wird so bald nicht in Erf\u00fcllung gehen.\" ('This wish of Ahmet Zeki Okcuoglu will not be fulfilled any time soon.'); cf. ex. (4).]",
"cite_spans": [
{
"start": 38,
"end": 55,
"text": "Lin et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 540,
"end": 548,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Shallow Features",
"sec_num": "4.2"
},
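The production-rule marking just described (rule occurs in Arg1 only, Arg2 only, or in both) is easy to make concrete. A hedged sketch, with underscore-separated feature names standing in for the paper's "pr B", "pr 1", "pr 2" prefixes:

```python
# Sketch of CFG production features: each expansion rule is marked as
# occurring in Arg1 only (pr_1), Arg2 only (pr_2), or both (pr_B).
def production_features(arg1_rules, arg2_rules):
    feats = set()
    for rule in set(arg1_rules) | set(arg2_rules):
        if rule in arg1_rules and rule in arg2_rules:
            feats.add("pr_B_" + rule)   # parallelism signal
        elif rule in arg1_rules:
            feats.add("pr_1_" + rule)
        else:
            feats.add("pr_2_" + rule)
    return feats

# Toy rules loosely following example (3): the PP rule occurs in both
# arguments, the preterminal rule only in Arg2.
feats = production_features({"PX=APPR-NX", "NX=ART-NN"},
                            {"PX=APPR-NX", "APPR=ueber"})
```

Marking shared rules with a separate "both" prefix is what lets a linear classifier pick up on syntactic parallelism between the arguments.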
{
"text": "The backbone of the graph is built using nodes for a clause (S), and including child nodes for any clause adjuncts (MOD) and verb arguments (ARG). In the case of relation arguments standing in a (syntactic) matrix clause - subclause relationship (e.g. [ arg1 Peter wears his blue pullover,] [ arg2 which he bought last year]), the graph corresponding to the matrix clause receives a special node (SUB-CL, or REL-CL for relative clauses). This is universally the case for the explicit relations with nachdem, but may also occur in the case of unmarked relations. For example, Background relations are frequently realized by relative clauses. Non-referring noun phrases (which are tagged as 'expletive' or 'inherent reflexive' in the referential layer of T\u00fcBa-D/Z) receive a node label expletive instead of ARG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph construction",
"sec_num": "4.3"
},
{
"text": "In each of the adjunct/argument nodes, we include syntactic information such as the category of the node (nominal/prepositional/adverbial phrase, e.g. cat:NX for a noun phrase), the topological field (cf. H\u00f6hle, 1986 ; e.g. fd:MF for a constituent occurring in the middle field) and, for clause arguments, the grammatical function (subject, accusative or dative object, or predicative complement; e.g., gf:OA for the accusative object). Clause nodes contain features for tense and mood based on the main and auxiliary/modal verb(s) of that clause (e.g., mood=i, tense=s for an indicative/past clause).",
"cite_spans": [
{
"start": 205,
"end": 216,
"text": "H\u00f6hle, 1986",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph construction",
"sec_num": "4.3"
},
{
"text": "In the realm of semantic information, we use the heuristics of Versley (2011) to identify semantic classes of adverbials, in particular temporal, causal or concessive adverbials, conjunctive focus adverbs, and commentary adverbs. As the backbone of our graph structure abstracts from syntactic categories and only distinguishes adjuncts and arguments, it is possible to learn generalizations over different realizations of the same type of adjunct: for example, temporal adjuncts may be realized as a noun phrase (next Monday), a prepositional phrase (in the last week), an adverb (later), or a clause (when Peter was ill).",
"cite_spans": [
{
"start": 63,
"end": 77,
"text": "Versley (2011)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph construction",
"sec_num": "4.3"
},
{
"text": "Noun phrase arguments are annotated with information pertaining to their information status, marking them either as old (if their referent has already been introduced), mediated (if a modifier -e.g. the genitive John's in John's hat -has been previously introduced), or new (if neither the phrase nor any of its modifiers has a previous mention). Additionally, we use a semantic categorization into persons (PER), organizations (ORG), locations (LOC), events (EVT) and other entities. In the case of named entities, this information is derived from the existing named entity annotation in the T\u00fcBa-D/Z treebank (by simply mapping the GPE label to LOC); for phrases with a nominal head, this information is derived using the heuristics of Versley (2006) , which use information from GermaNet, semantic lexicons, and heuristics based on surface morphology. Clauses as well as arguments and adjuncts are annotated with their semantic head; prepositional phrases are, in addition, annotated with the semantic head of the preposition's argument (in the next year).",
"cite_spans": [
{
"start": 738,
"end": 752,
"text": "Versley (2006)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph construction",
"sec_num": "4.3"
},
{
"text": "From the graph representations of relation arguments that are created in this step, frequent subgraphs are extracted. The subgraphs must occur at least five times in either the Arg1 or Arg2 graph, have at most seven nodes, of which at least two must be backbone nodes, and at most three can be feature nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph construction",
"sec_num": "4.3"
},
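The subgraph filter stated above (support of at least five, at most seven nodes, at least two backbone nodes, at most three feature nodes) can be written as a single predicate. A sketch with hypothetical names, not the authors' implementation:

```python
# Sketch of the subgraph filter: a mined pattern is kept only if it is
# frequent enough and respects the backbone/feature size limits.
MIN_SUPPORT, MAX_NODES = 5, 7
MIN_BACKBONE, MAX_FEATURE = 2, 3

def keep_subgraph(support, n_backbone, n_feature):
    """True iff a candidate pattern satisfies all mining constraints."""
    return (support >= MIN_SUPPORT
            and n_backbone + n_feature <= MAX_NODES
            and n_backbone >= MIN_BACKBONE
            and n_feature <= MAX_FEATURE)
```

In a gSpan-style miner this predicate doubles as a pruning condition: once a growing pattern violates a monotone limit (too many nodes or feature nodes), none of its extensions can qualify either, so its expansion can be stopped.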
{
"text": "For the learning task, features are created by concatenating an identifier for the subgraph (e.g. graph1234) with a suffix specifying whether it occurs only in the main clause (suffix 1), only in the subclause (suffix 2), or in both clauses (suffix 12). Detecting subgraphs that occur in both clauses allows the system to take into account parallelism in terms of syntactic and/or semantic properties of parts of each clause.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph construction",
"sec_num": "4.3"
},
{
"text": "Both the shallow features and the subgraph features are subject to supervised feature selection: in each fold of the 10-fold cross-validation, the training portion is used to score each feature, and only the most informative ones are included in that fold. For this, an association measure is computed between the set of training examples a feature occurs in and, for each relation label, the set of training examples in which that label occurs. The best score over all labels is kept and used to filter out features that score less than the top-N features of their group. Supervised feature selection has been used by Lin et al. (2009) , using pointwise mutual information (PMI) on candidate productions and word pairs, and in the work of Arora et al. (2010) using Pearson's \u03c7\u00b2 statistic on candidate subgraphs. We tried PMI, \u03c7\u00b2 and the Dice coefficient 2|A\u2229B| / (|A|+|B|)",
"cite_spans": [
{
"start": 619,
"end": 636,
"text": "Lin et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph construction",
"sec_num": "4.3"
},
{
"text": "as association measures, and empirically found that the Dice coefficient worked best in the case of implicit discourse relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph construction",
"sec_num": "4.3"
},
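A minimal sketch of the selection step, assuming features and labels are represented by the sets of training examples they occur in (the data layout and the top-N cutoff interface are assumptions):

```python
# Supervised feature selection as described above: score each feature against
# every relation label with the Dice coefficient 2|A n B| / (|A| + |B|),
# keep the best score over all labels, and retain only the top-N features.
def dice(a: set, b: set) -> float:
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0


def select_features(feat_examples: dict, label_examples: dict, top_n: int) -> set:
    # feat_examples:  feature -> set of training-example ids containing it
    # label_examples: label   -> set of training-example ids carrying it
    scored = {f: max(dice(exs, lex) for lex in label_examples.values())
              for f, exs in feat_examples.items()}
    ranked = sorted(scored, key=scored.get, reverse=True)
    return set(ranked[:top_n])
```

PMI or Pearson's chi-squared could be dropped in for `dice` without changing the surrounding selection logic.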
{
"text": "For both the 294 explicit nachdem relations and the 803 implicit discourse relations, we use a 10-fold cross-validation scheme where, successively, one tenth of the data is automatically labeled by a model from the remaining nine tenth of the data. Multiple relation labels are predicted by using binary classifiers (one-vs-all reduction) and using confidence values to choose one or several labels among those that have the most confident positive classification. In the case of multiple positive classifications (e.g., if Reporting, Temporal and Expansion all receive a positive classification), relations are only considered for the 'second' label if the most-confident label and the potential second label have been seen together in the training data (e.g. Contingency and Temporal can occur together, but Reporting will not be extended by a second relation labels). In a second step, the coarse grained relation label (or labels) is extended up to the finest taxonomy level (e.g., an initial coarse-grained Contingency label is extended to Contingency.Causal.Explanation). In our experiments, we use SVMperf, an SVM implementation that is able to train classifiers optimized for performance on positive instances (Joachims, 2005) .",
"cite_spans": [
{
"start": 1218,
"end": 1234,
"text": "(Joachims, 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "5"
},
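The one-vs-all decoding with the co-occurrence constraint might look like the sketch below. The per-label confidence values and the set of label pairs seen together in training are assumed inputs, and the sketch adds at most one second label, matching the text's example:

```python
# Multi-label decoding as described above: always predict the most confident
# positive label; add a second positive label only if that pair of labels has
# been observed together in the training data.
def decode(confidences: dict, seen_together: set) -> list:
    positives = sorted((lab for lab, c in confidences.items() if c > 0),
                       key=lambda lab: confidences[lab], reverse=True)
    if not positives:
        return []
    labels = [positives[0]]
    for cand in positives[1:]:
        if frozenset((positives[0], cand)) in seen_together:
            labels.append(cand)   # attested pair, e.g. Contingency+Temporal
            break
    return labels
```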
{
"text": "Tables 3 and 4 provide evaluation figures for different subsets of the presented features, using aggregate measures over relations both at the coarsest level (for implicit discourse relations, the five categories Contingency, Expansion, Temporal, Comparison, Reporting), and the finest level (which contains twenty-one relations in the case of implicit relations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "5"
},
{
"text": "For each level of granularity, we can measure the quality of the classifier's predictions in terms of an average over relation tokens, giving partial credit for partially matching labelings (e.g., a system prediction of Narration or Narration+Comparison, instead of gold-standard Narration+Result). This measure, the dice score, assigns partial credit for a relation token when system and/or gold standard contain multiple labels and both label sets overlap, calculated as 2|G\u2229S|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "5"
},
{
"text": "|G|+|S| -an exact match would be scored as 1.0, whereas guessing a sub-or superset (e.g. only Result instead of Result+Narration) would give a contribution of 0.66 for that example, and overlapping predictions (Result+Comparison instead of Result+Narration) would get a partial credit of 0.5. As an average over relation types, we can also calculate an average of the F-score over all relations, yielding the macro-averaged F-score (MAFS).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "5"
},
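The per-token dice score and the worked examples from the text, as a small sketch:

```python
# Per-token evaluation score from the text: 2|G n S| / (|G| + |S|) for gold
# label set G and system label set S; averaged over relation tokens.
def dice_score(gold: set, system: set) -> float:
    return 2 * len(gold & system) / (len(gold) + len(system))

# Worked examples from the text:
#   exact match                              -> 1.0
#   {Result} vs {Result, Narration}          -> 2/3 (a sub-/superset guess)
#   {Result, Comparison} vs {Result, Narration} -> 0.5 (overlapping prediction)
```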
{
"text": "Because the label distribution is heavily skewed -some relations, such as Restatement, are relatively frequent with 140 occurrences, while, e.g., Contrast with 26 occurrences, is much less frequent -a classification that is biased towards the more frequent relations will receive higher token-weighted (dice) scores and lower type-weighted (MAFS) scores, whereas an unbiased system would receive lower dice and higher macro-averaged F scores. 2011, as ling, a system additionally using word pairs and CFG (with unsupervised feature selection), as Ver11, and finally versions including the graph representation (gr and Ver11+gr). Shaded rows indicate variants using the graph representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "5"
},
{
"text": "Disambiguating nachdem For the disambiguation of the ambiguous temporal connective nachdem, we use a set of linguistic and shallow features to reproduce the results of Versley (2011), similar to that described in section 3, but with very few exceptions. 6 Looking at the aggregate measures, we see that the graph-based features in isolation already perform quite well, surpassing a version with linguistic features, but no word pairs or CFG productions. Adding subgraph features with appropriate feature selection to the complete system (including linguistic and shallow features) yields a further improvement over a relatively strong baseline. Table 4 presents both aggregate measures (Dice, macro-averaged F-measure) as well as scores for the most important coarse-grained relations. We provide results for the full graph (grA), a version with all features except information status (grB), and finally a minimal version that excludes all semantic features and lemmas (grC). In general, both the linguistic features and the graph features perform much better than the shallow features (with the best single source of information being the complete graph), and also that a combination of linguistic and all shallow features (all-gr) suffers from",
"cite_spans": [],
"ref_spans": [
{
"start": 645,
"end": 652,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "5"
},
{
"text": "In the second section of the table, the influence of different information sources is detailed. We see that, despite the skewed distribution of relations, all information sources outperform the most-frequentsense baseline by themselves. By providing a higher precision on Expansion relations, and generally better performance on Reporting relations, the graph-based representation performs better than any of the other information sources, and is the only information source to provide enough information for the identification of Comparison relations. The third group of rows, showing combinations of the linguistic features with the shallow information sources and with the graph representation, shows that, while the addition of specialized features to the shallow ones yields a general improvement, the graph-based representation still works best; for Temporal relations, we see that the noise brought in by the shallow features hinders their identification more than in the case of the graph-based representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit relations",
"sec_num": null
},
{
"text": "The last part of table 4 provides evaluation results for a system using the complete set of information sources (all), for systems leaving out one of the shallow information sources (all-bi, all-wp, all-pr), and a system using only linguistic and shallow features but no graph information (all-gr). We see that, in general, the identification of rare relations such as Temporal, Comparison, and Reporting is helped by the graph representation (the full system obtains the best MAFS scores of 0.438 and 0.208, for coarse-and fine-grained relations, respectively, against 0.388 and 0.145 for the system without graph information). System variants with graph information also obtain higher coarse-grained dice scores (0.564-0.571) than the version without graph information (0.551 for all-gr). In the same vein, we see that the parsimonious grC graph gives the best combination result (allC-pr, including linguistic, word pair, unigram/bigram, and graph features) despite the more informative grA giving the best results in isolation. Table 4 : Implicit discourse relations: specialized linguistic features (ling), word/lemma/pos bigrams (bi), word pairs (wp), CFG productions (pr), and different methods for constructing graphs (grA, grB and grC). Shaded rows indicate variants using the graph representation.",
"cite_spans": [],
"ref_spans": [
{
"start": 1032,
"end": 1039,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Implicit relations",
"sec_num": null
},
{
"text": "In this article, we presented a novel way to identify discourse relations using feature-node graphs to represent rich linguistic information. We evaluated our approach on two datasets: one dataset containing implicit discourse relations and one containing explicit discourse relations with the ambiguous temporal connective nachdem. We showed in both cases that using the graph-based representation, with appropriate heuristics for supervised feature selection, yields an improvement even over a strong state-of-the-art system using linguistic and shallow features. Besides applying the techniques on other corpora, issues for future work would include the use of unlabeled data to improve the generalization capability of the classifier, or the use of reranking techniques to combine local decisions into a global labeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "T\u00fcBa-D/Z corpus, sentence 7462 2 T\u00fcBa-D/Z corpus, sentence 448",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For reasons of efficiency as well as learnability, the structures we use to represent each discourse unit are simpler and more compact than the annotated corpus data from which they are derived.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Sporleder and Lascarides (2008) use a Boosting classifier (BoosTexter) that can extract and use arbitrary-length subsequences from its training data. As our dataset is small enough that we do not expect a significant contribution from longer sequences, we approximate the sequence boosting by extracting unigrams and bigrams. As with the other shallow features, unigrams and bigrams are subject to the same supervised feature selection that is also applied to subgraph features.5 For an illustration of the differences in size, consider that the Penn Discourse Treebank contains about 20 000 implicit discourse relations in 2159 articles, and the RST Discourse Treebank contains a lower number of 385 documents; Sporleder and Lascarides used a sample of 1 051 annotated implicit relations which were derived from the RST Discourse Treebank but manually relabeled according to an SDRT-like annotation scheme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The nachdem relations are predicted without sentiment feature, but with the earlier system's punctuation and compatible pronouns features. The shallow features ofVersley (2011) include word pairs and context-free rules, with unsupervised feature selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgements The author is grateful to the Deutsche Forschungsgemeinschaft (DFG) for funding as part of SFB 833, and to Corina Dima, Erhard Hinrichs, Emily Jamison and Verena Henrich, as well as the three anonymous reviewers, for suggestions and constructive comments on earlier versions of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Sentiment classification using automatically extracted subgraph features",
"authors": [
{
"first": "S",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Mayfield",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Penstein-Ros\u00e9",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Nyberg",
"suffix": ""
}
],
"year": 2010,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arora, S., E. Mayfield, C. Penstein-Ros\u00e9, and E. Nyberg (2010). Sentiment classification using automat- ically extracted subgraph features. In NAACL 2010.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Building a discourse-tagged corpus in the framework of rhetorical structure theory",
"authors": [
{
"first": "L",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "M",
"middle": [
"E"
],
"last": "Okurowski",
"suffix": ""
}
],
"year": 2003,
"venue": "Current Directions in Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlson, L., D. Marcu, and M. E. Okurowski (2003). Building a discourse-tagged corpus in the frame- work of rhetorical structure theory. In Current Directions in Discourse and Dialogue. Kluwer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Text-level discourse parsing with rich linguistic features",
"authors": [
{
"first": "V",
"middle": [
"W"
],
"last": "Feng",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng, V. W. and G. Hirst (2012). Text-level discourse parsing with rich linguistic features. In ACL 2012.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Annotation of implicit discourse relations in the T\u00fcBa-D/Z treebank",
"authors": [
{
"first": "A",
"middle": [],
"last": "Gastel",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Schulze",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Versley",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hinrichs",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gastel, A., S. Schulze, Y. Versley, and E. Hinrichs (2011). Annotation of implicit discourse relations in the T\u00fcBa-D/Z treebank. In GSCL 2011.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "GernEdiT -the GermaNet editing tool",
"authors": [
{
"first": "V",
"middle": [],
"last": "Henrich",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hinrichs",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC 2010)",
"volume": "",
"issue": "",
"pages": "2228--2235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henrich, V. and E. Hinrichs (2010). GernEdiT -the GermaNet editing tool. In Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC 2010), pp. 2228-2235.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Der Begriff \"Mittelfeld",
"authors": [
{
"first": "T",
"middle": [],
"last": "H\u00f6hle",
"suffix": ""
}
],
"year": 1985,
"venue": "Akten des Siebten Internationalen Germanistenkongresses",
"volume": "",
"issue": "",
"pages": "329--340",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H\u00f6hle, T. (1986). Der Begriff \"Mittelfeld\", Anmerkungen\u00fcber die Theorie der topologischen Felder. In Akten des Siebten Internationalen Germanistenkongresses 1985, pp. 329-340.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A support vector method for multivariate performance measures",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, T. (2005). A support vector method for multivariate performance measures. In Proceedings of the International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Robust compositional polarity classification",
"authors": [
{
"first": "M",
"middle": [],
"last": "Klenner",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Petrakis",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Fahrni",
"suffix": ""
}
],
"year": 2009,
"venue": "Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klenner, M., S. Petrakis, and A. Fahrni (2009). Robust compositional polarity classification. In Recent Advances in Natural Language Processing (RANLP 2009).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An application of boosting to graph classification",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Maeda",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kudo, T., E. Maeda, and Y. Matsumoto (2004). An application of boosting to graph classification. In NIPS 2004.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Recognizing implicit discourse relations in the Penn Discourse Treebank",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "M.-Y",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, Z., M.-Y. Kan, and H. T. Ng (2009). Recognizing implicit discourse relations in the Penn Discourse Treebank. In EMNLP 2009.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The rhetorical parsing of unrestricted texts: A surface-based approach",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcu, D. (2000). The rhetorical parsing of unrestricted texts: A surface-based approach. Computational Linguistics 26, 3.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Experiments on sense annotations and sense disambiguation of discourse connectives",
"authors": [
{
"first": "E",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2005,
"venue": "TLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miltsakaki, E., N. Dinesh, R. Prasad, A. Joshi, and B. Webber (2005). Experiments on sense annotations and sense disambiguation of discourse connectives. In TLT 2005.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Linguistic kernels for answer re-ranking in question answering systems",
"authors": [
{
"first": "A",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Quarteroni",
"suffix": ""
}
],
"year": 2011,
"venue": "Information Processing and Management",
"volume": "47",
"issue": "",
"pages": "825--842",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moschitti, A. and S. Quarteroni (2011). Linguistic kernels for answer re-ranking in question answering systems. Information Processing and Management 47, 825-842.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improving implicit discourse relation recognition through feature set optimization",
"authors": [
{
"first": "J",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2012,
"venue": "SIGDIAL 2012",
"volume": "",
"issue": "",
"pages": "108--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Park, J. and C. Cardie (2012). Improving implicit discourse relation recognition through feature set optimization. In SIGDIAL 2012, pp. 108-112.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Handbuch der deutschen Konnektoren",
"authors": [
{
"first": "R",
"middle": [],
"last": "Pasch",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Brau\u00dfe",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Breindl",
"suffix": ""
},
{
"first": "U",
"middle": [
"H"
],
"last": "Wa\u00dfner",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pasch, R., U. Brau\u00dfe, E. Breindl, and U. H. Wa\u00dfner (2003). Handbuch der deutschen Konnektoren. Berlin / New York: Walter de Gruyter.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic sense prediction for implicit discourse relations in text",
"authors": [
{
"first": "E",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pitler, E., A. Louis, and A. Nenkova (2009). Automatic sense prediction for implicit discourse relations in text. In ACL-IJCNLP 2009.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Using syntax to disambiguate explicit discourse connectives in text",
"authors": [
{
"first": "E",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL 2009 short papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pitler, E. and A. Nenkova (2009). Using syntax to disambiguate explicit discourse connectives in text. In ACL 2009 short papers.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Penn Discourse Treebank 2.0",
"authors": [
{
"first": "R",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prasad, R., N. Dinesh, A. Lee, E. Miltsakaki, L. Robaldo, A. Joshi, and B. Webber (2008). The Penn Discourse Treebank 2.0. In Proceedings of LREC 2008.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "SentiWS -a publicly available German-language resource for sentiment analysis",
"authors": [
{
"first": "R",
"middle": [],
"last": "Remus",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Quasthoff",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Heyer",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Remus, R., U. Quasthoff, and G. Heyer (2010). SentiWS -a publicly available German-language resource for sentiment analysis. In Proceedings of LREC 2010.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Handbuch zur Annotation expliziter und impliziter Diskursrelationen im Korpus der T\u00fcbinger Baumbank des Deutschen (T\u00fcBa-D/Z) Teil I: Diskurskonnektoren",
"authors": [
{
"first": "S",
"middle": [],
"last": "Simon",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hinrichs",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Schulze",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Versley",
"suffix": ""
}
],
"year": 2011,
"venue": "Seminar f\u00fcr Sprachwissenschaft",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon, S., E. Hinrichs, S. Schulze, and Y. Versley (2011). Handbuch zur Annotation expliziter und impliziter Diskursrelationen im Korpus der T\u00fcbinger Baumbank des Deutschen (T\u00fcBa-D/Z) Teil I: Diskurskonnektoren. Technical report, Seminar f\u00fcr Sprachwissenschaft, Universit\u00e4t T\u00fcbingen.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Using automatically labelled examples to classify rhetorical relations: An assessment",
"authors": [
{
"first": "C",
"middle": [],
"last": "Sporleder",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2008,
"venue": "Natural Language Engineering",
"volume": "14",
"issue": "3",
"pages": "369--416",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sporleder, C. and A. Lascarides (2008). Using automatically labelled examples to classify rhetorical relations: An assessment. Natural Language Engineering 14(3), 369-416.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A constraint-based approach to noun phrase coreference resolution in German newspaper text",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Versley",
"suffix": ""
}
],
"year": 2006,
"venue": "Konferenz zur Verarbeitung Nat\u00fcrlicher Sprache",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Versley, Y. (2006). A constraint-based approach to noun phrase coreference resolution in German news- paper text. In Konferenz zur Verarbeitung Nat\u00fcrlicher Sprache (KONVENS 2006).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Multilabel tagging of discourse relations in ambiguous temporal connectives",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Versley",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Versley, Y. (2011). Multilabel tagging of discourse relations in ambiguous temporal connectives. In Proceedings of Recent Advances in Natural Language Processing (RANLP 2011).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "gSpan: Graph-based substructure pattern mining",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings fo the Second IEEE Conference on Data Mining (ICDM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan, X. and J. Han (2002). gSpan: Graph-based substructure pattern mining. In Proceedings fo the Second IEEE Conference on Data Mining (ICDM 2002).",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td/><td>: 1</td></tr><tr><td colspan=\"3\">(1) [ arg1 Another type of discourse relations are implicit discourse relations, which can occur between neighbour-</td></tr><tr><td colspan=\"2\">ing spans of text without any discourse connective signaling them: 2</td></tr><tr><td>(2)</td><td>[ arg1 Mittlerweile ist das jedoch selbstverst\u00e4ndlich]</td></tr><tr><td/><td>[ arg2 Die gemeinsame Arbeit hilft, den anderen zu verstehen.]</td></tr><tr><td/><td>[ arg1 In the meantime, this has become a matter of course] (implied:since)</td><td>(Explanation)</td></tr><tr><td/><td>[</td></tr></table>",
"text": "Nachdem sowohl das Verwaltungsgericht als auch das Oberverwaltungsgericht das Verbot best\u00e4tigt hatten,] [ arg2 rief die NPD am Freitag nachmittag das Bundesverwaltungsgericht an]. [ arg1 After both the Administrative Court and the Higher Administrative Court had confirmed the interdiction,] [ arg2 the NPD appealed to the Federal Administrative Court.] (Temporal+cause)",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF1": {
"content": "<table/>",
"text": ") and correspondingly adding edges to get the complete set",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"text": "",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"text": "Frequencies of discourse relations in the nachdem data from Simon et al.",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF8": {
"content": "<table/>",
"text": "Results for disambiguation of nachdem. Rows include the specialized linguistic features of Versley",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}