{
"paper_id": "Y09-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:42:30.679085Z"
},
"title": "Using Tree Kernels for Classifying Temporal Relations between Events * * * *",
"authors": [
{
"first": "Seyed",
"middle": [
"Abolghasem"
],
"last": "Mirroshandel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sharif University of Technology Azadi Ave",
"location": {
"postCode": "11155-9517",
"settlement": "Tehran",
"country": "Iran"
}
},
"email": "mirroshandel@ce.sharif.edu"
},
{
"first": "Gholamreza",
"middle": [],
"last": "Ghassem-Sani",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sharif University of Technology Azadi Ave",
"location": {
"postCode": "11155-9517",
"settlement": "Tehran",
"country": "Iran"
}
},
"email": ""
},
{
"first": "Mahdy",
"middle": [],
"last": "Khayyamian",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sharif University of Technology Azadi Ave",
"location": {
"postCode": "11155-9517",
"settlement": "Tehran",
"country": "Iran"
}
},
"email": "khayyamian@ce.sharif.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The ability to accurately classify temporal relations between events is an important task in a large number of natural language processing and text mining applications such as question answering, summarization, and language specific information retrieval. In this paper, we propose an improved way of classifying temporal relations, using support vector machines (SVM). Along with gold-standard corpus features, the proposed method aims at exploiting useful syntactic features, which are automatically generated, to improve accuracy of the SVM classification method. Accordingly, a number of novel kernel functions are introduced and evaluated for temporal relation classification. Our evaluations clearly demonstrate that adding syntactic features results in a considerable performance improvement over the state of the art method, which merely employs gold-standard features.",
"pdf_parse": {
"paper_id": "Y09-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "The ability to accurately classify temporal relations between events is an important task in a large number of natural language processing and text mining applications such as question answering, summarization, and language specific information retrieval. In this paper, we propose an improved way of classifying temporal relations, using support vector machines (SVM). Along with gold-standard corpus features, the proposed method aims at exploiting useful syntactic features, which are automatically generated, to improve accuracy of the SVM classification method. Accordingly, a number of novel kernel functions are introduced and evaluated for temporal relation classification. Our evaluations clearly demonstrate that adding syntactic features results in a considerable performance improvement over the state of the art method, which merely employs gold-standard features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, many progresses have been made in natural language processing (NLP). Combining statistical and symbolic methods plays a significant role in these advances. Tasks such as part-of-speech tagging, morphological analysis, parsing, and named entity recognition have been addressed with satisfactory results (Mani et al., 2006) . Problems such as temporal information processing that requires a deep semantic analysis are yet to be addressed.",
"cite_spans": [
{
"start": 319,
"end": 338,
"text": "(Mani et al., 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lately, the increasing attention in practical NLP applications such as question answering, information extraction, and summarization have resulted in an increasing demand for temporal information processing (Tatu and Srikanth, 2008) . In question answering, one would expect the system to answer questions such as \"when an event occurred\", or \"what is the chronological order between some desired events\". In text summarization, especially in multi-document type, knowing the order of events is a useful source for merging related information correctly. It is also the case that in some information extraction applications, the temporal information between events can be very useful and effective (Alonso, 2009) .",
"cite_spans": [
{
"start": 207,
"end": 232,
"text": "(Tatu and Srikanth, 2008)",
"ref_id": "BIBREF20"
},
{
"start": 697,
"end": 711,
"text": "(Alonso, 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Temporal information is usually encoded in the textual description of some events. For a given ordered pair of components ( ) 2 1 , x x , where 1 x and 2 x are times or events, a temporal information processing system tries to identify the type of relation i r that temporally links 1 x to 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "x . The type of relation i r can be one of the 13 types proposed by James Allen (Allen, 1984) . For example, in the sentence \"Ocean Drilling said (e22) it will offer (e23) 15% to 20% of the contract-drilling business through an initial public offering (e25) in the near future (t67). (wsj_313), there are some relations between pairs (e23, e25), and (e25, t67). The task is to automatically tag these pairs with relations INCLUDES and BEFORE, respectively.",
"cite_spans": [
{
"start": 80,
"end": 93,
"text": "(Allen, 1984)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With recent construction of the Timebank corpus (Pustejovsky et al, 2003) , the efficiency of different machine learning methods can now be compared. The recent work with Timebank has disclosed that six-class classification of temporal relations is a very complicated task, even for human annotators. In this paper, we propose an improved way of classifying temporal relations, using a machine learning approach. Support vector classification using effective kernel functions are specifically applied to two types of features: corpus gold-standard event features and underlying syntactic features of the contextual sentence. To capture either type of features, we apply an event-kernel to the gold-standard event features, and a convolution tree-kernel to syntactic features. The event kernel has been implemented according to (Mani et al., 2006) and some novel tree kernels have been employed as our syntactic tree kernel. Experimental results on Timebank validate the proposed method by showing 6% improvement over the state of the art method that merely uses gold-standard features.",
"cite_spans": [
{
"start": 48,
"end": 73,
"text": "(Pustejovsky et al, 2003)",
"ref_id": "BIBREF18"
},
{
"start": 827,
"end": 846,
"text": "(Mani et al., 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is organized as follows: section 2 is about previous approaches to temporal relation classification. Section 3 explains our proposed method. Section 4 briefly presents characteristic of the corpus that we have used. Section 5 demonstrates evaluation of the proposed algorithm. Finally, paper is concluded in section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are numerous ongoing researches focused on temporal relation classification. These efforts can be divided into three categories: 1) Pattern based; 2) Rule based, and 3) Anchor based. These categories are discussed in the next three sub-sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Works",
"sec_num": "2"
},
{
"text": "This group of methods tries to extract some generic lexico-syntactic patterns for events cooccurrence. Extracting these patterns can be done manually or automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Based Methods",
"sec_num": "2.1"
},
{
"text": "Perhaps the simplest pattern based method is the one that was developed using a knowledge resource called VerbOcean (Chklovski and Pantel, 2005) . VerbOcean has a small number of manually selected generic patterns. The style of patterns is in the form of <Verb-X> and then <Verb-Y>. After manually creating these patterns, this method can obtain some of existing semantic relations between events. Similar to other manual methods, a major drawback of this method is its tendency to have a high recall but a low precision. One way to overcome this weakness is to create more specific patterns; however it is clear that this would be very hard and time consuming.",
"cite_spans": [
{
"start": 116,
"end": 144,
"text": "(Chklovski and Pantel, 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Extraction of Patterns",
"sec_num": null
},
{
"text": "Another way of resolving the low precision problem is using an additional component for pruning extracted relations. Many researches have tried to address this issue by a variety of approaches. In some studies, several heuristics have been employed to resolve the low precision problem (Chklovski and Pantel, 2005; Torisawa, 2006) . Another solution is incorporating a classifier that is trained on a related corpus (Inui et al., 2003) and is used to refine the results.",
"cite_spans": [
{
"start": 286,
"end": 314,
"text": "(Chklovski and Pantel, 2005;",
"ref_id": "BIBREF6"
},
{
"start": 315,
"end": 330,
"text": "Torisawa, 2006)",
"ref_id": "BIBREF21"
},
{
"start": 416,
"end": 435,
"text": "(Inui et al., 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Extraction of Patterns",
"sec_num": null
},
{
"text": "These methods use machine learning techniques for pattern extraction. They try to learn a classifier from an annotated corpus, and attempt to improve classification accuracy by feature engineering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Extraction of Patterns",
"sec_num": null
},
{
"text": "MaxEnt classifier is a good example of this group (Mani et al., 2006) . MaxEnt assigns one of six relations to each pair of events from an augmented Timebank corpus. This classifier uses perfect features, which have been hand-tagged in the corpus, including tense, aspect, modality, polarity, and event class. In addition to these features, it relies on two additional features including pairwise agreement of tense and aspect. In this paper, we propose a new technique to improve this particular method.",
"cite_spans": [
{
"start": 50,
"end": 69,
"text": "(Mani et al., 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Extraction of Patterns",
"sec_num": null
},
{
"text": "There is another approach in this group that trains an event classifier for intra-sentential events, and builds a corpus that contains sentences with at least two events, one of which is triggered by a key time word (e.g., after, before, etc.). The classifier is based on syntax and clausal ordering features (Lapata and Lascarides, 2006) .",
"cite_spans": [
{
"start": 309,
"end": 338,
"text": "(Lapata and Lascarides, 2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Extraction of Patterns",
"sec_num": null
},
{
"text": "The state of the art in this group is very similar to the MaxEnt classifier. It relies on features extracted automatically from some raw text, and works 3% better than MaxEnt. This classifier tries to learn event attributes and event-event features in two consecutive stages. Event attributes are the same as that of MaxEnt, but event-event features are new and include part of speeches, event-event syntactic properties, prepositional phrase, and temporal discourses (Chambers et al, 2007) . This method also uses some extra resources like WordNet to find words' synsets.",
"cite_spans": [
{
"start": 468,
"end": 490,
"text": "(Chambers et al, 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Extraction of Patterns",
"sec_num": null
},
{
"text": "There are also other methods that have used some machine learning techniques for acquisition of semantic relations between events (Abe et al., 2008) . Such techniques can be applied to temporal relation classification as well. In addition to these methods, there is an SVM-based method which has been shown satisfactory results in event-time relation classification (Mirroshandel et al., 2009) .",
"cite_spans": [
{
"start": 130,
"end": 148,
"text": "(Abe et al., 2008)",
"ref_id": "BIBREF0"
},
{
"start": 366,
"end": 393,
"text": "(Mirroshandel et al., 2009)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Extraction of Patterns",
"sec_num": null
},
{
"text": "The common idea behind rule based methods is to find some rules for classifying temporal relations. In most existing works, these rules are determined manually and are based on Allen's interval algebra (Allen, 1984) .",
"cite_spans": [
{
"start": 202,
"end": 215,
"text": "(Allen, 1984)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Based Methods",
"sec_num": "2.2"
},
{
"text": "In a study, rules of temporal transitivity were applied to increase the training set by a factor of 10. Next, the MaxEnt classifier was trained on this enlarged corpus. The test accuracy on this enlarged corpus was very encouraging. There was nearly 32% progress in accuracy (Mani et al., 2006) .",
"cite_spans": [
{
"start": 275,
"end": 294,
"text": "(Mani et al., 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Based Methods",
"sec_num": "2.2"
},
{
"text": "Reasoning with determined rules is another usage of rules. In (Tatu and Srikanth, 2008) , a rich set of rules (axioms) was created. Then by using a first order logic based theorem prover, they tried to find a proof of each temporal relation by refutation.",
"cite_spans": [
{
"start": 62,
"end": 87,
"text": "(Tatu and Srikanth, 2008)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Based Methods",
"sec_num": "2.2"
},
{
"text": "Anchor based methods use information of argument fillers (i.e., anchors) of every event expression as a valuable clue for recognizing temporal relations between events. They are based on the distributional hypothesis (Harris, 1968) and by looking at a set of event expressions whose argument fillers have a similar distribution, they try to recognize synonymous event expressions. Algorithms such as DIRT (Lin and Pantel, 2001) , TE/ASE (Szpektor et al., 2004) , and that of Pekar's system (Pekar, 2006) are some examples of anchor based methods.",
"cite_spans": [
{
"start": 217,
"end": 231,
"text": "(Harris, 1968)",
"ref_id": "BIBREF9"
},
{
"start": 405,
"end": 427,
"text": "(Lin and Pantel, 2001)",
"ref_id": "BIBREF13"
},
{
"start": 437,
"end": 460,
"text": "(Szpektor et al., 2004)",
"ref_id": "BIBREF19"
},
{
"start": 490,
"end": 503,
"text": "(Pekar, 2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Based Methods",
"sec_num": "2.3"
},
{
"text": "It has been shown that one can gain more accuracy by combining some of these three different methods. For example, pattern and rule based methods were merged (Mani et al., 2006) , and the new system showed to be more efficient than each of the base methods. In the other study, pattern and anchor based methods were combined (Chklovski and Pantel, 2005; Abe et al., 2008) . However, there has been an exception: merging pattern and anchor based methods did not gain any improvement (Torisawa, 2006) .",
"cite_spans": [
{
"start": 158,
"end": 177,
"text": "(Mani et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 325,
"end": 353,
"text": "(Chklovski and Pantel, 2005;",
"ref_id": "BIBREF6"
},
{
"start": 354,
"end": 371,
"text": "Abe et al., 2008)",
"ref_id": "BIBREF0"
},
{
"start": 482,
"end": 498,
"text": "(Torisawa, 2006)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Based Methods",
"sec_num": "2.3"
},
{
"text": "Syntactic features have been shown to be a great source of information in various NLP and text mining applications such as relation extraction, semantic role labeling, and co-reference resolution. Current works in temporal relation classification have not sufficiently utilized such features. Here, we aim at taking advantage of syntactic features. Because of promising results of Support Vector Machines (SVM) (Boser et al., 1992; Cortes and Vapnik 1995) in related works, it has been chosen as our classification algorithm. To incorporate syntactic features into SVM, convolution tree kernels are applied. More specifically, these tree kernels have been combined with a simple event kernel. In the next sub-section, the simple event kernel is briefly discussed. Then the convolution tree kernels are described, followed by the explanation of ways of combining these kernels.",
"cite_spans": [
{
"start": 411,
"end": 431,
"text": "(Boser et al., 1992;",
"ref_id": "BIBREF3"
},
{
"start": 432,
"end": 455,
"text": "Cortes and Vapnik 1995)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree Kernel Based Temporal Relation Classification",
"sec_num": "3"
},
{
"text": "This is a linear kernel that exclusively uses gold-standard features of events. For each event, there are five temporal attributes which have been tagged in Timebank: 1) tense; 2) grammatical aspect; 3) modality; 4) polarity, and 5) event class. Tense and grammatical aspect define temporal location and event structure; thus, they are necessary in any method of temporal relation classification. Modality and polarity specify non-occurring (or hypothetical) situations. The event class shows the type of event. The range of values for these attributes is based on (Pustejovsky et al, 2003) .",
"cite_spans": [
{
"start": 565,
"end": 590,
"text": "(Pustejovsky et al, 2003)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simple Event Kernel",
"sec_num": "3.1"
},
{
"text": "In addition to these five attributes, it uses part of speech tags of event as an extra feature. This kernel can be defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simple Event Kernel",
"sec_num": "3.1"
},
{
"text": "( ) ( ) \u2211 = = 2 , 1 2 1 2 1 . , . , i i i E TR E TR E TR K TR TR K (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simple Event Kernel",
"sec_num": "3.1"
},
{
"text": "where 1 TR and 2 TR stand for two temporal relation instances, i E is the th i event of a temporal relation instance, and E K is a simple kernel function over the features of event instances:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simple Event Kernel",
"sec_num": "3.1"
},
{
"text": "( ) ( ) \u2211 = i i i E f E f E C E E K . , . , 2 1 2 1 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simple Event Kernel",
"sec_num": "3.1"
},
{
"text": "where i f means the th i event feature; function C returns 1 if the two feature values are identical, and returns 0 otherwise. In essence, E K returns the number of common feature values of two event instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simple Event Kernel",
"sec_num": "3.1"
},
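The event kernel of equations (1) and (2) above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the dict-based event representation and the attribute names are assumptions standing in for the corpus schema.

```python
# Minimal sketch of the simple event kernel (eqs. 1-2).
# Events are modeled as dicts of gold-standard attributes; the attribute
# names below are illustrative, not the exact Timebank schema.

EVENT_FEATURES = ["tense", "aspect", "modality", "polarity", "class", "pos"]

def k_event(e1, e2):
    """K_E (eq. 2): the number of feature values shared by two events."""
    return sum(1 for f in EVENT_FEATURES if e1[f] == e2[f])

def k_relation(tr1, tr2):
    """K (eq. 1): sum of K_E over the two event slots of each relation.

    A temporal relation instance is modeled as a pair (E_1, E_2) of events.
    """
    return sum(k_event(tr1[i], tr2[i]) for i in (0, 1))
```

Because K_E simply counts attribute matches, it is the dot product of one-hot encodings of the attributes, and therefore a valid (positive semi-definite) SVM kernel.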
{
"text": "In (Khayyamian et al., 2009) , a generalized version of convolution tree kernel (Collins and Duffy, 2001 ) was proposed by associating generic weights to the nodes and sub-trees of the parse tree. In this paper, some customized versions of this kernel are used to capture syntactic features.",
"cite_spans": [
{
"start": 3,
"end": 28,
"text": "(Khayyamian et al., 2009)",
"ref_id": "BIBREF11"
},
{
"start": 80,
"end": 104,
"text": "(Collins and Duffy, 2001",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree kernels",
"sec_num": "3.2"
},
{
"text": "A generalized convolution tree kernel was proposed in (Khayyamian et al., 2009) . In order to explain the kernel, first a feature vector over the parse tree is defined in equation 3. In this vector, the th i feature equals to the sum of weighted number of occurrences of sub-tree type th i in the parse tree. is the sub-tree instance of type th i which is rooted in node n. As it is shown in equation 4, function tw(T) (which denotes \"tree weight\") assigns a weight to tree T, which is the product of all its node weights. in(T) and en(T) are respectively sets of internal and external nodes of T. ",
"cite_spans": [
{
"start": 54,
"end": 79,
"text": "(Khayyamian et al., 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Convolution Tree Kernel",
"sec_num": null
},
{
"text": ") ))] ( ( ) ( [ ,..., ))] ( ( ) ( [ ))],..., ( ( ) ( [ ( ) (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Convolution Tree Kernel",
"sec_num": null
},
{
"text": "( ) ( ) ( ) ( ) ( ) \u220f \u220f \u2208 \u2208 \u00d7 = T en n T in n n enw n inw T tw (4) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) \u2211 \u2211 \u2211 \u2211 \u2211 \u2211 \u2211 \u2211 \u2211 \u2208 \u2208 \u2208 \u2208 \u2208 \u2208 = \u00d7 \u00d7 \u00d7 = \uf8f7 \uf8f7 \uf8f8 \uf8f6 \uf8ec \uf8ec \uf8ed \uf8eb \u00d7 \u00d7 \uf8f7 \uf8f7 \uf8f8 \uf8f6 \uf8ec \uf8ec \uf8ed \uf8eb \u00d7 = = 1 1 2 2 1 1 2 2 2 2 1 1 2 1 2 1 2 1 2 2 1 1 2 1 2 1 , ))] ( ( )) ( ( ) ( ) ( [ , , T n T n i i T n T n i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Convolution Tree Kernel",
"sec_num": null
},
{
"text": "T T K i i i i (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Convolution Tree Kernel",
"sec_num": null
},
{
"text": "Because each node of the entire parse tree can either occur as an internal or as an external node of a specific sub-tree (provided that it exists in the sub-tree), two weighted types are respectively associated with the nodes by inw(n) and enw(n) functions (these stand for \"internal node weight\" and \"external node weight\"). For example, in Figure 1 , while the node with label PP is an external node of sub-trees (1) and 7, it is regarded as an internal node of sub-trees 3and (4).",
"cite_spans": [],
"ref_spans": [
{
"start": 342,
"end": 350,
"text": "Figure 1",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Generalized Convolution Tree Kernel",
"sec_num": null
},
{
"text": "As shown in equation 5, a method similar to that of (Collins and Duffy, 2001 ) can be employed to devise a kernel function for the calculation of dot products of H(T) vectors. According to equation 5 ",
"cite_spans": [
{
"start": 52,
"end": 76,
"text": "(Collins and Duffy, 2001",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Convolution Tree Kernel",
"sec_num": null
},
{
"text": "In (Khayyamian et al., 2009) , four sub-kernels of the generalized convolution tree kernel were proposed. It seems that these kernels can be applied to temporal relation classification. Using weighting functions of the generalized kernel, the customized kernels differentiate among subtrees based on how their nodes interact with the event arguments.",
"cite_spans": [
{
"start": 3,
"end": 28,
"text": "(Khayyamian et al., 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Customization for Temporal Relation Classification",
"sec_num": null
},
{
"text": "Since the whole syntactic parse tree of the sentence that holds the event arguments contains plenty of misleading features, as in (Zhang et al., 2006) , Path-enclosed Tree (PT) is chosen as our tree portion for applying tree kernels. PT is a portion of parse tree that is enclosed by the shortest path between two event arguments.",
"cite_spans": [
{
"start": 130,
"end": 150,
"text": "(Zhang et al., 2006)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Customization for Temporal Relation Classification",
"sec_num": null
},
{
"text": "By setting \u03bb \u03b1 = = ) (n inw and enw(n)=1 for all nodes, the generalized kernel can be converted to the kernel proposed in (Collins and Duffy, 2001) . In their paper, parameter",
"cite_spans": [
{
"start": 122,
"end": 147,
"text": "(Collins and Duffy, 2001)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 The Original Collins and Duffy Kernel",
"sec_num": null
},
{
"text": "1 0 \u2264 < \u03bb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 The Original Collins and Duffy Kernel",
"sec_num": null
},
{
"text": "is a decaying parameter used to retain the kernel values within a fairly small range. Without this parameter, the value of the kernel for identical trees becomes much larger than its value for different trees, which slows down SVM convergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 The Original Collins and Duffy Kernel",
"sec_num": null
},
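For concreteness, the Collins and Duffy special case just described (inw(n) = \u03bb, enw(n) = 1) admits the classic recursive computation over node pairs. The sketch below assumes a simple tuple representation of parse trees and only illustrates the decayed common-sub-tree counting; it is not the code used in the paper.

```python
# Sketch of the Collins-Duffy (2001) convolution tree kernel, the special
# case of the generalized kernel with inw(n) = lam and enw(n) = 1.
# A tree node is a tuple: (label, child_1, ..., child_k); a leaf is (word,).

def production(node):
    # A node's production: its label plus the sequence of child labels.
    label, *children = node
    return (label, tuple(c[0] for c in children))

def c_delta(n1, n2, lam):
    """Decayed count of common sub-trees rooted at n1 and n2."""
    if production(n1) != production(n2):
        return 0.0
    ch1, ch2 = n1[1:], n2[1:]
    if all(len(c) == 1 for c in ch1):
        return lam            # pre-terminal with the same word production
    score = lam
    for a, b in zip(ch1, ch2):
        score *= 1.0 + c_delta(a, b, lam)
    return score

def tree_kernel(t1, t2, lam=0.4):
    """K(T1, T2): sum of c_delta over all internal node pairs (eq. 5)."""
    def nodes(t):
        if len(t) > 1:        # skip bare word leaves
            yield t
            for c in t[1:]:
                yield from nodes(c)
    return sum(c_delta(a, b, lam) for a in nodes(t1) for b in nodes(t2))
```

With lam = 1 the kernel counts common sub-trees exactly; smaller values keep the kernel values for identical trees in a small range, as discussed above.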
{
"text": "Definition of weighting functions is as follows. Parameter",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Argument Ancestor Path Kernel (AAP)",
"sec_num": null
},
{
"text": "1 0 \u2264 < \u03b1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Argument Ancestor Path Kernel (AAP)",
"sec_num": null
},
{
"text": "is a decaying parameter analogous to \u03bb . This weighting method is equivalent to applying original Collins and Duffy kernel on a portion of the parse tree that exclusively includes the arguments ancestor nodes and their direct children. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Argument Ancestor Path Kernel (AAP)",
"sec_num": null
},
{
"text": "Function AAPDist(n, arg) computes the distance of node n from ancestor path of event argument arg on the parse tree as depicted in Figure 2 . MAXDIST is used for normalization, and is the maximum value of AAPDist in the whole tree. Using this weighting approach, the closer a node is to one of the arguments ancestor path, the less it is decayed by the weighting function.",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 139,
"text": "Figure 2",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "\u2022 Argument Ancestor Path Kernel (AAP)",
"sec_num": null
},
{
"text": "\u2022 Argument Distance Kernel (AD) Weighting functions of this kernel, which have identical definitions, are shown as follows. Their definitions are similar to the previous kernel functions, though they use a different distance function which measures the distance of a node from an event argument rather than its ancestor path (see Figure 2 ). \u2022 Threshold Sensitive Argument Ancestor Path Distance Kernel (TSAAPD) This kernel is intuitively similar to AAPD kernel; except that instead of using a smooth decaying method, it employs a threshold based technique. Weighting functions are as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 330,
"end": 338,
"text": "Figure 2",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "\u2022 Argument Ancestor Path Kernel (AAP)",
"sec_num": null
},
{
"text": "( ) ( ) ( ) ( ) \uf8f3 \uf8f2 \uf8f1 > \u2264 = = Threshold n AAPDist Threshold n AAPDist n enw n inw \u03b1 1 (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Argument Ancestor Path Kernel (AAP)",
"sec_num": null
},
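The threshold-based weighting of equation (10) is simple to state in code. A minimal sketch follows; AAPDist depends on the parse tree and is assumed to be computed elsewhere, so the function below takes the precomputed distance as its input.

```python
# Sketch of the TSAAPD node weighting (eq. 10): nodes whose distance to an
# argument's ancestor path is within the threshold keep full weight, and
# all other nodes are decayed to alpha. The default threshold and alpha
# values here are illustrative, not the tuned values from the paper.

def tsaapd_weight(aap_dist, threshold=1, alpha=0.5):
    """inw(n) = enw(n): 1 if AAPDist(n) <= Threshold, else alpha."""
    return 1.0 if aap_dist <= threshold else alpha
```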
{
"text": "In this section, two types of composition are proposed: linear composition and polynomial composition (Zhang et al., 2006) .",
"cite_spans": [
{
"start": 102,
"end": 122,
"text": "(Zhang et al., 2006)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Composite Kernels for Temporal Relation Extraction",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) ( ) ( ) 2 1 2 2 1 1 2 1 , 1 , , TR TR K TR TR K TR TR K l \u03b1 \u03b1 \u2212 + =",
"eq_num": "(11)"
}
],
"section": "Linear Composite Kernel",
"sec_num": null
},
{
"text": "where 1 K can be a normalized form of one of the mentioned convolution tree kernels. A kernel K(X, Y) can be normalized by dividing it by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Composite Kernel",
"sec_num": null
},
{
"text": "( ) ( ) Y Y K X X K , . , . 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Composite Kernel",
"sec_num": null
},
{
"text": "K is the normalized form of simple event kernel. \u03b1 is the composition coefficient. Based on five tree kernels that have been introduced, five linear composite kernels can be accordingly produced. is the polynomial expansion of 2 K with degree d (in this work, we have assumed d=2) and is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Composite Kernel",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) d P K K 2 21 + =",
"eq_num": "(13)"
}
],
"section": "Polynomial Composite Kernel",
"sec_num": null
},
{
"text": "Five different polynomial composite kernels can also be constructed in this case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Composite Kernel",
"sec_num": null
},
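The two composition schemes of section 3.3 can be sketched generically. In the sketch below, k1 and k2 are arbitrary base kernels given as plain callables (in the paper, a tree kernel and the simple event kernel, each normalized first); the polynomial form K_1 + (K_2)^d is one plausible reading of equation (13) and should be treated as an assumption.

```python
import math

# Sketch of kernel normalization and the two composite kernels of
# section 3.3. k1 and k2 stand for any two base kernels; the specific
# composition constants are illustrative defaults.

def normalized(k):
    """K'(x, y) = K(x, y) / sqrt(K(x, x) * K(y, y))."""
    def k_norm(x, y):
        denom = math.sqrt(k(x, x) * k(y, y))
        return k(x, y) / denom if denom else 0.0
    return k_norm

def linear_composite(k1, k2, alpha=0.5):
    """Eq. 11: K_l = alpha * K_1 + (1 - alpha) * K_2."""
    return lambda x, y: alpha * k1(x, y) + (1 - alpha) * k2(x, y)

def polynomial_composite(k1, k2, d=2):
    """Eq. 13 (as read here): K_p = K_1 + (K_2)^d."""
    return lambda x, y: k1(x, y) + k2(x, y) ** d
```

Sums, products, and positive integer powers of kernels are themselves valid kernels, so both compositions can be passed directly to an SVM.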
{
"text": "We used Timebank (v 1.2) with 183 newswire documents and 64077 words, and for comparison with previous works, we added 73 documents of the Opinion Corpus (Mani et al., 2006) , which has 38709 words. These two datasets have been released based on TimeML (Pustejovsky et al, 2003) . There are 14 temporal relations in TLink (Event-Event and Event-Time relations) class of TimeML. Similar to (Mani et al., 2006; Tatu and Srikanth, 2008; Mani et al., 2007) , we used a normalized version of these 14 temporal relations that contains 6 temporal relations RelTypes = {SIMULTANEOUS, IBEFORE, BEFORE, BEGINS, ENDS, INCLUDES}. For converting 14 relations to 6, the inverse relations were omitted, and SIMULTAENOUS and IDENTITY, as well as DURING and IS_INCLUDED, were collapsed.",
"cite_spans": [
{
"start": 154,
"end": 173,
"text": "(Mani et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 253,
"end": 278,
"text": "(Pustejovsky et al, 2003)",
"ref_id": "BIBREF18"
},
{
"start": 389,
"end": 408,
"text": "(Mani et al., 2006;",
"ref_id": "BIBREF15"
},
{
"start": 409,
"end": 433,
"text": "Tatu and Srikanth, 2008;",
"ref_id": "BIBREF20"
},
{
"start": 434,
"end": 452,
"text": "Mani et al., 2007)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Description",
"sec_num": "4"
},
{
"text": "In our experiments, we merged two Timebank and Opinion datasets to generate a single corpus called OTC. Table 1 shows the normalized TLink class distribution over OTC. As it is shown in table 1, relation \"BEFORE\" is the most frequent relation; thus it forms the majority class, and has been used as the baseline of experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 111,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Corpus Description",
"sec_num": "4"
},
{
"text": "We have used LIBSVM (Chang and Lin, 2001 ) java source for the SVM classification (oneversus-one multi class strategy), Stanford NLP package (available at http://nlp.stanford.edu/software/index.shtml) for tokenization, sentence segmentation, and parsing.",
"cite_spans": [
{
"start": 20,
"end": 40,
"text": "(Chang and Lin, 2001",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
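The one-versus-one strategy mentioned above trains one binary SVM per label pair, i.e. C(6,2) = 15 classifiers for the 6 normalized relations, and predicts by majority vote. This is a generic sketch of the voting scheme, not the paper's LIBSVM Java code; the `binary_classifiers` mapping is a hypothetical stand-in for the trained pairwise SVMs:

```python
from itertools import combinations
from collections import Counter

RELTYPES = ["SIMULTANEOUS", "IBEFORE", "BEFORE", "BEGINS", "ENDS", "INCLUDES"]
pairs = list(combinations(RELTYPES, 2))  # 15 label pairs

def one_vs_one_predict(x, binary_classifiers):
    """binary_classifiers maps each label pair (a, b) to a function that,
    given instance x, returns the winning label of that pair; the final
    prediction is the label collecting the most pairwise votes."""
    votes = Counter(clf(x) for clf in binary_classifiers.values())
    return votes.most_common(1)[0][0]
```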
{
"text": "Since tree kernels can be more appropriately applied to the event pairs that reside on the same sentence, the corpus data have been accordingly split into two intra-sentential and intersentential parts. The proposed kernels have been evaluated on the intra-sentential instances, while the simple event kernel has been exclusively used for the inter-sentential instances. The results reported for the whole corpus has been produced by combining those results. All the results are the outcome of a 5-fold cross validation. In order to find the appropriate value for parameters, 1000 event pairs have been randomly chosen as development set. Table 2 shows the accuracy results of employing different tree kernels. In our evaluation, baseline was the majority class (BEFORE relation) of the evaluated corpus. Mani is the state of the art method, which exclusively uses gold-standard features (Mani et al., 2006) . The other methods were described in the subsection 3.2.",
"cite_spans": [
{
"start": 888,
"end": 907,
"text": "(Mani et al., 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 639,
"end": 646,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
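The whole-corpus numbers combine the two sub-evaluations. Assuming the natural combination (this is an inference from the text, not a stated formula), the overall accuracy is the instance-weighted average of the tree-kernel accuracy on intra-sentential pairs and the simple event kernel's accuracy on inter-sentential pairs:

```python
def combined_accuracy(acc_intra, n_intra, acc_inter, n_inter):
    """Instance-weighted combination of the accuracies obtained on the
    intra-sentential and inter-sentential partitions of the corpus."""
    return (acc_intra * n_intra + acc_inter * n_inter) / (n_intra + n_inter)

# Table 1 gives the partition sizes, e.g. Timebank: 2387 intra-sentential
# pairs out of 3481 total, so 1094 inter-sentential pairs.
```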
{
"text": "The results show that using syntactic structure of sentences can be very effective. Comparing with other methods, AAPD kernel has achieved the best results. It showed 3% improvement over Mani's method on Timebank and 1% over OTC. The other tree kernels showed satisfactory results, too. As it is demonstrated in table 3, the effective exploitation of syntactic and simple event features in the linear composite kernels (subsection 3.3) resulted in a noticeable improvement of accuracy. Here, AAPD linear composite kernel was the most successful kernel, which gained over 6% improvement on Timebank, and 3% progress in accuracy on OTC. Table 4 shows the accuracy results of applying five polynomial composite kernels (subsection 3.3) to Timebank and OTC. The results of applying polynomial composite kernels reveal that these methods work better than their linear counterparts. On Timebank, AD polynomial composite kernel achieved the best result (i.e., over 6.2% improvement). On the other hand, on OTC, AAPD gained the best results with 3.45% improvement. Unfortunately there are not a lot of researches on pattern based event-event relation classification, and we have to compare our work only with Mani algorithm. Regarding the hardness of the problem, it can be said, that the improvement is considerable.",
"cite_spans": [],
"ref_spans": [
{
"start": 635,
"end": 642,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
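The linear and polynomial composite kernels compared above can be sketched abstractly. This is only an illustration of the general composition pattern: the mixing weight `alpha` and the polynomial `degree` are placeholders, and the exact formulation is the one given in subsection 3.3, not this sketch.

```python
# Illustrative sketch (assumed form): a linear composite kernel mixes the
# convolution tree kernel with the simple event kernel; the polynomial
# variant first expands the event kernel polynomially.
def linear_composite(k_tree, k_event, alpha=0.5):
    return lambda a, b: alpha * k_tree(a, b) + (1 - alpha) * k_event(a, b)

def polynomial_composite(k_tree, k_event, alpha=0.5, degree=2):
    return lambda a, b: (alpha * k_tree(a, b)
                         + (1 - alpha) * (k_event(a, b) + 1) ** degree)
```

Both constructions keep the result a valid kernel, since sums, positive scalings, and polynomial expansions with non-negative coefficients of kernels are themselves kernels.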
{
"text": "In this paper, we have addressed the problem of extracting temporal relations between events, which has been a topic of interest since early days of natural language processing. Although syntactic features seem to be potentially useful in various text classification tasks, they have not yet been effectively exploited in temporal relation classification. We have tried to take advantage of such features to enhance classification performance. Support Vector Machines (SVM) has been chosen as our classification algorithm, due to its promising results in related works. Using SVM, two types of composite kernels have been proposed by combining convolution tree kernels and a simple event kernel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The results of applying the new method, without using any extra annotated data, show a noticeable improvement over related works in the area of pattern based methods (including the state of the art method) in terms of accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "It seems that using dependency structure of sentences or creating better kernels for SVM might be even further improve the accuracy of system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "23rd Pacific Asia Conference on Language, Information and Computation, pages 355-364",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Two-phased event relation acquisition coupling the relation-oriented and argument-oriented approaches",
"authors": [
{
"first": "S",
"middle": [],
"last": "Abe",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abe, S., K. Inui, and Y. Matsumoto. 2008. Two-phased event relation acquisition coupling the relation-oriented and argument-oriented approaches. In Proceedings of the 22nd International Conference on Computational Linguistics, pp.1-8.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Towards a general theory of action and time",
"authors": [
{
"first": "J",
"middle": [
"F"
],
"last": "Allen",
"suffix": ""
}
],
"year": 1984,
"venue": "Artificial Intelligence",
"volume": "23",
"issue": "2",
"pages": "123--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allen, J. F. 1984. Towards a general theory of action and time. Artificial Intelligence, 23(2):123-154.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The value of time in unstructured data for IR",
"authors": [
{
"first": "R",
"middle": [],
"last": "Alonso",
"suffix": ""
}
],
"year": 2009,
"venue": "CIDR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alonso, R. 2009. The value of time in unstructured data for IR. In CIDR.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A training algorithm for optimal margin classifiers",
"authors": [
{
"first": "B",
"middle": [
"E"
],
"last": "Boser",
"suffix": ""
},
{
"first": "I",
"middle": [
"M"
],
"last": "Guyon",
"suffix": ""
},
{
"first": "V",
"middle": [
"N"
],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of 5 th workshop on Computational learning theory",
"volume": "",
"issue": "",
"pages": "144--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boser, B. E., I. M. Guyon, and V. N. Vapnik. 1992. A training algorithm for optimal margin classifiers. In Proceedings of 5 th workshop on Computational learning theory, pp.144-152.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Classifying temporal relations between events",
"authors": [
{
"first": "N",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL-45",
"volume": "",
"issue": "",
"pages": "173--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chambers, N., S. Wang, and D. Jurafsky. 2007. Classifying temporal relations between events. In Proceedings of ACL-45, pp.173-176.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Libsvm: a library for support vector machines",
"authors": [
{
"first": "C",
"middle": [
"C"
],
"last": "Chang",
"suffix": ""
},
{
"first": "C",
"middle": [
"J"
],
"last": "Lin",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang, C. C. and C. J. Lin. 2001. Libsvm: a library for support vector machines. Software available at http://www.csie.ntu.edu.tw/ cjlin/libsvm.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Global path-based refinement of noisy graphs applied to verb semantics",
"authors": [
{
"first": "T",
"middle": [],
"last": "Chklovski",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceeding of Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "792--803",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chklovski, T. and P. Pantel. 2005. Global path-based refinement of noisy graphs applied to verb semantics. In Proceeding of Joint Conference on Natural Language Processing, pp.792-803.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Convolution kernels for natural language",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Duffy",
"suffix": ""
}
],
"year": 2001,
"venue": "Advances in Neural Information Processing Systems",
"volume": "14",
"issue": "",
"pages": "625--632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, M. and N. Duffy. 2001. Convolution kernels for natural language. In Advances in Neural Information Processing Systems 14, pp.625-632. MIT Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Support-vector networks",
"authors": [
{
"first": "C",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine Learning",
"volume": "",
"issue": "",
"pages": "273--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cortes, C. and V. Vapnik. 1995. Support-vector networks. Machine Learning, pp.273-297.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Mathematical structure of language",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1968,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harris, Z. 1968. Mathematical structure of language. John Wiley Sons, New York.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "What kinds and amounts of causal knowledge can be acquired from text by using connective markers as clues?",
"authors": [
{
"first": "T",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of 6th International Conference on Discovery Science",
"volume": "",
"issue": "",
"pages": "180--193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inui, T., K. Inui, and Y. Matsumoto. 2003. What kinds and amounts of causal knowledge can be acquired from text by using connective markers as clues? In Proceedings of 6th International Conference on Discovery Science, pp.180-193.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Syntactic tree-based relation extraction using a generalization of Collins and Duffy convolution tree kernel",
"authors": [
{
"first": "M",
"middle": [],
"last": "Khayyamian",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Mirroshandel",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Abolhassani",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khayyamian, M., S. A. Mirroshandel, and H. Abolhassani. 2009. Syntactic tree-based relation extraction using a generalization of Collins and Duffy convolution tree kernel. In Proceedings of Human Language Technologies, pp.66-71.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning sentence-internal temporal relations",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Artificial Intelligence Research",
"volume": "27",
"issue": "",
"pages": "85--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lapata, M. and A. Lascarides. 2006. Learning sentence-internal temporal relations. Journal of Artificial Intelligence Research, 27:85-117.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dirt-discovery of inference rules from text",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 7th ACM SIGKDD Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "323--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, D. and P. Pantel. 2001. Dirt-discovery of inference rules from text. In Proceedings of the 7th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp.323-328.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Three approaches to learning tlinks in timeml",
"authors": [
{
"first": "I",
"middle": [],
"last": "Mani",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Wellner",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Verhagen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mani, I., B. Wellner, M. Verhagen, and J. Pustejovsky. 2007. Three approaches to learning tlinks in timeml. Technical Report CS-07-268. Computer Science Department, Brandeis University, Waltham, USA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Machine learning of temporal relations",
"authors": [
{
"first": "I",
"middle": [],
"last": "Mani",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Marc",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Wellner",
"suffix": ""
},
{
"first": "C",
"middle": [
"M"
],
"last": "Lee",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ACL-44",
"volume": "",
"issue": "",
"pages": "753--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mani, I., V. Marc, B. Wellner, C. M. Lee, and J. Pustejovsky. 2006. Machine learning of temporal relations. In Proceedings of ACL-44, pp.753-760.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Event-Time Temporal Relation Classification Using Syntactic Tree Kernels",
"authors": [
{
"first": "S",
"middle": [
"A"
],
"last": "Mirroshandel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Khayyamian",
"suffix": ""
},
{
"first": "G",
"middle": [
"R"
],
"last": "Ghassem-Sani",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the LTC'09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirroshandel, S. A., M. Khayyamian, and G. R. Ghassem-Sani. 2009. Event-Time Temporal Relation Classification Using Syntactic Tree Kernels. In Proceedings of the LTC'09, Poznan, Poland (To appear).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Acquisition of verb entailment from text",
"authors": [
{
"first": "V",
"middle": [],
"last": "Pekar",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference",
"volume": "",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pekar, V. 2006. Acquisition of verb entailment from text. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pp.49-56.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The TIMEBANK corpus",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sauri",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Setzer",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Sundheim",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Day",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ferro",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lazo",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Corpus Linguistics",
"volume": "",
"issue": "",
"pages": "647--656",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pustejovsky, J., P. Hanks, R. Sauri, A. See, R. Gaizauskas, A. Setzer, D. Radev, B. Sundheim, D. Day, L. Ferro, and M. Lazo. 2003. The TIMEBANK corpus. In Proceedings of Corpus Linguistics 2003, pp.647-656.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Scaling web-based acquisition of entailment relations",
"authors": [
{
"first": "I",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Tanev",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Szpektor, I., H. Tanev, and I. Dagan. 2004. Scaling web-based acquisition of entailment relations. In Proceedings of EMNLP, pp.41-48.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Experiments with reasoning for temporal relations between events",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tatu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Srikanth",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "857--864",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tatu, M. and M. Srikanth. 2008. Experiments with reasoning for temporal relations between events. In Proceedings of the 22nd International Conference on Computational Linguistics, pp.857-864.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Acquiring inference rules with temporal constraints by using Japanese coordinated sentences and noun-verb co-occurrences",
"authors": [
{
"first": "K",
"middle": [],
"last": "Torisawa",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference",
"volume": "",
"issue": "",
"pages": "57--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Torisawa, K. 2006. Acquiring inference rules with temporal constraints by using Japanese coordinated sentences and noun-verb co-occurrences. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pp.57-64.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A composite kernel to extract relations between entities with both flat and structured features",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "G",
"middle": [
"D"
],
"last": "Zhou",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and ACL-44",
"volume": "",
"issue": "",
"pages": "825--832",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, M., J. Zhang, J. Su, and G. D. Zhou. 2006. A composite kernel to extract relations between entities with both flat and structured features. In Proceedings of the 21st International Conference on Computational Linguistics and ACL-44, pp.825-832.",
"links": null
}
},
"ref_entries": {
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": ", the calculation of the kernel eventually leads to the sum of function",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Samples of sub-trees used in convolution tree kernel calculation.",
"num": null
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": "A syntactic parse tree with AAPDist and ArgDist example. There is a SIMULTAENOUS temporal relation between (move, resign) event pair in this parse tree.\u2022 Argument Ancestor Path Distance Kernel (AAPD)Weighting functions are defined in the following equations. Both functions have identical definitions for this kernel.",
"num": null
},
"TABREF3": {
"text": "The normalized Event-Event TLink distribution in the Timebank and OTC.",
"content": "<table><tr><td>Relation</td><td>Timebank</td><td>OTC</td></tr><tr><td>IBEFORE</td><td>63 (1.8 %)</td><td>131 (2.13 %)</td></tr><tr><td>BEGINGS</td><td>77 (2.21 %)</td><td>160 (2.60 %)</td></tr><tr><td>ENDS</td><td>114 (3.27 %)</td><td>208 (3.38 %)</td></tr><tr><td>SIMULTANEOUS</td><td>1304 (37.46 %)</td><td>1528 (24.86 %)</td></tr><tr><td>INCLUDES</td><td>588 (16.89 %)</td><td>950 (15.45 %)</td></tr><tr><td>BEFORE</td><td>1335 (38.35 %)</td><td>3170 (51.57 %)</td></tr><tr><td>TOTAL</td><td colspan=\"2\">3481 (2387 intra-sentential) 6147 (4377 intra-sentential)</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"text": "The accuracy of tree kernels on Timebank and OTC.",
"content": "<table><tr><td>Method</td><td>Timebank Corpus</td><td>OTC Corpus</td></tr><tr><td>Baseline</td><td>38.35</td><td>51.57</td></tr><tr><td>Mani</td><td>50.97</td><td>62.5</td></tr><tr><td>CollinsDuffy</td><td>51.71</td><td>62.04</td></tr><tr><td>AAP</td><td>53.41</td><td>62.52</td></tr><tr><td>AAPD</td><td>54</td><td>63.44</td></tr><tr><td>AD</td><td>53.3</td><td>62.38</td></tr><tr><td>TSAAPD</td><td>53</td><td>62.53</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF5": {
"text": "The accuracy of linear composite kernels on Timebank and OTC.",
"content": "<table><tr><td>Method</td><td>Timebank Corpus</td><td>OTC Corpus</td></tr><tr><td>CollinsDuffy Linear</td><td>56.67</td><td>65.27</td></tr><tr><td>AAP Linear</td><td>56.12</td><td>64.88</td></tr><tr><td>AAPD Linear</td><td>56.73</td><td>65.62</td></tr><tr><td>AD Linear</td><td>56.6</td><td>65.34</td></tr><tr><td>TSAAPD Linear</td><td>56.4</td><td>65.24</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF6": {
"text": "The accuracy of polynomial composite kernels on Timebank and OTC.",
"content": "<table><tr><td>Method</td><td colspan=\"2\">Timebank Corpus OTC Corpus</td></tr><tr><td>CollinsDuffy Polynomial</td><td>56.81</td><td>65.67</td></tr><tr><td>AAP Polynomial</td><td>56.94</td><td>65.76</td></tr><tr><td>AAPD Polynomial</td><td>57.02</td><td>65.95</td></tr><tr><td>AD Polynomial</td><td>57.25</td><td>65.92</td></tr><tr><td>TSAAPD Polynomial</td><td>56.43</td><td>65.32</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
}
}
}
}