| { |
| "paper_id": "D15-1020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:27:26.760267Z" |
| }, |
| "title": "Cross-document Event Coreference Resolution based on Cross-media Features", |
| "authors": [ |
| { |
| "first": "Tongtao", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Rensselaer Polytechnic Institute", |
| "location": {} |
| }, |
| "email": "zhangt13@rpi.edu" |
| }, |
| { |
| "first": "Hongzhi", |
| "middle": [], |
| "last": "Li", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Columbia University", |
| "location": {} |
| }, |
| "email": "hongzhi.li@columbia.edu" |
| }, |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Rensselaer Polytechnic Institute", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Shih-Fu", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Columbia University", |
| "location": {} |
| }, |
| "email": "shih.fu.chang@columbia.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper we focus on a new problem of event coreference resolution across television news videos. Based on the observation that the content from multiple data modalities is complementary, we develop a novel approach to jointly encode effective features from both closed captions and video key frames. Experimental results demonstrate that visual features provide a 7.2% absolute F-score gain over state-of-the-art text-based event extraction and coreference resolution.", |
| "pdf_parse": { |
| "paper_id": "D15-1020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper we focus on a new problem of event coreference resolution across television news videos. Based on the observation that the content from multiple data modalities is complementary, we develop a novel approach to jointly encode effective features from both closed captions and video key frames. Experimental results demonstrate that visual features provide a 7.2% absolute F-score gain over state-of-the-art text-based event extraction and coreference resolution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "TV news is the medium that broadcasts events, stories and other information via television. The broadcasts are organized into programs called \"newscasts\". Typically, a newscast involves one or several anchors, who introduce stories and coordinate transitions among topics, and reporters or journalists, who present events in the field and scenes captured by cameramen. As with newspapers, the same stories are often reported by multiple newscast agents. Moreover, in order to increase the impact on the audience, the same stories and events are reported multiple times. The TV audience passively receives redundant information and often has difficulty obtaining a clear and useful digest of ongoing events. These properties create a need for automatic methods that cluster information and remove redundancy. We propose a new research problem: event coreference resolution across multiple news videos.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To tackle this problem, a good starting point is processing the Closed Captions (CC) which accompany videos in newscasts. The CC are either generated by automatic speech recognition (ASR) systems or transcribed by a human stenotype operator who inputs phonetics that are Figure 1: Similar visual contents improve detection of a coreferential event pair which has a low text-based confidence score. Closed Captions: \"It's not clear when it was killed.\"; \"Jordan just executed two ISIS prisoners, direct retaliation for the capture of the killing Jordanian pilot.\" instantly and automatically translated into text, from which events can be extracted. There exists some previous event coreference resolution work, such as (Chen and Ji, 2009b; Chen et al., 2009; Lee et al., 2012; Bejan and Harabagiu, 2010). However, this work focused only on formally written newswire articles and utilized textual features. Such approaches do not perform well on CC due to (1) errors propagated from upstream components (e.g., automatic speech/stenotype recognition and event extraction); and (2) the incompleteness of information. Unlike written news, newscasts are often limited in time by fixed TV program schedules; thus, anchors and journalists are trained to organize reports that are comprehensively informative, with complementary visual and CC descriptions, within a short time. These two sides have minimal overlapping information while being interdependent. For example, anchors and reporters introduce the background story that is not presented in the videos, and thus the events extracted from CC often lack information about participants.", |
| "cite_spans": [ |
| { |
| "start": 720, |
| "end": 740, |
| "text": "(Chen and Ji, 2009b;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 741, |
| "end": 759, |
| "text": "Chen et al., 2009;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 760, |
| "end": 777, |
| "text": "Lee et al., 2012;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 778, |
| "end": 804, |
| "text": "Bejan and Harabagiu, 2010)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 276, |
| "end": 284, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For example, as shown in Figure 1, these two Conflict.Attack event mentions are coreferential. However, in the first event mention, a mistake in the Closed Caption (\"he was killed\" \u2192 \"it was killed\") makes event extraction and text-based coreference systems unable to detect and link \"it\" to the entity \"Jordanian pilot\". Fortunately, videos often illustrate these brief descriptions with vivid visual content. Moreover, different anchors, reporters and TV channels tend to use similar or identical video content to describe the same story, even though they usually use different words and phrases. Therefore, the challenges faced by text-based coreference resolution methods can be addressed by incorporating visual similarity. In this example, the visual similarity between the corresponding video frames is high because both show the scene of the Jordanian pilot.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 25, |
| "end": 33, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Similar work such as (Kong et al., 2014), (Ramanathan et al., 2014), (Motwani and Mooney, 2012) and (Ramanathan et al., 2013) has explored methods of linking visual materials with text. However, these methods mainly focus on connecting image concepts with entities in text mentions, and some of them do not clearly distinguish entities and events in the documents, since the definition of visual concepts often requires both. Moreover, the aforementioned work mainly focuses on improving visual content recognition by introducing text features, whereas our work takes the opposite route, taking advantage of visual information to improve event coreference resolution.", |
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 40, |
| "text": "(Kong et al., 2014)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 43, |
| "end": 68, |
| "text": "(Ramanathan et al., 2014)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 71, |
| "end": 97, |
| "text": "(Motwani and Mooney, 2012)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 102, |
| "end": 127, |
| "text": "(Ramanathan et al., 2013)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we propose to jointly incorporate features from both the speech (textual) and video (visual) channels for the first time. We also build a newscast crawling system that can automatically accumulate video recordings and transcribe closed captions. With this crawler, we created a benchmark dataset that is fully annotated with cross-document coreferential events 1 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1 The dataset can be found at http://www.ee.columbia.edu/dvmm/newDownloads.htm", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Given unstructured transcribed CC, we extract entities and events and present them in structured form. We follow the terminology used in ACE (Automatic Content Extraction) (NIST, 2005):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Event Extraction", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u2022 Entity: an object or set of objects in the world, such as a person, organization or facility. \u2022 Entity mention: a word or phrase in the text that mentions an entity. \u2022 Event: a specific occurrence involving participants. \u2022 Event trigger: the word that most clearly expresses an event's occurrence. \u2022 Event argument: an entity, temporal expression or value that plays a certain role (e.g., Time-Within, Place) in an event. \u2022 Event mention: a sentence (or text span extent) that mentions an event, including a distinct trigger and the arguments involved.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Event Extraction", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Coreferential events are defined as the same specific occurrence mentioned in different sentences, documents and transcript texts. Coreferential events should happen in the same place and within the same time period, and the entities involved and their roles should be identical. From the perspective of extracted events, each specific attribute and argument of those events should match. However, mentions of the same event may appear as diverse words and phrases, and they do not always cover all arguments or attributes. To tackle these challenges, we adopt a Maximum Entropy (MaxEnt) model as in (Chen and Ji, 2009b). We consider every pair of event mentions that share the same event type as a candidate and exploit features proposed in (Chen and Ji, 2009b; Chen et al., 2009). Note that the goal in (Chen and Ji, 2009b; Chen et al., 2009) was to resolve event coreference within the same document, whereas our scenario is a cross-document/video-transcript setting, so we remove some inapplicable features. We also investigated the approaches of (Lee et al., 2012) and (Bejan and Harabagiu, 2010), but the confidence estimates from these alternative methods are not reliable. Moreover, the input to event coreference resolution consists of automatic event extraction results rather than gold-standard annotations, so noise and errors significantly impact coreference performance, especially for unsupervised approaches (Bejan and Harabagiu, 2010). Nevertheless, we still incorporate features from the aforementioned methods. Table 1 shows the features that constitute the input of the MaxEnt model.", |
| "cite_spans": [ |
| { |
| "start": 612, |
| "end": 632, |
| "text": "(Chen and Ji, 2009b)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 756, |
| "end": 776, |
| "text": "(Chen and Ji, 2009b;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 777, |
| "end": 795, |
| "text": "Chen et al., 2009)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 820, |
| "end": 840, |
| "text": "(Chen and Ji, 2009b;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 841, |
| "end": 859, |
| "text": "Chen et al., 2009)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1080, |
| "end": 1098, |
| "text": "(Lee et al., 2012)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1550, |
| "end": 1557, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Text based Event Coreference Resolution", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Visual content provides useful cues complementary to those used in text-based approaches to event coreference resolution. For example, two coreferential events typically show similar or even duplicate scenes, objects and activities in the visual channel. Coherence of such visual content has been used to group multiple video shots into the same video story, but it has not been used for event coreference resolution. Recent work in computer vision has demonstrated tremendous progress in large-scale visual content recognition. In this work, we adopt the state-of-the-art techniques of (Krizhevsky et al., 2012) and (Simonyan and Zisserman, 2014) that train robust convolutional neural networks (CNNs) over millions of web images to detect 20,000 semantic categories defined in ImageNet (Deng et al., 2009) in each image. Features from the second-to-last layer of such a deep network can be considered a high-level visual representation that discriminates various semantic classes (scenes, objects, activities). They have been found effective for computing visual similarity between images, either by directly computing the L2 distance between such features or through further metric learning. To compute the similarity between the videos associated with two candidate event mentions, we sample multiple frames from each video and aggregate the similarity scores of the few most similar image pairs between the videos. Let {f^i_1, f^i_2, ..., f^i_l} be the key frames sampled from video V_i and {f^j_1, f^j_2, ..., f^j_l} be the key frames sampled from video V_j. All frames are resized to a fixed resolution of 256 x 256 and fed into our pretrained CNN model. We obtain the high-level visual representation F_m = FC7(f_m) for each frame f_m from the output of the second-to-last fully connected layer (FC7) of the CNN model. F_m is a 4096-dimensional vector. The visual distance between frames f_m and f_n is defined by the L2 distance, which is", |
| "cite_spans": [ |
| { |
| "start": 589, |
| "end": 614, |
| "text": "(Krizhevsky et al., 2012)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 619, |
| "end": 648, |
| "text": "(Simonyan and Zisserman, 2014", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 790, |
| "end": 808, |
| "text": "(Deng et al., 2009", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Visual Similarity", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "D_{mn} = ||FC7(f^i_m) \u2212 FC7(f^j_n)||_2 .", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Visual Similarity", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The distance of video pair", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Visual Similarity", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "(V_i, V_j) is computed as D_{ij} = (1/k) * \u03a3_{(f_m, f_n)} D_{mn} (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Visual Similarity", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": ", where (f_m, f_n) ranges over the top k most similar frame pairs. In our experiments, we use k = 3. This aggregation over the top matches is intended to capture similarity between videos that share only partially overlapping content. Each news video story typically starts with an introduction by an anchor, followed by news footage showing the visual scenes or activities of the event. Therefore, when computing visual similarity, it is important to exclude the anchor shots and focus on the story-related clips. Anchor frame detection is a well-studied problem. To detect anchor frames automatically, a face detector is applied to all I-frames of a video, yielding the location and size of each detected face. After checking the temporal consistency of the detected faces within each shot, we obtain a set of candidate anchor faces. The detected face regions are further extended to regions of interest that may include hair and the upper body. All candidate faces detected in the same video are clustered based on their HSV color histograms. It is reasonable to assume that the most frequent face cluster corresponds to the anchor faces. Once the anchor frames are detected, they are excluded, and only the non-anchor frames are used to compute the visual similarity between videos associated with event mentions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Visual Similarity", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Using the visual distance calculated in Section 2.3, we re-rank the confidence values produced by the text-based MaxEnt model in Section 2.2. We use the following empirical equation to adjust the confidence:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Re-ranking", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "W_{ij} = W_{ij} * (e^{\u2212D_{ij}/\u03b1} + 1),", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Joint Re-ranking", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "where W_{ij} denotes the original coreference confidence between event mentions i and j, D_{ij} denotes the visual distance between the corresponding video frames where the event mentions were spoken, and \u03b1 is a parameter that adjusts the impact of the visual distance. In the current implementation, we empirically set it to the average of the pairwise visual distances between the videos of all event coreference candidates.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Re-ranking", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "We establish a system that actively monitors over 100 major U.S. broadcast TV channels such as ABC, CNN and FOX, and has crawled newscasts from these channels for more than two years (Li et al., 2013a). With this crawler, we retrieve 100 videos and their corresponding transcribed CC on the topic of \"ISIS\" (Islamic State of Iraq and Syria). The system also temporally aligns the CC text with the transcribed text from automatic speech recognition, following previous methods. This provides accurate time alignment between the CC text and the video frames. As CC consists of capitalized letters, we apply the true-casing tool from Stanford CoreNLP (Manning et al., 2014) to the CC. Then we apply a state-of-the-art event extraction system (Li et al., 2013b) to extract event mentions from CC. We asked two human annotators to examine all event pairs and annotate coreferential pairs as the ground truth. The kappa coefficient measuring inter-annotator agreement is 74.11%. To evaluate system performance, we rank the confidence scores of all event mention pairs and present the results as a Precision vs. Detection Depth curve. Finally, we find the video frames corresponding to the event mentions, remove the anchor frames and calculate the visual similarity between the videos. Our final dataset consists of 85 videos, 207 events and 848 event pairs, of which 47 pairs are coreferential.", |
| "cite_spans": [ |
| { |
| "start": 178, |
| "end": 196, |
| "text": "(Li et al., 2013a)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 615, |
| "end": 637, |
| "text": "(Manning et al., 2014)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 702, |
| "end": 720, |
| "text": "(Li et al., 2013b;", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Setting", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We adopt the MaxEnt-based coreference resolution system from (Chen and Ji, 2009b; Chen et al., 2009) as our baseline, and use the ACE 2005 English Corpus as the training set for the model. A 5-fold cross-validation is conducted on the training set, and the average F-score is 56%. This is lower than the results of (Chen and Ji, 2009a) because we remove some features that are not available in the cross-document scenario.", |
| "cite_spans": [ |
| { |
| "start": 61, |
| "end": 81, |
| "text": "(Chen and Ji, 2009b;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 82, |
| "end": 100, |
| "text": "Chen et al., 2009)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 306, |
| "end": 325, |
| "text": "(Chen and Ji, 2009a", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Setting", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The peak F-score for the baseline system is 44.23%, while our cross-media method boosts it to 51.43%. Figure 2 shows the improvement after incorporating the visual information. We adopt the Wilcoxon signed-rank test to determine the significance of the difference between the pairs of precision scores at the same depth. The z-ratio is 3.22, which shows that the improvement is significant.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 101, |
| "end": 109, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Figure 2: Performance comparison between baseline and our cross-media method on top 150 pairs. Circles indicate the peak F-scores.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "For example, the event pair \"So why hasn't U.S. air strikes targeted Kobani within the city limits\" and \"Our strikes continue alongside our partners.\" was mistakenly considered coreferential based on text features. In fact, the former \"strikes\" refers to the airstrike while the latter refers to the war or battle; therefore, they are not coreferential. The corresponding video shots show two different scenes: the former shows bombing, while the latter shows the president giving a speech about the strikes. Thus the visual distance successfully corrected this error.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "However, from Figure 2 we can also notice that there are still some errors caused by the visual features. One major error type involves negative pairs with relatively high textual coreference confidence scores and relatively high visual similarity. On the text side, such a pair contains similar events, for example: \"The Penn(tagon) says coalition air strikes in and around the Syrian city of Kobani have kill hundreds of ISIS fighters but more are streaming in even as the air campaign intensifies.\" and \"Throughout the day, explosions from coalition air strikes sent plums of smoke towering into the sky.\". These mentions describe two airstrikes during different time periods and are not coreferential, but the baseline system ranks the pair highly. Our current approach limits the image frames to those overlapping the speech of an event mention, and in this error both videos show battle scenes, yielding a small visual distance. The aforementioned assumption that anchors and journalists tend to use similar videos when describing the same events may thus introduce errors when similar textual event mentions are accompanied by similar video shots. For such errors, one potential solution is to expand the video frame windows to capture more events and concepts from the videos. Expanding the detection range to include visual events in the temporal neighborhood can also help differentiate the events.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 14, |
| "end": 22, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "A systematic way of choosing \u03b1 in Equation 3 would be useful. One idea is to adapt the \u03b1 value to different types of events; e.g., we expect some event types to be more visually oriented than others and would thus use a smaller \u03b1 value.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "We also notice the impact of errors from the upstream event extraction system. The F-score of event trigger labeling is 65.3%, and that of event argument labeling is 45%. Missing event arguments are a main problem; thus, the performance on automatically extracted event mentions is significantly worse. About 20 more coreferential pairs could be detected if events and arguments were perfectly extracted.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "In this paper, we improved event coreference resolution on newscast speech by incorporating visual similarity. We also built a crawler that provided a benchmark dataset of videos with aligned closed captions. This system can also help create more datasets for research on video description generation. In the future, we will focus on improving event extraction from text by introducing more fine-grained cross-media information such as object, concept and event detection results from videos. Moreover, joint detection of events from both sides is our ultimate goal; however, we need to explore the mapping between events on the text and visual sides, and automatic detection of a wide range of objects and events from news video itself remains challenging.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "4" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was supported by the U.S. DARPA DEFT Program No. FA8750-13-2-0041, ARL NS-CTA No. W911NF-09-2-0053, NSF CA-REER Award IIS-1523198, AFRL DREAM project, gift awards from IBM, Google, Disney and Bosch. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Unsupervised event coreference resolution with rich linguistic features", |
| "authors": [ |
| { |
| "first": "Adrian", |
| "middle": [], |
| "last": "Cosmin", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanda", |
| "middle": [], |
| "last": "Bejan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Harabagiu", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1412--1422", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cosmin Adrian Bejan and Sanda Harabagiu. 2010. Unsupervised event coreference resolution with rich linguistic features. In Proceedings of the 48th An- nual Meeting of the Association for Computational Linguistics, pages 1412-1422.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Event coreference resolution: Algorithm, feature impact and evaluation", |
| "authors": [ |
| { |
| "first": "Zheng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Events in Emerging Text Types (eETTs) Workshop, in conjunction with RANLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zheng Chen and Heng Ji. 2009a. Event coref- erence resolution: Algorithm, feature impact and evaluation. In Proceedings of Events in Emerging Text Types (eETTs) Workshop, in conjunction with RANLP, Bulgaria.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Graph-based event coreference resolution", |
| "authors": [ |
| { |
| "first": "Zheng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "54--57", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zheng Chen and Heng Ji. 2009b. Graph-based event coreference resolution. In Proceedings of the 2009 Workshop on Graph-based Methods for Natu- ral Language Processing, TextGraphs-4, pages 54- 57.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A pairwise event coreference model, feature impact and evaluation for event coreference resolution", |
| "authors": [ |
| { |
| "first": "Zheng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Ji", |
| "middle": [], |
| "last": "Heng", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Haralick", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Workshop on Events in Emerging Text Types, eETTs '09", |
| "volume": "", |
| "issue": "", |
| "pages": "17--22", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zheng Chen, Heng Ji, and Robert Haralick. 2009. A pairwise event coreference model, feature impact and evaluation for event coreference resolution. In Proceedings of the Workshop on Events in Emerging Text Types, eETTs '09, pages 17-22.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Imagenet: A large-scale hierarchical image database", |
| "authors": [ |
| { |
| "first": "Jia", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Dong", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Li-Jia", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Fei-Fei", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of IEEE Conference on Computer Vision and Pattern Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "248--255", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hi- erarchical image database. In Proceedings of IEEE Conference on Computer Vision and Pattern Recog- nition, 2009., pages 248-255.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Discovery and fusion of salient multimodal features toward news story segmentation", |
| "authors": [ |
| { |
| "first": "Winston", |
| "middle": [], |
| "last": "Hsu", |
| "suffix": "" |
| }, |
| { |
| "first": "Shih-Fu", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Chih-Wei", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Lyndon", |
| "middle": [], |
| "last": "Kennedy", |
| "suffix": "" |
| }, |
| { |
| "first": "Ching-Yung", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Giridharan", |
| "middle": [], |
| "last": "Iyengar", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Electronic Imaging", |
| "volume": "", |
| "issue": "", |
| "pages": "244--258", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Winston Hsu, Shih-Fu Chang, Chih-Wei Huang, Lyn- don Kennedy, Ching-Yung Lin, and Giridharan Iyengar. 2003. Discovery and fusion of salient mul- timodal features toward news story segmentation. In Electronic Imaging 2004, pages 244-258.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Automatic closed caption alignment based on speech recognition transcripts", |
| "authors": [ |
| { |
| "first": "Chih-Wei", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Winston", |
| "middle": [], |
| "last": "Hsu", |
| "suffix": "" |
| }, |
| { |
| "first": "Shin-Fu", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chih-Wei Huang, Winston Hsu, and Shin-Fu Chang. 2003. Automatic closed caption alignment based on speech recognition transcripts. Rapport technique, Columbia.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "What are you talking about? text-to-image coreference", |
| "authors": [ |
| { |
| "first": "Chen", |
| "middle": [], |
| "last": "Kong", |
| "suffix": "" |
| }, |
| { |
| "first": "Dahua", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "" |
| }, |
| { |
| "first": "Raquel", |
| "middle": [], |
| "last": "Urtasun", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanja", |
| "middle": [], |
| "last": "Fidler", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "3558--3565", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen Kong, Dahua Lin, Mohit Bansal, Raquel Urtasun, and Sanja Fidler. 2014. What are you talking about? text-to-image coreference. In Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 3558-3565.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Imagenet classification with deep convolutional neural networks", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Krizhevsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "1097--1105", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classification with deep con- volutional neural networks. In Advances in neural information processing systems, pages 1097-1105.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Joint entity and event coreference resolution across documents", |
| "authors": [ |
| { |
| "first": "Heeyoung", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Marta", |
| "middle": [], |
| "last": "Recasens", |
| "suffix": "" |
| }, |
| { |
| "first": "Angel", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "489--500", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Heeyoung Lee, Marta Recasens, Angel Chang, Mihai Surdeanu, and Dan Jurafsky. 2012. Joint entity and event coreference resolution across documents. In Proceedings of the 2012 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning, pages 489-500.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "News rover: Exploring topical structures and serendipity in heterogeneous multimedia news", |
| "authors": [ |
| { |
| "first": "Hongzhi", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Brendan", |
| "middle": [], |
| "last": "Jou", |
| "suffix": "" |
| }, |
| { |
| "first": "Jospeh", |
| "middle": [ |
| "G" |
| ], |
| "last": "Ellis", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Morozoff", |
| "suffix": "" |
| }, |
| { |
| "first": "Shih-Fu", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 21st ACM international conference on Multimedia", |
| "volume": "", |
| "issue": "", |
| "pages": "449--450", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hongzhi Li, Brendan Jou, Jospeh G Ellis, Daniel Mo- rozoff, and Shih-Fu Chang. 2013a. News rover: Exploring topical structures and serendipity in het- erogeneous multimedia news. In Proceedings of the 21st ACM international conference on Multimedia, pages 449-450.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Joint event extraction via structured prediction with global features", |
| "authors": [ |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Ji", |
| "middle": [], |
| "last": "Heng", |
| "suffix": "" |
| }, |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "73--82", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qi Li, Heng Ji, and Liang Huang. 2013b. Joint event extraction via structured prediction with global fea- tures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 73-82.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Constructing information networks using one single model", |
| "authors": [ |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Hong", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Sujian", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1846--1851", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qi Li, Heng Ji, Yu HONG, and Sujian Li. 2014. Constructing information networks using one single model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1846-1851.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "The Stanford CoreNLP natural language processing toolkit", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jenny", |
| "middle": [], |
| "last": "Finkel", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [ |
| "J" |
| ], |
| "last": "Bethard", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mc-Closky", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "55--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computa- tional Linguistics: System Demonstrations, pages 55-60.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Wordnet: A lexical database for english", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "A" |
| ], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Commun. ACM", |
| "volume": "38", |
| "issue": "11", |
| "pages": "39--41", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39-41, Novem- ber.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Improving video activity recognition using object recognition and text mining", |
| "authors": [ |
| { |
| "first": "Tanvi", |
| "middle": [ |
| "S" |
| ], |
| "last": "Motwani", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 20th European Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "600--605", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tanvi S Motwani and Raymond J Mooney. 2012. Improving video activity recognition using object recognition and text mining. In Proceedings of the 20th European Conference on Artificial Intelligence, pages 600-605.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "The ace 2005 evaluation plan", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Nist", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "NIST. 2005. The ace 2005 evaluation plan. http: //www.itl.nist.gov/iad/mig/tests/ ace/ace05/doc/ace05-evaplan.v3.pdf.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Video event understanding using natural language descriptions", |
| "authors": [ |
| { |
| "first": "Vignesh", |
| "middle": [], |
| "last": "Ramanathan", |
| "suffix": "" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Fei-Fei", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of 2013 IEEE International Conference on Computer Vision", |
| "volume": "", |
| "issue": "", |
| "pages": "905--912", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vignesh Ramanathan, Percy Liang, and Li Fei-Fei. 2013. Video event understanding using natural language descriptions. In Proceedings of 2013 IEEE International Conference on Computer Vision, pages 905-912.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Linking people in videos with their names using coreference resolution", |
| "authors": [ |
| { |
| "first": "Vignesh", |
| "middle": [], |
| "last": "Ramanathan", |
| "suffix": "" |
| }, |
| { |
| "first": "Armand", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Fei-Fei", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Computer Vision-ECCV 2014", |
| "volume": "", |
| "issue": "", |
| "pages": "95--110", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vignesh Ramanathan, Armand Joulin, Percy Liang, and Li Fei-Fei. 2014. Linking people in videos with their names using coreference resolution. In Com- puter Vision-ECCV 2014, pages 95-110.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Very deep convolutional networks for large-scale image recognition", |
| "authors": [ |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "Simonyan", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Zisserman", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1409.1556" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "text": "type and subtype in EM i trigger pair trigger pair of EM i and EM j pos pair part-of-speech pair of triggers of EM i and EM j nominal 1 if the trigger of EM i is nominal", |
| "content": "<table><tr><td>Category</td><td>Features</td><td>Remarks (EM i : the first event mention, EM j : the sec-</td></tr><tr><td/><td/><td>ond event mention)</td></tr><tr><td>Baseline</td><td colspan=\"2\">type subtype pair of event nom number \"plural\" or \"singular\" if the trigger of EM i is nominal</td></tr><tr><td/><td>pronominal</td><td>1 if the trigger of EM i is pronominal</td></tr><tr><td/><td>exact match</td><td>1 if the trigger spelling in EM i matches that in EM j</td></tr><tr><td/><td>stem match</td><td>1 if the trigger stem in EM i matches that in EM j</td></tr><tr><td/><td>trigger sim</td><td>the semantic similarity scores between triggers of EM i</td></tr><tr><td/><td/><td>and EM j using WordNet(Miller, 1995)</td></tr><tr><td>Arguments</td><td>argument match</td><td>1 if arguments holding the same roles in both EM i and</td></tr><tr><td/><td/><td>EM j matches</td></tr><tr><td>Attributes</td><td>mod,pol,gen,ten</td><td>four event attributes in EM i : modality, polarity, gener-icity and tense</td></tr><tr><td/><td>mod conflict,</td><td>1 if the attributes of EM i and EM j conflict</td></tr><tr><td/><td>pol conflict, gen conflict,</td><td/></tr><tr><td/><td>ten conflict</td><td/></tr><tr><td/><td colspan=\"2\">Table 1: Features for Event Coreference Resolution</td></tr><tr><td colspan=\"3\">we generally enhance the confidence of event pairs</td></tr><tr><td colspan=\"3\">with small visual distances and penalize those with</td></tr><tr><td colspan=\"3\">large ones. An alternative way for setting the alpha</td></tr><tr><td colspan=\"3\">parameter is through cross validation over separate</td></tr><tr><td>data partitions.</td><td/><td/></tr><tr><td/><td/><td>With this \u03b1</td></tr></table>" |
| } |
| } |
| } |
| } |