{
"paper_id": "Y17-1049",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:33:37.160081Z"
},
"title": "Tweet Extraction for News Production Considering Unreality",
"authors": [
{
"first": "Yuka",
"middle": [],
"last": "Takei",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NHK Science & Technology Research Laboratories",
"location": {
"addrLine": "1-10-11 Kinuta, Setagaya-ku",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": "takei.y-ek@nhk.or.jp"
},
{
"first": "Taro",
"middle": [],
"last": "Miyazaki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NHK Science & Technology Research Laboratories",
"location": {
"addrLine": "1-10-11 Kinuta, Setagaya-ku",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": "miyazaki.t-jw@nhk.or.jp"
},
{
"first": "Ichiro",
"middle": [],
"last": "Yamada",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NHK Science & Technology Research Laboratories",
"location": {
"addrLine": "1-10-11 Kinuta, Setagaya-ku",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": "yamada.i-hy@nhk.or.jp"
},
{
"first": "Jun",
"middle": [],
"last": "Goto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NHK Science & Technology Research Laboratories",
"location": {
"addrLine": "1-10-11 Kinuta, Setagaya-ku",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Acquiring information on incidents and accidents from social media can help broadcasters report news faster. However, many tweets that include words related to incidents and accidents are actually irrelevant to real events, for example, \"Backdraft's explosion scene was impressive!!!\" Social media contains many comments on events in unreal worlds such as movies, animations, and dramas, and discriminating these tweets manually is time-consuming. This work presents a method for automatically extracting useful tweets for news reports by focusing on \"unreal\" information. We first prepare unreal tweets as training data and use a distributed representation together with features that indicate whether a tweet is real or unreal. By adding these features to the input of a neural network, we generate a model that can effectively discriminate whether a tweet includes information on actual incidents or accidents. Evaluation results revealed that the proposed method achieved a 3.8-point higher F-measure than the baseline method.",
"pdf_parse": {
"paper_id": "Y17-1049",
"_pdf_hash": "",
"abstract": [
{
"text": "Acquiring information on incidents and accidents from social media can help broadcasters report news faster. However, many tweets that include words related to incidents and accidents are actually irrelevant to real events, for example, \"Backdraft's explosion scene was impressive!!!\" Social media contains many comments on events in unreal worlds such as movies, animations, and dramas, and discriminating these tweets manually is time-consuming. This work presents a method for automatically extracting useful tweets for news reports by focusing on \"unreal\" information. We first prepare unreal tweets as training data and use a distributed representation together with features that indicate whether a tweet is real or unreal. By adding these features to the input of a neural network, we generate a model that can effectively discriminate whether a tweet includes information on actual incidents or accidents. Evaluation results revealed that the proposed method achieved a 3.8-point higher F-measure than the baseline method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Social networking services (SNSs) enable us to easily transmit information anywhere in real time. The large amount of information transmitted on SNSs, known as \"Social Big Data,\" is a valuable source for grasping newsworthy occurrences (Vieweg et al., 2010; Kanouchi et al., 2015), and broadcasters monitor social media such as Twitter to collect information about incidents and accidents. By obtaining information directly from witnesses of such events, broadcasters can report news more quickly and effectively. They use various tools to manually search for tweets with potential news value, using keywords that indicate incidents and accidents. However, considerable effort is required to find valuable information among the large number of tweets sent every day.",
"cite_spans": [
{
"start": 246,
"end": 267,
"text": "(Vieweg et al., 2010;",
"ref_id": "BIBREF11"
},
{
"start": 268,
"end": 290,
"text": "Kanouchi et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Methods have been reported for automatically extracting tweets with potential news value by using machine learning (Freitas et al., 2016; Mizuno et al., 2016; Doggett et al., 2016). However, many tweets that are irrelevant to actual incidents or accidents include such relevant words, which degrades the extraction results. Examples include tweets about events in current animations and TV programs, such as \"\u30c9\u30e9\u3048\u3082\u3093\u300c\u306e\u3073\u592a\u306e\u5bb6\u706b\u4e8b\u306b\u306a\u308b\u30fb\u524d\u7de8\u300d (Doraemon: Nobita's House Catches Fire, Part 1).\" Many viewers tweet while watching TV to share their opinions with other people, so many tweets include the names of animations (which we call \"virtual proper nouns\") and TV programs. In addition, words in Japanese idioms can also suggest incidents or accidents, as in \"\u706b\u306e\u7121\u3044\u3068\u3053\u308d\u306b\u7159\u306f\u7acb\u305f\u306a\u3044 (Where there's smoke, there's fire).\" Furthermore, some tweets include hypothetical expressions that assume an incident or accident occurring, such as \"\u706b\u4e8b\u306b\u306a\u3063\u305f\u3089\u3001\u3069\u3053\u306b\u9003\u3052\u308b\u3079\u304d\u3060\u308d\u3046 (If a fire occurs, where should I escape to?).\"",
"cite_spans": [
{
"start": 115,
"end": 137,
"text": "(Freitas et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 138,
"end": 158,
"text": "Mizuno et al., 2016;",
"ref_id": null
},
{
"start": 159,
"end": 180,
"text": "Doggett et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "All three tweets include the word \"fire\" but do not indicate the occurrence of a real fire. The conventional method extracts tweets that include words related to incidents and accidents, regardless of whether one has actually occurred. Therefore, to use an extracted tweet as a news source, further work is required to determine whether it describes a \"real event\" or an \"unreal event.\" In this paper, virtual proper nouns (from movies and animations), TV program titles, and idiomatic phrases are defined as \"characteristic phrases.\" In addition, phrases expressing hypothesized situations are defined as \"hypothesis expressions.\" By adding the presence or absence of characteristic phrases and hypothesis expressions to the input of a neural network as features, we generate a model that can efficiently discriminate whether a tweet includes information on actual incidents or accidents. Extending the input dimensions in this way improved the F-measure by 3.8 points, demonstrating the effectiveness of the proposed method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "During large-scale disasters, such as the 2011 Great East Japan Earthquake, SNSs such as Twitter are effective for transmitting information (Aida et al., 2012). On the basis of information on SNSs, public officials and emergency workers can grasp what is happening in the disaster area in real time. However, unreliable and unnecessary information also spreads widely on Twitter, so more effort is required to pick out relevant information.",
"cite_spans": [
{
"start": 140,
"end": 159,
"text": "(Aida et al., 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "To extract relevant information during a disaster, Neubig et al. developed a semiautomatic information extraction method (Neubig et al., 2011, 2013) that efficiently filters information by using active learning. In the process of active learning, an annotator labels each tweet presented by the system as positive or negative. Conventional active learning labels tweets sequentially, starting from those near the boundary between positive and negative samples. Their method, in contrast, presents the tweets most likely to be positive samples, minimizing the number of negative samples annotators must label and improving work efficiency. However, when large-scale incidents or accidents occur, secondary tweets such as retweets often appear, so the absolute number of tweets judged to be positive samples increases. Continually presenting high-scoring tweets as positive samples will increase the accuracy, but coverage will drop, causing some genuinely positive tweets to be overlooked.",
"cite_spans": [
{
"start": 121,
"end": 141,
"text": "(Neubig et al., 2011",
"ref_id": "BIBREF8"
},
{
"start": 142,
"end": 164,
"text": "(Neubig et al., , 2013",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Broadcasters must acquire a wide variety of information, not only information about large-scale disasters. By limiting negative samples to the minimum, we aim to improve information gathering efficiency, and by maintaining the diversity of the positive samples, we reduce the risk of information being missed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "In this section, we describe a method for extracting tweets for news reporting. The proposed method generates a model that learns by focusing on unreal negative samples. We use a feedforward neural network as the learning algorithm to automatically extract tweets with potential news value. The input to the neural network is the distributed representation of a tweet; by adding features indicating whether a characteristic phrase or a hypothesis expression is included in the tweet, the learning model is generated. The configuration of the neural network is shown in Figure 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 567,
"end": 574,
"text": "Figure.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3."
},
{
"text": "In this paper, our target for news production is to extract tweets related to \"fire,\" which is the most frequently occurring topic in Japanese news.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1. Configuration of neural network",
"sec_num": null
},
{
"text": "First, a tweeted sentence is divided into morpheme units using the morphological analyzer MeCab (Kudo et al., 2004) . Then, by using Word2Vec (Mikolov et al., 2013) , each unit is converted into a 200-dimensional distributed representation. The average of the vectors of all words in a sentence is regarded as the sentence vector and is used as the input to the neural network. We used the September 2016 Wikipedia dump to train the Word2Vec distributed representations.",
"cite_spans": [
{
"start": 96,
"end": 115,
"text": "(Kudo et al., 2004)",
"ref_id": "BIBREF6"
},
{
"start": 142,
"end": 164,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features based on Distributed Representation",
"sec_num": "3.1"
},
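The sentence-vector construction described above (morphemes from MeCab, Word2Vec vectors, then an average) can be sketched as follows. This is not the authors' code: the 4-dimensional toy embeddings stand in for the paper's 200-dimensional Word2Vec vectors, the example words are illustrative, and skipping out-of-vocabulary tokens is one possible choice that the paper does not specify.

```python
import numpy as np

# Toy 4-dimensional embeddings standing in for 200-dimensional Word2Vec
# vectors; words and values are illustrative only.
EMBEDDINGS = {
    "火事": np.array([0.9, 0.1, 0.0, 0.2]),   # "fire"
    "が":   np.array([0.0, 0.1, 0.1, 0.0]),   # particle
    "起きた": np.array([0.5, 0.3, 0.2, 0.1]),  # "occurred"
}

def sentence_vector(tokens, embeddings, dim=4):
    """Average the vectors of all known tokens (Section 3.1).

    Tokens would come from a morphological analyzer such as MeCab.
    Out-of-vocabulary tokens are skipped; an all-zero vector is returned
    if nothing is known (an assumption, not stated in the paper).
    """
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)
```

With the toy table above, `sentence_vector(["火事", "が", "起きた"], EMBEDDINGS)` is simply the element-wise mean of the three word vectors.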
{
"text": "As described in the introduction, information about broadcast content such as dramas and animations is often posted to SNSs, and some tweets include idiomatic phrases. We prepared three kinds of characteristic phrases: TV program names, virtual proper nouns, and idiomatic phrases. If a tweet includes a phrase of a given kind, we set the corresponding feature dimension to \"1\"; otherwise, we set it to \"0\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features of Characteristic Phrases",
"sec_num": "3.2"
},
{
"text": "We gathered 9,473 titles, mainly of dramas, using the program guide application programming interface (API) of broadcasting stations and Wikipedia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TV program names",
"sec_num": null
},
{
"text": "Virtual proper nouns: We gathered 12,310 proper nouns, such as animation, movie, and video game titles, from Wikipedia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TV program names",
"sec_num": null
},
{
"text": "We gathered 32 phrases containing \"fire\" from published dictionaries (http://www.jlogos.com/).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Idiomatic phrases",
"sec_num": null
},
{
"text": "From the characteristic phrases, we excluded titles that contain common verbs or adjectives, such as \"\u751f\u304d\u308b (live),\" and single-character titles, such as \"\u6c5f (Gou).\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Idiomatic phrases",
"sec_num": null
},
{
"text": "TV program names: \u3072\u3088\u3063\u3053 (Hiyokko), \u3079\u3063\u3074\u3093\u3055\u3093 (Beppin-san), \u3042\u3055\u30a4\u30c1 (Asaichi); Virtual proper nouns: \u30b9\u30fc\u30d1\u30fc\u30de\u30f3 (Superman), \u30de\u30ea\u30aa\u30d1\u30fc\u30c6\u30a3 (Mario Party), \u30b9\u30e9\u30e0\u30c0\u30f3\u30af (Slam Dunk); Idiomatic phrases: \u5bfe\u5cb8\u306e\u706b\u4e8b (taigan no kaji), \u706b\u4e8b\u5834\u306e\u99ac\u9e7f\u529b (kajiba no bakajikara)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Idiomatic phrases",
"sec_num": null
},
{
"text": "The features of characteristic phrases are set as follows. The example sentence \"\u6d77\u5916\u306e\u4e8b\u4f8b\u3092\u5bfe\u5cb8\u306e\u706b\u4e8b\u3068\u697d\u89b3\u8996\u3067\u304d\u306a\u3044 (We cannot optimistically dismiss overseas cases as a fire on the opposite shore)\" includes the Japanese idiomatic phrase \"\u5bfe\u5cb8\u306e\u706b\u4e8b (a fire on the opposite shore),\" so the corresponding idiomatic-phrase dimension is set to \"1.\" Since the sentence does not include any TV program names or virtual proper nouns, their values are set to \"0.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Idiomatic phrases",
"sec_num": null
},
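The three binary dimensions of Section 3.2 amount to substring lookups against the gathered phrase lists. A minimal sketch, in which the tiny phrase sets below are illustrative stand-ins for the paper's 9,473 TV program names, 12,310 virtual proper nouns, and 32 idioms:

```python
# Illustrative stand-ins for the paper's gazetteers (not the real lists).
TV_PROGRAMS = {"ひよっこ", "べっぴんさん", "あさイチ"}
VIRTUAL_NOUNS = {"スーパーマン", "マリオパーティ", "スラムダンク"}
IDIOMS = {"対岸の火事", "火事場の馬鹿力"}

def characteristic_features(tweet):
    """Return the three binary feature dimensions of Section 3.2:
    [TV program name, virtual proper noun, idiomatic phrase],
    each 1 if any phrase of that kind occurs in the tweet, else 0."""
    return [
        int(any(p in tweet for p in TV_PROGRAMS)),
        int(any(p in tweet for p in VIRTUAL_NOUNS)),
        int(any(p in tweet for p in IDIOMS)),
    ]
```

For the worked example above, the idiom 対岸の火事 occurs, so the feature vector is [0, 0, 1].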
{
"text": "Owing to recent news of terrorism overseas, tweets expressing worry about terrorism have been posted, such as \"\u8fd1\u304f\u3067\u7206\u767a\u304c\u8d77\u304d\u305f\u3089\u6016\u3044\u306a (If an explosion occurred nearby, it would be scary).\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features of Hypothesis Expressions",
"sec_num": "3.3"
},
{
"text": "We extract this kind of assumption from a sentence and use it as a feature for tweet extraction. We analyze the relationship between expressions related to fire, such as \"\u7206\u767a (explosion),\" and conditional markers, such as \"\u305f\u3089 (if).\" A tweeted sentence is divided into clauses by using the parser CaboCha (Kudo and Matsumoto, 2002) . If the tweet matches condition (1) or (2), it is determined to include a \"hypothesis expression.\"",
"cite_spans": [
{
"start": 319,
"end": 345,
"text": "(Kudo and Matsumoto, 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features of Hypothesis Expressions",
"sec_num": "3.3"
},
{
"text": "(1) A dependency relationship between an expression related to fire and an assumption",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features of Hypothesis Expressions",
"sec_num": "3.3"
},
{
"text": "(2) An expression related to fire and an assumption in the same clause",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features of Hypothesis Expressions",
"sec_num": "3.3"
},
{
"text": "In the above example, since \"if\" has a dependency relationship with \"explosion,\" the feature of the \"hypothesis expressions\" is set to \"1.\" Specific examples are shown in Figure 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 171,
"end": 178,
"text": "Figure.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features of Hypothesis Expressions",
"sec_num": "3.3"
},
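Condition (2) above can be approximated at the string level as follows. This is a simplified sketch, not the authors' implementation: the word lists are illustrative, and the paper's dependency check between clauses (condition (1), obtained with CaboCha) is omitted here.

```python
# Illustrative word lists; the paper's actual lexicons are not published here.
FIRE_WORDS = ("火事", "爆発", "炎上")      # fire-related expressions
CONDITIONALS = ("たら", "れば", "なら")    # conditional ("if") markers

def has_hypothesis_expression(clauses):
    """Approximate condition (2) of Section 3.3: a fire-related expression
    and a conditional marker occurring in the same clause. Clauses would
    come from a parser such as CaboCha; condition (1), a dependency link
    between two such clauses, is not modeled in this sketch."""
    return int(any(
        any(f in c for f in FIRE_WORDS) and any(m in c for m in CONDITIONALS)
        for c in clauses
    ))
```

For the example tweet above, the clause 近くで爆発が起きたら contains both 爆発 and たら, so the hypothesis-expression feature is set to 1.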
{
"text": "We conducted two experiments to evaluate the effectiveness of our method. The first evaluated the effect of training data that includes tweets containing \"characteristic phrases.\" The second evaluated the effect of the features based on \"characteristic phrases\" and \"hypothesis expressions.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Experiment",
"sec_num": "4."
},
{
"text": "For the training data, we gathered as positive samples 5,065 tweets, posted from March 2014 to August 2015, that contained information related to \"fire\" and were used in actual news reports. For comparison, we prepared two kinds of negative samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "(A) A random sample of 5,065 tweets selected from all tweets posted in September 2016. This random sample did not include any news sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "(B) A mixed sample of 5,065 tweets randomly selected from a dataset that mixed tweets in (A) and tweets including characteristic phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "The evaluation data were narrowed down to 8,154 tweets from about 7,700,000 tweets posted on October 23, 2016. These were selected by keyword matching on fire-related events. The keywords were devised by the news production section of our broadcasting station 2 . There are 61 keywords related to fire, and broadcasters combine them to search for newsworthy information. A positive label was then given to tweets with content related to actual fires or explosions, and a negative label was given to tweets with content not related to an actual fire; for example, a tweet about a fire happening in an unreal world or in someone's imagination is a negative sample. All the tweets were annotated by one annotator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "We use Chainer (Tokui et al., 2015) to implement our method. The input layer has 204 dimensions: dimensions 1 to 200 hold the distributed representation, and dimensions 201 to 204 respectively indicate the presence or absence of TV program names, virtual proper nouns, idiomatic phrases, and hypothesis expressions. The output layer is two-dimensional, and there are two middle layers, containing 500 and 250 nodes in order from the input layer. In addition, exponential linear units (ELUs) (Clevert et al., 2015) were used as the activation function, and batch normalization was performed in each layer. The number of training epochs was set to 30.",
"cite_spans": [
{
"start": 15,
"end": 35,
"text": "(Tokui et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 521,
"end": 543,
"text": "(Clevert et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": null
},
{
"text": "The experimental results of the training data are shown in Table 2. The random sample, which uses negative samples (A), is the baseline; the mixed sample uses negative samples (B). Comparing the training data, the mixed sample, which includes the characteristic phrases, performs better than the random sample. We therefore used the mixed sample as the training data in the next experiment. Table 3 shows the experimental results for each feature. Mixed sample (MS) is the method that learned only the distributed representation described in Section 3.1. To it we added the features of TV program names, virtual proper nouns, and idiomatic phrases described in Section 3.2. Characteristic (1d) indicates the results of collapsing the three characteristic-phrase features into a single dimension, and Characteristic (3d) indicates the results of adding the three features as separate dimensions. We also report the results of adding the hypothesis expressions described in Section 3.3 to the MS method, and of adding all the features. Among the three kinds of characteristic-phrase features, virtual proper nouns achieve the highest F-measure. Performance improved more when each feature was given its own dimension than when the features were combined into one. In addition, even when the features of hypothesis expressions were added, the F-measure did not improve.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 66,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 463,
"end": 470,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Comparison of training data",
"sec_num": null
},
{
"text": "Training data: As the negative samples of the training data, the mixed sample that included characteristic phrases performed better than the random sample. By including such tweets, our method can precisely learn negative samples that contain news-related words, as well as combinations of news-related words and other words. A characteristic phrase is therefore a useful clue for selecting effective training data from among a large number of tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "The results of adding the characteristic-phrase features as separate dimensions (3d) are better than those of the other methods. The proposed method has a 1.3-point higher F-measure and 3.5-point higher recall than the MS method. This result shows that we can acquire many positive samples while excluding tweets about unreal worlds. Examples of improvements by the proposed method are shown in Case-A and Case-B (e.g., \"Even though I got on a limited express, the train stopped due to a fire along the railroad.\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of features",
"sec_num": null
},
{
"text": "In Case-A, words related to fires such as \"fire\" and \"flame\" were included, so the MS method judged the tweet as positive. However, \"Fire Bird\" is the name of a Japanese animation. By adding the proposed features, the proposed method can correctly judge it as negative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of features",
"sec_num": null
},
{
"text": "The MS method sometimes judged tweets including words related to fire as negative, as in Case-B, because the method learned from the mixed sample, which includes characteristic phrases, without the proposed features. For example, the method learned tweets including fire-related phrases such as \"\u5bfe\u5cb8\u306e\u706b\u4e8b (a fire on the opposite shore)\" as negative examples. Thus, words related to fire appear in negative examples as well as positive examples. When the features were added, the positive and negative criteria were clarified; therefore, our proposed method can maintain the diversity of the positive samples. In addition, the characteristic-phrase features worked better when each was given its own dimension rather than being combined into one. By separating TV program names, virtual proper nouns, and idiomatic phrases into distinct features, the proposed method can learn the patterns in which such phrases appear in sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of features",
"sec_num": null
},
{
"text": "The features of hypothesis expressions did not improve the F-measure because recall decreased. Error analysis showed that our method judged some positive samples in the evaluation data as negative. For example, the negative results included tweets expressing causality involving fire, such as \"\u305f\u304f\u3055\u3093\u306e\u7159\u304c\u898b\u3048\u308b\u3001\u706b\u4e8b\u3060\u3063\u305f\u3089\u3053\u307e\u308b\u306a\u3041 (There is a lot of smoke over there; I'm in trouble if it's a fire).\" In order not to miss such tweets, which express the possibility of an incident or accident, a method for more detailed causality analysis needs to be developed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of features",
"sec_num": null
},
{
"text": "In this paper, we presented a method for automatically extracting tweets with potential news value by adding new features focusing on \"unreal\" events to a neural network. By focusing on \"characteristic phrases\" (TV program names, virtual proper nouns, and idiomatic phrases), the proposed method achieved a best F-measure of 85.7, a 3.5-point increase over the baseline method. This method is expected to reduce the workload of broadcasters who acquire information from social media.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "In the future, we aim to further improve performance by acquiring more characteristic phrases, such as the cast of a TV program and TV program-related information, from real-time data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "NHK (Japan Broadcasting Corporation) has a social media analysis team that looks for news on the internet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Rescue Activity for the Great East Japan Earthquake Based on a Website that Extracts Rescue Requests from the Net",
"authors": [
{
"first": "Shin",
"middle": [],
"last": "Aida",
"suffix": ""
},
{
"first": "Yasutaka",
"middle": [],
"last": "Shindoh",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Workshop on Language Processing and Crisis Information",
"volume": "",
"issue": "",
"pages": "19--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shin Aida, Yasutaka Shindoh, and Masao Utiyama. 2013. Rescue Activity for the Great East Japan Earthquake Based on a Website that Extracts Rescue Requests from the Net. Proceedings of the Workshop on Language Processing and Crisis Information 2013, pages 19-25.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)",
"authors": [
{
"first": "Djork-Arn\u00e9",
"middle": [],
"last": "Clevert",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Unterthiner",
"suffix": ""
},
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.07289"
]
},
"num": null,
"urls": [],
"raw_text": "Djork-Arn\u00e9 Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2015. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). arXiv:1511.07289.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Identifying Eyewitness News-Worthy Events on Twitter",
"authors": [
{
"first": "Erika",
"middle": [],
"last": "Doggett",
"suffix": ""
},
{
"first": "Alejandro",
"middle": [],
"last": "Cantarero",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The Fourth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "7--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erika Doggett and Alejandro Cantarero. 2016. Identifying Eyewitness News-Worthy Events on Twitter. Proceedings of The Fourth International Workshop on Natural Language Processing for Social Media, pages 7-13.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Identifying News from Tweets",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Freitas",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of 2016 EMNLP Workshop on Natural Language Processing and Computational Social Science",
"volume": "",
"issue": "",
"pages": "11--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesse Freitas and Heng Ji. 2016. Identifying News from Tweets. Proceedings of 2016 EMNLP Workshop on Natural Language Processing and Computational Social Science, pages 11-16.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Who caught a cold? -Identifying the subject of a symptom",
"authors": [
{
"first": "Shin",
"middle": [],
"last": "Kanouchi",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
},
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Eiji",
"middle": [],
"last": "Aramaki",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Ishikawa",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1660--1670",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shin Kanouchi, Mamoru Komachi, Naoaki Okazaki, Eiji Aramaki, and Hiroshi Ishikawa. 2015. Who caught a cold? -Identifying the subject of a symptom. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1660-1670.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Japanese dependency analysis using cascaded chunking",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 6th Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and Yuji Matsumoto. 2002. Japanese dependency analysis using cascaded chunking. In Proceedings of the 6th Conference on Natural Language Learning 2002, pages 1-7.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Applying Conditional Random Fields to Japanese Morphological Analysis",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Kaoru",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "230--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying Conditional Random Fields to Japanese Morphological Analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 230-237.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Efficient Estimation of Word Representations in Vector Space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corradoet, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv:1301.3781. Junta Mizuno, Masahiro Tanaka, Kiyonori Ohtake, Jong-Hoon Oh, Julien Kloetzer, Chikara Hash- imoto, and Kentaro Torisawa. 2016. WISDOM X, DISAANA and D-SUMM: Large-scale NLP Systems for Analyzing Textual Big Data. In proceedings of the 26th International Confer- ence on Computational Linguistics, pages 263- 267.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Safety information mining -what can NLP do in a disaster",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Yuichiroh",
"middle": [],
"last": "Matsubayashi",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Hagiwara",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Murakami",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "965--973",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Yuichiroh Matsubayashi, Masato Hagiwara, and Koji Murakami. 2011. Safety in- formation mining -what can NLP do in a disas- ter -. Proceedings of the 5th International Joint Conference on Natural Language Processing, pages 965-973.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Framework and Tool for Collaborative Extraction of Reliable Information",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
},
{
"first": "Masahiro",
"middle": [],
"last": "Mizukami",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Workshop on Language Processing and Crisis Information",
"volume": "",
"issue": "",
"pages": "26--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Shinsuke Mori, and Masahiro Mizukami. 2013. A Framework and Tool for Collaborative Extraction of Reliable Information. In Proceedings of the Workshop on Language Processing and Crisis Information, pages 26-35.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Chainer: a next-generation open source framework for deep learning",
"authors": [
{
"first": "Seiya",
"middle": [],
"last": "Tokui",
"suffix": ""
},
{
"first": "Kenta",
"middle": [],
"last": "Oono",
"suffix": ""
},
{
"first": "Shohei",
"middle": [],
"last": "Hido",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Clayton",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-Ninth Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. 2015. Chainer: a next-generation open source framework for deep learning. In Proceed- ings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-Ninth Annual Con- ference on Neural Information Processing Sys- tems.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Microblogging during two natural hazards events: what Twitter may contribute to situational awareness",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Vieweg",
"suffix": ""
},
{
"first": "Amanda",
"middle": [
"L"
],
"last": "Hughes",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Starbird",
"suffix": ""
},
{
"first": "Leysia",
"middle": [],
"last": "Palen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the SIGCHI conference on human factors in computing systems",
"volume": "",
"issue": "",
"pages": "1079--1088",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Vieweg, Amanda L Hughes, Kate Starbird, and Leysia Palen. 2010. Microblogging during two natural hazards events: what Twitter may contribute to situational awareness. In Proceed- ings of the SIGCHI conference on human factors in computing systems, pages 1079-1088.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Examples of hypothesis expressions",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "MS method: Positive proposed method: Negative \u300c\u706b\u306e\u9ce5\u300d\u306e\u6700\u7d42\u56de\u304c\u708e\u4e0a (The last round of \"Fire Bird\" is flaming.) MS method: Negative proposed method: Positive \u305b\u3063\u304b\u304f\u7279\u6025\u4e57\u3063\u305f\u306e\u306b,\u6cbf\u7dda\u706b\u707d\u3067\u96fb\u8eca\u304c \u6b62\u307e\u3063\u3066\u3044\u308b",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td>Feature type TV program names</td><td>Example</td></tr></table>",
"type_str": "table",
"text": "Examples of characteristic phrases",
"num": null,
"html": null
},
"TABREF2": {
"content": "<table><tr><td>Method Random sample Mixed sample (MS)</td><td>Recall Precision 84.1 79.7 85.4 83.4</td><td>F-measure 81.9 84.4</td></tr></table>",
"type_str": "table",
"text": "Experimental results for each training dataset",
"num": null,
"html": null
},
"TABREF3": {
"content": "<table><tr><td>Method Mixed sample (MS) MS + TV program names MS + Virtual proper nouns MS + idiomatic phrases MS + Characteristic (1d) MS + Characteristic (3d) MS + Hypothesis Expression (HE) MS + All feature (3d+HE)</td><td>Recall Precision F-measure 85.4 83.4 84.4 84.5 84.6 84.6 89.7 80.1 84.7 83.9 85.2 84.5 82.7 83.7 83.2 88.9 82.8 85.7 82.5 84.4 83.4 83.5 84.4 84.0</td></tr></table>",
"type_str": "table",
"text": "Experimental results for each method",
"num": null,
"html": null
}
}
}
}