{
"paper_id": "2019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:54:26.030691Z"
},
"title": "",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Identifying argument components has become an important area of research in argument mining. When argument components are identified, they can not only be used for stance classification but also can provide reasons for determining an article is supporting or opposing about a specific target. Previous research mainly used text classification and summarization techniques to solve this task. However, by transforming the task to a classification problem, not only rely heavily on choosing and using bag-of-words features, but also lose the article entity information due to extract the sentences out of the article and treat as an individual training instance. In the other hand, although summarization techniques handle on entire article and try to figure out which sentence can best represent the core concept of the article, in identifying argument components still heavily relies on bag-of-words feature representation and lack of argument-oriented features to concern about argument components characteristics. In our study, we dive down to the core of the summarization method, not only makes it based on argument strength to summarize articles and identify argument components, but also proposed a directed graph construction approach. Experiments show that our proposed method outperforms 8% better than those without argument-oriented methods.",
"pdf_parse": {
"paper_id": "2019",
"_pdf_hash": "",
"abstract": [
{
"text": "Identifying argument components has become an important area of research in argument mining. When argument components are identified, they can not only be used for stance classification but also can provide reasons for determining an article is supporting or opposing about a specific target. Previous research mainly used text classification and summarization techniques to solve this task. However, by transforming the task to a classification problem, not only rely heavily on choosing and using bag-of-words features, but also lose the article entity information due to extract the sentences out of the article and treat as an individual training instance. In the other hand, although summarization techniques handle on entire article and try to figure out which sentence can best represent the core concept of the article, in identifying argument components still heavily relies on bag-of-words feature representation and lack of argument-oriented features to concern about argument components characteristics. In our study, we dive down to the core of the summarization method, not only makes it based on argument strength to summarize articles and identify argument components, but also proposed a directed graph construction approach. Experiments show that our proposed method outperforms 8% better than those without argument-oriented methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In \"Argument Components Extraction (ACE),\" researchers try to get which sentence can be treated as arguing points, as known as \"Argument Components (AC).\" According to [1] , which summarized tons of state-of-art knowledge about AM, summarized that how can a sentence become an argument component. The argument components in argumentative articles are usually formed by five types of sentences: \"Claim\" are the sentences that represent the statement being argued, \"Data\" are the facts or evidence used to prove the claim, \"Warrant\" are the sentences that make a connection between data and claim, \"Backing\" and \"Rebuttal\" are the sentences that support and against the warrant, respectively. ACE is what our study is targeting on, to find out which are the key sentences that make people explain why they support or against something. Furthermore, the results can feedback to SC task, since we know the key sentences that lead an article to support or against.",
"cite_spans": [
{
"start": 168,
"end": 171,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "We are using the dataset released by [2] , which collected posts under the domain \"Abortion,\" \"Gay Rights,\" \"Obama\" and \"Marijuana\" from an online debate forum. They manually labeled argument components in each post under each domain.",
"cite_spans": [
{
"start": 37,
"end": 40,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "To accomplish the goal of ACE, previous methods commonly use bag-of-words or feature-based approach to represent the sentence in a feature vector form. These methods will cause the problems that lead to sparse feature, and it needs to treat sentences individually rather than processing with other sentences in the post.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "We based on existed summarization method, TextRank to be one of our baselines. In [3] , they aim to extract argument components by applying TextRank, which first using TF-IDF to represent each sentence then create an undirected graph, then applies PageRank on the undirected graph to acquire ranked sentences; In the end, the top-ranked sentence will be treated as argument components. However, we can say that the summarization algorithm was proved to perform well in extracting keywords or key sentences from an article, but cannot confidently say they are argument components.",
"cite_spans": [
{
"start": 82,
"end": 85,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In [4] , which their work motivated us to integrate argument-oriented information in the graph, shows that changing the edge construction method can improve TextRank performance. We proposed an argument-oriented TextRank, ArguRank to address previously issued problems:",
"cite_spans": [
{
"start": 3,
"end": 6,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "With integrating subjectivity score to change the calculation within TextRank, we can confidently say that the result will be argument-ranked. Moreover, we proposed a directed graph construction approach to retrieve the nodes relation and direction, which aim to gain more performance by concern about how the score will be propagated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "To summarize, we make the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\uf06c A model to retrieve subjectivity score of words through manual compiled argument-oriented corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\uf06c An argument-oriented TextRank to identify the argument component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\uf06c A directed graph construction approach to pursuing better ranking performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The lexicon expansion approach aims to enlarge an existing lexicon using possibility, so it can not only be used by itself but can also lead a classifier or a model to learn its core concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Expansion",
"sec_num": "2.1"
},
{
"text": "Previously, a lexicon-based method uses the attributes provided in the lexicon to do a specific task. The attribute in the lexicon usually becomes a feature that whether the target article contains the word or not, if contains, then the feature will switch to a state, otherwise will have another state. But more chances words in the target article will not appear in the chosen lexicon, make the feature is not as efficient as it should be. Researchers like [5] proposed a concept that using an existing lexicon as seed, and choose a model to learn the concept of the lexicon and then the model can try to predict attributes of words that not contain in the lexicon to maximize the feature that we want to retrieve from the lexicon. The lexicon expansion is accomplished in the following step, which proposed in [5] : (1) Choose a lexicon as seed, (2) Transfer words from text to its representation in vector space, finally 3Construct and train a classifier or model to predict unknown words to retrieve the information the seed lexicon can give. The main workflow is shown in our research; we will use such this approach to enlarge an existing subjectivity lexicon to acquire words subjectivity strength probability for further procedure.",
"cite_spans": [
{
"start": 459,
"end": 462,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 813,
"end": 816,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Expansion",
"sec_num": "2.1"
},
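The three expansion steps above can be sketched in a few lines of Python. This is a minimal illustration only: the seed words, the two-dimensional "embeddings," and the nearest-centroid classifier are toy stand-ins for a real lexicon, word vector model, and trained classifier.

```python
# Sketch of the lexicon-expansion loop of [5], with toy data in place of a
# real word-vector model; all words and vectors here are illustrative only.

def centroid(vecs):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def expand_lexicon(seed, embeddings):
    """Predict a strong/weak label for every embedded word not in the seed.

    seed: dict word -> 'strong' | 'weak'  (step 1: the existing lexicon)
    embeddings: dict word -> vector       (step 2: words in vector space)
    Returns the seed plus predicted labels (step 3, nearest-centroid model).
    """
    strong_c = centroid([embeddings[w] for w, l in seed.items() if l == 'strong'])
    weak_c = centroid([embeddings[w] for w, l in seed.items() if l == 'weak'])

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    expanded = dict(seed)
    for word, vec in embeddings.items():
        if word not in seed:
            expanded[word] = 'strong' if dist(vec, strong_c) < dist(vec, weak_c) else 'weak'
    return expanded

# Toy embeddings: subjective words cluster near (1, 0), neutral near (0, 1).
emb = {
    'outrageous': [0.9, 0.1], 'terrible': [1.0, 0.0],
    'table': [0.0, 1.0], 'chair': [0.1, 0.9],
    'awful': [0.95, 0.05],          # unseen word to be labeled
}
seed = {'outrageous': 'strong', 'terrible': 'strong', 'table': 'weak', 'chair': 'weak'}
print(expand_lexicon(seed, emb)['awful'])  # → strong
```

Any classifier that generalizes from the seed vectors (logistic regression, an SVM, a small neural network) can play the role of the nearest-centroid model here.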
{
"text": "Another field and category to summarize documents are the graph-based methods. One of this kind is LexRank, proposed by [6] . It adopts TF-IDF to represent each words' importance in the sentences, then uses a modified cosine similarity equation to construct the edges between sentences.",
"cite_spans": [
{
"start": 120,
"end": 123,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "2.2"
},
{
"text": "The other approaches are TextRank and TextRank-based variation. [3] modified TextRank (shorten as PsTK) to rank sentences to determine which sentences are highly possible to be an argument component. The work constructs the graph for PageRank to iterate with sentences as nodes and similarity between each sentence as edges. In the original TextRank [7] , it uses the following Equation 1 to calculate the similarity between sentences.",
"cite_spans": [
{
"start": 64,
"end": 67,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 350,
"end": 353,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( , ) = |{ | \u2208 & \u2208 }| log (| |) + log (| |)",
"eq_num": "(1)"
}
],
"section": "Graph-based Summarization",
"sec_num": "2.2"
},
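Equation 1 is simple enough to sketch directly; the two token lists below are hypothetical example sentences, not data from the corpus.

```python
# Equation 1 from TextRank [7]: word-overlap similarity between two
# sentences, normalized by the logs of their lengths (a minimal sketch).
import math

def textrank_similarity(s_i, s_j):
    """|{w : w in S_i and w in S_j}| / (log|S_i| + log|S_j|)."""
    overlap = len(set(s_i) & set(s_j))
    return overlap / (math.log(len(s_i)) + math.log(len(s_j)))

a = ["abortion", "should", "remain", "legal"]
b = ["abortion", "must", "stay", "legal", "everywhere"]
print(round(textrank_similarity(a, b), 3))  # → 0.668
```

The log normalization keeps long sentences from dominating the graph purely by having more chances to overlap.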
{
"text": "After the edges are calculated, TextRank will use PageRank, proposed by [8] to iterate the graph and get the scores of each node, here we treat nodes as sentences in an article. In PsTK, they use a different way to construct the edges. However, the method cannot convince that summarization is a suitable method to solve the task, since none of the operation related to argument, stance or reasoning. In our work, we proposed an argument-oriented TextRank, ArguRank, which considered argument specific characteristics to identify argument component. ",
"cite_spans": [
{
"start": 72,
"end": 75,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "2.2"
},
{
"text": "We based on PsTK, the state-of-the-arts method on our dataset to develop our method, ArguRank. In the previous section, we address the issue of TextRank, motivated us to proposed argument-oriented TextRank, ArguRank, which aim to solve these issues and performs better in identifying argument components. During this pipe which shows in Figure 1 , we will apply our method to certain steps to make become argument-oriented TextRank, ArguRank. ",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 345,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3."
},
{
"text": "To make TextRank argument-oriented, we develop a model to help enhance extract hidden argumentative information. The corpus we use to build the model is \"Multi-Perspective Question Answering (MPQA) Subjectivity Lexicon,\" was compiled by [9] . It contains 8222 words; each word was labeled as \"strong subjectivity\" or \"weak subjectivity.\" To get the score of words that are not contained in the lexicon, we use the word vector model and binary classifier to build an argumentative scoring model to predict the argumentative score of the word. To construct the argumentative scoring model, we (1) use a word vector model to get its vector representation, then (2) feed to a binary classifier to make it learn how to separate the vector into strong or weak subjectivity.",
"cite_spans": [
{
"start": 237,
"end": 240,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argumentative Scoring Model",
"sec_num": "3.1"
},
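At prediction time, steps (1) and (2) reduce to a vector lookup followed by a classifier score. A minimal sketch, assuming a logistic classifier: the embedding and the weights below are toy values, not the model trained on the MPQA lexicon.

```python
# Minimal sketch of scoring one word with the argumentative scoring model:
# look up its vector, then ask a trained binary classifier for
# P(strong subjectivity). Weights and embedding are illustrative only.
import math

def subjectivity_score(vec, weights, bias):
    """Logistic classifier: P(strong subjectivity | word vector)."""
    z = sum(w * x for w, x in zip(weights, vec)) + bias
    return 1.0 / (1.0 + math.exp(-z))

word_vec = [0.8, -0.2]            # hypothetical embedding of one word
w, b = [2.0, 1.0], 0.0            # toy "trained" parameters
print(round(subjectivity_score(word_vec, w, b), 3))  # → 0.802
```

The probability output is what later serves as the word's subjectivity score inside each sentence.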
{
"text": "After training the classifier, we use it to get subjectivity score of each word in each sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument-oriented Summarization",
"sec_num": "3.2"
},
{
"text": "First, we normalized the subjectivity score of words in each sentence by a softmax function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument-oriented Summarization",
"sec_num": "3.2"
},
{
"text": "The sentence representation will then be calculated via Equation 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument-oriented Summarization",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2192 = \u2211 \u2192",
"eq_num": "(2)"
}
],
"section": "Argument-oriented Summarization",
"sec_num": "3.2"
},
{
"text": "Where V \u20d7 \u20d7 represents the word vector of W i and S \u20d7 denotes the sentence representation before constructing the graph for PageRank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument-oriented Summarization",
"sec_num": "3.2"
},
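Equation 2 can be sketched as a softmax-weighted sum of word vectors. The per-word scores and two-dimensional vectors below are toy stand-ins for the scoring model's output and a real embedding model.

```python
# Sketch of Equation 2: softmax-normalize the per-word subjectivity scores
# of a sentence, then take the weighted sum of the word vectors.
import math

def sentence_vector(scores, vectors):
    """S = sum_i softmax(score)_i * V_i."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]           # softmax over the sentence
    dim = len(vectors[0])
    return [sum(w * v[d] for w, v in zip(weights, vectors)) for d in range(dim)]

scores = [2.0, 0.0, 0.0]                      # first word is highly subjective
vectors = [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
s = sentence_vector(scores, vectors)
print([round(x, 3) for x in s])  # → [0.787, 0.213]
```

Because the weights sum to one, highly subjective words dominate the sentence representation without changing its scale.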
{
"text": "After we acquire various sentence representations, the next step is to construct the graph that represents the article. We choose cosine similarity to retrieve sentence relationship as edges, which shows in Equation 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Construction",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "^( \u2192 , \u2192 ) = \u2192 \u22c5 \u2192 || \u2192 || \u22c5 || \u2192 || = \u2211 \u2192 =1 \u22c5 \u2192 \u221a \u2211 \u2192 2 =1 \u221a \u2211 \u2192 2 =1",
"eq_num": "(3)"
}
],
"section": "Graph Construction",
"sec_num": "3.3"
},
{
"text": "where \u2192 , \u2192 are representations of two sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Construction",
"sec_num": "3.3"
},
{
"text": "After the edges been calculated, the graph G will be constructed. 1 , 2 , 3 denotes the sentences respectively, and 12 , 23 , 13 indicates the similarity between sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Construction",
"sec_num": "3.3"
},
{
"text": "Based on the undirected graph, we then further apply two conditions to make it becomes directional. First is the condition to determine what edges are going to be discarded due to low similarity, which determined by \u210e \u210e and shows in Equation 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Construction",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= {^( \u2192 , \u2192 ), if ^( \u2192 , \u2192 ) \u2265 \u210e \u210e 0, otherwise",
"eq_num": "(4)"
}
],
"section": "Graph Construction",
"sec_num": "3.3"
},
{
"text": "Next, the construction of directional graph representation will be completed in two sub-steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Construction",
"sec_num": "3.3"
},
{
"text": "First, every sentence will have a direction that points to its next sentence, which shows in Equation 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Construction",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= { ( \u2192 , \u2192 ), if \u2212 = 1 0, otherwise",
"eq_num": "(5)"
}
],
"section": "Graph Construction",
"sec_num": "3.3"
},
{
"text": "Last, the sentences which have high similarity (determined by \u210e \u210e ) will be used to applied direction that makes two nodes points together, which similar to Equation 4 and shows in Equation 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Construction",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= {^( \u2192 , \u2192 ), if ^( \u2192 , \u2192 ) \u2265 \u210e \u210e 0, otherwise",
"eq_num": "(6)"
}
],
"section": "Graph Construction",
"sec_num": "3.3"
},
{
"text": "where the difference is the threshold will be interpreted as how similar of the sentences will have a bi-directional connection. The directed graph can be visualized in Figure 2 . In the end, PageRank will applied on the directed graph to output the rank of these sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 177,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Graph Construction",
"sec_num": "3.3"
},
{
"text": "The top-1 ranked sentences will then be considered as argument components of the online debate article. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Construction",
"sec_num": "3.3"
},
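The whole Graph Construction step, Equations 3 through 6 plus the final PageRank pass, can be sketched end to end. The sentence vectors and both threshold values below are toy choices made for illustration, not the tuned settings of ArguRank.

```python
# End-to-end sketch of graph construction (Eqs. 3-6) and the ranking step,
# using toy sentence vectors and illustrative threshold values.
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def build_directed_graph(sents, discard_th, bidir_th):
    """Edges: pairs below discard_th get no edge (Eq. 4); each sentence
    points to its successor (Eq. 5); pairs above bidir_th point both ways (Eq. 6)."""
    edges = {}
    n = len(sents)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            sim = cosine(sents[i], sents[j])
            if sim < discard_th:
                continue
            if j - i == 1 or sim >= bidir_th:
                edges[(i, j)] = sim
    return edges

def pagerank(n, edges, d=0.85, iters=50):
    """Plain weighted PageRank over the directed edge dict."""
    rank = [1.0 / n] * n
    out_w = [sum(w for (i, j), w in edges.items() if i == k) for k in range(n)]
    for _ in range(iters):
        new = [(1 - d) / n] * n
        for (i, j), w in edges.items():
            if out_w[i] > 0:
                new[j] += d * rank[i] * w / out_w[i]
        rank = new
    return rank

sents = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]   # toy sentence vectors
g = build_directed_graph(sents, discard_th=0.3, bidir_th=0.9)
ranks = pagerank(len(sents), g)
print(max(range(len(ranks)), key=ranks.__getitem__))  # index of the top-1 sentence
```

Here the two near-identical sentences end up connected in both directions and share the PageRank mass, while the dissimilar third sentence keeps only the damping-factor baseline score.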
{
"text": "The methods we proposed required different evaluation metrics. For assessing the argumentative scoring model, first, we will use accuracy to evaluate how well the model can correctly predict whether the word in the lexicon that is strong or weak subjectivity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "Moreover, to prevent the illusion that the accuracy gives we adopt sensitivity (also called the true positive rate, or recall) and specificity (also called the false positive rate) to observe how the model performs on the answer in the strong and wrong subjectivity, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1"
},
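The three metrics reduce to simple ratios over a confusion matrix, where "strong subjectivity" is the positive class. The counts below are a made-up example, not results from our experiments.

```python
# Hedged sketch of the scoring-model metrics; the confusion counts are
# illustrative only (positive class = strong subjectivity).
def metrics(tp, fn, tn, fp):
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)      # recall on 'strong subjectivity'
    specificity = tn / (tn + fp)      # recall on 'weak subjectivity'
    return accuracy, sensitivity, specificity

print(metrics(tp=80, fn=20, tn=60, fp=40))  # → (0.7, 0.8, 0.6)
```

Reporting sensitivity and specificity together exposes a model that scores well on accuracy only by favoring the majority class.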
{
"text": "To evaluate ArguRank, we use accuracy again to assess how many articles in the dataset are correctly found the argument components. If the top-1 ranked sentence match one of the annotated argument components within an online debate article then the accuracy will raise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "There are two main parts we are going to discuss in this section: (1) Building Argumentative Scoring Model and (2) Graph Construction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result Discussion",
"sec_num": "4.2"
},
{
"text": "We select three word vector models: Word2vec, GloVe, and fastText for comparison. The result shows in Table 1 , which using fastText and debate posts plus Wikipedia articles as training corpus plus the original lexicon words can get acceptable performance on predicting subjectivity score of words than other methods. The model than further being used in applying the score of each word in sentence representation. ",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 109,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Building Argumentative Scoring Model",
"sec_num": "4.2.1"
},
{
"text": "In constructing a directed graph, due to the sentence splitting operation, some articles in our dataset exist only one sentence, which cannot construct the graph since no enough nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Construction",
"sec_num": "4.2.2"
},
{
"text": "After filtering these kinds of articles, the filtered datasize is shown in Table 2 , also with the performance of various directed graph construction approaches. By only drop connections on the undirected graph, the accuracy will get slightly improvement, and the performance will get better after direction construction with different granularity. ",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 82,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Graph Construction",
"sec_num": "4.2.2"
},
{
"text": "We apply our method to the datasize that removes the posts which one of the argument components is in the first sentence and PsTK predicted. The reason to remove these posts is in PsTK or other methods based on undirected graph, it will face a problem if the ranked scores are the same, the algorithm will predict the first sentence of the post to be the argument components. To show how our directed graph construction approach can deal with such this problem, we experimented on the datasize after the removal. In Table 3 , we run the directed graph construction methods on the datasize and filtered graph size; the results show that by representing sentence by specific weighting mechanism and the directed graph construction, our method can identify argument components better than PsTK. Table 3 . After removing the posts that one of its argument components is located at the first place of sentences. ",
"cite_spans": [],
"ref_spans": [
{
"start": 516,
"end": 523,
"text": "Table 3",
"ref_id": null
},
{
"start": 792,
"end": 799,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Positions of Argument Components",
"sec_num": "4.2.3"
},
{
"text": "We based on TextRank to develop an argument-oriented and directed ranking method called \"ArguRank,\" which makes TextRank argumentative and directed. Also, we show how we build our research environment to expand a lexicon for identifying argumentative words and construct an argument representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
},
{
"text": "The experiments show the proof that using argument-oriented graph-based summarization method by applying the subjectivity lexicon to construct the sentence representation can get better result on extracting argument components. Moreover, the approach of directed graph construction significantly improves the performance of identifying argument components via graph-based summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "ArgMine: Argumentation Mining from Text",
"authors": [
{
"first": "G",
"middle": [
"F"
],
"last": "Da Rocha",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "da Rocha, G.F. ArgMine: Argumentation Mining from Text. 2016; Available from: http://hdl.handle.net/10216/89719.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Why are you taking this stance? Identifying and classifying reasons in ideological debates",
"authors": [
{
"first": "K",
"middle": [
"S"
],
"last": "Hasan",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hasan, K.S. and V. Ng. Why are you taking this stance? Identifying and classifying reasons in ideological debates. in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2014.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Identifying argument components through TextRank",
"authors": [
{
"first": "G",
"middle": [],
"last": "Petasis",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Karkaletsis",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Third Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petasis, G. and V. Karkaletsis. Identifying argument components through TextRank. in Proceedings of the Third Workshop on Argument Mining (ArgMining2016). 2016.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Variations of the similarity function of textrank for automated summarization",
"authors": [
{
"first": "F",
"middle": [],
"last": "Barrios",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.03606"
]
},
"num": null,
"urls": [],
"raw_text": "Barrios, F., et al., Variations of the similarity function of textrank for automated summarization. arXiv preprint arXiv:1602.03606, 2016.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving claim stance classification with lexical knowledge expansion and context utilization",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bar-Haim",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 4th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bar-Haim, R., et al. Improving claim stance classification with lexical knowledge expansion and context utilization. in Proceedings of the 4th Workshop on Argument Mining. 2017.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Lexrank: Graph-based lexical centrality as salience in text summarization",
"authors": [
{
"first": "G",
"middle": [
"U"
],
"last": "Erkan",
"suffix": ""
},
{
"first": "D",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
}
],
"year": 2004,
"venue": "journal of artificial intelligence research",
"volume": "22",
"issue": "",
"pages": "457--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erkan, G.u., nes and D.R. Radev, Lexrank: Graph-based lexical centrality as salience in text summarization. journal of artificial intelligence research, 2004. 22: p. 457-479.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bringing order into texts",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "P",
"middle": [
"T"
],
"last": "Textrank",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihalcea, T., P.T. Textrank, and others. Bringing order into texts. in Proceedings of the Conference on Empirical Methods in Natural Language Processing. 2004.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The PageRank citation ranking: Bringing order to the web",
"authors": [
{
"first": "L",
"middle": [],
"last": "Page",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Page, L., et al., The PageRank citation ranking: Bringing order to the web. 1999, Stanford InfoLab.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Recognizing contextual polarity in phrase-level sentiment analysis",
"authors": [
{
"first": "T",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on human language technology and empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilson, T., J. Wiebe, and P. Hoffmann. Recognizing contextual polarity in phrase-level sentiment analysis. in Proceedings of the conference on human language technology and empirical methods in natural language processing. 2005.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "System framework and flowchart of our proposed method."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Our proposed system will (1) read online debate posts, (2) preprocess the texts into sentences, (3) create sentence representation for calculating edge, (4) build a directed graph then (5) apply PageRank to scoring sentences."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "A directed graph that represents the article."
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "0.515 0.548 0.606 0.637 0.577 Improvement 0.085 0.073 0.094 0.069 0.081"
},
"TABREF1": {
"content": "<table><tr><td/><td colspan=\"3\">Word2vec GloVe fastText</td></tr><tr><td>Accuracy</td><td>0.776</td><td>0.767</td><td>0.777</td></tr><tr><td>Sensitivity</td><td>0.798</td><td>0.789</td><td>0.767</td></tr><tr><td>Specificity</td><td>0.729</td><td>0.719</td><td>0.798</td></tr><tr><td>AUC</td><td>0.834</td><td>0.829</td><td>0.86</td></tr></table>",
"text": "Final argumentative scoring model where boldfaced scores show that better than others.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF2": {
"content": "<table><tr><td/><td>ABO GAY OBA MAR avg</td></tr><tr><td>PsTK</td><td>0.469 0.528 0.582 0.641 0.555</td></tr><tr><td>ArguRank</td><td>0.490 0.538 0.628 0.693 0.587</td></tr><tr><td colspan=\"2\">Improvement 0.021 0.01 0.046 0.052 0.032</td></tr></table>",
"text": "The results of different graph constructions using cosine similarity. The different number of size about valid dataset in use to construct graph is also shown in here.",
"html": null,
"type_str": "table",
"num": null
}
}
}
}