{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:36:23.310267Z"
},
"title": "A-Team / Martin-Luther-Universit\u00e4t Halle-Wittenberg@CLSciSumm 20",
"authors": [
{
"first": "Maik",
"middle": [],
"last": "Boltze",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Martin-Luther-Universit\u00e4t Halle-Wittenberg",
"location": {}
},
"email": ""
},
{
"first": "Anja",
"middle": [],
"last": "Fischer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Martin-Luther-Universit\u00e4t Halle-Wittenberg",
"location": {}
},
"email": ""
},
{
"first": "Artur",
"middle": [],
"last": "Jurk",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Martin-Luther-Universit\u00e4t Halle-Wittenberg",
"location": {}
},
"email": "artur.jurk@student.uni-halle.de"
},
{
"first": "Georg",
"middle": [],
"last": "Keller",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Martin-Luther-Universit\u00e4t Halle-Wittenberg",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This document describes our group's approach to the CL-SciSumm shared task 2020 (Chandrasekaran et al., 2020). There are three tasks in CL-SciSumm 2020. In Task 1a, we apply a Siamese neural network to identify the spans of text in the reference paper that best reflect a citation. In Task 1b, we use an SVM to classify the facet of a citation.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This document describes our group's approach to the CL-SciSumm shared task 2020 (Chandrasekaran et al., 2020). There are three tasks in CL-SciSumm 2020. In Task 1a, we apply a Siamese neural network to identify the spans of text in the reference paper that best reflect a citation. In Task 1b, we use an SVM to classify the facet of a citation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Task 1 of the CL-SciSumm shared task 2020 contains two subtasks. The document dataset for the tasks consists of multiple reference papers (RPs) and, for each RP, a set of citing papers (CPs) that all contain a citation of the original RP. For each of these citations, the cited text spans and the corresponding facet have been manually annotated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For task 1a the goal was to predict the cited text span for a given citation and its reference paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In task 1b the participants had to identify which facet, from a predefined set of facets, a cited text span belongs to.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our team's approach utilizes a neural network for task 1a to classify pairs of (citation, reference paper sentence) as either matching or not matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For task 1b, the syntax of the reference text, in the form of part-of-speech n-grams, is used to predict its facet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Citations play a more significant role in scientific development than one might expect. In fact, they help track the development of scientific problems and build a foundation for future research. Citations spread information and are a key attribute in determining the impact of a paper, or rather its value to science (Hern\u00e1ndez-Alvarez and Gomez, 2016).",
"cite_spans": [
{
"start": 318,
"end": 353,
"text": "(Hern\u00e1ndez-Alvarez and Gomez, 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There are different methods of extracting useful citations. Some utilize supervised Markov Random Fields classifiers (Qazvinian and Radev, 2010), others model the link information and the citation texts (Kataria et al., 2010), or use sequence labeling with segment classification (Abu-Jbara and Radev, 2012). The main goal of these approaches is to find the sentences or spans of a CP that explain some facets of the RP, since citations can be seen as short textual parts describing some facets of the cited work.",
"cite_spans": [
{
"start": 117,
"end": 144,
"text": "(Qazvinian and Radev, 2010)",
"ref_id": "BIBREF16"
},
{
"start": 203,
"end": 225,
"text": "(Kataria et al., 2010)",
"ref_id": "BIBREF10"
},
{
"start": 280,
"end": 307,
"text": "(Abu-Jbara and Radev, 2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "However, in this document we do not need to generate or extract citations from a cited work. The citances are already given and we need to find a method to determine the sentence or span in an RP corresponding to the given citance. For this purpose it may help to analyze the aim or rhetorical status of a citance, as in (Hern\u00e1ndez-Alvarez and Gomez, 2016). One work presented a classification framework based on lexically and linguistically inspired features for classifying citation functions (Teufel et al., 2006).",
"cite_spans": [
{
"start": 321,
"end": 356,
"text": "(Hern\u00e1ndez-Alvarez and Gomez, 2016)",
"ref_id": "BIBREF8"
},
{
"start": 495,
"end": 516,
"text": "(Teufel et al., 2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "One may also think of text summarization as a helpful tool for finding the textual span corresponding to a given citance. Fortunately, the field of summarization has grown into a well-researched subject in recent decades. There are several approaches to consider. Some of them are topic modeling (Gong and Liu, 2001), supervised models (Chali and Hasan, 2012), graph-based models (Mihalcea, 2004), and neural networks (Chopra et al., 2016). For topic modeling, a probabilistic framework is used to estimate the distribution of content in the final summary. Supervised models are trained on a selection of sentences relevant for the final summary, so that they can afterwards select the right sentences for a summary. Graph-based models focus on finding the most central sentences in a graph of the text, where sentences are nodes and similarities are edges; these central sentences represent a summary.",
"cite_spans": [
{
"start": 296,
"end": 316,
"text": "(Gong and Liu, 2001)",
"ref_id": "BIBREF7"
},
{
"start": 336,
"end": 359,
"text": "(Chali and Hasan, 2012)",
"ref_id": "BIBREF3"
},
{
"start": 418,
"end": 439,
"text": "(Chopra et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As a baseline we trained an SVM for each citation and chose the sentence with the largest tf-idf score as the prediction. On the 2018 training set we obtained an F1-score of 0.09 (micro) and 0.10 (macro).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 1a",
"sec_num": null
},
{
"text": "The 2018 dataset consists of a total of 176 citations: 104 citations are labelled with the method facet, 9 with the implication facet, 34 with the result facet, 22 with the aim facet, and only 7 with the hypothesis facet. Since the labels are highly imbalanced, we decided to keep our baseline simple and tagged all citations with the majority label \"method\". The performance of this simple baseline can be seen in table 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 1b",
"sec_num": "3.1"
},
{
"text": "4 Approach and Experiment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach and Experiment",
"sec_num": "4"
},
{
"text": "Our first preprocessing step is computing the cross product of all citations and every sentence of a reference paper, given the annotated citations. The pairs consisting of a citation and its matching reference sentence were labelled as class \"1\" and all other pairs as class \"0\". The resulting data matrix, as shown in table 2, contains the citation-sentence pairs and the class labels. By defining a threshold value of 0.9 we were able to use our NN as a binary classifier. Figure 1 shows the performance of our system when using different thresholds. With our training dataset, a value of 0.9 seemed best suited as the threshold. Our second preprocessing step was mapping each word contained in the word2vec vocabulary (Mikolov et al., 2013b,a) to a unique number in the training data. Based on this, a |word2vec vector size| \u00d7 |vocabulary size| embedding matrix E was constructed as the base layer of the NN. We used a set of vectors pre-trained on the Google News archive as our word2vec embedding. Reference sentences and citations are represented as one-hot vectors over the vocabulary. Due to the construction of the training data, class \"1\" was heavily underrepresented. For the NN to be able to handle this, we decided to undersample the much larger \"0\" class. This improved our results by a factor of 30, as shown in table 3.",
"cite_spans": [
{
"start": 722,
"end": 747,
"text": "(Mikolov et al., 2013b,a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 476,
"end": 484,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Task 1a",
"sec_num": "4.1"
},
{
"text": "Our system for task 1a is based on a neural network (NN) that utilizes two identical long short-term memory (LSTM) networks, usually referred to as a \"Siamese\" 1 neural network. The similarity of the outputs of the two networks is computed by the exponentiated negative Manhattan distance function (1), as proposed by (Mueller and Thyagarajan, 2016):",
"cite_spans": [
{
"start": 318,
"end": 349,
"text": "(Mueller and Thyagarajan, 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task 1a",
"sec_num": "4.1"
},
{
"text": "e^(\u2212||h^(left) \u2212 h^(right)||_1) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 1a",
"sec_num": "4.1"
},
{
"text": "The complete NN architecture is shown in figure 2. Table 1 shows the evaluation results of our system on the 2017 training data. For the experiment, we trained the NN on the 2016, 2018 and 2019 training data for 50 epochs and a threshold value of 0.9. We used the \"adam\" function of the Keras TensorFlow library (Chollet et al., 2015; Kingma and Ba, 2014) as an optimizer.",
"cite_spans": [
{
"start": 312,
"end": 334,
"text": "(Chollet et al., 2015;",
"ref_id": null
},
{
"start": 335,
"end": 355,
"text": "Kingma and Ba, 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Task 1a",
"sec_num": "4.1"
},
{
"text": "Our approach is based on a support vector machine (SVM) which uses part-of-speech (POS) n-grams as features. During the experiment, we tried different POS n-gram features in SVMs with linear and polynomial kernels and compared their performances. We did not include the results of SVMs with a polynomial kernel, because they performed poorly. In machine learning, kernel methods are a class of algorithms that use a kernel to perform their calculations implicitly in a higher-dimensional space. On the one hand, we used the function linear_kernel, which computes the linear kernel; on the other hand, we used the function polynomial_kernel, which computes the degree-d polynomial kernel between two vectors. The polynomial kernel represents the similarity between two vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 1b",
"sec_num": "4.2"
},
{
"text": "Basically, the polynomial kernel considers both the similarity between vectors in the same dimension and the similarities across dimensions. When used in machine learning algorithms, this makes it possible to observe interactions between different features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 1b",
"sec_num": "4.2"
},
{
"text": "The polynomial kernel with input vectors x, y and kernel degree d is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 1b",
"sec_num": "4.2"
},
{
"text": "k(x, y) = (x^T y + c_0)^d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 1b",
"sec_num": "4.2"
},
{
"text": "If c_0 = 0, the kernel is called homogeneous. The linear kernel is a special case of the polynomial kernel where d = 1 and c_0 = 0. If x, y are column vectors, their linear kernel is described as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 1b",
"sec_num": "4.2"
},
{
"text": "k(x, y) = x^T y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 1b",
"sec_num": "4.2"
},
{
"text": "We tried different degrees for the polynomial kernel, but did not include these in the results either, because they performed poorly, as did the runs with unbalanced training data. We used the Python nltk (Bird et al., 2009) and spaCy (Honnibal and Montani, 2017) libraries for POS tagging and n-gram construction. As shown in table 4, the biggest improvement was gained when increasing n from POS 4-grams to POS 5-grams. Increasing n further seems to deteriorate the results again, and thus we decided not to test POS n-grams for higher n. As the results in table 4 and figure 3 show, the best performance was reached using POS 5-grams in combination with a linear kernel SVM.",
"cite_spans": [
{
"start": 205,
"end": 224,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 235,
"end": 263,
"text": "(Honnibal and Montani, 2017)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 578,
"end": 579,
"text": "3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Task 1b",
"sec_num": "4.2"
},
{
"text": "We could improve upon the solutions of the past years' PolyU approach (Cao et al., 2016) for task 1a. In future work, better results may be obtained with more training data, as is often the case with neural networks. Moreover, the parameters of the neural network for task 1a could be tuned.",
"cite_spans": [
{
"start": 70,
"end": 88,
"text": "(Cao et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": null
},
{
"text": "A Siamese neural network is characterized by using the same weights while working on two different input vectors in tandem, to compute comparable output vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Reference scope identification in citing sentences",
"authors": [
{
"first": "Amjad",
"middle": [],
"last": "Abu",
"suffix": ""
},
{
"first": "-Jbara",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "80--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amjad Abu-Jbara and Dragomir Radev. 2012. Refer- ence scope identification in citing sentences. In Pro- ceedings of the 2012 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 80-90, Montr\u00e9al, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Natural Language Processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python, 1st edi- tion. O'Reilly Media, Inc.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Polyu at cl-scisumm 2016",
"authors": [
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dapeng",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the joint workshop on bibliometric-enhanced information retrieval and natural language processing for digital libraries (BIRNDL)",
"volume": "",
"issue": "",
"pages": "132--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziqiang Cao, Wenjie Li, and Dapeng Wu. 2016. Polyu at cl-scisumm 2016. In Proceedings of the joint workshop on bibliometric-enhanced information re- trieval and natural language processing for digital libraries (BIRNDL), pages 132-138.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Query-focused multi-document summarization: Automatic data annotations and supervised learning approaches",
"authors": [
{
"first": "Yllias",
"middle": [],
"last": "Chali",
"suffix": ""
},
{
"first": "Sadid",
"middle": [
"A"
],
"last": "Hasan",
"suffix": ""
}
],
"year": 2012,
"venue": "Nat. Lang. Eng",
"volume": "18",
"issue": "1",
"pages": "109--145",
"other_ids": {
"DOI": [
"10.1017/S1351324911000167"
]
},
"num": null,
"urls": [],
"raw_text": "Yllias Chali and Sadid a. Hasan. 2012. Query-focused multi-document summarization: Automatic data an- notations and supervised learning approaches. Nat. Lang. Eng., 18(1):109-145.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Overview and insights from scientific document summarization shared tasks 2020: CL-SciSumm, LaySumm and Long-Summ",
"authors": [
{
"first": "M",
"middle": [
"K"
],
"last": "Chandrasekaran",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Feigenblat",
"suffix": ""
},
{
"first": "Hovy",
"middle": [
"E"
],
"last": "",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ravichander",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Shmueli-Scheuer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "De Waard",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the First Workshop on Scholarly Document Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. K. Chandrasekaran, G. Feigenblat, Hovy. E., A. Ravichander, M. Shmueli-Scheuer, and A De Waard. 2020. Overview and insights from scientific document summarization shared tasks 2020: CL-SciSumm, LaySumm and Long- Summ. In Proceedings of the First Workshop on Scholarly Document Processing (SDP 2020).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Abstractive sentence summarization with attentive recurrent neural networks",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "93--98",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1012"
]
},
"num": null,
"urls": [],
"raw_text": "Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with at- tentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 93-98, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Generic text summarization using relevance measure and latent semantic analysis",
"authors": [
{
"first": "Yihong",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '01",
"volume": "",
"issue": "",
"pages": "19--25",
"other_ids": {
"DOI": [
"10.1145/383952.383955"
]
},
"num": null,
"urls": [],
"raw_text": "Yihong Gong and Xin Liu. 2001. Generic text summa- rization using relevance measure and latent semantic analysis. In Proceedings of the 24th Annual Inter- national ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '01, page 19-25, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Survey about citation context analysis: Tasks, techniques, and resources",
"authors": [
{
"first": "Myriam",
"middle": [],
"last": "Hern\u00e1ndez",
"suffix": ""
},
{
"first": "-Alvarez",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"M"
],
"last": "Gomez",
"suffix": ""
}
],
"year": 2016,
"venue": "Natural Language Engineering",
"volume": "22",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.1017/S1351324915000388"
]
},
"num": null,
"urls": [],
"raw_text": "Myriam Hern\u00e1ndez-Alvarez and Jos\u00e9 M. Gomez. 2016. Survey about citation context analysis: Tasks, tech- niques, and resources. Natural Language Engineer- ing, 22(3).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing. To appear.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Utilizing context in generative bayesian models for linked corpus",
"authors": [
{
"first": "Saurabh",
"middle": [],
"last": "Kataria",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Bhatia",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saurabh Kataria, Prasenjit Mitra, and Sumit Bhatia. 2010. Utilizing context in generative bayesian mod- els for linked corpus.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Graph-based ranking algorithms for sentence extraction, applied to text summarization",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL 2004 on Interactive Poster and Demonstration Sessions, ACLdemo '04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/1219044.1219064"
]
},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea. 2004. Graph-based ranking algorithms for sentence extraction, applied to text summariza- tion. In Proceedings of the ACL 2004 on Interactive Poster and Demonstration Sessions, ACLdemo '04, page 20-es, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, G.s Corrado, Kai Chen, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. pages 1-12.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013b. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of the 26th International Con- ference on Neural Information Processing Systems -Volume 2, NIPS'13, page 3111-3119, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Siamese recurrent architectures for learning sentence similarity",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Thyagarajan",
"suffix": ""
}
],
"year": 2016,
"venue": "thirtieth AAAI conference on artificial intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similar- ity. In thirtieth AAAI conference on artificial intelli- gence.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Identifying non-explicit citing sentences for citation-based summarization",
"authors": [
{
"first": "Vahed",
"middle": [],
"last": "Qazvinian",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "555--564",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vahed Qazvinian and Dragomir Radev. 2010. Identify- ing non-explicit citing sentences for citation-based summarization. pages 555-564.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic classification of citation function",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
},
{
"first": "Advaith",
"middle": [],
"last": "Siddharthan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Tidhar",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "103--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Teufel, Advaith Siddharthan, and Dan Tidhar. 2006. Automatic classification of citation function. In Proceedings of the 2006 Conference on Empiri- cal Methods in Natural Language Processing, pages 103-110, Sydney, Australia. Association for Compu- tational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Comparison of threshold values, evaluated on 2017 training data for task 1a",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Neural network architecture",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Task 1b results for different POS n-grams",
"num": null
},
"TABREF0": {
"html": null,
"num": null,
"content": "<table><tr><td>Task</td><td colspan=\"2\">precision</td><td>recall</td><td/><td colspan=\"2\">f1-score</td></tr><tr><td/><td colspan=\"6\">micro avg macro avg micro avg macro avg micro avg macro avg</td></tr><tr><td>1a</td><td>0.369</td><td>0.403</td><td>0.369</td><td>0.403</td><td>0.369</td><td>0.403</td></tr><tr><td>1b (POS 5-grams)</td><td>0.483</td><td>0.482</td><td>0.125</td><td>0.169</td><td>0.199</td><td>0.25</td></tr></table>",
"text": "Results for Task 1",
"type_str": "table"
},
"TABREF1": {
"html": null,
"num": null,
"content": "<table><tr><td/><td>citation</td><td>original</td><td>is_match</td></tr><tr><td>0</td><td>Another related...</td><td>Supersense tagging...</td><td>1</td></tr><tr><td>1</td><td>Another related...</td><td>Our approach uses ...</td><td>0</td></tr><tr><td>2</td><td>Another related...</td><td>Some specialist to...</td><td>0</td></tr><tr><td>3</td><td>Another related...</td><td>Our approach uses ...</td><td/></tr></table>",
"text": "Structure of input data for task 1a",
"type_str": "table"
},
"TABREF2": {
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">precision</td><td>recall</td><td/><td colspan=\"2\">f1-score</td></tr><tr><td/><td colspan=\"7\">micro avg macro avg micro avg macro avg micro avg macro avg</td></tr><tr><td colspan=\"2\">Without balancing, 25 epochs</td><td>0.003</td><td>0.003</td><td>0.155</td><td>0.168</td><td>0.005</td><td>0.006</td></tr><tr><td colspan=\"2\">With balancing, 25 epochs</td><td>0.326</td><td>0.329</td><td>0.229</td><td>0.250</td><td>0.269</td><td>0.284</td></tr><tr><td colspan=\"2\">With balancing, 50 epochs</td><td>0.369</td><td>0.403</td><td>0.369</td><td>0.403</td><td>0.369</td><td>0.403</td></tr><tr><td colspan=\"2\">Input data</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">(citation, reference tuples)</td><td/><td/><td/><td/><td/></tr><tr><td>citation as number vector</td><td colspan=\"2\">reference as number vector</td><td/><td/><td/><td/></tr><tr><td>[42,73,\u2026]</td><td colspan=\"2\">[42,73,\u2026]</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Embedding</td><td/><td/><td/><td/><td/></tr><tr><td/><td>(static)</td><td/><td/><td/><td/><td/></tr><tr><td>LSTM</td><td>=</td><td>LSTM</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Neg. manhatten distance</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td>threshold value</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Prediction</td><td/><td/><td/><td/><td/></tr></table>",
"text": "Differences between balanced and unbalanced training data for task 1a",
"type_str": "table"
},
"TABREF3": {
"html": null,
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">precision</td><td>recall</td><td/><td colspan=\"2\">f1-score</td></tr><tr><td/><td colspan=\"6\">micro avg macro avg micro avg macro avg micro avg macro avg</td></tr><tr><td>Baseline</td><td>0.63</td><td>0.613</td><td>0.152</td><td>0.186</td><td>0.245</td><td>0.285</td></tr><tr><td>POS 3-grams</td><td>0.207</td><td>0.226</td><td>0.054</td><td>0.081</td><td>0.085</td><td>0.119</td></tr><tr><td>POS 4-grams</td><td>0.2</td><td>0.232</td><td>0.054</td><td>0.051</td><td>0.085</td><td>0.084</td></tr><tr><td>POS 5-grams</td><td>0.483</td><td>0.482</td><td>0.125</td><td>0.169</td><td>0.199</td><td>0.25</td></tr><tr><td>POS 6-grams</td><td>0.413</td><td>0.446</td><td>0.107</td><td>0.153</td><td>0.17</td><td>0.228</td></tr></table>",
"text": "Task 1b results for different POS n-grams",
"type_str": "table"
}
}
}
}