{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:44:26.591960Z"
},
"title": "Scubed at 3C task B - A simple baseline for citation context influence classification",
"authors": [
{
"first": "Shubhanshu",
"middle": [],
"last": "Mishra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Kanpur",
"location": {
"country": "India"
}
},
"email": "mishra@shubhanshu.com"
},
{
"first": "Sudhanshu",
"middle": [],
"last": "Mishra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Kanpur",
"location": {
"country": "India"
}
},
"email": "sdhanshu@iitk.ac.in"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present our team Scubed's approach in the 3C Citation Context Classification Task, Subtask B, citation context influence classification. Our approach relies on text-based features transformed via tf-idf, followed by training a variety of simple models, resulting in a strong baseline. Our best model on the leaderboard is a random forest classifier using only the citation context text. A replication of our analysis finds logistic regression and a gradient boosted tree classifier to be the best performing models. Our submission code can be found at: https://github.com/napsternxg/CitationContextClassification.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present our team Scubed's approach in the 3C Citation Context Classification Task, Subtask B, citation context influence classification. Our approach relies on text-based features transformed via tf-idf, followed by training a variety of simple models, resulting in a strong baseline. Our best model on the leaderboard is a random forest classifier using only the citation context text. A replication of our analysis finds logistic regression and a gradient boosted tree classifier to be the best performing models. Our submission code can be found at: https://github.com/napsternxg/CitationContextClassification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The number of research papers has increased exponentially in recent years. In order to efficiently access this scientific resource, we need automated solutions for extracting information from these records. Citations in research papers are important for multiple reasons, e.g., comparing novelty (Mishra and Torvik, 2016), expertise (Mishra et al., 2018a), and self-citation patterns (Mishra et al., 2018b). For people new to a field, they are a way to build knowledge, whereas for experts in the field they act as useful pointers that summarize a paper. Citations are also used to compute various indexes which showcase the influence and reach of researchers in their field. However, these indexes give equal weight to each citation. It has been established that all citations are not equal (N. Kunnath et al., 2020; Mishra et al., 2018b). In many cases, cited papers are used as examples or are not influential to the citing paper itself.",
"cite_spans": [
{
"start": 294,
"end": 319,
"text": "(Mishra and Torvik, 2016)",
"ref_id": "BIBREF7"
},
{
"start": 332,
"end": 354,
"text": "(Mishra et al., 2018a)",
"ref_id": "BIBREF5"
},
{
"start": 384,
"end": 406,
"text": "(Mishra et al., 2018b)",
"ref_id": "BIBREF6"
},
{
"start": 805,
"end": 826,
"text": "Kunnath et al., 2020;",
"ref_id": "BIBREF9"
},
{
"start": 827,
"end": 848,
"text": "Mishra et al., 2018b)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we describe our team Scubed's entry for the citation context influence classification shared task (N. Kunnath et al., 2020). This work aims to develop models that can identify the influence of citations in research papers, which can then be used to produce better indexes and make research more easily accessible to everyone.",
"cite_spans": [
{
"start": 113,
"end": 138,
"text": "(N. Kunnath et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There has been a significant amount of prior work in this area on better understanding the significance of the citations in a paper (N. Kunnath et al., 2020). As the number of research papers increases over time, algorithms for suggesting research papers become more and more important. These algorithms are a deciding factor in many measures of a researcher's influence in a field. The number of citations of a paper is important for computing measures such as the h-index (Hirsch, 2005) and the g-index (Egghe, 2006). These are influential measures for describing the significance of a researcher in a field. Scholars have argued that not all of the citations in a paper should have the same weight when determining the impact and reach of a paper. The work of (Moravcsik and Murugesan, 1975) showed that many references in research papers are redundant and quite often share little context with the citing paper. There have been many techniques for classifying citations as influential; however, one of the strongest baselines for this task is the prior citation count of the cited paper. The work of (Chubin and Moitra, 1975) shows the effectiveness of citation count in determining influence. The work of (Zhu et al., 2015) points out suitable features for this task: they evaluated the performance of five classes of features (count, position, similarity, context, and miscellaneous) and determined that the number of times a citation is referenced in a paper is the best estimator of the influence of a citation. (Hou et al., 2011) also showed that the in-text count of a citation in a research paper is a simple and effective way to assess its scientific contribution and influence. (Nazir et al., 2020) applied SVM, random forest, and kernel linear regression classifiers to identify important and non-important citations, using citation counts and similarity scores based on tf-idf features to train their models. Their results show that these techniques produce an improved precision score of 0.84 on these tasks.",
"cite_spans": [
{
"start": 139,
"end": 164,
"text": "(N. Kunnath et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 482,
"end": 496,
"text": "(Hirsch, 2005)",
"ref_id": "BIBREF2"
},
{
"start": 509,
"end": 522,
"text": "(Egghe, 2006)",
"ref_id": "BIBREF1"
},
{
"start": 770,
"end": 801,
"text": "(Moravcsik and Murugesan, 1975)",
"ref_id": "BIBREF8"
},
{
"start": 1108,
"end": 1133,
"text": "(Chubin and Moitra, 1975)",
"ref_id": "BIBREF0"
},
{
"start": 1213,
"end": 1231,
"text": "(Zhu et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 1539,
"end": 1557,
"text": "(Hou et al., 2011)",
"ref_id": "BIBREF3"
},
{
"start": 1708,
"end": 1728,
"text": "(Nazir et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "1.1"
},
{
"text": "This paper focuses on the WOSP 3C shared task, subtask B (N. Kunnath et al., 2020). In this sub-task, we were required to classify citation contexts in research papers on the basis of their influence on and purpose in the citing paper. For this shared task we used the ACL-ARC dataset (Jurgens et al., 2018). The dataset consisted of 3000 labeled data points annotated using the ACT platform (Pride et al., 2019). The data provided contains the following fields:",
"cite_spans": [
{
"start": 51,
"end": 76,
"text": "(N. Kunnath et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 273,
"end": 295,
"text": "(Jurgens et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 381,
"end": 401,
"text": "(Pride et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Data Description",
"sec_num": "2"
},
{
"text": "\u2022 Unique Identifier \u2022 COREID of Citing Paper \u2022 Citing Paper Title \u2022 Citing Paper Author \u2022 Cited Paper Title \u2022 Cited Paper Author \u2022 Citation Context \u2022 Citation Class Label \u2022 Citation Influence Label",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Data Description",
"sec_num": "2"
},
{
"text": "To identify the citation being considered, a #AUTHORTAG is placed in the citation context. For this task, the Citation Class Label field was ignored. This was a binary classification task, where the following target labels were used:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Data Description",
"sec_num": "2"
},
{
"text": "\u2022 INCIDENTAL \u2022 INFLUENTIAL",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Data Description",
"sec_num": "2"
},
{
"text": "To evaluate the models, the macro-F1 score on the test data was used. The final ranking was based not on the public score but on a different subset of the data that was not visible to the participating teams. Teams were advised to make submissions that would perform best overall and not just on the public subset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Data Description",
"sec_num": "2"
},
{
"text": "We use a simple approach based on standard text classification baselines. For the original submission we used a limited set of models; however, we trained additional models to conduct a more exhaustive evaluation for this paper. Below, we describe our workflow for pre-processing, feature extraction, and model training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "The data provided was in raw text format, which is not suitable for making predictions directly. In order to make useful predictions, it first has to be converted into a numerical vector form that our models can process. The raw data consisted of columns holding different attributes, each requiring its own feature extraction technique. For example, the citing and cited title columns contained the titles of research papers, whereas the citation context column contained a description of the citation context. In order to process each column separately, we used the ColumnTransformer module from the scikit-learn library (Pedregosa et al., 2011). Each column contained text data. To extract useful features from this text, we used the TfidfVectorizer from scikit-learn (Pedregosa et al., 2011) on each column. This generates the term frequency-inverse document frequency (tf-idf) score for each text in each column. The tf-idf score is a normalized count of the words occurring in the corpus. This type of feature, however, does not account for the position and inter-dependence of words. The tf-idf score is calculated as follows:",
"cite_spans": [
{
"start": 637,
"end": 661,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 801,
"end": 824,
"text": "(Pedregosa et al., 2011",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-Processing and Feature Extraction",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "tf-idf(t, d) = tf(t, d) * idf(t)",
"eq_num": "(1)"
}
],
"section": "Pre-Processing and Feature Extraction",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "idf(t) = log((1 + n) / (1 + df(t))) + 1",
"eq_num": "(2)"
}
],
"section": "Pre-Processing and Feature Extraction",
"sec_num": "3.1"
},
{
"text": "In the above equations, tf(t, d) stands for the term frequency, i.e., the number of times a term t occurs in a document d. The n in (2) refers to the total number of documents in the document set, and df(t) refers to the document frequency, i.e., the number of documents in the set that contain the term t. The tf-idf score is a better feature than the raw count of words in a sentence: it down-weights uninformative words like pronouns relative to rarer but more informative words in the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-Processing and Feature Extraction",
"sec_num": "3.1"
},
{
"text": "In the end, we used two versions of text features for our models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-Processing and Feature Extraction",
"sec_num": "3.1"
},
{
"text": "1. Citation context only (v1): uses only features extracted from the citation context column. Our hypothesis is that the citation context should carry the highest signal for identifying how the citation is used. 2. All features (v2): uses features extracted from the citation context as well as the citing and cited title columns. Our hypothesis is that combining features from both the citing and cited paper should improve the signal for identifying how the citation is used; however, we are also aware that this may increase the proportion of noisy features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-Processing and Feature Extraction",
"sec_num": "3.1"
},
{
"text": "For this shared task, we were allowed to submit a maximum of 5 models for evaluation on the test data 1 . Our goal was to investigate the simplest models based on proven linear and non-linear methods, which are faster and easier to train and deploy than the recent, more powerful but resource-hungry deep learning models. The following models were submitted for evaluation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Models",
"sec_num": "3.2"
},
{
"text": "\u2022 Logistic Regression Classifier (LR): A simple logistic regression model trained on the tf-idf features of the 3 columns. All the models were trained using the scikit-learn library. Table 1 shows the public and private leader board scores for each of our submissions for this task (1: https://www.kaggle.com/c/3c-shared-taskinfluence/rules). Our RF (v1) model performed best on the leader board while being quite close to the top performing model (within 0.003 F1 score). Our replication analysis further inspects per-label performance (see table 3 and 4). First, table 2 shows the evaluation scores of all the models on the test set. One consistent pattern emerges: v1 models, which use only the citation context text as their feature, consistently perform much better than v2 models. Next, the best v1 models are RF and LR; for v2, the best model is GBT, which has consistent performance across v1 and v2. It appears that the inclusion of extra features leads to over-fitting, which is also evident from the training evaluation scores. Finally, the LR model (a linear model, compared to all the other non-linear models) shows the highest drop in evaluation score from v1 to v2; this may indicate that linear models suffer more from the inclusion of noisy features.",
"cite_spans": [],
"ref_spans": [
{
"start": 179,
"end": 186,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 472,
"end": 486,
"text": "table 3 and 4)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Prediction Models",
"sec_num": "3.2"
},
{
"text": "Second, in table 3 we investigate the per-label evaluation (in terms of F1 score) for each of the models. For both v1 and v2 features, almost all models show similar performance on both labels. The only exception is the LR model, which has a 0.0 F1 score on the Influential label for v2 features. Overall, it appears that these baseline models are quite good at learning this task compared to other submissions, while being fast and easy to implement. Finally, in table 4 we list the top features for each class, as identified from the coefficients of the LR v2 model. Since this is a binary classification task, the model learns only a single coefficient for each feature. Hence, coefficients with negative values indicate features more important for the Incidental class, while coefficients with positive values indicate features more important for the Influential class. The top feature for the influential label appears to be the presence of words like 'first', while for the incidental label it is 'including'. The word 'first' is a strong indicator of the citing paper being influential by being the first to introduce a concept. This phenomenon has also been observed in the case of (Mishra and Torvik, 2016), which showed that novel papers (papers which were among the first to introduce a concept) are slightly more cited.",
"cite_spans": [
{
"start": 1165,
"end": 1190,
"text": "(Mishra and Torvik, 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Our results show that traditional tf-idf features give good performance on this shared task, resulting in a strong baseline to compare against. Simple machine learning models like logistic regression, random forests, and gradient boosted trees perform well on this task compared to other submissions. Furthermore, the citation context contains the most signal for predicting citation influence. We were able to achieve one of the top performances in the task within the number of submissions allowed by the task. Due to the small dataset, multiple submissions increase the likelihood that models over-fit to the test set. Furthermore, our results show that deep learning methods (e.g., MLP and MLP-3) do not give a significant advantage over simpler machine learning methods. The minor loss in performance is acceptable given the increased speed and low computational cost of simple machine learning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Further analysis reveals that the MLP-based models are indeed over-fitting to the training data, as shown by their near-perfect F1 scores on the training data (see table 2). Additionally, GBT models consistently achieve much better performance on the test set compared to other models, including the RF model, which was our best entry on the leader board. Furthermore, the highest performing label is the Influential label. All models (except LR) perform worst on the Incidental label when using all text features, but when using only the citation context, performance is similar across labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Our team 'Scubed' submitted 5 models for the citation context classification based on influence task. Of the submitted models, the random forest classifier performed best on the test set, achieving second position in this task with a private score of 0.55204, only 0.003 behind the best performing model. We were able to achieve competitive results with a minimal number of trials using fast and computationally cheap machine learning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Content analysis of references: Adjunct or alternative to citation counting?",
"authors": [
{
"first": "Daryl",
"middle": [
"E"
],
"last": "Chubin",
"suffix": ""
},
{
"first": "Soumyo",
"middle": [
"D"
],
"last": "Moitra",
"suffix": ""
}
],
"year": 1975,
"venue": "Social Studies of Science",
"volume": "5",
"issue": "4",
"pages": "423--441",
"other_ids": {
"DOI": [
"10.1177/030631277500500403"
]
},
"num": null,
"urls": [],
"raw_text": "Daryl E. Chubin and Soumyo D. Moitra. 1975. Con- tent analysis of references: Adjunct or alternative to citation counting? Social Studies of Science, 5(4):423-441.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Theory and practise of the g-index",
"authors": [
{
"first": "Leo",
"middle": [
"Egghe"
],
"last": "",
"suffix": ""
}
],
"year": 2006,
"venue": "Scientometrics",
"volume": "69",
"issue": "1",
"pages": "131--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leo Egghe. 2006. Theory and practise of the g-index. Scientometrics, 69(1):131-152.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An index to quantify an individual's scientific research output",
"authors": [
{
"first": "Jorge",
"middle": [],
"last": "Hirsch",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the National Academy of Sciences of the United States of America",
"volume": "102",
"issue": "",
"pages": "16569--72",
"other_ids": {
"DOI": [
"10.1073/pnas.0507655102"
]
},
"num": null,
"urls": [],
"raw_text": "Jorge Hirsch. 2005. An index to quantify an individ- ual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102:16569-72.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Counting citations in texts rather than reference lists to improve the accuracy of assessing scientific contribution",
"authors": [
{
"first": "Wen-Ru",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Deng-Ke",
"middle": [],
"last": "Niu",
"suffix": ""
}
],
"year": 2011,
"venue": "BioEssays",
"volume": "33",
"issue": "10",
"pages": "724--727",
"other_ids": {
"DOI": [
"10.1002/bies.201100067"
]
},
"num": null,
"urls": [],
"raw_text": "Wen-Ru Hou, Ming Li, and Deng-Ke Niu. 2011. Counting citations in texts rather than reference lists to improve the accuracy of assessing scientific con- tribution. BioEssays, 33(10):724-727.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Measuring the evolution of a scientific field through citation frames",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Srijan",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Raine",
"middle": [],
"last": "Hoover",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Mc-Farland",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Jurgens, Srijan Kumar, Raine Hoover, Dan Mc- Farland, and Dan Jurafsky. 2018. Measuring the evo- lution of a scientific field through citation frames. Transactions of the Association of Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Expertise as an aspect of author contributions",
"authors": [
{
"first": "Shubhanshu",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Brent",
"middle": [
"D"
],
"last": "Fegley",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Diesner",
"suffix": ""
},
{
"first": "Vetle",
"middle": [
"I"
],
"last": "Torvik",
"suffix": ""
}
],
"year": 2018,
"venue": "WORKSHOP ON IN-FORMETRIC AND SCIENTOMETRIC RESEARCH (SIG/MET)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shubhanshu Mishra, Brent D. Fegley, Jana Diesner, and Vetle I. Torvik. 2018a. Expertise as an aspect of author contributions. In WORKSHOP ON IN- FORMETRIC AND SCIENTOMETRIC RESEARCH (SIG/MET), Vancouver.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Self-citation is the hallmark of productive authors, of any gender",
"authors": [
{
"first": "Shubhanshu",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Brent",
"middle": [
"D"
],
"last": "Fegley",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Diesner",
"suffix": ""
},
{
"first": "Vetle",
"middle": [
"I"
],
"last": "Torvik",
"suffix": ""
}
],
"year": 2018,
"venue": "PLOS ONE",
"volume": "13",
"issue": "9",
"pages": "",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0195773"
]
},
"num": null,
"urls": [],
"raw_text": "Shubhanshu Mishra, Brent D. Fegley, Jana Diesner, and Vetle I. Torvik. 2018b. Self-citation is the hall- mark of productive authors, of any gender. PLOS ONE, 13(9):e0195773.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Quantifying Conceptual Novelty in the Biomedical Literature. D-Lib magazine : the magazine of the Digital Library Forum",
"authors": [
{
"first": "Shubhanshu",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Vetle",
"middle": [
"I"
],
"last": "Torvik",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "22",
"issue": "",
"pages": "9--10",
"other_ids": {
"DOI": [
"10.1045/september2016-mishra"
]
},
"num": null,
"urls": [],
"raw_text": "Shubhanshu Mishra and Vetle I. Torvik. 2016. Quanti- fying Conceptual Novelty in the Biomedical Litera- ture. D-Lib magazine : the magazine of the Digital Library Forum, 22(9-10).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Some results on the function and quality of citations",
"authors": [
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Poovanalingam",
"middle": [],
"last": "Moravcsik",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Murugesan",
"suffix": ""
}
],
"year": 1975,
"venue": "Social Studies of Science",
"volume": "5",
"issue": "1",
"pages": "86--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J. Moravcsik and Poovanalingam Murugesan. 1975. Some results on the function and quality of citations. Social Studies of Science, 5(1):86-92.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Overview of the 2020 wosp 3c citation context classification task",
"authors": [
{
"first": "N",
"middle": [],
"last": "Suchetha",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Kunnath",
"suffix": ""
},
{
"first": "Bikash",
"middle": [],
"last": "Pride",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Gyawali",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Knoth",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 8th International Workshop on Mining Scientific Publications, ACM/IEEE Joint Conference on Digital Libraries",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchetha N. Kunnath, David Pride, Bikash Gyawali, and Petr Knoth. 2020. Overview of the 2020 wosp 3c citation context classification task. In Proceed- ings of The 8th International Workshop on Mining Scientific Publications, ACM/IEEE Joint Conference on Digital Libraries (JCDL 2020, Wuhan, China.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Muhammad Tanvir Afzal, and Hanan Aljuaid. 2020. Important citation identification by exploiting content and section-wise in-text citation count",
"authors": [
{
"first": "Shahzad",
"middle": [],
"last": "Nazir",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Asif",
"suffix": ""
},
{
"first": "Shahbaz",
"middle": [],
"last": "Ahmad",
"suffix": ""
},
{
"first": "Faisal",
"middle": [],
"last": "Bukhari",
"suffix": ""
}
],
"year": null,
"venue": "PLOS ONE",
"volume": "15",
"issue": "3",
"pages": "1--19",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0228885"
]
},
"num": null,
"urls": [],
"raw_text": "Shahzad Nazir, Muhammad Asif, Shahbaz Ahmad, Faisal Bukhari, Muhammad Tanvir Afzal, and Hanan Aljuaid. 2020. Important citation identifica- tion by exploiting content and section-wise in-text citation count. PLOS ONE, 15(3):1-19.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Act: An annotation platform for citation typing at scale",
"authors": [
{
"first": "D",
"middle": [],
"last": "Pride",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Knoth",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Harag",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Pride, P. Knoth, and J. Harag. 2019. Act: An anno- tation platform for citation typing at scale. In 2019",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "ACM/IEEE Joint Conference on Digital Libraries (JCDL)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "329--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ACM/IEEE Joint Conference on Digital Libraries (JCDL), pages 329-330.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Measuring academic influence: Not all citations are equal",
"authors": [
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Lemire",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Vellino",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodan Zhu, Peter D. Turney, Daniel Lemire, and Andr\u00e9 Vellino. 2015. Measuring academic in- fluence: Not all citations are equal. CoRR, abs/1501.06587.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "\u2022 Random Forest (RF): A random forest model with 100 trees and bootstrapping, trained on the tf-idf features. \u2022 Gradient Boosting Classifier (GBT): A gradient boosted tree classifier with 100 boosting stages, trained on the tf-idf features. \u2022 Multi-layer Perceptron Classifier (MLP): A 1-hidden-layer multi-layer perceptron classifier with 100 nodes and ReLU activation, optimized using the Adam optimizer with a learning rate of 0.001 and momentum of 0.99. \u2022 Multi-layer Perceptron Classifier (MLP-3): A 3-hidden-layer multi-layer perceptron classifier with 256, 256, and 128 nodes in the first, second, and third layers, with ReLU activation, optimized using the Adam optimizer with a learning rate of 0.001 and momentum of 0.99.",
"uris": null,
"num": null
},
"TABREF0": {
"text": "Results for the Influence Sub-task. The overall best model used 116 submissions on the test data, while we used at most 5 submissions, as specified by the competition.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>S.No</td><td>Model</td><td colspan=\"3\">Private Public Rank</td></tr><tr><td>1</td><td>LR (v2)</td><td>0.323</td><td>0.305</td><td>-</td></tr><tr><td>2</td><td>GBT (v2)</td><td>0.524</td><td>0.565</td><td>5</td></tr><tr><td>3</td><td>RF (v1)</td><td>0.552</td><td>0.591</td><td>2</td></tr><tr><td>4</td><td colspan=\"2\">MLP-3 (v2) 0.482</td><td>0.516</td><td>-</td></tr><tr><td>6</td><td>Best</td><td>0.556</td><td>0.576</td><td>1</td></tr><tr><td colspan=\"5\">4.1 Replication model performance after</td></tr><tr><td colspan=\"3\">leader board submission</td><td/><td/></tr><tr><td colspan=\"5\">After the final leader board ranking, we decided</td></tr><tr><td colspan=\"5\">to replicate the model performance on the actual</td></tr><tr><td colspan=\"5\">test set provided to us by the shared task organizers.</td></tr><tr><td colspan=\"5\">Our evaluation scores may not match with the sub-</td></tr><tr><td colspan=\"5\">mitted solutions as the model changes on each run</td></tr><tr><td colspan=\"5\">and we did not record the random seed for the orig-</td></tr><tr><td colspan=\"5\">inal submission. This analysis was conducted to</td></tr><tr><td colspan=\"5\">generate comparable results for all models across</td></tr><tr><td colspan=\"5\">the training and test sets (see table 2), and to further</td></tr><tr><td colspan=\"5\">inspect the performance of the model on each label</td></tr><tr><td>(see</td><td/><td/><td/><td/></tr></table>"
},
"TABREF1": {
"text": "Model evaluation scores (macro F1) on the test data on retraining models after leader board ranking.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>model</td><td>v1</td><td/><td>v2</td></tr><tr><td/><td>test</td><td>train</td><td>test</td><td>train</td></tr><tr><td>mlp</td><td colspan=\"4\">0.523 0.992 0.494 1.000</td></tr><tr><td colspan=\"5\">mlp-3 0.524 0.994 0.496 1.000</td></tr><tr><td>gbt</td><td colspan=\"4\">0.535 0.770 0.537 0.804</td></tr><tr><td>rf</td><td colspan=\"4\">0.550 0.976 0.492 0.985</td></tr><tr><td>lr</td><td colspan=\"4\">0.551 0.830 0.314 0.343</td></tr></table>"
},
"TABREF2": {
"text": "Per label model evaluation on the test data. model INCIDENTAL INFLUENTIAL accuracy macro avg weighted avg",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td colspan=\"2\">Citing Context only (v1)</td><td/><td/></tr><tr><td>mlp</td><td>0.487</td><td>0.559</td><td>0.526</td><td>0.523</td><td>0.526</td></tr><tr><td>mlp-3</td><td>0.512</td><td>0.535</td><td>0.524</td><td>0.524</td><td>0.525</td></tr><tr><td>gbt</td><td>0.568</td><td>0.502</td><td>0.537</td><td>0.535</td><td>0.532</td></tr><tr><td>rf</td><td>0.545</td><td>0.554</td><td>0.550</td><td>0.550</td><td>0.550</td></tr><tr><td>lr</td><td>0.567</td><td>0.536</td><td>0.552</td><td>0.551</td><td>0.550</td></tr><tr><td/><td/><td colspan=\"2\">All features (v2)</td><td/><td/></tr><tr><td>lr</td><td>0.627</td><td>0.000</td><td>0.457</td><td>0.314</td><td>0.287</td></tr><tr><td>rf</td><td>0.489</td><td>0.495</td><td>0.492</td><td>0.492</td><td>0.492</td></tr><tr><td>mlp</td><td>0.469</td><td>0.519</td><td>0.495</td><td>0.494</td><td>0.496</td></tr><tr><td>mlp-3</td><td>0.444</td><td>0.548</td><td>0.501</td><td>0.496</td><td>0.500</td></tr><tr><td>gbt</td><td>0.499</td><td>0.575</td><td>0.540</td><td>0.537</td><td>0.540</td></tr></table>"
},
"TABREF3": {
"text": "Top features in the LR (v1) model",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">INCIDENTAL</td><td colspan=\"2\">INFLUENTIAL</td></tr><tr><td>feature</td><td colspan=\"2\">weight feature</td><td>weight</td></tr><tr><td colspan=\"3\">0 including -0.703 the</td><td>1.547</td></tr><tr><td>1 learning</td><td colspan=\"2\">-0.702 first</td><td>0.813</td></tr><tr><td>2 11</td><td colspan=\"2\">-0.652 were</td><td>0.742</td></tr><tr><td>3 2002</td><td colspan=\"2\">-0.629 to</td><td>0.676</td></tr><tr><td>4 and</td><td colspan=\"2\">-0.624 of</td><td>0.631</td></tr><tr><td>5 amp</td><td colspan=\"2\">-0.623 cessation</td><td>0.620</td></tr><tr><td colspan=\"3\">6 academic -0.608 us</td><td>0.575</td></tr><tr><td>7 impact</td><td colspan=\"2\">-0.580 avh</td><td>0.518</td></tr><tr><td>8 13</td><td colspan=\"2\">-0.544 virus</td><td>0.513</td></tr><tr><td>9 research</td><td colspan=\"2\">-0.495 temperature</td><td>0.510</td></tr></table>"
}
}
}
}