{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:35:55.559887Z"
},
"title": "Structure-Tags Improve Text Classification for Scholarly Document Quality Prediction",
"authors": [
{
"first": "Gideon",
"middle": [],
"last": "Maillette De Buy Wenniger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {
"settlement": "Groningen",
"country": "The Netherlands"
}
},
"email": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Van Dongen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {
"settlement": "Groningen",
"country": "The Netherlands"
}
},
"email": ""
},
{
"first": "Eleri",
"middle": [],
"last": "Aedmaa",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Herbert",
"middle": [
"Teun"
],
"last": "Kruitbosch",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Edwin",
"middle": [
"A"
],
"last": "Valentijn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {
"settlement": "Groningen",
"country": "The Netherlands"
}
},
"email": ""
},
{
"first": "Lambert",
"middle": [],
"last": "Schomaker",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {
"settlement": "Groningen",
"country": "The Netherlands"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Training recurrent neural networks on long texts, in particular scholarly documents, causes problems for learning. While hierarchical attention networks (HANs) are effective in solving these problems, they still lose important information about the structure of the text. To tackle these problems, we propose the use of HANs combined with structure-tags which mark the role of sentences in the document. Adding tags to sentences, marking them as corresponding to title, abstract or main body text, yields improvements over the stateof-the-art for scholarly document quality prediction. The proposed system is applied to the task of accept/reject prediction on the Peer-Read dataset and compared against a recent BiLSTM-based model and joint textual+visual model as well as against plain HANs. Compared to plain HANs, accuracy increases on all three domains. On the computation and language domain our new model works best overall, and increases accuracy 4.7% over the best literature result. We also obtain improvements when introducing the tags for prediction of the number of citations for 88k scientific publications that we compiled from the Allen AI S2ORC dataset. For our HAN-system with structure-tags we reach 28.5% explained variance, an improvement of 1.8% over our reimplementation of the BiLSTM-based model as well as 1.0% improvement over plain HANs.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Training recurrent neural networks on long texts, in particular scholarly documents, causes problems for learning. While hierarchical attention networks (HANs) are effective in solving these problems, they still lose important information about the structure of the text. To tackle these problems, we propose the use of HANs combined with structure-tags which mark the role of sentences in the document. Adding tags to sentences, marking them as corresponding to title, abstract or main body text, yields improvements over the stateof-the-art for scholarly document quality prediction. The proposed system is applied to the task of accept/reject prediction on the Peer-Read dataset and compared against a recent BiLSTM-based model and joint textual+visual model as well as against plain HANs. Compared to plain HANs, accuracy increases on all three domains. On the computation and language domain our new model works best overall, and increases accuracy 4.7% over the best literature result. We also obtain improvements when introducing the tags for prediction of the number of citations for 88k scientific publications that we compiled from the Allen AI S2ORC dataset. For our HAN-system with structure-tags we reach 28.5% explained variance, an improvement of 1.8% over our reimplementation of the BiLSTM-based model as well as 1.0% improvement over plain HANs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic prediction of the quality of scientific and other texts is a new topic within the field of deep learning. Deep learning has been successfully applied to many natural language processing (NLP) problems including text classification, as well as many computer vision applications including document structure analysis. These successes suggest automatic quality assessment of scientific documents, while still highly ambitious, is feasible for scientific study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sequential deep learning models, particularly recurrent neural networks (RNNs), long short-term memories (LSTMs) and their variants, have been particularly successful for applications that require the encoding and/or generation of relatively short sequences of text, typically at most a few sentences. Applications include (short) text classification (Rao and Spasojevic, 2016) , entailment (Rockt\u00e4schel et al., 2015) and neural machine translation (MT) (Bahdanau et al., 2014; Luong et al., 2015) . Newer attention-based models, particularly the transformer model (Vaswani et al., 2017) are even more apt at using all of the possible context when encoding sentences, further improving performance. Transformers are also used to build general sentence embeddings with the BERT model (Devlin et al., 2018) . In comparison, the accurate classification of full documents remains challenging. To be effective, a deep learning model for longer text should fulfill the following three criteria: 1. Trainability: being trainable on long texts.",
"cite_spans": [
{
"start": 351,
"end": 377,
"text": "(Rao and Spasojevic, 2016)",
"ref_id": "BIBREF15"
},
{
"start": 391,
"end": 417,
"text": "(Rockt\u00e4schel et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 454,
"end": 477,
"text": "(Bahdanau et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 478,
"end": 497,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 565,
"end": 587,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 783,
"end": 804,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "as parallelizability, to effectively use GPUs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational efficiency: efficiency as well",
"sec_num": "2."
},
{
"text": "3. Rich context: having access to rich context at sentence and document level. And avoiding therefore: 1) the assumption that sentences at different locations are independent, 2) the even more crippling assumption of statistical independence of document words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational efficiency: efficiency as well",
"sec_num": "2."
},
{
"text": "Plain sequential models such as RNNs and LSTMs model text as unstructured word sequences. This causes problems on longer texts because of the vanishing gradient and exploding gradient problem (Pascanu et al., 2013) , which hampers trainability. Gradient bounding methods including gradient clipping (Hochreiter, 1998) , can help to reduce these problems, but provide no solution for docu-ments with thousands of words. Transformers and BERT are not a good match for long texts either, as these models have a computational cost that grows quadratically with sentence length. Arguably, bagof-word models, including models performing average pooling over word embeddings accomplish trainability and computational efficiency. However, their computational cheapness is achieved at the price of making very strong statistical independence assumptions that harm prediction quality.",
"cite_spans": [
{
"start": 192,
"end": 214,
"text": "(Pascanu et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 299,
"end": 317,
"text": "(Hochreiter, 1998)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computational efficiency: efficiency as well",
"sec_num": "2."
},
{
"text": "A group of models does fulfill all three criteria: hierarchical versions of sequential models, in particular hierarchical attention networks (HANs) (Yang et al., 2016) . HANs produce hierarchical text encodings using a hierarchical stacking of LSTMswith attention, for the sentence and text level. This massively increases parallelization while simultaneously reducing the number of steps the gradient signal needs to be back-propagated during training, increasing learnability. HAN text encodings can still take much context into account at every level in the representation, thanks to the use of LSTMs.",
"cite_spans": [
{
"start": 148,
"end": 167,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computational efficiency: efficiency as well",
"sec_num": "2."
},
{
"text": "While HANs are highly effective in forming adequate representations of longer texts, they are still deficient in the use of structure information inherent in the text. The reason is simple: these models have only one ((Bi)LSTM) encoding sub-model per level in the hierarchy. This sub-model is used to encode all the inputs at that level, without access to relevant structure context. In this work we observe that this problem can be tackled by adding XML-like structure-tags at the beginning and end of each input sentence. The effectiveness of our approach is demonstrated on two tasks: A Paper accept/reject prediction on the Peer-Read dataset (Kang et al., 2018) .",
"cite_spans": [
{
"start": 646,
"end": 665,
"text": "(Kang et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computational efficiency: efficiency as well",
"sec_num": "2."
},
{
"text": "B Number of citations prediction for scholarly documents, on a new dataset with 88K articles compiled from the Allen AI S2ORC dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational efficiency: efficiency as well",
"sec_num": "2."
},
{
"text": "The experiments for both tasks show that using just three tags to mark abstract, title and body text, already provides substantial improvements: A) outperforming all models on the computation and language domain and HAN without tags on all domains, B) outperforming all other models. Larger gains can likely be made by further enriching the tag-set. The proposed tagging approach is particularly useful in the domain of scholarly document understanding, since while these document are typically long, they are also highly structured. The rest of the paper is structured as follows. In section 2 we discuss the various existing and alternative NLP models for the aforementioned tasks of quality prediction. Section 3 describes the proposed HAN model combined with structure-tags. Section 4 and 5 respectively discuss their use for accept/reject and number of citations prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational efficiency: efficiency as well",
"sec_num": "2."
},
{
"text": "Multiple methods have been proposed to estimate the quality of scientific papers. The most common approach is to use the citation counts as a measure of quality, to be predicted by models. Fu and Aliferis (2008) proposed one of the first models which used both the papers content in the form of the paper title, abstract and keywords as well as bibliometric information. Notably they used automated scripts to retrieve bibliometric information, even so their final corpus is still relatively small, containing 3788 papers. Limited recent research is available on the subject of predicting the quality of papers with deep learning using the textual content. Shen et al. (2019) combine visual and textual content using a CNN and LSTM respectively. The authors make use of the Wikipedia and the arXiv datasets. The authors propose a joint model that classifies the quality of papers. To generate textual embeddings, the authors use a bi-directional LSTM model similar to the one proposed by the same authors in (Shen et al., 2017) . The input to the model is the word embeddings of a paper, obtained using GloVe, and the output is a textual embedding.",
"cite_spans": [
{
"start": 189,
"end": 211,
"text": "Fu and Aliferis (2008)",
"ref_id": "BIBREF2"
},
{
"start": 657,
"end": 675,
"text": "Shen et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 1008,
"end": 1027,
"text": "(Shen et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Some recent work focuses on predicting the number of citations from the paper text augmented with review text. To do so, Li et al. (2019) created a dataset of abstracts and reviews from the ICLR and NIPS conferences: 1739 abstracts with a total of 7171 reviews for ICLR and 384 abstracts with 1119 reviews for NIPS. Plank and van Dale (2019) collect a dataset of 3427 papers with 12260 reviews. Both papers show improvement in the results from using the review information.",
"cite_spans": [
{
"start": 121,
"end": 137,
"text": "Li et al. (2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Hierarchical versions of sequential models have already been pioneered in the literature for a long time in the form of hierarchical RNNs (Hihi and Bengio, 1996) . More recently however, use of LSTMs instead of RNNs and use of attention resulted in the now popular HAN model (Yang et al., 2016) , which was successfully applied to sentiment analysis and text classification.",
"cite_spans": [
{
"start": 138,
"end": 161,
"text": "(Hihi and Bengio, 1996)",
"ref_id": "BIBREF4"
},
{
"start": 275,
"end": 294,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical sequential models",
"sec_num": null
},
{
"text": "Adding structure through additional inputs Our proposed structure-tag framework resembles the approach that been used for neural MT translating multiple source languages to multiple target languages using a unified model (Johnson et al., 2016) , in which a special \"command token\" is used to indicate which kind of translation is desired. Related also is the idea of using multiple embeddings for different types of information, as introduced in neural MT by Sennrich and Haddow (2016) , which was later also exploited in the transformer model (Vaswani et al., 2017) . In contrast to the latter approaches which change the embedding layer, like (Johnson et al., 2016) we leave the (HAN) model exactly as is and only change the input.",
"cite_spans": [
{
"start": 221,
"end": 243,
"text": "(Johnson et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 459,
"end": 485,
"text": "Sennrich and Haddow (2016)",
"ref_id": "BIBREF17"
},
{
"start": 544,
"end": 566,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 645,
"end": 667,
"text": "(Johnson et al., 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical sequential models",
"sec_num": null
},
{
"text": "In this work we use and refine state-of-the-art textbased deep learning models for text classification and regression tasks: accept/reject prediction and number of citations prediction respectively. Our contributions focus on HANs, 1 which we show for these tasks to be competitive with models that use a flat BiLSTM encoder at their core (Shen et al., 2019) . Figure 1a shows a diagram of our HAN model with structure-tags added to the input, and Figure 1b shows a diagram of the BiLSTM-based model, our baseline for comparison. As can be seen from the diagrams, both models use a BiLSTM at the text level that works on embeddings computed for the sentences of the text. However, while HAN uses the sequential order to compute an embedding, the baseline model averages word vectors, disregarding order, similar to bag-of-word representations. We also use a second baseline model: Average Word Embeddings (AWE), which simply encodes text by the average word embedding.",
"cite_spans": [
{
"start": 339,
"end": 358,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 361,
"end": 370,
"text": "Figure 1a",
"ref_id": "FIGREF1"
},
{
"start": 448,
"end": 457,
"text": "Figure 1b",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
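The AWE baseline described above can be made concrete with a short sketch. This is an illustration of the idea (a text encoded as the unweighted mean of its word vectors, discarding order), not the authors' released code; the embedding matrix and token ids are toy values.

```python
import numpy as np

def awe_encode(token_ids, embedding_matrix):
    """Average Word Embeddings: encode a text as the mean of its word vectors."""
    return embedding_matrix[token_ids].mean(axis=0)

# Toy embedding table: 10 vocabulary entries, 4 dimensions.
vocab_size, dim = 10, 4
embeddings = np.arange(vocab_size * dim, dtype=float).reshape(vocab_size, dim)

text = [1, 3, 5]                      # token ids of a short "text"
encoding = awe_encode(text, embeddings)
print(encoding)                       # mean of embedding rows 1, 3 and 5
```

Because the mean is order-invariant, any permutation of `text` produces the same encoding, which is exactly the independence assumption the paper criticizes.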
{
"text": "The hierarchical structure of text characterized by structure elements such as sections, paragraphs and sentences and labeling elements such as document titles and section titles reveals important information. Models without hierarchy such as plain RNN/LSTM models ignore this structure, which motivated HAN. HAN uses an LSTM with attention to create encodings of each sentence separately and combines this with a second LSTM with attention on top to transform these into an encoding of the entire text. The hierarchical structure of HAN provides several advantages over flat sequential models, i.e. plain RNNs/LSTMs: 1. Trainability on long texts: using less steps for back-propagating gradients during training, HAN can process longer texts without running into vanishing/exploding gradient problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence type tags for more structure",
"sec_num": "3.1"
},
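The two-level encoding just described can be sketched numerically. This is an illustrative reduction of HAN (not the authors' implementation): word vectors are attention-pooled into sentence encodings, and sentence encodings are attention-pooled into a document encoding. The per-level BiLSTMs of the real model are omitted, and the query vectors are random stand-ins for learned context vectors.

```python
import numpy as np

def attention_pool(vectors, query):
    """Softmax-attention pooling: weight each vector by its similarity to a
    (learned) query vector, then return the weighted average."""
    scores = vectors @ query                 # one score per position
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ vectors

rng = np.random.default_rng(0)
dim = 8
word_query = rng.normal(size=dim)            # stand-in for learned word context
sent_query = rng.normal(size=dim)            # stand-in for learned sentence context

# A toy document: two sentences of 5 and 7 word vectors.
document = [rng.normal(size=(5, dim)), rng.normal(size=(7, dim))]

# Level 1: words -> sentence encodings; level 2: sentences -> document encoding.
sentence_encodings = np.stack([attention_pool(s, word_query) for s in document])
document_encoding = attention_pool(sentence_encodings, sent_query)
print(document_encoding.shape)               # (8,)
```

Note how the gradient path from the document encoding down to any word is only two pooling steps deep, which is the trainability advantage the text describes.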
{
"text": "<TITLE>Cross-Task Knowledge-Constrained Self Training </TITLE> <ABSTRACT> Abstract </ABSTRACT> <ABSTRACT> We present an algorithmic framework for learning multiple related tasks. </ABSTRACT> . . . <BODY_TEXT> 1 Introduction </BODY_TEXT> <BODY_TEXT> When two NLP systems are run on the same data, we expect certain constraints to hold between their outputs. </BODY_TEXT> . . . HAN preserves high-resolution when forming sentence-level encodings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence type tags for more structure",
"sec_num": "3.1"
},
{
"text": "2. Computational efficiency: the structure of HAN makes computations better parallelizable, since its sentence encoding LSTMs can process their inputs in parallel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence type tags for more structure",
"sec_num": "3.1"
},
{
"text": "3. Interpretability of predictions: visualizing HAN attention facilitates some qualitative insight into what inputs are important for making predictions at a sentence and word level. Despite these large advantages, HAN in its normal application still remains limited in its use of structure. In particular, while HAN encodes sentences in a hierarchical way, it does so while using the same LSTM encoder for every sentence without structure context. In this work we introduce a way to overcome these problems by adding sentence type tags encoding the role of a sentences or other information, which is then directly available to the BiLSTM when encoding the sentences. This is illustrated in Figure 2 . First the input is segmented into a list of sentences, 2 as is also done in preprocessing for regular HAN. Then the role of each sentence is added at the beginning and end of each sentence. In our current experiments the roles are restricted to three options: TITLE, ABSTRACT, BODY_TEXT, however, the idea is general enough to include much more specific tags as well as tags encoding relative or absolute sentence position information; to be explored in future work. We will refer to this system as hierarchical attention network with structure tags (HAN ST ). The advantages of the tag-base approach over other possible solutions, such as using different BiLSTMs for different types of sentences are simplicity and scalability. Equally important, using tags allows the BiLSTM to only specialize its functioning to specific types of sentences where needed, while",
"cite_spans": [],
"ref_spans": [
{
"start": 691,
"end": 699,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Sentence type tags for more structure",
"sec_num": "3.1"
},
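The tagging step described above amounts to simple string preprocessing. The sketch below is illustrative (the helper name is ours, not from the released code); it wraps each sentence in the XML-like role tags the paper uses, mirroring the Figure 2 example.

```python
def add_structure_tags(title, abstract_sentences, body_sentences):
    """Wrap each sentence in XML-like tags marking its role in the document.
    The three roles are the ones used in the paper: TITLE, ABSTRACT, BODY_TEXT."""
    tagged = [f"<TITLE> {title} </TITLE>"]
    tagged += [f"<ABSTRACT> {s} </ABSTRACT>" for s in abstract_sentences]
    tagged += [f"<BODY_TEXT> {s} </BODY_TEXT>" for s in body_sentences]
    return tagged

example = add_structure_tags(
    "Cross-Task Knowledge-Constrained Self Training",
    ["We present an algorithmic framework for learning multiple related tasks."],
    ["When two NLP systems are run on the same data, we expect certain "
     "constraints to hold between their outputs."],
)
print(example[0])
```

The tagged sentences are then fed to the unchanged HAN model, so the tags become ordinary tokens whose embeddings the sentence-level BiLSTM can condition on.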
{
"text": "The first scholarly document quality prediction task we test our methods on is accept/reject prediction on arXiv papers from the PeerRead dataset (Kang et al., 2018) . This dataset is chosen because of the large amount of earlier work in the literature reporting results on it, allowing comparison against the state-of-the-art on a well studied task.",
"cite_spans": [
{
"start": 146,
"end": 165,
"text": "(Kang et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Accept/Reject prediction on PeerRead",
"sec_num": "4"
},
{
"text": "The full PeerRead dataset holds 14784 papers in total, each of which contains implicit or explicit accept/reject labels. Furthermore, PeerRead contains different subsets of papers. The largest subset consists of arXiv papers (11778) in three computerscience sub-domains: 3 machine learning (cs.LG), computation and language (cs.CL), artificial intelligence (cs.AI), and has only accept/reject labels; this is the dataset that we use. A part of the papers also include reviews (3006 papers) and a subset of the latter also contains aspect scores (586 papers). However, of these papers with reviews, the large majority is from NIPS (2420 papers), and those papers are all accepted. As the arXiv portion is relatively larger, and accept/reject labeled, most work has focused on the task of accept/reject prediction for the papers in this set. Table 1 shows the sizes of the different subsets of the arXiv PeerRead dataset and their respective division in number of accept and reject examples. Note that this division is imbalanced for each of the three domains, with the least imbalance for the machine learning subset and the most imbalance for the artificial intelligence subset, in which around 90% of the examples is rejected. These imbalances in the number of examples for each of the classes make learning harder, but can be partly overcome by using strategies such as re-sampling.",
"cite_spans": [],
"ref_spans": [
{
"start": 840,
"end": 847,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Accept/Reject prediction on PeerRead",
"sec_num": "4"
},
{
"text": "In our experiments we tried to stay close to the experimental setup used by (Shen et al., 2019) , while deviating from their settings when necessary. We used PyTorch for our code and a single GeForce RTX 2080 Ti GPU for our experiments. Table 2 gives an overview of the used hyperparameters that are shared across experiments, as well as the hyperparameters that are specific to the accept/reject prediction task. We used Adam (Kingma and Ba, 2014) as optimizer, and Xavier (Glorot) (Glorot and Bengio, 2010) weight initialization. We use a considerably larger learning rate of 0.005, compared to 0.0001 used by (Shen et al., 2019) . 4 On PeerRead, we use a small batch size of 4 . This is necessary for HAN as it uses relatively much memory, because it builds rich hierarchical BiLSTM-based representations directly from the word embeddings. We furthermore use re-sampling on the computational language and artificial intelligence subsets, as we find that without it, due to the imbalance in the label frequencies, learning fails. The re-sampling is done for each epoch, by keeping the full subset of examples with the less frequent label, but sub-sampling an equal number of random examples from the more frequent label subset. In our experiments the training of all our models proceeds slower than the number of epochs (60) used by Shen et al. (2019) suggests. This observation holds not only for our models but also for our reimplementation of their model, and in spite of the fact that we are using a higher learning rate. We therefore used a higher number of 360 training epochs. 4 Learning rate 0.0001 gave poor results in our experiments. In each experiment, we used the highest accuracy score on the validation set to select the best model, using the last epoch that achieves that score in case of ties. We trained plus evaluated every model three times, to control for optimizer instability, reporting mean and standard deviation of the metric scores.",
"cite_spans": [
{
"start": 76,
"end": 95,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 612,
"end": 631,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 634,
"end": 635,
"text": "4",
"ref_id": null
},
{
"start": 1335,
"end": 1353,
"text": "Shen et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 1586,
"end": 1587,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 237,
"end": 244,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
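The per-epoch re-sampling described above can be sketched in a few lines. This is our reading of the procedure, not the released code (the function name is ours): keep every example of the less frequent label and draw an equally sized random sample, without replacement, from the more frequent label.

```python
import random

def resample_balanced(examples, labels, rng):
    """Per-epoch re-sampling for a binary, imbalanced label set: keep the full
    minority class and sub-sample the majority class down to the same size."""
    by_label = {}
    for example, label in zip(examples, labels):
        by_label.setdefault(label, []).append(example)
    groups = sorted(by_label.values(), key=len)
    minority, majority = groups[0], groups[-1]
    epoch_examples = minority + rng.sample(majority, len(minority))
    rng.shuffle(epoch_examples)
    return epoch_examples

rng = random.Random(0)
examples = [f"paper_{i}" for i in range(13)]
labels = ["reject"] * 10 + ["accept"] * 3    # imbalanced, as in the cs.AI subset
epoch = resample_balanced(examples, labels, rng)
print(len(epoch))  # 6: all 3 accepted papers + 3 randomly sampled rejected ones
```

Re-drawing the majority sample every epoch lets the model eventually see all majority-class examples while each individual epoch stays balanced.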
{
"text": "Using the full text as input is in theory preferred over using only selected text, in order not to lose information prematurely. In practice however, this is not feasible with high resolution deep learning models such as HANs, which take input that starts at the word level. To save memory and computation, models may instead start out from the sentence level, using embeddings directly as inputs. But with simple sentence embeddings, this leads to a substantial loss of input information, which may hamper performance. Even so, (Shen et al., 2019) apply this strategy in a basic way by computing the average word embedding for each sentence, and using a BiLSTM model on top of that. Nevertheless, they still use a limit on the input length, by allowing only a maximum of 350 sentences. With HAN, which uses more memory and computation-intense sentence-level encodings, limiting the input length is even more crucial. However, rather than limiting the number of sentences, we limited the maximum number of characters, set to 20000. We found that with HANs this gives better results, even though on average it corresponds to less words. This is explained by the distribution over the number of words per example for each of the two length cutoff policies, see Table 4 . Fixing the number of sentences causes large variance in the number of words per example, likely caused by writing style differences across authors. In contrast, fixing the number of characters by definition assures a con- stant input length, and hence a more constant number of words (which is proportional to number of characters). We believe this more constant amount of information in the input aides learning. Table 5 shows our best results on the PeerRead dataset, using HAN ST . The same table also shows the previous literature results of (Shen et al., 2019) and (Kang et al., 2018) . 
Observe that in the computation & language domain, we gain 4.7% accuracy over the best of the these literature models (Joint), while on the machine learning domain and artificial intelligence domain datasets we lose 2.4% and 3.8% respectively in comparison to the best performing of the literature models on these domains (BiLSTM and Joint). In Table 6 we show the results for both our HAN models as well as for the other models. These results show a clear and consistent improvement from HAN ST over plain HAN: 1.5% accuracy for the computation & language domain and 2.1% for the machine learning domain and 0.7% for the machine learning domain. Table 6 also shows results for our own re-implementation of the BiLSTM model described by (Shen et al., 2019) . This useful for comparison since we made some changes to the experimental setup, including the use of higher learning rate and use of resampling. We observe better results with our reimplementation of BiLSTM than in the original work for these datasets where re-sampling was helpful, showing its importance for imbalanced datasets. While HAN ST is competitive with the literature models on PeerRead, it benefits from larger training data, as is available for the task of number of citations prediction.",
"cite_spans": [
{
"start": 529,
"end": 548,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 1815,
"end": 1834,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 1839,
"end": 1858,
"text": "(Kang et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 2599,
"end": 2618,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1259,
"end": 1266,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 1683,
"end": 1690,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 2206,
"end": 2213,
"text": "Table 6",
"ref_id": "TABREF6"
},
{
"start": 2508,
"end": 2516,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Input cutoff",
"sec_num": "4.1.1"
},
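The character-based cutoff can be illustrated with a hypothetical helper (the name and the exact truncation rule are our assumptions; the released code may differ): whole sentences are kept until the running character count would exceed the 20000-character budget, which yields near-constant input length across documents regardless of sentence-length style.

```python
def truncate_to_char_budget(sentences, max_chars=20000):
    """Keep whole sentences until adding one more would exceed the budget."""
    kept, used = [], 0
    for sentence in sentences:
        if used + len(sentence) > max_chars:
            break
        kept.append(sentence)
        used += len(sentence)
    return kept

# Toy example with a 100-character budget instead of 20000.
sentences = ["a" * 40, "b" * 40, "c" * 40]
truncated = truncate_to_char_budget(sentences, max_chars=100)
print(len(truncated))  # 2: the third sentence would exceed the budget
```

Contrast this with a fixed 350-sentence cutoff, under which the same budget would admit wildly different character counts depending on the author's sentence length.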
{
"text": "To determine the importance of different structure tags, in particular the title marking, we performed ablation experiments in which we reduced the label set. We combined the title and abstract label into one, leaving a structure-tag set with only two tags. Table 7 shows the results. As can be seen, the smaller structure-tag set reduces performance in comparison to HAN with three structure tags on all three domains. In the computation & language domain, the model performs worse also than HAN without structure tags on both accuracy and AUC, and in the artificial intelligence domain it performs equal in terms of accuracy but worse still on AUC. In the machine learning domain the model also loses performance over HAN with three structure tags, but still outperforms plain HAN. The results suggest that the titles of articles contain informa- tion that is relatively important for the model to make correct classification decisions, at least for the accept/reject prediction task with the PeerRead data. We leave study into the effect of extending the structure-tag set for future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 258,
"end": 265,
"text": "Table 7",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Effects of reducing the label set",
"sec_num": "4.1.3"
},
{
"text": "The second task we test our models on is number of citations prediction. A key advantage of this task is that large datasets can be obtained relatively easy by leveraging public sources such as the Semantic Scholar Database. In contrast, obtaining accept/reject labels in large quantities typically requires having an agreement with publishers, and even then because of legal problems, it is hard to obtain and publish such data. 6 Yet, how useful it is to predict the number of citations? More specifically: is the number of citations of a paper predictive of its quality? Intuitively one would expect this to be the case at least to some extent. Figure 3 shows histograms of the numbers of citations of articles from the PeerRead datasets for accepted and rejected papers. 7 While there are some differences between the two domains, the main trend is the same in both cases: for rejected papers, the counts are peaked around zero citations and quickly decrease to one or zero for high citation counts. In contrast, the number of citations for accepted papers is two to three times higher on average, depending on the domain. Finally, we formally computed correlation in the form of the Spearman rank-order correlation coefficient (\u03c1) and associated p-value for both domains. For both domains, the value of \u03c1 is high and the p-value extremely close to zero, which indicates significant correlation can be concluded at all p-levels of significance for a two-sided test. These histograms and numbers prove that there is indeed a strong correlation between acceptance/rejection and the number of citations. Therefore it makes sense to consider the number of citations as an imperfect but nonetheless useful proxy for the quality of scholarly documents.",
"cite_spans": [],
"ref_spans": [
{
"start": 648,
"end": 656,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Number of citations prediction",
"sec_num": "5"
},
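The Spearman rank-order correlation check described above can be sketched in plain Python. This is a minimal illustration with hypothetical accept/reject labels and citation counts, not the actual PeerRead data: Spearman's rho is the Pearson correlation of the rank variables, with tied values receiving the average of their ranks.

```python
def average_ranks(values):
    """1-based ranks; tied values receive the average rank of their tie group."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j to the end of the current tie group
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation computed on the rank variables."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

labels = [1, 1, 1, 0, 0, 0]        # hypothetical accept (1) / reject (0) labels
citations = [40, 25, 12, 3, 1, 0]  # hypothetical citation counts
rho = spearman_rho(labels, citations)
```

With these toy numbers, accepted papers consistently receive higher citation counts, so rho comes out strongly positive; the associated p-value would additionally require the null distribution of rho, which a statistics library normally provides.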
{
"text": "Recent works undertake the task of number of citations prediction based on the scholarly document text, but mostly do so while using relatively small datasets. As discussed in related work, some of the recent work adds review text to the input. However, creating models using reviewer comments limits their practical application to after reviewing and reduces available training data. These observations motivated us to rather aim for a relatively large dataset of paper, number of citations pairs. We selected a subset of papers in the computer science domain from the S2ORC (Lo et al., 2020) data, for which title, abstract and body text information is present; these are combined as the example text. We did this for papers in the year range 2000-2010, and counted the number of citations of citing papers that are published within 8 years after the publication of a paper. Randomly ordering the papers, from this we compiled a dataset with in total about 88K papers, and statistics as shown in Table 8 . 8 Note that to the best of our knowledge, the largest number of articles used for citation prediction in earlier work is described in (Plank and van Dale, 2019) , we use more than 23 times the number of articles used in their experiments. While we kept the maximum number of words per example at 20000, during our experiments we have only used the first of the list of text dictionaries for each article in S2ORC , consequently the average number of words is much lower: around 840 words per example. 9 We leave creating examples with the ",
"cite_spans": [
{
"start": 576,
"end": 593,
"text": "(Lo et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 1008,
"end": 1009,
"text": "8",
"ref_id": null
},
{
"start": 1142,
"end": 1168,
"text": "(Plank and van Dale, 2019)",
"ref_id": "BIBREF14"
},
{
"start": 1509,
"end": 1510,
"text": "9",
"ref_id": null
}
],
"ref_spans": [
{
"start": 998,
"end": 1005,
"text": "Table 8",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "The dataset",
"sec_num": "5.1"
},
{
"text": "The number of citations follow of scholarly documents follow a Zipfian distribution (Silagadze, 1997) . That is, most papers have little citations, but those that obtain more citations tend to get exponentially more. To account for this, we used the log of the number of citations to create a metric that aims to approximates a measure of quality on a linear scale. In practice, we use the function: citation-score = log e (n + 1) (1) adding one to the number of citations n before taking the log, to make sure the function is well-defined even for papers with zero citations.",
"cite_spans": [
{
"start": 84,
"end": 101,
"text": "(Silagadze, 1997)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Citation-score as a quality proxy",
"sec_num": "5.2"
},
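A minimal sketch of the citation-score of Equation (1), together with its deterministic inverse (the back-transformation from score to citation count discussed in the next section); function names are our own, illustrative choices.

```python
import math

def citation_score(n_citations):
    """Citation-score from Eq. (1): log_e(n + 1), well-defined for n = 0."""
    return math.log(n_citations + 1)

def citations_from_score(score):
    """Deterministic inverse: recover the citation count from a score."""
    return round(math.exp(score) - 1)
```

Because the transform is a bijection (up to rounding), predicted scores can always be mapped back to a number of citations, unlike binned targets.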
{
"text": "What alternatives to our log-based metric have been explored in the literature? Li et al. (2019) map citation counts to the [0,1] range, presumably by simply scaling them after the paper with the maximum and minimum number of citations in a dataset have been determined. But this approach transfers poorly to new data, since as the number of citations follows the Zipfian distribution, still higher citation counts in unseen data are likely. Furthermore, because of the Zipfian nature of the number of citations, this transformation will map the citation score of many papers to a number close to zero, drastically inflating the evaluation scores of predictions for this citation score. A better approach is to discretize the number of citations into a fixed number of ranges. To predict the impact of scientific papers, Plank and van Dale (2019) discretize timenormalized citation statistics into low, medium and high impact papers based on a boxplot and outlier analysis. In comparison however, our approach does not require discretization/binning, which has advantages: 1) not committing to a fixed resolution, 2) avoiding problems for papers with a number of citations on the border of two bins, 3) allowing the predicted scores to be deterministically transformed back to a number of citations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to alternative citation scores",
"sec_num": null
},
{
"text": "The next important question is what loss function we should optimize when training our networks to predict the chosen citation score (1). Whereas mean-squared-error (MSE) is the default choice for regression problems, we found this loss function to perform poorly in combination with our score. In contrast, preliminary experiments showed that mean-absolute-error (MAE) facilitates effective and relatively stable optimization, so we decided to use this. Another important question is the choice of quality metrics. MSE and MAE are standard metrics for regression evaluation, so we report those. Additionally, we report the R 2 score, denoting the proportion of the variance in the dependent variable that is predictable from the independent variable(s), defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss function and evaluation metrics",
"sec_num": "5.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R 2 = 1 \u2212 FVU = 1 \u2212 MSE(Y, Y ) var[Y ]",
"eq_num": "(2)"
}
],
"section": "Loss function and evaluation metrics",
"sec_num": "5.3"
},
{
"text": "With Y and Y being the predicted and actual labels respectively, MSE being the mean-squared-error and and FVU the fraction of variance unexplained. This explains how the R 2 score normalizes for the relative difficulty for the task, by dividing by the variance of the labels in the test set. Another interpretation is that the R 2 score normalizes by the error obtained when always predicting the average of the test labels. Consequently, a R 2 score larger than 0 means performance better than this baseline, and below 0 means worse. This avoids the need to add scores for this baseline as reference, making the R 2 score more directly interpretable than MSE or MAE. As such, unlike the other metrics the R 2 score is also meaningfully comparable across datasets, which typically differ in test set variance. Table 9 shows the results of our models trained on our new S2ORC number of citations predic-tion dataset. We observe that the HAN ST model outperforms the other models. Furthermore, the improvements of HAN ST over BiLSTM and AWE is statistically significant (wilcoxon signed-rank test), with p-value 0.008 in both cases .",
"cite_spans": [],
"ref_spans": [
{
"start": 810,
"end": 817,
"text": "Table 9",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Loss function and evaluation metrics",
"sec_num": "5.3"
},
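A minimal sketch of the R^2 computation in Equation (2), written to make the baseline interpretation concrete: always predicting the mean of the test labels yields a score of exactly 0, and worse predictors go negative.

```python
def r2_score(y_true, y_pred):
    """R^2 = 1 - MSE(y_pred, y_true) / var(y_true), per Eq. (2):
    the fraction of label variance explained by the predictions."""
    n = len(y_true)
    mean = sum(y_true) / n
    mse = sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / n
    var = sum((t - mean) ** 2 for t in y_true) / n
    return 1.0 - mse / var
```

Predicting every test label perfectly gives 1.0; predicting the test-label mean everywhere gives 0.0, the baseline that R^2 implicitly normalizes against.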
{
"text": "This work showed the usefulness of HAN and rich context tags to the processing of scientific documents. Consistent improvements in prediction quality were obtained for both accept/reject estimation and number of citations prediction for HAN when adding structure-tags. A strong and significant correlation between accept/reject labels and number of citations was demonstrated, signaling the usefulness of the latter as a measure of scholarly document quality. With more training data, as available on the citation-score prediction task, HAN with structure-tags outperforms the strong and recently proposed scholarly document quality prediction models that we compared to in this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our HAN implementation is adapted from https://github.com/cedias/Hierarchical-Sentiment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use spaCy for this: https://spacy.io/ effectively sharing what can be generalized independent of sentence type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Based on arXiv categories within computer science, see: https://arxiv.org/archive/cs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The scores of the 3 runs for one system are combined at example level by taking the mode/average, i.e. simple voting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that while the PeerRead arXiv accept/reject dataset is relatively large, its labels are based on heuristics.7 By restricting citation counting to citing papers published within two years of each paper's publication, we keep citation counts comparable across papers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The new S2ORC-derived log-citation-count prediction dataset, used in our experiments, is available from: https://github.com/gwenniger/s2orc-cc/ 9 Due to a misunderstanding of the S2ORC data format, which actually does contain longer text when combining all the text dictionaries, which got clarified after submission.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This project has been supported by the European Fund for Regional development (EFRO) and the Target Fieldlab. The Peregrine high performance computing cluster, at the Center for Information Technology of the University (CIT) of Groningen, was used for running part of the experiments in this study. We would like to thank the people at the CIT for their support and access to the cluster. We would also like to thank Charles-Emmanuel Dias for sharing his HAN implementation, which proved to be a solid foundation for the HAN-based models used in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Models for predicting and explaining citation count of biomedical articles. AMIA",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Aliferis",
"suffix": ""
}
],
"year": 2008,
"venue": "Annual Symposium proceedings / AMIA Symposium. AMIA Symposium",
"volume": "6",
"issue": "",
"pages": "222--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence Fu and Constantin Aliferis. 2008. Mod- els for predicting and explaining citation count of biomedical articles. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium, 6:222-6.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Understanding the difficulty of training deep feedforward neural networks",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "AISTATS",
"volume": "9",
"issue": "",
"pages": "249--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Understand- ing the difficulty of training deep feedforward neural networks. In AISTATS, volume 9 of JMLR Proceed- ings, pages 249-256. JMLR.org.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Hierarchical recurrent neural networks for long-term dependencies",
"authors": [
{
"first": "Salah",
"middle": [
"El"
],
"last": "Hihi",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salah El Hihi and Yoshua Bengio. 1996. Hierarchical recurrent neural networks for long-term dependen- cies. In D. S. Touretzky, M. C. Mozer, and M. E.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Advances in Neural Information Processing Systems",
"authors": [
{
"first": "",
"middle": [],
"last": "Hasselmo",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "8",
"issue": "",
"pages": "493--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hasselmo, editors, Advances in Neural Information Processing Systems 8, pages 493-499. MIT Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The vanishing gradient problem during learning recurrent neural nets and problem solutions",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
}
],
"year": 1998,
"venue": "International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems",
"volume": "6",
"issue": "2",
"pages": "107--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter. 1998. The vanishing gradient prob- lem during learning recurrent neural nets and prob- lem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(2):107- 116.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [
"B"
],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho- rat, Fernanda B. Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's multilingual neural machine translation system: Enabling zero-shot translation. CoRR, abs/1611.04558.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A dataset of peer reviews (peerread): Collection, insights and nlp applications",
"authors": [
{
"first": "Dongyeop",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Bhavana",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Madeleine",
"middle": [],
"last": "Van Zuylen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Kohlmeier",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongyeop Kang, Waleed Ammar, Bhavana Dalvi, Madeleine van Zuylen, Sebastian Kohlmeier, Ed- uard Hovy, and Roy Schwartz. 2018. A dataset of peer reviews (peerread): Collection, insights and nlp applications.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "the 3rd International Conference for Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Cite arxiv:1412.6980Comment: Published as a confer- ence paper at the 3rd International Conference for Learning Representations, San Diego, 2015.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A neural citation count prediction model based on peer review text",
"authors": [
{
"first": "Siqing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wayne",
"middle": [
"Xin"
],
"last": "Zhao",
"suffix": ""
},
{
"first": "Eddy",
"middle": [],
"last": "Jing Yin",
"suffix": ""
},
{
"first": "Ji-Rong",
"middle": [],
"last": "Wen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4914--4924",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1497"
]
},
"num": null,
"urls": [],
"raw_text": "Siqing Li, Wayne Xin Zhao, Eddy Jing Yin, and Ji- Rong Wen. 2019. A neural citation count prediction model based on peer review text. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4914-4924, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "S2orc: The semantic scholar open research corpus",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Lucy",
"middle": [
"Lu"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Rodney",
"middle": [],
"last": "Kinney",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kin- ney, and Daniel S. Weld. 2020. S2orc: The seman- tic scholar open research corpus. In Proceedings of ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention- based neural machine translation.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "On the difficulty of training recurrent neural networks",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Pascanu",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2013,
"venue": "ICML (3)",
"volume": "28",
"issue": "",
"pages": "1310--1318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neu- ral networks. In ICML (3), volume 28 of JMLR Workshop and Conference Proceedings, pages 1310- 1318. JMLR.org.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Citetracked: A longitudinal dataset ofpeer reviews and citations",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Reinard Van Dale",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 4th Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank and Reinard van Dale. 2019. Cite- tracked: A longitudinal dataset ofpeer reviews and citations. In Proceedings of the 4th Joint Workshop on Bibliometric-enhanced Information Re- trieval and Natural Language Processing for Digital Libraries (BIRNDL 2019).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Actionable and political text classification using word embeddings and lstm",
"authors": [
{
"first": "Adithya",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Nemanja",
"middle": [],
"last": "Spasojevic",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adithya Rao and Nemanja Spasojevic. 2016. Action- able and political text classification using word em- beddings and lstm.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Reasoning about entailment with neural attention",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u00fd",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Rockt\u00e4schel, Edward Grefenstette, Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u00fd, and Phil Blunsom. 2015. Reasoning about entailment with neural attention.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Linguistic input features improve neural machine translation",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich and Barry Haddow. 2016. Linguistic input features improve neural machine translation. CoRR, abs/1606.02892.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A hybrid model for quality assessment of wikipedia articles",
"authors": [
{
"first": "Aili",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Jianzhong",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Australasian Language Technology Association Workshop 2017",
"volume": "",
"issue": "",
"pages": "43--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aili Shen, Jianzhong Qi, and Timothy Baldwin. 2017. A hybrid model for quality assessment of wikipedia articles. In Proceedings of the Aus- tralasian Language Technology Association Work- shop 2017, pages 43-52.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A joint model for multimodal document quality assessment",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Bahar",
"middle": [],
"last": "Salehi",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Jianzhong",
"middle": [],
"last": "Qi",
"suffix": ""
}
],
"year": 2019,
"venue": "JCDL '19: Proceedings of the 18th Joint Conference on Digital Libraries",
"volume": "",
"issue": "",
"pages": "107--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Shen, Bahar Salehi, Timothy Baldwin, and Jianzhong Qi. 2019. A joint model for multimodal document quality assessment. In JCDL '19: Pro- ceedings of the 18th Joint Conference on Digital Li- braries, pages 107-110.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Citations and the zipfmandelbrot's law. Complex Systems",
"authors": [
{
"first": "Z",
"middle": [
"K"
],
"last": "Silagadze",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "11",
"issue": "",
"pages": "487--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. K. Silagadze. 1997. Citations and the zipf- mandelbrot's law. Complex Systems, 11(6):487- 499.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1174"
]
},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1480-1489, San Diego, California. Associa- tion for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "(b) Model proposed byShen et al. (2019).",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Most important models compared in this work.",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Example of structure-tags for a paper from in the PeerRead computation and language arXiv dataset.",
"num": null,
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"text": "Histograms and global statistics of number of citations for accepted and rejected papers for the subdomains of PeerRead. Histograms are truncated on the right at 100 citations. The table in 3c shows the formal correlation measure: average numbers of citations and Spearman rank-order correlation in the different domains full paper text for future work. The labels added to the examples consist of a function of the number of citations, as explained next.",
"num": null,
"uris": null
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td/><td>num</td><td>training acc:rej</td><td>num</td><td>validation acc:rej</td><td>num</td><td>testing acc:rej</td><td>total</td></tr><tr><td>machine learning</td><td colspan=\"7\">4543 36.9% 2638</td></tr><tr><td>artificial intelligence</td><td colspan=\"3\">3682 10.5% : 89.5% 205</td><td>8.3% : 91.7%</td><td>205</td><td colspan=\"2\">7.8% : 92.2% 4092</td></tr></table>",
"text": "Data sizes and division between the ratio of accepted and rejected papers for the arXiv subsets 4% : 63.6% 252 36.5% : 63.5% 253 32.0% : 68.0% 5048 computation & language 2374 24.3% : 75.7% 132 22.0% : 78.0% 132 31.1% : 68.",
"html": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td/><td>PeerRead</td><td>S2ORC</td></tr><tr><td/><td>classification</td><td>regression</td></tr><tr><td>optimizer, learning rate</td><td colspan=\"2\">Adam, 0.005</td></tr><tr><td>maximum input characters</td><td colspan=\"2\">20000</td></tr><tr><td>vocabulary size</td><td colspan=\"2\">10000</td></tr><tr><td>weight initialization</td><td/><td/></tr><tr><td>general</td><td colspan=\"2\">Xavier uniform</td></tr><tr><td>lstm</td><td colspan=\"2\">Xavier normal</td></tr><tr><td>bias</td><td colspan=\"2\">zero</td></tr><tr><td>word embeddings</td><td colspan=\"2\">GloVe</td></tr><tr><td>loss function</td><td colspan=\"2\">cross entropy MAE</td></tr><tr><td>dropout probability</td><td>0.5</td><td>0.2</td></tr><tr><td>BiLSTM hidden size</td><td>256</td><td>100</td></tr><tr><td>batch size</td><td>4</td><td>64</td></tr><tr><td>embedding size</td><td>50</td><td>300</td></tr></table>",
"text": "Hyperparameters used in the experiments.",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td>Task</td><td>AWE</td><td>BiLSM</td><td>HAN/HAN ST</td></tr><tr><td>PeerRead</td><td>500202</td><td colspan=\"2\">1657222 3235206</td></tr><tr><td colspan=\"4\">citation prediction 3000901 3402801 3644801</td></tr></table>",
"text": "Total trainable parameters per model.",
"html": null,
"num": null
},
"TABREF4": {
"type_str": "table",
"content": "<table><tr><td/><td>average words</td><td>median words</td></tr><tr><td/><td>per example</td><td>per example</td></tr><tr><td>20000 chars length cutoff</td><td>3909 \u00b1 692</td><td>4076</td></tr><tr><td colspan=\"2\">360 sentences length cutoff 5246 \u00b1 1717</td><td>5514</td></tr></table>",
"text": "The effect of the length cutoff policy on the number of words distribution.",
"html": null,
"num": null
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td>arXiv sub-domain dataset</td><td>Majority class prediction</td><td>Benchmark (Kang et al.,</td><td>BiLSTM (Shen et al.,</td><td>Joint (Shen et al., 2019)</td><td>HAN ST</td></tr><tr><td/><td/><td>2018)</td><td>2019)</td><td/><td/></tr><tr><td>artificial intelligence</td><td>92.2%</td><td>92.6%</td><td>91.5 \u00b1 1.03%</td><td>93.4 \u00b1 1.07%</td><td>89.6 \u00b1 1.02%</td></tr><tr><td>computation &amp; language</td><td>68.9%</td><td>75.7%</td><td>76.2 \u00b1 1.30%</td><td>77.1 \u00b1 3.10%</td><td>81.8 \u00b1 1.91%</td></tr><tr><td>machine learning</td><td>68.0%</td><td>70.7%</td><td>81.1 \u00b1 0.83%</td><td>79.9 \u00b1 2.54%</td><td>78.7 \u00b1 0.69%</td></tr></table>",
"text": "PeerRead accept/reject prediction accuracy: comparison of HAN ST against state-of-the-art.",
"html": null,
"num": null
},
"TABREF6": {
"type_str": "table",
"content": "<table><tr><td>arXiv</td><td/><td>Majority</td><td>Average</td><td>BiLSTM</td><td/><td/></tr><tr><td>sub-domain</td><td>metric</td><td>class</td><td>Word</td><td>(re-</td><td>HAN</td><td>HAN ST</td></tr><tr><td>dataset</td><td/><td>prediction</td><td>Embeddings</td><td>implemented)</td><td/><td/></tr><tr><td>artificial</td><td>accuracy</td><td>92.2%</td><td>74.1 \u00b1 0.49%</td><td colspan=\"3\">92.4 \u00b1 1.02% 88.9 \u00b1 1.97 % 89.6 \u00b1 1.02%</td></tr><tr><td>intelligence</td><td>AUC</td><td>0.50</td><td colspan=\"4\">0.793 \u00b1 0.0143 0.711 \u00b1 0.0771 0.625 \u00b1 0.042 0.705 \u00b1 0.055</td></tr><tr><td>computation</td><td>accuracy</td><td>68.9%</td><td>73.7 \u00b1 0.87%</td><td>80.1 \u00b1 1.91%</td><td>80.3 \u00b1 2.00%</td><td>81.8 \u00b1 1.91%</td></tr><tr><td>&amp; language</td><td>AUC</td><td>0.50</td><td>0.740 \u00b1 0.010</td><td colspan=\"3\">0.744 \u00b1 0.056 0.712 \u00b1 0.029 0.745 \u00b1 0.011</td></tr><tr><td>machine</td><td>accuracy</td><td>67.9%</td><td>72.9 \u00b1 0.60%</td><td>79.6 \u00b1 3.19%</td><td>76.7 \u00b1 2.77%</td><td>78.7 \u00b1 0.69%</td></tr><tr><td>learning</td><td>AUC</td><td>0.50</td><td>0.662 \u00b1 0.003</td><td colspan=\"3\">0.743 \u00b1 0.025 0.743 \u00b1 0.019 0.758 \u00b1 0.0149</td></tr></table>",
"text": "PeerRead accept/reject prediction accuracy and AUC (area under ROC curve) scores for our models.",
"html": null,
"num": null
},
"TABREF8": {
"type_str": "table",
"content": "<table><tr><td>domain</td><td>artificial</td><td>computation</td><td>machine</td></tr><tr><td>metric</td><td>intelligence</td><td>&amp; language</td><td>learning</td></tr><tr><td>accuracy</td><td>89.6</td><td/><td/></tr></table>",
"text": "Results of the HAN ST model with a reduced structure-tag set of only two tags. \u00b1 1.57% 79.3 \u00b1 0.14% 77.2 \u00b1 1.21% AUC 0.610 \u00b1 0.067 0.727 \u00b1 0.015 0.759 \u00b1 0.017",
"html": null,
"num": null
},
"TABREF9": {
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">data subset num examples avg num words</td></tr><tr><td>training</td><td>78894</td><td>839.1 \u00b1 473.7</td></tr><tr><td>validation</td><td>4383</td><td>849.1 \u00b1 477.5</td></tr><tr><td>testing</td><td>4382</td><td>856.4 \u00b1 489.0</td></tr></table>",
"text": "S2ORC dataset size statistics.",
"html": null,
"num": null
},
"TABREF10": {
"type_str": "table",
"content": "<table/>",
"text": "Test scores for the log number of citations prediction on the S2ORC dataset.Average Word Embeddings BiLSTM (re-implemented) HAN HAN struct-tag R 2 score 0.238 \u00b1 0.0005 0.267 \u00b1 0.007 0.275 \u00b1 0.008 0.285 \u00b1 0.002 mean squared error 1.261 \u00b1 0.0008 1.214 \u00b1 0.009 1.201 \u00b1 0.007 1.184 \u00b1 0.002 mean absolute error 0.867 \u00b1 0.0002 0.842 \u00b1 0.001 0.833 \u00b1 0.003 0.831 \u00b1 0.001",
"html": null,
"num": null
}
}
}
}