| { |
| "paper_id": "P17-1044", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:16:47.816012Z" |
| }, |
| "title": "Deep Semantic Role Labeling: What Works and What's Next", |
| "authors": [ |
| { |
| "first": "Luheng", |
| "middle": [], |
| "last": "He", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Univ. of Washington", |
| "location": { |
| "settlement": "Seattle", |
| "region": "WA" |
| } |
| }, |
| "email": "luheng@cs.washington.edu" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Univ. of Washington", |
| "location": { |
| "settlement": "Seattle", |
| "region": "WA" |
| } |
| }, |
| "email": "kentonl@cs.washington.edu" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "mikelewis0@fb.com" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Univ. of Washington", |
| "location": { |
| "settlement": "Seattle", |
| "region": "WA" |
| } |
| }, |
| "email": "lukez@allenai.org" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains shows that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) there is still room for syntactic parsers to improve these results.", |
| "pdf_parse": { |
| "paper_id": "P17-1044", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains shows that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) there is still room for syntactic parsers to improve these results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Semantic role labeling (SRL) systems aim to recover the predicate-argument structure of a sentence, to determine essentially \"who did what to whom\", \"when\", and \"where.\" Recent breakthroughs involving end-to-end deep models for SRL without syntactic input (Zhou and Xu, 2015; Marcheggiani et al., 2017) seem to overturn the long-held belief that syntactic parsing is a prerequisite for this task (Punyakanok et al., 2008) . In this paper, we show that this result can be pushed further using deep highway bidirectional LSTMs with constrained decoding, again significantly moving the state of the art (another 2 points on CoNLL 2005) . We also present a careful empirical analysis to determine what works well and what might be done to progress even further.", |
| "cite_spans": [ |
| { |
| "start": 256, |
| "end": 275, |
| "text": "(Zhou and Xu, 2015;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 276, |
| "end": 302, |
| "text": "Marcheggiani et al., 2017)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 396, |
| "end": 421, |
| "text": "(Punyakanok et al., 2008)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 621, |
| "end": 632, |
| "text": "CoNLL 2005)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our model combines a number of best practices in the recent deep learning literature. Following Zhou and Xu (2015) , we treat SRL as a BIO tagging problem and use deep bidirectional LSTMs. However, we differ by (1) simplifying the input and output layers, (2) introducing highway connections (Srivastava et al., 2015; Zhang et al., 2016) , (3) using recurrent dropout (Gal and Ghahramani, 2016) , (4) decoding with BIO constraints, and (5) ensembling with a product of experts. Our model gives a 10% relative error reduction over previous state of the art on the test sets of CoNLL 2005 and 2012. We also report performance with predicted predicates to encourage future exploration of end-to-end SRL systems.", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 114, |
| "text": "Zhou and Xu (2015)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 292, |
| "end": 317, |
| "text": "(Srivastava et al., 2015;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 318, |
| "end": 337, |
| "text": "Zhang et al., 2016)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 368, |
| "end": 394, |
| "text": "(Gal and Ghahramani, 2016)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We present detailed error analyses to better understand the performance gains, including (1) design choices on architecture, initialization, and regularization that have a surprisingly large impact on model performance; (2) different types of prediction errors showing, e.g., that deep models excel at predicting long-distance dependencies but still struggle with known challenges such as PP-attachment errors and adjunct-argument distinctions; (3) the role of syntax, showing that there is significant room for improvement given oracle syntax but errors from existing automatic parsers prevent effective use in SRL.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In summary, our main contributions include:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 A new state-of-the-art deep network for end-to-end SRL, supported by publicly available code and models. 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 An in-depth error analysis indicating where the model works well and where it still struggles, including discussion of structural consistency and long-distance dependencies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Experiments that point toward directions for future improvements, including a detailed discussion of how and when syntactic parsers could be used to improve these results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Two major factors contribute to the success of our deep SRL model: (1) applying recent advances in training deep recurrent neural networks such as highway connections (Srivastava et al., 2015) and RNN-dropouts (Gal and Ghahramani, 2016 ), 2 and (2) using an A * decoding algorithm (Lewis and Steedman, 2014; Lee et al., 2016) to enforce structural consistency at prediction time without adding more complexity to the training process. Formally, our task is to predict a sequence y given a sentence-predicate pair (w, v) as input. Each y i \u2208 y belongs to a discrete set of BIO tags T . Words outside argument spans have the tag O, and words at the beginning and inside of argument spans with role r have the tags B r and I r respectively. Let n = |w| = |y| be the length of the sequence.", |
| "cite_spans": [ |
| { |
| "start": 167, |
| "end": 192, |
| "text": "(Srivastava et al., 2015)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 197, |
| "end": 235, |
| "text": "RNN-dropouts (Gal and Ghahramani, 2016", |
| "ref_id": null |
| }, |
| { |
| "start": 281, |
| "end": 307, |
| "text": "(Lewis and Steedman, 2014;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 308, |
| "end": 325, |
| "text": "Lee et al., 2016)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Predicting an SRL structure under our model involves finding the highest-scoring tag sequence over the space of all possibilities Y:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "y\u0302 = argmax y\u2208Y f (w, y)", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We use a deep bidirectional LSTM (BiLSTM) to learn a locally decomposed scoring function conditioned on the input: \u2211 n t=1 log p(y t | w). To incorporate additional information (e.g., structural consistency, syntactic input), we augment the scoring function with penalization terms:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "f (w, y) = \u2211 n t=1 log p(y t | w) \u2212 \u2211 c\u2208C c(w, y 1:t ) (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Each constraint function c applies a non-negative penalty given the input w and a length-t prefix y 1:t . These constraints can be hard or soft depending on whether the penalties are finite.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our model computes the distribution over tags using stacked BiLSTMs, which we define as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep BiLSTM Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "i l,t = \u03c3(W l i [h l,t+\u03b4 l , x l,t ] + b l i ) (3) o l,t = \u03c3(W l o [h l,t+\u03b4 l , x l,t ] + b l o ) (4) f l,t = \u03c3(W l f [h l,t+\u03b4 l , x l,t ] + b l f + 1) (5) c\u0303 l,t = tanh(W l c [h l,t+\u03b4 l , x l,t ] + b l c ) (6) c l,t = i l,t \u2022 c\u0303 l,t + f l,t \u2022 c l,t+\u03b4 l (7) h l,t = o l,t \u2022 tanh(c l,t )", |
| "eq_num": "(8)" |
| } |
| ], |
| "section": "Deep BiLSTM Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where x l,t is the input to the LSTM at layer l and timestep t. \u03b4 l is either 1 or \u22121, indicating the directionality of the LSTM at layer l. 2 We thank Mingxuan Wang for suggesting highway connections with simplified inputs and outputs. Part of our model is extended from his unpublished implementation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep BiLSTM Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "To stack the LSTMs in an interleaving pattern, as proposed by Zhou and Xu (2015) , the layerspecific inputs x l,t and directionality \u03b4 l are arranged in the following manner:", |
| "cite_spans": [ |
| { |
| "start": 62, |
| "end": 80, |
| "text": "Zhou and Xu (2015)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep BiLSTM Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "x l,t = [W emb (w t ), W mask (t = v)] if l = 1; x l,t = h l\u22121,t if l > 1 (9) \u03b4 l = 1 if l is even, \u22121 otherwise", |
| "eq_num": "(10)" |
| } |
| ], |
| "section": "Deep BiLSTM Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The input vector x 1,t is the concatenation of token w t 's word embedding and an embedding of the binary feature (t = v) indicating whether word w t is the given predicate. Finally, the locally normalized distribution over output tags is computed via a softmax layer:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep BiLSTM Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p(y t | x) \u221d exp(W y tag h L,t + b tag )", |
| "eq_num": "(11)" |
| } |
| ], |
| "section": "Deep BiLSTM Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Highway Connections To alleviate the vanishing gradient problem when training deep BiLSTMs, we use gated highway connections (Zhang et al., 2016; Srivastava et al., 2015) . We include transform gates r t to control the weight of linear and non-linear transformations between layers (See Figure 1) . The output h l,t is changed to:", |
| "cite_spans": [ |
| { |
| "start": 125, |
| "end": 145, |
| "text": "(Zhang et al., 2016;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 146, |
| "end": 170, |
| "text": "Srivastava et al., 2015)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 287, |
| "end": 296, |
| "text": "Figure 1)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Deep BiLSTM Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "r l,t = \u03c3(W l r [h l,t\u22121 , x t ] + b l r )", |
| "eq_num": "(12)" |
| } |
| ], |
| "section": "Deep BiLSTM Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "h\u2032 l,t = o l,t \u2022 tanh(c l,t )", |
| "eq_num": "(13)" |
| } |
| ], |
| "section": "Deep BiLSTM Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "h l,t = r l,t \u2022 h\u2032 l,t + (1 \u2212 r l,t ) \u2022 W l h x l,t", |
| "eq_num": "(14)" |
| } |
| ], |
| "section": "Deep BiLSTM Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Recurrent Dropout To reduce over-fitting, we use dropout as described in Gal and Ghahramani (2016) . A shared dropout mask z l is applied to the hidden state:", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 98, |
| "text": "Gal and Ghahramani (2016)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep BiLSTM Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "h\u2032 l,t = r l,t \u2022 h l,t + (1 \u2212 r l,t ) \u2022 W l h x l,t (15) h l,t = z l \u2022 h\u2032 l,t", |
| "eq_num": "(16)" |
| } |
| ], |
| "section": "Deep BiLSTM Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "z l is shared across timesteps at layer l to avoid amplifying the dropout noise along the sequence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep BiLSTM Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The approach described so far does not model any dependencies between the output tags. To incorporate constraints on the output structure at decoding time, we use A * search over tag prefixes for decoding. Starting with an empty sequence, the tag sequence is built from left to right. The score for a partial sequence with length t is defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained A * Decoding", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "f (w, y 1:t ) = \u2211 t i=1 log p(y i | w) \u2212 \u2211 c\u2208C c(w, y 1:i )", |
| "eq_num": "(17)" |
| } |
| ], |
| "section": "Constrained A * Decoding", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "An admissible A * heuristic can be computed efficiently by summing over the best possible tags for all timesteps after t:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained A * Decoding", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "g(w, y 1:t ) = \u2211 n i=t+1 max y i \u2208T log p(y i | w)", |
| "eq_num": "(18)" |
| } |
| ], |
| "section": "Constrained A * Decoding", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Exploration of the prefixes is determined by an agenda A which is sorted by f (w, y 1:t ) + g(w, y 1:t ). In the worst case, A * explores exponentially many prefixes, but because the distribution p(y t | w) learned by the BiLSTM models is very peaked, the algorithm is efficient in practice. We list some example constraints as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained A * Decoding", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "BIO Constraints These constraints reject any sequence that does not produce valid BIO transitions, such as B ARG0 followed by I ARG1 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained A * Decoding", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "SRL Constraints Punyakanok et al. (2008) ; T\u00e4ckstr\u00f6m et al. (2015) described a list of SRL-specific global constraints:", |
| "cite_spans": [ |
| { |
| "start": 16, |
| "end": 40, |
| "text": "Punyakanok et al. (2008)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained A * Decoding", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u2022 Unique core roles (U): Each core role (ARG0-ARG5, ARGA) should appear at most once for each predicate. \u2022 Continuation roles (C): A continuation role C-X can exist only when its base role X is realized before it. \u2022 Reference roles (R): A reference role R-X can exist only when its base role X is realized (not necessarily before R-X).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained A * Decoding", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We only enforce U and C constraints, since the R constraints are more commonly violated in gold data and enforcing them results in worse performance (see discussions in Section 4.3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained A * Decoding", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We can enforce consistency with a given parse tree by rejecting or penalizing arguments that are not constituents. In Section 4.4, we will discuss the motivation behind using syntactic constraints and experimental results using both predicted and gold syntax.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic Constraints", |
| "sec_num": null |
| }, |
| { |
| "text": "While the CoNLL 2005 shared task assumes gold predicates as input (Carreras and M\u00e0rquez, 2005) , this information is not available in many downstream applications. We propose a simple model for end-to-end SRL, where the system first predicts a set of predicate words v from the input sentence w. Then each predicate in v is used as an input to argument prediction. We independently predict whether each word in the sentence is a predicate, using a binary softmax over the outputs of a bidirectional LSTM trained to maximize the likelihood of the gold labels.", |
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 94, |
| "text": "(Carreras and M\u00e0rquez, 2005)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate Detection", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "3 Experiments", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate Detection", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "We measure the performance of our SRL system on two PropBank-style, span-based SRL datasets: CoNLL-2005 (Carreras and M\u00e0rquez, 2005) and CoNLL-2012 (Pradhan et al., 2013) . 3 Both datasets provide gold predicates (their index in the sentence) as part of the input. Therefore, each provided predicate corresponds to one training/test tag sequence. We follow the train-development-test split for both datasets and use the official evaluation script from the CoNLL 2005 shared task for evaluation on both datasets.", |
| "cite_spans": [ |
| { |
| "start": 104, |
| "end": 132, |
| "text": "(Carreras and M\u00e0rquez, 2005)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 137, |
| "end": 147, |
| "text": "CoNLL-2012", |
| "ref_id": null |
| }, |
| { |
| "start": 148, |
| "end": 169, |
| "text": "(Pradhan et al., 2013", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Our network consists of 8 BiLSTM layers (4 forward LSTMs and 4 reversed LSTMs) with 300-dimensional hidden units, and a softmax layer for predicting the output distribution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Setup", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Initialization All the weight matrices in BiL-STMs are initialized with random orthonormal matrices as described in Saxe et al. (2013). All tokens are lower-cased and initialized with 100-dimensional GloVe embeddings pre-trained on 6B tokens (Pennington et al., 2014) and updated during training. Tokens that are not covered by GloVe are replaced with a randomly initialized UNK embedding.", |
| "cite_spans": [ |
| { |
| "start": 242, |
| "end": 267, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Setup", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Training We use Adadelta (Zeiler, 2012) with \u03b5 = 1e\u22126 and \u03c1 = 0.95 and mini-batches of size 80. We set RNN-dropout probability to 0.1 and clip gradients with norm larger than 1. All the models are trained for 500 epochs with early stopping based on development results. 4 Ensembling We use a product of experts (Hinton, 2002) to combine predictions of 5 models, each trained on 80% of the training corpus and validated on the remaining 20%. For the CoNLL 2012 corpus, we split the training data from each sub-genre into 5 folds, such that the training data will have similar genre distributions.", |
| "cite_spans": [ |
| { |
| "start": 270, |
| "end": 271, |
| "text": "4", |
| "ref_id": null |
| }, |
| { |
| "start": 311, |
| "end": 325, |
| "text": "(Hinton, 2002)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Setup", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "and CoNLL 2012 development sets. Only the BIO hard constraints significantly improve over the ensemble model. Therefore, in our final results, we only use BIO hard constraints during decoding. 5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained Decoding We experimented with different types of constraints on the CoNLL 2005", |
| "sec_num": null |
| }, |
| { |
| "text": "In Table 1 and 2, we compare our best single and ensemble model with previous work. Our ensemble (PoE) has an absolute improvement of 2.1 F1 on both CoNLL 2005 and CoNLL 2012 over the previous state of the art. Our single model also achieves more than a 0.4 improvement on both datasets. In comparison with the best reported results, our percentage of completely correct predicates improves by 5.9 points. While the continuing trend of improving SRL without syntax seems to suggest that neural end-to-end systems no longer needs parsers, our analysis in Section 4.4 will show that accurate syntactic information can improve these deep models. Figure 2 shows learning curves of our model ablations on the CoNLL 2005 development set. We ablate our full model by removing highway connections, RNN-dropout, and orthonormal initialization independently. Without dropout, the model overfits at around 300 epochs at 78 F1. Orthonormal parameter initialization is surprisingly important-without this, the model achieves only 65 F1 within the first 50 epochs. All 8 layer ablations suffer a loss of more than 1.7 in absolute F1 compared to the full model.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 10, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 643, |
| "end": 651, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The network for predicate detection (Section 2.3) contains 2 BiLSTM layers with 100-dimensional hidden units, and is trained for 30 epochs. For end-to-end evaluation, all arguments predicted for the false positive predicates are counted as precision loss, and all arguments for the false negative predicates are considered as recall loss. Table 3 shows the predicate detection F1 as well as end-to-end SRL results with predicted predicates. 6 On CoNLL 2005, the predicate detector achieved over 96 F1, and the final SRL results only drop 1.2-3.5 F1 compared to using the gold predicates. However, on CoNLL 2012, the predicate detector has only about 90 F1, and the final SRL results decrease by up to 6.2 F1. This is at least in part due to the fact that CoNLL 2012 contains some nominal and copula predicates (Weischedel et al., 2013) , making predicate identification a more challenging problem.", |
| "cite_spans": [ |
| { |
| "start": 441, |
| "end": 442, |
| "text": "6", |
| "ref_id": null |
| }, |
| { |
| "start": 810, |
| "end": 835, |
| "text": "(Weischedel et al., 2013)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 339, |
| "end": 346, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "End-to-end SRL", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "To better understand our deep SRL model and its relation to previous work, we address the following questions with a suite of empirical analyses:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 What is the model good at and what kinds of mistakes does it make?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 How well do LSTMs model global structural consistency, despite conditionally independent tagging decisions?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 Is our model implicitly learning syntax, and could explicitly modeling syntax still help? All the analysis in this section is done on the CoNLL 2005 development set with gold predicates, unless otherwise stated. We are also able to compare to previous systems whose model predictions are available online (Punyakanok et al., 2005; Pradhan et al., 2005 ). 7", |
| "cite_spans": [ |
| { |
| "start": 307, |
| "end": 332, |
| "text": "(Punyakanok et al., 2005;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 333, |
| "end": 353, |
| "text": "Pradhan et al., 2005", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Inspired by Kummerfeld et al. (2012) , we define a set of oracle transformations that fix various prediction errors sequentially and observe the relative improvement after each operation (see Table 4 ). Figure 3 shows how our work compares to the pre- Figure 3 : Performance after doing each type of oracle transformation in sequence, compared to two strong non-neural baselines. The gap is closed after the Add Arg. transformation, showing how our approach is gaining from predicting more arguments than traditional systems. vious systems in terms of different types of mistakes. While our model makes a similar number of labeling errors to traditional syntax-based systems, it has far fewer missing arguments (perhaps due to parser errors making some arguments difficult to recover for syntax-based systems). Table 4 , our system most commonly makes labeling errors, where the predicted span is an argument but the role was incorrectly labeled. Table 5 shows a confusion matrix for the most frequent labels. The model often confuses ARG2 with AM-DIR, AM-LOC and AM-MNR. These confusions can arise due to the use of ARG2 in many verb frames to represent semantic relations such as direction or location. For example, ARG2 in the frame move.01 is defined as Arg2-GOL: destination. 8 This type of argumentadjunct distinction is known to be difficult (Kingsbury et al., 2002) , and it is not surprising that our neural model has many such failure cases.", |
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 36, |
| "text": "Kummerfeld et al. (2012)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1349, |
| "end": 1373, |
| "text": "(Kingsbury et al., 2002)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 192, |
| "end": 199, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 203, |
| "end": 211, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 252, |
| "end": 260, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 811, |
| "end": 818, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 947, |
| "end": 954, |
| "text": "Table 5", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error Types Breakdown", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Attachment Mistakes A second common source of error is reflected by the Merge Spans transformation (10.6%) and the Split Spans transformation (14.7%). These errors are closely tied to prepositional phrase (PP) attachment errors, which are also known to be some of the biggest challenges for linguistic analysis (Kummerfeld et al., 2012) . Figure 4 shows the distribution of syntactic span labels involved in an attachment mistake, where 62% of the syntactic spans are prepositional phrases. For example, in Sumitomo financed the acquisition from Sears, our model mistakenly labels the prepositional phrase from Sears as the ARG2 of financed, whereas it should instead attach to acquisition.", |
| "cite_spans": [ |
| { |
| "start": 311, |
| "end": 336, |
| "text": "(Kummerfeld et al., 2012)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 339, |
| "end": 347, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error Types Breakdown", |
| "sec_num": null |
| }, |
| { |
| "text": "To analyze the model's ability to capture long-range dependencies, we compute the F1 of our model on arguments at various distances from the predicate. Figure 5 shows that performance tends to degrade, for all models, for arguments further from the predicate. Interestingly, the gap between shallow and deep models becomes much larger for long-distance predicate-argument structures: the absolute gap between our 2-layer and 8-layer models is 3-4 F1 for arguments within 2 words of the predicate, and 5-6 F1 for arguments that are farther away. (Figure 4 caption: For cases where our model either splits a gold span into two (Z \u2192 XY) or merges two gold constituents (XY \u2192 Z), we show the distribution of syntactic labels for the Y span. Results show the major cause of these errors is inaccurate prepositional phrase attachment.) Surprisingly, the neural model deteriorates less severely on long-range dependencies than traditional syntax-based models.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 151, |
| "end": 159, |
| "text": "Figure 5", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 585, |
| "end": 593, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Long-range Dependencies", |
| "sec_num": "4.2" |
| }, |
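The distance analysis above can be sketched as bucketing arguments by their token distance to the predicate and computing F1 per bucket. A hedged sketch, assuming arguments are (start, end, label) triples and distance is measured to the nearer span boundary (the bucket edges and data format are assumptions, not the paper's exact setup):

```python
def f1(tp, fp, fn):
    """Standard F1 from true positive, false positive, false negative counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def distance(span, pred_idx):
    """Token distance from the predicate to the nearer span boundary."""
    start, end = span
    if start <= pred_idx <= end:
        return 0
    return min(abs(start - pred_idx), abs(end - pred_idx))

def f1_by_distance(gold, pred, pred_idx, buckets=((0, 2), (3, 7), (8, 10**9))):
    """gold/pred: sets of (start, end, label); returns F1 per distance bucket."""
    out = {}
    for lo, hi in buckets:
        g = {a for a in gold if lo <= distance(a[:2], pred_idx) <= hi}
        p = {a for a in pred if lo <= distance(a[:2], pred_idx) <= hi}
        tp = len(g & p)
        out[(lo, hi)] = f1(tp, len(p - g), len(g - p))
    return out

# Hypothetical predicate at position 3 with one mislabeled argument.
scores = f1_by_distance({(0, 1, "ARG0"), (5, 6, "ARG1")},
                        {(0, 1, "ARG0"), (5, 6, "ARG2")}, pred_idx=3)
# both arguments fall in the (0, 2) bucket; one of two is correct
```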
| { |
| "text": "We can quantify two types of structural consistencies: the BIO constraints and the SRL-specific constraints. Via our ablation study, we show that deeper BiLSTMs are better at enforcing these structural consistencies, although not perfectly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structural Consistency", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The BIO format requires argument spans to begin with a B tag. Any I tag directly following an O tag or a tag with a different label is considered a violation. Table 6 shows the number of BIO violations per token for BiLSTMs of different depths. The number of BIO violations decreases when we use a deeper model; the gap is biggest between the 2-layer and 4-layer models and diminishes after that. (Figure 6 caption: Example where performance is hurt by enforcing the constraint that core roles may only occur once (+SRL).)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 157, |
| "end": 164, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 494, |
| "end": 502, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "BIO Violations", |
| "sec_num": null |
| }, |
| { |
| "text": "It is surprising that although the deeper models generate impressively accurate token-level predictions, they still make enough BIO errors to significantly hurt performance, even though these constraints are simple enough to be enforced by trivial rules. We compare the average entropy of tokens involved in BIO violations with the average entropy of all tokens. For the 8-layer model, the average entropy on these tokens is 30 times higher than the average entropy on all tokens. This suggests that BIO inconsistencies occur when there is some ambiguity. For example, if the model is unsure whether two consecutive words should belong to an ARG0 or ARG1, it might generate inconsistent BIO sequences such as B-ARG0, I-ARG1 when decoding at the token level. Using BIO-constrained decoding can resolve this ambiguity and result in a structurally consistent solution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BIO Violations", |
| "sec_num": null |
| }, |
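The BIO-violation statistic described above reduces to a single scan over a predicted tag sequence, flagging every I- tag that does not continue a same-label span. A minimal sketch, assuming the common B-X/I-X/O tag format (the function is illustrative, not the paper's code):

```python
def bio_violations(tags):
    """Count I- tags that follow O or a tag with a different role label."""
    violations = 0
    prev = "O"
    for tag in tags:
        if tag.startswith("I-") and (prev == "O" or prev[2:] != tag[2:]):
            violations += 1
        prev = tag
    return violations

# The inconsistent sequence from the text: B-ARG0 followed by I-ARG1.
print(bio_violations(["B-ARG0", "I-ARG1", "O"]))       # 1
print(bio_violations(["B-ARG0", "I-ARG0", "B-ARG1"]))  # 0
```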
| { |
| "text": "SRL Structure Violations The model predictions can also violate the SRL-specific constraints commonly used in prior work (Punyakanok et al., 2008). As shown in Table 7, the model occasionally violates these SRL constraints. With our constrained decoding algorithm, it is straightforward to enforce the unique core roles (U) and continuation roles (C) constraints during decoding. The constrained decoding results are shown for the model named L8+PoE+SRL in Table 7.", |
| "cite_spans": [ |
| { |
| "start": 121, |
| "end": 146, |
| "text": "(Punyakanok et al., 2008;", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 459, |
| "end": 466, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "BIO Violations", |
| "sec_num": null |
| }, |
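The unique core roles (U) constraint can be checked by counting repeated core labels among a predicate's predicted arguments. A small sketch, under the assumption that the core roles are ARG0 through ARG5 (the set and function name are illustrative):

```python
from collections import Counter

CORE_ROLES = {"ARG0", "ARG1", "ARG2", "ARG3", "ARG4", "ARG5"}

def unique_core_violations(labels):
    """Count extra occurrences of each core role beyond the first,
    i.e. violations of the unique core roles (U) constraint."""
    counts = Counter(label for label in labels if label in CORE_ROLES)
    return sum(c - 1 for c in counts.values() if c > 1)

# Two ARG2s for one predicate is one violation; adjuncts may repeat freely.
print(unique_core_violations(["ARG0", "ARG2", "ARG2", "AM-TMP"]))  # 1
```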
| { |
| "text": "Although the violations are eliminated, the performance does not significantly improve. This is mainly due to two factors: (1) the model often already satisfies these constraints on its own, so the number of violations to be fixed is relatively small, and (2) the gold SRL structure sometimes violates the constraints, so enforcing hard constraints can hurt performance. Figure 6 shows a sentence in the CoNLL 2005 development set. Our original model produces two ARG2s for the predicate quicken, which violates the SRL constraints. When the A* decoder fixes this violation, it changes the first ARG1 into ARG2, because ARG0, ARG1, ARG2 is a more frequent pattern in the training data and has a higher overall score. (Table 6 caption: Comparison of BiLSTM models without BIO decoding. We compare F1, token-level accuracy (Token), averaged BIO violations per token (BIO), overall model entropy (All), and model entropy at tokens involved in BIO violations (BIO).)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 372, |
| "end": 380, |
| "text": "Figure 6", |
| "ref_id": null |
| }, |
| { |
| "start": 719, |
| "end": 726, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "BIO Violations", |
| "sec_num": null |
| }, |
| { |
| "text": "Increasing the depth of the model beyond 4 does not produce more structurally consistent output, emphasizing the need for constrained decoding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BIO Violations", |
| "sec_num": null |
| }, |
| { |
| "text": "The Propbank-style SRL formalism is closely tied to syntax (Bonial et al., 2010; Weischedel et al., 2013). In Table 7, we show that 98.7% of the gold SRL arguments match an unlabeled constituent in the gold syntax tree. Similar to some recent work (Zhou and Xu, 2015), our model achieves strong performance without directly modeling syntax. A natural question follows: are neural SRL models implicitly learning syntax? Table 7 shows that deeper models make predictions that are more consistent with the gold syntax in terms of span boundaries. With our best model (L8+PoE), 94.3% of the predicted argument spans are part of the gold parse tree. This consistency is on par with previous CoNLL 2005 systems that directly model constituency and use predicted parse trees as features (Punyakanok, 95.3%; Pradhan, 93.0%).", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 80, |
| "text": "(Bonial et al., 2010;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 81, |
| "end": 105, |
| "text": "Weischedel et al., 2013)", |
| "ref_id": null |
| }, |
| { |
| "start": 250, |
| "end": 269, |
| "text": "(Zhou and Xu, 2015)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 794, |
| "end": 806, |
| "text": "(Punyakanok,", |
| "ref_id": null |
| }, |
| { |
| "start": 807, |
| "end": 825, |
| "text": "95.3% and Pradhan,", |
| "ref_id": null |
| }, |
| { |
| "start": 826, |
| "end": 832, |
| "text": "93.0%)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 111, |
| "end": 118, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 422, |
| "end": 429, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Can Syntax Still Help SRL?", |
| "sec_num": "4.4" |
| }, |
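The unlabeled agreement statistic (Syn% in Table 7) reduces to checking each predicted argument span against the set of constituent spans of the gold parse. A hedged sketch with spans as (start, end) pairs (the representation is an assumption, not the paper's implementation):

```python
def syntactic_agreement(pred_spans, constituents):
    """Fraction of predicted argument spans that exactly match some
    (unlabeled) constituent span of the gold parse tree."""
    if not pred_spans:
        return 1.0
    constituent_set = set(constituents)
    matched = sum(1 for span in pred_spans if span in constituent_set)
    return matched / len(pred_spans)

# One of two predicted spans matches a gold constituent.
agreement = syntactic_agreement([(0, 2), (3, 5)], [(0, 2), (0, 5), (3, 4)])
# agreement == 0.5
```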
| { |
| "text": "Constrained Decoding with Syntax The above analysis raises a further question: would improving consistency with syntax provide improvements for SRL? Our constrained decoding algorithm described in Section 2.2 enables us to inject syntax as a decoding constraint without having to retrain the model. Specifically, if the decoded sequence contains k arguments that do not match any unlabeled syntactic constituent, it receives a penalty of kC, where C is a single parameter dictating how much the model should trust the provided syntax. In Figure 7, we compare SRL accuracy with syntactic constraints specified by the gold parse or by automatic parses. When using gold syntax, the predictions improve by up to 2 F1 as the penalty increases. (Table 7 caption: Comparison of models with different depths and decoding constraints (in addition to BIO) as well as two previous systems. We compare F1, unlabeled agreement with gold constituency (Syn%) and each type of SRL-constraint violation (Unique core roles, Continuation roles and Reference roles). Our best model produces a similar number of constraint violations to the gold annotation, explaining why deterministically enforcing these constraints is not helpful.)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 542, |
| "end": 550, |
| "text": "Figure 7", |
| "ref_id": null |
| }, |
| { |
| "start": 770, |
| "end": 777, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Can Syntax Still Help SRL?", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "A state-of-the-art parser (Choe and Charniak, 2016) provides smaller gains, while using the Charniak parser (Charniak, 2000) hurts performance if the model places too much trust in it. These results suggest that high-quality syntax can still make a large impact on SRL. A known challenge for syntactic parsers is robustness on out-of-domain data, so we provide experimental results in Table 8. On the CoNLL 2005 development set, the predicted syntax gives a 0.5 F1 improvement over our best model, while on the in-domain test and out-of-domain development sets, the improvement is much smaller. As expected, on CoNLL 2012, syntax improves most on the newswire (NW) domain. These improvements suggest that while decoding with hard constraints is beneficial, joint training or multi-task learning could be even more effective by leveraging full, labeled syntactic structures.", |
| "cite_spans": [ |
| { |
| "start": 4, |
| "end": 19, |
| "text": "Charniak, 2016)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 76, |
| "end": 92, |
| "text": "(Charniak, 2000)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 360, |
| "end": 371, |
| "text": "Table 8 for", |
| "ref_id": "TABREF12" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Can Syntax Still Help SRL?", |
| "sec_num": "4.4" |
| }, |
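The syntactic constraint described above is a soft penalty on the decoding score: a labeling with k argument spans that match no constituent loses kC. A minimal rescoring sketch (the candidate list is illustrative, and the paper uses A* decoding rather than the exhaustive rescoring shown here):

```python
def rescore(candidates, constituents, C):
    """Pick the candidate maximizing model score minus C per
    argument span that matches no gold-parse constituent."""
    constituent_set = set(constituents)

    def penalized(candidate):
        score, spans = candidate
        k = sum(1 for span in spans if span not in constituent_set)
        return score - C * k

    return max(candidates, key=penalized)

# Two hypothetical candidates: the higher-scoring one uses a
# non-constituent span, so a large enough C flips the decision.
cands = [(10.0, [(0, 2), (3, 6)]),   # (3, 6) is not a constituent
         (9.5,  [(0, 2), (3, 5)])]
best = rescore(cands, [(0, 2), (3, 5)], C=1.0)
# with C=1.0 the second candidate wins (9.5 > 10.0 - 1.0)
```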
| { |
| "text": "Traditional approaches to semantic role labeling have used syntactic parsers to identify constituents and model long-range dependencies, and have enforced global consistency using integer linear programming (Punyakanok et al., 2008) or dynamic programs. (Figure 7 caption: Performance of syntax-constrained decoding as the non-constituent penalty increases, for syntax from two parsers (Choe and Charniak (2016) and Charniak (2000)) and gold syntax. The best existing parser gives a small improvement, but the improvement from gold syntax shows that there is still potential for syntax to help SRL.) More recently, neural methods have been employed on top of syntactic features (FitzGerald et al., 2015; Roth and Lapata, 2016). Our experiments show that off-the-shelf neural methods have a remarkable ability to learn long-range dependencies, syntactic constituency structure, and global constraints without coding task-specific mechanisms for doing so. An alternative line of work has attempted to reduce the dependency on syntactic input for semantic role labeling models. Collobert et al. (2011) first introduced an end-to-end neural approach with sequence-level training, using a convolutional neural network to model the context window; however, their best system fell short of traditional feature-based systems. Neural methods have also been used as classifiers in transition-based SRL systems (Henderson et al., 2013; Swayamdipta et al., 2016). Most recently, several successful LSTM-based architectures have achieved state-of-the-art results in English span-based SRL (Zhou and Xu, 2015), Chinese SRL (Wang et al., 2015), and dependency-based SRL (Marcheggiani et al., 2017) with little to no syntactic input. Our techniques push results to more than 3 F1 over the best syntax-based models. However, we also show that there is potential for syntax to further improve performance.", |
| "cite_spans": [ |
| { |
| "start": 288, |
| "end": 303, |
| "text": "Charniak (2016)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 308, |
| "end": 323, |
| "text": "Charniak (2000)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 544, |
| "end": 569, |
| "text": "(Punyakanok et al., 2008)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 670, |
| "end": 695, |
| "text": "(FitzGerald et al., 2015;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 696, |
| "end": 718, |
| "text": "Roth and Lapata, 2016)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1067, |
| "end": 1090, |
| "text": "Collobert et al. (2011)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1400, |
| "end": 1424, |
| "text": "(Henderson et al., 2013;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1425, |
| "end": 1450, |
| "text": "Swayamdipta et al., 2016)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 1578, |
| "end": 1597, |
| "text": "(Zhou and Xu, 2015)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 1612, |
| "end": 1631, |
| "text": "(Wang et al., 2015)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 1658, |
| "end": 1685, |
| "text": "(Marcheggiani et al., 2017)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 150, |
| "end": 158, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We presented a new deep learning model for span-based semantic role labeling with a 10% relative error reduction over the previous state of the art. Our ensemble of 8-layer BiLSTMs incorporated some of the recent best practices, such as orthonormal initialization, RNN-dropout, and highway connections, and we have shown that they are crucial for getting good results with deep models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Extensive error analysis sheds light on the strengths and limitations of our deep SRL model, with detailed comparison against shallower models and two strong non-neural systems. While our deep model is better at recovering long-distance predicate-argument relations, we still observe structural inconsistencies, which can be alleviated by constrained decoding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Finally, we posed the question of whether deep SRL still needs syntactic supervision. Despite recent success without syntactic input, we found that our best neural model can still benefit from accurate syntactic parser output via straightforward constrained decoding. In our oracle experiment, we observed a 3 F1 improvement by leveraging gold syntax, showing the potential for high quality parsers to further improve deep SRL models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "https://github.com/luheng/deep_srl", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We used the version of OntoNotes downloaded at: http://cemantix.org/data/ontonotes.html.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Training the full model on CoNLL 2005 takes about 5 days on a single Titan X Pascal GPU.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "A* search in this setting finds the optimal sequence for all sentences and is therefore equivalent to Viterbi decoding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The frame identification numbers reported in Pradhan et al. (2013) are not comparable, due to errors in the original release of the data. Model predictions of CoNLL 2005 systems: http://www.cs.upc.edu/~srlconll/st05/st05.html", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Source: Unified verb index: http://verbs.colorado.edu.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The research was supported in part by DARPA under the DEFT program (FA8750-13-2-0019), the ARO (W911NF-16-1-0121), the NSF (IIS-1252835, IIS-1562364), gifts from Google and Tencent, and an Allen Distinguished Investigator Award. We are grateful to Mingxuan Wang for sharing his highway LSTM implementation and Sameer Pradhan for help with the CoNLL 2012 dataset. We thank Nicholas FitzGerald, Dan Garrette, Julian Michael, Hao Peng, and Swabha Swayamdipta for helpful comments, and the anonymous reviewers for valuable feedback.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Propbank annotation guidelines", |
| "authors": [ |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Bonial", |
| "suffix": "" |
| }, |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Babko-Malaya", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Jinho", |
| "suffix": "" |
| }, |
| { |
| "first": "Jena", |
| "middle": [], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Hwang", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Claire Bonial, Olga Babko-Malaya, Jinho D Choi, Jena Hwang, and Martha Palmer. 2010. Propbank anno- tation guidelines. Center for Computational Lan- guage and Education Research Institute of Cognitive Science University of Colorado at Boulder .", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Introduction to the conll-2005 shared task: Semantic role labeling", |
| "authors": [ |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| }, |
| { |
| "first": "Llu\u00eds", |
| "middle": [], |
| "last": "M\u00e0rquez", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the Ninth Conference on Computational Natural Language Learning. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "152--164", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xavier Carreras and Llu\u00eds M\u00e0rquez. 2005. Introduc- tion to the conll-2005 shared task: Semantic role la- beling. In Proceedings of the Ninth Conference on Computational Natural Language Learning. Associ- ation for Computational Linguistics, pages 152-164.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A maximum-entropy-inspired parser", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proc. of the First North American chapter of the Association for Computational Linguistics conference (NAACL). Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "132--139", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proc. of the First North American chap- ter of the Association for Computational Linguis- tics conference (NAACL). Association for Compu- tational Linguistics, pages 132-139.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Parsing as language modeling", |
| "authors": [ |
| { |
| "first": "Kook", |
| "middle": [], |
| "last": "Do", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Choe", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proc. of the 2016 Conference of Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proc. of the 2016 Con- ference of Empirical Methods in Natural Language Processing (EMNLP).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Natural language processing (almost) from scratch", |
| "authors": [ |
| { |
| "first": "Ronan", |
| "middle": [], |
| "last": "Collobert", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "L\u00e9on", |
| "middle": [], |
| "last": "Bottou", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Karlen", |
| "suffix": "" |
| }, |
| { |
| "first": "Koray", |
| "middle": [], |
| "last": "Kavukcuoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Pavel", |
| "middle": [], |
| "last": "Kuksa", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2493--2537", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12:2493-2537.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Semantic role labeling with neural network factors", |
| "authors": [ |
| { |
| "first": "Nicholas", |
| "middle": [], |
| "last": "Fitzgerald", |
| "suffix": "" |
| }, |
| { |
| "first": "Oscar", |
| "middle": [], |
| "last": "T\u00e4ckstr\u00f6m", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuzman", |
| "middle": [], |
| "last": "Ganchev", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "960--970", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nicholas FitzGerald, Oscar T\u00e4ckstr\u00f6m, Kuzman Ganchev, and Dipanjan Das. 2015. Semantic role labeling with neural network factors. In Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 960-970.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A theoretically grounded application of dropout in recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Yarin", |
| "middle": [], |
| "last": "Gal", |
| "suffix": "" |
| }, |
| { |
| "first": "Zoubin", |
| "middle": [], |
| "last": "Ghahramani", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "1019--1027", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. A theoret- ically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems. pages 1019-1027.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Multilingual joint parsing of syntactic and semantic dependencies with a latent variable model", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Henderson", |
| "suffix": "" |
| }, |
| { |
| "first": "Paola", |
| "middle": [], |
| "last": "Merlo", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriele", |
| "middle": [], |
| "last": "Musillo", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Computational Linguistics", |
| "volume": "39", |
| "issue": "4", |
| "pages": "949--998", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Henderson, Paola Merlo, Ivan Titov, and Gabriele Musillo. 2013. Multilingual joint pars- ing of syntactic and semantic dependencies with a latent variable model. Computational Linguistics 39(4):949-998.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Training products of experts by minimizing contrastive divergence", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Geoffrey", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Neural computation", |
| "volume": "14", |
| "issue": "8", |
| "pages": "1771--1800", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Geoffrey E Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural com- putation 14(8):1771-1800.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Adding semantic annotation to the penn treebank", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Kingsbury", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitch", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the human language technology conference", |
| "volume": "", |
| "issue": "", |
| "pages": "252--256", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Kingsbury, Martha Palmer, and Mitch Marcus. 2002. Adding semantic annotation to the penn tree- bank. In Proceedings of the human language tech- nology conference. pages 252-256.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Parser showdown at the wall street corral: An empirical investigation of error types in parser output", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [ |
| "K" |
| ], |
| "last": "Kummerfeld", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "R" |
| ], |
| "last": "Curran", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proc. of the 2012 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1048--1059", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathan K. Kummerfeld, David Hall, James R. Cur- ran, and Dan Klein. 2012. Parser showdown at the wall street corral: An empirical investigation of er- ror types in parser output. In Proc. of the 2012 Con- ference on Empirical Methods in Natural Language Processing (EMNLP). pages 1048-1059.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Global neural ccg parsing with optimality guarantees", |
| "authors": [ |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proc. of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2016. Global neural ccg parsing with optimality guaran- tees. In Proc. of the 2016 Conference on Em- pirical Methods in Natural Language Processing (EMNLP).", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A* ccg parsing with a supertag-factored model", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proc. of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "990--1000", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Lewis and Mark Steedman. 2014. A* ccg pars- ing with a supertag-factored model. In Proc. of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 990-1000.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A simple and accurate syntax-agnostic neural model for dependency-based semantic role labeling", |
| "authors": [ |
| { |
| "first": "Diego", |
| "middle": [], |
| "last": "Marcheggiani", |
| "suffix": "" |
| }, |
| { |
| "first": "Anton", |
| "middle": [], |
| "last": "Frolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1701.02593" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diego Marcheggiani, Anton Frolov, and Ivan Titov. 2017. A simple and accurate syntax-agnostic neural model for dependency-based semantic role labeling. arXiv preprint arXiv:1701.02593 .", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Glove: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proc. of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Proc. of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP). pages 1532-1543.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Semantic role chunking combining complementary syntactic views", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Kadri", |
| "middle": [], |
| "last": "Hacioglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Wayne", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "H" |
| ], |
| "last": "Martin", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of the 2005 Conference on Computational Natural Language Learning (CoNLL)", |
| "volume": "", |
| "issue": "", |
| "pages": "217--220", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Pradhan, Kadri Hacioglu, Wayne Ward, James H Martin, and Daniel Jurafsky. 2005. Seman- tic role chunking combining complementary syntac- tic views. In Proc. of the 2005 Conference on Com- putational Natural Language Learning (CoNLL). pages 217-220.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Towards robust linguistic analysis using ontonotes", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Moschitti", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Hwee Tou", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Bj\u00f6rkelund", |
| "suffix": "" |
| }, |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuchen", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhi", |
| "middle": [], |
| "last": "Zhong", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proc. of the 2013 Conference on Computational Natural Language Learning (CoNLL)", |
| "volume": "", |
| "issue": "", |
| "pages": "143--152", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards ro- bust linguistic analysis using ontonotes. In Proc. of the 2013 Conference on Computational Natural Language Learning (CoNLL). pages 143-152.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Generalized inference with multiple semantic role labeling systems", |
| "authors": [ |
| { |
| "first": "Vasin", |
| "middle": [], |
| "last": "Punyakanok", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Koomen", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "Wen-Tau", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of the 2005 Conference on Computational Natural Language Learning (CoNLL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vasin Punyakanok, Peter Koomen, Dan Roth, and Wen-tau Yih. 2005. Generalized inference with multiple semantic role labeling systems. In Proc. of the 2005 Conference on Computational Natural Language Learning (CoNLL).", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "The importance of syntactic parsing and inference in semantic role labeling", |
| "authors": [ |
| { |
| "first": "Vasin", |
| "middle": [], |
| "last": "Punyakanok", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "Wen-Tau", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational Linguistics", |
| "volume": "34", |
| "issue": "2", |
| "pages": "257--287", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics 34(2):257-287.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Neural semantic role labeling with dependency path embeddings", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proc. of the Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Roth and Mirella Lapata. 2016. Neural se- mantic role labeling with dependency path embed- dings. In Proc. of the Annual Meeting of the Associ- ation for Computational Linguistics (ACL).", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [ |
| "M" |
| ], |
| "last": "Saxe", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "L" |
| ], |
| "last": "McClelland", |
| "suffix": "" |
| }, |
| { |
| "first": "Surya", |
| "middle": [], |
| "last": "Ganguli", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1312.6120" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew M Saxe, James L McClelland, and Surya Gan- guli. 2013. Exact solutions to the nonlinear dynam- ics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120 .", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Training very deep networks", |
| "authors": [ |
| { |
| "first": "Rupesh", |
| "middle": [ |
| "K" |
| ], |
| "last": "Srivastava", |
| "suffix": "" |
| }, |
| { |
| "first": "Klaus", |
| "middle": [], |
| "last": "Greff", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "2377--2385", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rupesh K Srivastava, Klaus Greff, and J\u00fcrgen Schmid- huber. 2015. Training very deep networks. In Ad- vances in neural information processing systems. pages 2377-2385.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Greedy, joint syntacticsemantic parsing with stack lstms", |
| "authors": [ |
| { |
| "first": "Swabha", |
| "middle": [], |
| "last": "Swayamdipta", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah A", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proc. of the 2016 Conference on Computational Natural Language Learning (CoNLL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Swabha Swayamdipta, Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2016. Greedy, joint syntactic- semantic parsing with stack lstms. In Proc. of the 2016 Conference on Computational Natural Lan- guage Learning (CoNLL). page 187.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Efficient inference and structured learning for semantic role labeling", |
| "authors": [ |
| { |
| "first": "Oscar", |
| "middle": [], |
| "last": "T\u00e4ckstr\u00f6m", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuzman", |
| "middle": [], |
| "last": "Ganchev", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "3", |
| "issue": "", |
| "pages": "29--41", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oscar T\u00e4ckstr\u00f6m, Kuzman Ganchev, and Dipanjan Das. 2015. Efficient inference and structured learn- ing for semantic role labeling. Transactions of the Association for Computational Linguistics 3:29-41.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "A global joint model for semantic role labeling", |
| "authors": [ |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| }, |
| { |
| "first": "Aria", |
| "middle": [], |
| "last": "Haghighi", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational Linguistics", |
| "volume": "34", |
| "issue": "2", |
| "pages": "161--191", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kristina Toutanova, Aria Haghighi, and Christopher D Manning. 2008. A global joint model for semantic role labeling. Computational Linguistics 34(2):161- 191.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Chinese semantic role labeling with bidirectional recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Zhen", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Tingsong", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Baobao", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhifang", |
| "middle": [], |
| "last": "Sui", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1626--1631", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhen Wang, Tingsong Jiang, Baobao Chang, and Zhi- fang Sui. 2015. Chinese semantic role labeling with bidirectional recurrent neural networks. In Proc. of the 2015 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP). pages 1626- 1631.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Linguistic Data Consortium, Philadelphia", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "ldc2013t19. Linguistic Data Consortium, Philadel- phia, PA .", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Adadelta: an adaptive learning rate method", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [ |
| "D" |
| ], |
| "last": "Zeiler", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1212.5701" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew D Zeiler. 2012. Adadelta: an adaptive learn- ing rate method. arXiv preprint arXiv:1212.5701 .", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Highway long short-term memory rnns for distant speech recognition", |
| "authors": [ |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Guoguo", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Dong", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kaisheng", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanjeev", |
| "middle": [], |
| "last": "Khudanpur", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Glass", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "5755--5759", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yu Zhang, Guoguo Chen, Dong Yu, Kaisheng Yao, Sanjeev Khudanpur, and James Glass. 2016. Highway long short-term memory rnns for distant speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pages 5755-5759.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "End-to-end learning of semantic role labeling using recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Jie", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proc. of the Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural net- works. In Proc. of the Annual Meeting of the As- sociation for Computational Linguistics (ACL).", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "uris": null, |
| "text": "Highway LSTM with four layers. The curved connections represent highway connections, and the plus symbols represent transform gates that control inter-layer information flow.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "text": "Smoothed learning curve of various ablations. The combination of highway layers, orthonormal parameter initialization and recurrent dropout is crucial to achieving strong performance. The numbers shown here are without constrained decoding.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "text": "F1 by surface distance between predicates and arguments. Performance degrades least rapidly on long-range arguments for the deeper neural models.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF7": { |
| "uris": null, |
| "text": "both CoNLL 2005 and CoNLL 2012, which consists of 8 different genres. The penalties are tuned on the two development sets separately (C = 10000 on CoNLL 2005 and C = 20 on CoNLL 2012).", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "text": "83.1 82.4 82.7 64.1 85.0 84.3 84.6 66.5 74.9 72.4 73.6 46.5 83.2 Ours 81.6 81.6 81.6 62.3 83.1 83.0 83.1 64.3 72.9 71.4 72.1 44.8 81.6 Struct.,PoE) 81.2 76.7 78.9 55.1 82.5 78.2 80.3 57.3 74.5 70.0 72.2 41.3", |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td colspan=\"2\">Development</td><td colspan=\"2\">WSJ Test</td><td colspan=\"2\">Brown Test</td><td>Combined</td></tr><tr><td>Method</td><td>P</td><td>R</td><td>F1 Comp. P</td><td>R</td><td>F1 Comp. P</td><td>R</td><td>F1 Comp.</td><td>F1</td></tr><tr><td>Ours (PoE)</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"9\">Zhou FitzGerald (-79.7 79.4 79.6 -82.9 82.8 82.8 -70.7 68.2 69.4 -81.1 T\u00e4ckstr\u00f6m (Struct.) 81.2 76.2 78.6 54.4 82.3 77.6 79.9 56.0 74.3 68.6 71.3 39.8 -Toutanova (Ensemble) --78.6 58.7 81.9 78.8 80.3 60.1 --68.8 40.8 -Punyakanok (Ensemble) 80.1 74.8 77.4 50.7 82.3 76.8 79.4 53.8 73.4 62.9 67.8 32.3 77.9</td></tr></table>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF1": { |
| "text": "Experimental results on CoNLL 2005, in terms of precision (P), recall (R), F1 and percentage of completely correct predicates (Comp.). We report results of our best single and ensemble (PoE) model. The comparison models are Zhou and Xu (2015), FitzGerald et al. (2015), T\u00e4ckstr\u00f6m et al. (2015), Toutanova et al. (2008) and Punyakanok et al. (2008).", |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td colspan=\"2\">Development</td><td/><td/><td>Test</td><td/><td/></tr><tr><td>Method</td><td>P</td><td>R</td><td>F1</td><td>Comp.</td><td>P</td><td>R</td><td>F1</td><td>Comp.</td></tr><tr><td>Ours (PoE) Ours</td><td colspan=\"2\">83.5 83.2 81.8 81.4</td><td>83.4 81.5</td><td>67.5 64.6</td><td>83.5 81.7</td><td>83.3 81.6</td><td>83.4 81.7</td><td>68.5 66.0</td></tr><tr><td colspan=\"3\">Zhou FitzGerald (Struct.,PoE) 81.0 78.5 --T\u00e4ckstr\u00f6m (Struct.) 80.5 77.8 Pradhan (revised) --</td><td>81.1 79.7 79.1 -</td><td>-60.9 60.1 -</td><td>-81.2 80.6 78.5</td><td>-79.0 78.2 76.6</td><td>81.3 80.1 79.4 77.5</td><td>-62.6 61.8 55.8</td></tr></table>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF2": { |
| "text": "Experimental results on CoNLL 2012 in the same metrics as above. We compare our best single and ensemble (PoE) models against Zhou and Xu (2015), FitzGerald et al. (2015), and Pradhan et al. (2013).", |
| "type_str": "table", |
| "content": "<table/>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF4": { |
| "text": "Predicate detection performance and end-to-end SRL results using predicted predicates. \u2206 F1 shows the absolute performance drop compared to our best ensemble model with gold predicates.", |
| "type_str": "table", |
| "content": "<table><tr><td/><td>80</td><td/><td/><td/></tr><tr><td>Dev. F1 %</td><td>75 70</td><td/><td colspan=\"2\">Our model No highway connections</td></tr><tr><td/><td/><td/><td>No dropout</td><td/></tr><tr><td/><td>65</td><td/><td colspan=\"3\">No orthogonal initialization</td></tr><tr><td/><td>100</td><td>200</td><td>300</td><td>400</td><td>500</td></tr><tr><td/><td/><td colspan=\"2\">Num. epochs</td><td/></tr></table>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF7": { |
| "text": "Oracle transformations paired with the relative error reduction after each operation. All the operations are permitted only if they do not cause any overlapping arguments.", |
| "type_str": "table", |
| "content": "<table><tr><td>pred. \\ gold</td><td>A0</td><td>A1</td><td>A2</td><td>A3</td><td>ADV</td><td>DIR</td><td>LOC</td><td>MNR</td><td>PNC</td><td>TMP</td></tr><tr><td>A0</td><td>-</td><td>55</td><td>11</td><td>13</td><td>4</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>A1</td><td>78</td><td>-</td><td>46</td><td>0</td><td>0</td><td>22</td><td>11</td><td>10</td><td>25</td><td>14</td></tr><tr><td>A2</td><td>11</td><td>23</td><td>-</td><td>48</td><td>15</td><td>56</td><td>33</td><td>41</td><td>25</td><td>0</td></tr><tr><td>A3</td><td>3</td><td>2</td><td>2</td><td>-</td><td>4</td><td>0</td><td>0</td><td>0</td><td>25</td><td>14</td></tr><tr><td>ADV</td><td>0</td><td>0</td><td>0</td><td>4</td><td>-</td><td>0</td><td>15</td><td>29</td><td>25</td><td>36</td></tr><tr><td>DIR</td><td>0</td><td>0</td><td>5</td><td>4</td><td>0</td><td>-</td><td>11</td><td>2</td><td>0</td><td>0</td></tr><tr><td>LOC</td><td>5</td><td>9</td><td>12</td><td>0</td><td>4</td><td>0</td><td>-</td><td>10</td><td>0</td><td>14</td></tr><tr><td>MNR</td><td>3</td><td>0</td><td>12</td><td>26</td><td>33</td><td>0</td><td>0</td><td>-</td><td>0</td><td>21</td></tr><tr><td>PNC</td><td>0</td><td>3</td><td>5</td><td>4</td><td>0</td><td>11</td><td>4</td><td>2</td><td>-</td><td>0</td></tr><tr><td>TMP</td><td>0</td><td>8</td><td>5</td><td>0</td><td>41</td><td>11</td><td>26</td><td>6</td><td>0</td><td>-</td></tr></table>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF8": { |
| "text": "Confusion matrix for labeling errors, showing the percentage of predicted labels for each gold label. We only count predicted arguments that match gold span boundaries.", |
| "type_str": "table", |
| "content": "<table/>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF12": { |
| "text": "F1 on CoNLL 2005, and the development set of CoNLL 2012, broken down by genres. Syntax-constrained decoding (+AutoSyn) shows bigger improvement on in-domain data (CoNLL 05 and CoNLL 2012 NW).", |
| "type_str": "table", |
| "content": "<table/>", |
| "num": null, |
| "html": null |
| } |
| } |
| } |
| } |