| { |
| "paper_id": "P19-1009", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:21:12.399721Z" |
| }, |
| "title": "AMR Parsing as Sequence-to-Graph Transduction", |
| "authors": [ |
| { |
| "first": "Sheng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Johns Hopkins University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Xutai", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Johns Hopkins University", |
| "location": {} |
| }, |
| "email": "xutaima@jhu.edu" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Duh", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Johns Hopkins University", |
| "location": {} |
| }, |
| "email": "kevinduh@cs.jhu.edu" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Johns Hopkins University", |
| "location": {} |
| }, |
| "email": "vandurme@cs.jhu.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We propose an attention-based model that treats AMR parsing as sequence-to-graph transduction. Unlike most AMR parsers that rely on pre-trained aligners, external semantic resources, or data augmentation, our proposed parser is aligner-free, and it can be effectively trained with limited amounts of labeled AMR data. Our experimental results outperform all previously reported SMATCH scores, on both AMR 2.0 (76.3% F1 on LDC2017T10) and AMR 1.0 (70.2% F1 on LDC2014T12). Figure 11: Full model prediction vs. no BERT embeddings prediction.", |
| "pdf_parse": { |
| "paper_id": "P19-1009", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We propose an attention-based model that treats AMR parsing as sequence-to-graph transduction. Unlike most AMR parsers that rely on pre-trained aligners, external semantic resources, or data augmentation, our proposed parser is aligner-free, and it can be effectively trained with limited amounts of labeled AMR data. Our experimental results outperform all previously reported SMATCH scores, on both AMR 2.0 (76.3% F1 on LDC2017T10) and AMR 1.0 (70.2% F1 on LDC2014T12). Figure 11: Full model prediction vs. no BERT embeddings prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "Abstract Meaning Representation (AMR, Banarescu et al., 2013) parsing is the task of transducing natural language text into AMR, a graph-based formalism used for capturing sentence-level semantics. Challenges in AMR parsing include:",
| "cite_spans": [ |
| { |
| "start": 38, |
| "end": 61, |
| "text": "Banarescu et al., 2013)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(1) its property of reentrancy -the same concept can participate in multiple relations -which leads to graphs in contrast to trees (Wang et al., 2015) ;", |
| "cite_spans": [ |
| { |
| "start": 131, |
| "end": 150, |
| "text": "(Wang et al., 2015)", |
| "ref_id": "BIBREF61" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(2) the lack of gold alignments between nodes (concepts) in the graph and words in the text which limits attempts to rely on explicit alignments to generate training data (Flanigan et al., 2014; Wang et al., 2015; Damonte et al., 2017; Foland and Martin, 2017; Peng et al., 2017b; Groschwitz et al., 2018; Guo and Lu, 2018) ; and (3) relatively limited amounts of labeled data (Konstas et al., 2017) .", |
| "cite_spans": [ |
| { |
| "start": 171, |
| "end": 194, |
| "text": "(Flanigan et al., 2014;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 195, |
| "end": 213, |
| "text": "Wang et al., 2015;", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 214, |
| "end": 235, |
| "text": "Damonte et al., 2017;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 236, |
| "end": 260, |
| "text": "Foland and Martin, 2017;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 261, |
| "end": 280, |
| "text": "Peng et al., 2017b;", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 281, |
| "end": 305, |
| "text": "Groschwitz et al., 2018;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 306, |
| "end": 323, |
| "text": "Guo and Lu, 2018)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 377, |
| "end": 399, |
| "text": "(Konstas et al., 2017)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Recent attempts to overcome these challenges include: modeling alignments as latent variables (Lyu and Titov, 2018) ; leveraging external semantic resources (Artzi et al., 2015; Bjerva et al., 2016) ; data augmentation (Konstas et al., 2017; van Noord and Bos, 2017b) ; and employing attention-based sequence-to-sequence models (Barzdins and Gosko, 2016; Konstas et al., 2017; van Noord and Bos, 2017b) .", |
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 115, |
| "text": "(Lyu and Titov, 2018)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 157, |
| "end": 177, |
| "text": "(Artzi et al., 2015;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 178, |
| "end": 198, |
| "text": "Bjerva et al., 2016)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 219, |
| "end": 241, |
| "text": "(Konstas et al., 2017;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 242, |
| "end": 267, |
| "text": "van Noord and Bos, 2017b)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 328, |
| "end": 354, |
| "text": "(Barzdins and Gosko, 2016;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 355, |
| "end": 376, |
| "text": "Konstas et al., 2017;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 377, |
| "end": 402, |
| "text": "van Noord and Bos, 2017b)", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In this paper, we introduce a different way to handle reentrancy, and propose an attention-based model that treats AMR parsing as sequence-to-graph transduction. The proposed model, supported by an extended pointer-generator network, is aligner-free and can be effectively trained with limited amount of labeled AMR data. Experiments on two publicly available AMR benchmarks demonstrate that our parser clearly outperforms the previous best parsers on both benchmarks. It achieves the best reported SMATCH scores: 76.3% F1 on LDC2017T10 and 70.2% F1 on LDC2014T12. We also provide extensive ablative and qualitative studies, quantifying the contributions from each component. Our model implementation is available at https://github.com/sheng-z/stog.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "AMR is a rooted, directed, and usually acyclic graph where nodes represent concepts, and labeled directed edges represent the relationships between them (see Figure 1 for an AMR example). The reason for AMR being a graph instead of a tree is that it allows reentrant semantic relations. For instance, in Figure 1(a) \"victim\" is both ARG0 and ARG1 of \"help-01\". While efforts have gone into developing graph-based algorithms for AMR parsing (Chiang et al., 2013; Flanigan et al., 2014) , it is more challenging to parse a sentence into an AMR graph rather than a tree as there are efficient off-the-shelf tree-based algorithms, e.g., Chu and Liu (1965) ; Edmonds (1968) . To leverage these tree-based algorithms as well as other structured prediction paradigms (McDonald et al., 2005) , we introduce another view of reentrancy. AMR reentrancy is employed when a node participates in multiple semantic relations. We convert an AMR graph into a tree by duplicating nodes that have reentrant relations; that is, whenever a node has a reentrant relation, we make a copy of that node and use the copy to participate in the relation, thereby resulting in a tree. Next, in order to preserve the reentrancy information, we add an extra layer of annotation by assigning an index to each node. Duplicated nodes are assigned the same index as the original node. Figure 1(b) shows a resultant AMR tree: subscripts of nodes are indices; two \"victim\" nodes have the same index as they refer to the same concept. The original AMR graph can be recovered by merging identically indexed nodes and unioning edges from/to these nodes. Similar ideas were used by Artzi et al. (2015) who introduced Skolem IDs to represent anaphoric references in the transformation from CCG to AMR, and van Noord and Bos (2017a) who kept co-indexed AMR variables, and converted them to numbers.", |
| "cite_spans": [ |
| { |
| "start": 440, |
| "end": 461, |
| "text": "(Chiang et al., 2013;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 462, |
| "end": 484, |
| "text": "Flanigan et al., 2014)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 633, |
| "end": 651, |
| "text": "Chu and Liu (1965)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 654, |
| "end": 668, |
| "text": "Edmonds (1968)", |
| "ref_id": null |
| }, |
| { |
| "start": 760, |
| "end": 783, |
| "text": "(McDonald et al., 2005)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 1350, |
| "end": 1361, |
| "text": "Figure 1(b)", |
| "ref_id": null |
| }, |
| { |
| "start": 1641, |
| "end": 1660, |
| "text": "Artzi et al. (2015)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 158, |
| "end": 166, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Another View of Reentrancy", |
| "sec_num": "2" |
| }, |
| { |
"text": "If we consider the AMR tree with indexed nodes as the prediction target, then our approach to parsing is formalized as a two-stage process: node prediction and edge prediction. 1 An example of the parsing process is shown in Figure 2 . Node Prediction Given an input sentence w = w_1, ..., w_n, each w_i a word in the sentence, our approach sequentially decodes a list of nodes u = u_1, ..., u_m and deterministically assigns their indices d = d_1, ..., d_m.",
"cite_spans": [],
"ref_spans": [
{
"start": 225,
"end": 233,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task Formalization",
"sec_num": "3"
},
{
"text": "P(u) = \u220f_{i=1}^{m} P(u_i | u_{<i}, d_{<i}, w)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
"text": "Note that we allow the same node to occur multi- 1 The two-stage process is similar to \"concept identification\" and \"relation identification\" in Flanigan et al. (2014) ; Lyu and Titov (2018) ; inter alia. Figure 2 : A two-stage process of AMR parsing. We remove senses (i.e., -01, -02, etc.) as they will be assigned in the post-processing step.",
"cite_spans": [
{
"start": 49,
"end": 50,
"text": "1",
"ref_id": null
},
{
"start": 145,
"end": 167,
"text": "Flanigan et al. (2014)",
"ref_id": "BIBREF22"
},
{
"start": 170,
"end": 190,
"text": "Lyu and Titov (2018)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 205,
"end": 213,
"text": "Figure 2",
"ref_id": null
}
],
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
"text": "ple times in the list; multiple occurrences of a node will be assigned the same index. We choose to predict nodes sequentially rather than simultaneously, because (1) we believe the current node generation is informative to the future node generation; (2) variants of efficient sequence-to-sequence models (Bahdanau et al., 2014; Vinyals et al., 2015) can be employed to model this process. At training time, we obtain the reference list of nodes and their indices using a pre-order traversal over the reference AMR tree. We also evaluate other traversal strategies, and will discuss their difference in Section 7.2. Edge Prediction Given an input sentence w, a node list u, and indices d, we look for the highest scoring parse tree y in the space Y(u) of valid trees over u with the constraint of d. A parse tree y is a set of directed head-modifier edges y = {(u i , u j ) | 1 \u2264 i, j \u2264 m}. In order to make the search tractable, we follow the arc-factored graph-based approach (McDonald et al., 2005; Kiperwasser and Goldberg, 2016) , decomposing the score of a tree into the sum of the scores of its head-modifier edges:",
"cite_spans": [
{
"start": 306,
"end": 329,
"text": "(Bahdanau et al., 2014;",
"ref_id": "BIBREF2"
},
{
"start": 330,
"end": 351,
"text": "Vinyals et al., 2015)",
"ref_id": "BIBREF58"
},
{
"start": 979,
"end": 1002,
"text": "(McDonald et al., 2005;",
"ref_id": "BIBREF40"
},
{
"start": 1003,
"end": 1034,
"text": "Kiperwasser and Goldberg, 2016)",
"ref_id": "BIBREF32"
}
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
"text": "parse(u) = arg max_{y \u2208 Y(u)} \u2211_{(u_i, u_j) \u2208 y} score(u_i, u_j)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
"text": "Based on the scores of the edges, the highest scoring parse tree (i.e., maximum spanning arborescence) can be efficiently found using the Chu-Liu-Edmonds algorithm. We further incorporate indices as constraints in the algorithm, which is described in Section 4.4. After obtaining the parse tree, we merge identically indexed nodes to recover the standard AMR graph.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Y q F / h o Q B M Q Q + I H U i / 3 v g 8 = \" > A A A B 9 X i c b V B N S 8 N A E N 3 U r 1 q / q h 6 9 L B b B U 0 l E s M e C F 4 8 V 7 A e 0 s W y 2 k 3 b p Z h N 2 J 2 o J / R 9 e P C j i 1 f / i z X / j t s 1 B W x 8 M P N 6 b Y W Z e k E h h 0 H W / n c L a + s b m V n G 7 t L O 7 t 3 9 Q P j x q m T j V H J o 8 l r H u B M y A F A q a K F B C J 9 H A o k B C O x h f z / z 2 A 2 g j Y n W H k w T 8 i A 2 V C A V n a K X 7 p N 9 D e E I d Z U N Q 0 3 6 5 4 l b d O e g q 8 X J S I T k a / f J X b x D z N A K F X D J j u p 6 b o J 8 x j Y J L m J Z 6 q Y G E 8 T E b Q t d S x S I w f j a / e k r P r D K g Y a x t K a R z 9 f d E x i J j J l F g O y O G I 7 P s z c T / v G 6 K Y c 3 P h E p S B M U X i 8 J U U o z p L A I 6 E B o 4 y o k l j G t h b 6 V 8 x D T j a I M q 2 R C 8 5 Z d X S e u i 6 r l V 7 / a y U q / l c R T J C T k l 5 8 Q j V 6 R O b k i D N A k n m j y T V / L m P D o v z r v z s W g t O P n M M f k D 5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "/ M H T e + S / Q = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" f 9 9 M 4 0 Y", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "q F / h o Q B M Q Q + I H U i / 3 v g 8 = \" > A A A B 9 X i c b V B N S 8 N A E N 3 U r 1 q / q h 6 9 L B b B U 0 l E s M e C F 4 8 V 7 A e 0 s W y 2 k 3 b p Z h N 2 J 2 o J / R 9 e P C j i 1 f / i z X / j t s 1 B W x 8 M P N 6 b Y W Z e k E h h 0 H W / n c L a + s b m V n G 7 t L O 7 t 3 9 Q P j x q m T j V H J o 8 l r H u B M y A F A q a K F B C J 9 H A o k B C O x h f z / z 2 A 2 g j Y n W H k w T 8 i A 2 V C A V n a K X 7 p N 9 D e E I d Z U N Q 0 3 6 5 4 l b d O e g q 8 X J S I T k a / f J X b x D z N A K F X D J j u p 6 b o J 8 x j Y J L m J Z 6 q Y G E 8 T E b Q t d S x S I w f j a / e k r P r D K g Y a x t K a R z 9 f d E x i J j J l F g O y O G I 7 P s z c T / v G 6 K Y c 3 P h E p S B M U X i 8 J U U o z p L A I 6 E B o 4 y o k l j G t h b 6 V 8 x D T j a I M q 2 R C 8 5 Z d X S e u i 6 r l V 7 / a y U q / l c R T J C T k l 5 8 Q j V 6 R O b k i D N A k n m j y T V / L m P D o v z r v z s W g t O P n M M f k D 5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "/ M H T e + S / Q = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" f 9 9 M 4 0 Y", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "q F / h o Q B M Q Q + I H U i / 3 v g 8 = \" > A A A B 9 X i c b V B N S 8 N A E N 3 U r 1 q / q h 6 9 L B b B U 0 l E s M e C F 4 8 V 7 A e 0 s W y 2 k 3 b p Z h N 2 J 2 o J / R 9 e P C j i 1 f / i z X / j t s 1 B W x 8 M P N 6 b Y W Z e k E h h 0 H W / n c L a + s b m V n G 7 t L O 7 t 3 9 Q P j x q m T j V H J o 8 l r H u B M y A F A q a K F B C J 9 H A o k B C O x h f z / z 2 A 2 g j Y n W H k w T 8 i A 2 V C A V n a K X 7 p N 9 D e E I d Z U N Q 0 3 6 5 4 l b d O e g q 8 X J S I T k a / f J X b x D z N A K F X D J j u p 6 b o J 8 x j Y J L m J Z 6 q Y G E 8 T E b Q t d S x S I w f j a / e k r P r D K g Y a x t K a R z 9 f d E x i J j J l F g O y O G I 7 P s z c T / v G 6 K Y c 3 P h E p S B M U X i 8 J U U o z p L A I 6 E B o 4 y o k l j G t h b 6 V 8 x D T j a I M q 2 R C 8 5 Z d X S e u i 6 r l V 7 / a y U q / l c R T J C T k l 5 8 Q j V 6 R O b k i D N A k n m j y T V / L m P D o v z r v z s W g t O P n M M f k D 5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "/ M H T e + S / Q = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" f 9 9 M 4 0 Y ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "q F / h o Q B M Q Q + I H U i / 3 v g 8 = \" > A A A B 9 X i c b V B N S 8 N A E N 3 U r 1 q / q h 6 9 L B b B U 0 l E s M e C F 4 8 V 7 A e 0 s W y 2 k 3 b p Z h N 2 J 2 o J / R 9 e P C j i 1 f / i z X / j t s 1 B W x 8 M P N 6 b Y W Z e k E h h 0 H W / n c L a + s b m V n G 7 t L O 7 t 3 9 Q P j x q m T j V H J o 8 l r H u B M y A F A q a K F B C J 9 H A o k B C O x h f z / z 2 A 2 g j Y n W H k w T 8 i A 2 V C A V n a K X 7 p N 9 D e E I d Z U N Q 0 3 6 5 4 l b d O e g q 8 X J S I T k a / f J X b x D z N A K F X D J j u p 6 b o J 8 x j Y J L m J Z 6 q Y G E 8 T E b Q t d S x S I w f j a / e k r P r D K g Y a x t K a R z 9 f d E x i J j J l F g O y O G I 7 P s z c T / v G 6 K Y c 3 P h E p S B M U X i 8 J U U o z p L A I 6 E B o 4 y o k l j G t h b 6 V 8 x D T j a I M q 2 R C 8 5 Z d X S e u i 6 r l V 7 / a y U q / l c R T J C T k l 5 8 Q j V 6 R O b k i D N A k n m j y T V / L m P D o v z r v z s W g t O P n M M f k D 5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "v l 9 a Z a 1 j G X z I = \" > A A A B 9 X i c b V B N S 8 N A E N 3 U r 1 q / q h 6 9 L B b B U 0 l E s M e C F 4 8 V 7 A e 0 s W y 2 k 3 b p b h J 2 J 2 o J / R 9 e P C j i 1 f / i z X / j t s 1 B W x 8 M P N 6 b Y W Z e k E h h 0 H W / n c L a + s b m V n G 7 t L O 7 t 3 9 Q P j x q m T j V H J o 8 l r H u B M y A F B E 0 U a C E T q K B q U B C O x h f z / z 2 A 2 g j 4 u g O J w n 4 i g 0 j E Q r O 0 E r 3 S b + H 8 I R a Z T j E a b 9 c c a v u H H S V e D m p k B y N f v m r N 4 h 5 q i B C L p k x X c 9 N 0 M + Y R s E l T E u 9 1 E D C + J g N o W t p x B Q Y P 5 t f P a V n V h n Q M N a 2 I q R z 9 f d E x p Q x E x X Y T s V w Z J a 9 m f i f 1 0 0 x r P m Z i J I U I e K L R W E q K c Z 0 F g E d C A 0 c 5 c Q S x r W w t 1 I + Y p p x t E G V b A j e 8 s u r p H V R 9 d y q d 3 t Z q d f y O I r k h J y S c + K R K 1 I n N 6 R B m o Q T T Z 7 J K 3 l z H p 0 X 5 9 3 5 W L Q W n H z m m P y B 8 / k D b f S T E g = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" R q f 2 5 5 u a S R 5 j 8 l n v l 9 a Z a 1 j G X z I = \" > A A A B 9 X i c b V B N S 8 N A E N 3 U r 1 q / q h 6 9 L B b B U 0 l E s M e C F 4 8 V 7 A e 0 s W y 2 k 3 b p b h J 2 J 2 o J / R 9 e P C j i 1 f / i z X / j t s 1 B W x 8 M P N 6 b Y W Z e k E h h 0 H W / n c L a + s b m V n G 7 t L O 7 t 3 9 Q P j x q m T j V H J o 8 l r H u B M y A F B E 0 U a C E T q K B q U B C O x h f z / z 2 A 2 g j 4 u g O J w n 4 i g 0 j E Q r O 0 E r 3 S b + H 8 I R a Z T j E a b 9 c c a v u H H S V e D m p k B y N f v m r N 4 h 5 q i B C L p k x X c 9 N 0 M + Y R s E l T E u 9 1 E D C + J g N o W t p x B Q Y P 5 t f P a V n V h n Q M N a 2 I q R z 9 f d E x p Q x E x X Y T s V w Z J a 9 m f i f 1 0 0 x r P m Z i J I U I e K L R W E q K c Z 0 F g E d C A 0 c 5 c Q S x r W w t 1 I + Y p p x t E G V b A j e 8 s u r p H V R 9 d y q d 3 t Z q d f y O I r k h J y S c + K R K 1 I n N 6 R B m o Q T T Z 7 J K 3 l z H p 0 X 5 9 3 5 W L Q W n H z m m P y B 8 / k D b f S T E g = = < / l a t 
e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" R q f 2 5 5 u a S R 5 j 8 l n v l 9 a Z a 1 j G X z I = \" > A A A B 9 X i c b V B N S 8 N A E N 3 U r 1 q / q h 6 9 L B b B U 0 l E s M e C F 4 8 V 7 A e 0 s W y 2 k 3 b p b h J 2 J 2 o J / R 9 e P C j i 1 f / i z X / j t s 1 B W x 8 M P N 6 b Y W Z e k E h h 0 H W / n c L a + s b m V n G 7 t L O 7 t 3 9 Q P j x q m T j V H J o 8 l r H u B M y A F B E 0 U a C E T q K B q U B C O x h f z / z 2 A 2 g j 4 u g O J w n 4 i g 0 j E Q r O 0 E r 3 S b + H 8 I R a Z T j E a b 9 c c a v u H H S V e D m p k B y N f v m r N 4 h 5 q i B C L p k x X c 9 N 0 M + Y R s E l T E u 9 1 E D C + J g N o W t p x B Q Y P 5 t f P a V n V h n Q M N a 2 I q R z 9 f d E x p Q x E x X Y T s V w Z J a 9 m f i f 1 0 0 x r P m Z i J I U I e K L R W E q K c Z 0 F g E d C A 0 c 5 c Q S x r W w t 1 I + Y p p x t E G V b A j e 8 s u r p H V R 9 d y q d 3 t Z q d f y O I r k h J y S c + K R K 1 I n N 6 R B m o Q T T Z 7 J K 3 l z H p 0 X 5 9 3 5 W L Q W n H z m m P y B 8 / k D b f S T E g = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" R q f 2 5 5 u a S R 5 j 8 l n v l 9 a Z a 1 j G X z I = \" > A A A B 9 X i c b V B N S 8 N A E N 3 U r 1 q / q h 6 9 L B b B U 0 l E s M e C F 4 8 V 7 A e 0 s W y 2 k 3 b p b h J 2 J 2 o J / R 9 e P C j i 1 f / i z X / j t s 1 B W x 8 M P N 6 b Y W Z e k E h h 0 H W / n c L a + s b m V n G 7 t L O 7 t 3 9 Q P j x q m T j V H J o 8 l r H u B M y A F B E 0 U a C E T q K B q U B C O x h f z / z 2 A 2 g j 4 u g O J w n 4 i g 0 j E Q r O 0 E r 3 S b + H 8 I R a Z T j E a b 9 c c a v u H H S V e D m p k B y N f v m r N 4 h 5 q i B C L p k x X c 9 N 0 M + Y R s E l T E u 9 1 E D C + J g N o W t p x B Q Y P 5 t f P a V n V h n Q M N a 2 I q R z 9 f d E x p Q x E x X Y T s V w Z J a 9 m f i f 1 0 0 x r P m Z i J I U I e K L R W E q K c Z 0 F g E d C A 0 c 5 c Q S x r W w t 1 I + Y p p x t E G V b A j e 8 s u r p H V R 9 d y q d 3 t Z q d f y O I r k h J y S c + K R K 1 I n N 6 R B m o Q T T Z 7 J K 3 l z H 
p 0 X 5 9 3 5 W L Q W n H z m m P y B 8 / k D b f S T E g = = < / l a t e x i t > p src", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" L B J I G m i R a 8 y 7 f N w y I g z x Q 2 6 k q o Y = \" > A A A B 9 X i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 B I v g q S Q i 2 G P B i 8 c K 9 g P a W D b b T b t 0 d x N 2 J 2 o J / R 9 e P C j i 1 f / i z X / j t s 1 B W x 8 M P N 6 b Y W Z e m A h u 0 P O + n c L a + s b m V n G 7 t L O 7 t 3 9 Q P j x q m T j", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "V l D V p L G L d C Y l h g i v W R I 6 C d R L N i A w F a 4 f j 6 5 n f f m D a 8 F j d 4 S R h g S R D x S N O C V r p P u n 3 k D 2 h l p n R d N o v V 7 y q N 4 e 7 S v y c V C B H o 1 / + 6 g 1 i m k q m k A p i T N f 3 E g w y o p F T w a a l X m p Y Q u i Y D F n X U k U k M 0 E 2 v 3 r q n l l l 4 E a x t q X Q n a u / J z I i j Z n I 0 H Z K g i O z 7 M 3 E / 7 x u i l E t y L h K U m S K L h Z F q X A x d m c R u A O u G U U x s Y R Q z e 2 t L h 0 R T S j a o E o 2 B H / 5 5 V X S u q j 6 X t W / v a z U a 3 k c R T i B U z g H H 6 6 g D j f Q g C Z Q 0 P A M r / D m P D o v z r v z s W g t O P n M M f y B 8 / k D Y 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "q T C w = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" L B J I G m i R a 8 y 7 f N w y I g z x Q 2 6 k q o Y = \" > A A A B 9 X i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 B I v g q S Q i 2 G P B i 8 c K 9 g P a W D b b T b t 0 d x N 2 J 2 o J / R 9 e P C j i 1 f / i z X / j t s 1 B W x 8 M P N 6 b Y W Z e m A h u 0 P O + n c L a + s b m V n G 7 t L O 7 t 3 9 Q P j x q m T j", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "V l D V p L G L d C Y l h g i v W R I 6 C d R L N i A w F a 4 f j 6 5 n f f m D a 8 F j d 4 S R h g S R D x S N O C V r p P u n 3 k D 2 h l p n R d N o v V 7 y q N 4 e 7 S v y c V C B H o 1 / + 6 g 1 i m k q m k A p i T N f 3 E g w y o p F T w a a l X m p Y Q u i Y D F n X U k U k M 0 E 2 v 3 r q n l l l 4 E a x t q X Q n a u / J z I i j Z n I 0 H Z K g i O z 7 M 3 E / 7 x u i l E t y L h K U m S K L h Z F q X A x d m c R u A O u G U U x s Y R Q z e 2 t L h 0 R T S j a o E o 2 B H / 5 5 V X S u q j 6 X t W / v a z U a 3 k c R T i B U z g H H 6 6 g D j f Q g C Z Q 0 P A M r / D m P D o v z r v z s W g t O P n M M f y B 8 / k D Y 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "q T C w = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" L B J I G m i R a 8 y 7 f N w y I g z x Q 2 6 k q o Y = \" > A A A B 9 X i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 B I v g q S Q i 2 G P B i 8 c K 9 g P a W D b b T b t 0 d x N 2 J 2 o J / R 9 e P C j i 1 f / i z X / j t s 1 B W x 8 M P N 6 b Y W Z e m A h u 0 P O + n c L a + s b m V n G 7 t L O 7 t 3 9 Q P j x q m T j", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "V l D V p L G L d C Y l h g i v W R I 6 C d R L N i A w F a 4 f j 6 5 n f f m D a 8 F j d 4 S R h g S R D x S N O C V r p P u n 3 k D 2 h l p n R d N o v V 7 y q N 4 e 7 S v y c V C B H o 1 / + 6 g 1 i m k q m k A p i T N f 3 E g w y o p F T w a a l X m p Y Q u i Y D F n X U k U k M 0 E 2 v 3 r q n l l l 4 E a x t q X Q n a u / J z I i j Z n I 0 H Z K g i O z 7 M 3 E / 7 x u i l E t y L h K U m S K L h Z F q X A x d m c R u A O u G U U x s Y R Q z e 2 t L h 0 R T S j a o E o 2 B H / 5 5 V X S u q j 6 X t W / v a z U a 3 k c R T i B U z g H H 6 6 g D j f Q g C Z Q 0 P A M r / D m P D o v z r v z s W g t O P n M M f y B 8 / k D Y 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "q T C w = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" L B J I G m i R a 8 y 7 f N w y I g z x Q 2 6 k q o Y = \" > A A A B 9 X i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 B I v g q S Q i 2 G P B i 8 c K 9 g P a W D b b T b t 0 d x N 2 J 2 o J / R 9 e P C j i 1 f / i z X / j t s 1 B W x 8 M P N 6 b Y W Z e m A h u 0 P O + n c L a + s b m V n G 7 t L O 7 t 3 9 Q P j x q m T j Figure 3 : Extended pointer-generator network for node prediction. For each decoding time step, three probabilities p src , p tgt and p gen are calculated. The source and target attention distributions as well as the vocabulary distribution are weighted by these probabilities respectively, and then summed to obtain the final distribution, from which we make our prediction. Best viewed in color.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 396, |
| "end": 404, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "V l D V p L G L d C Y l h g i v W R I 6 C d R L N i A w F a 4 f j 6 5 n f f m D a 8 F j d 4 S R h g S R D x S N O C V r p P u n 3 k D 2 h l p n R d N o v V 7 y q N 4 e 7 S v y c V C B H o 1 / + 6 g 1 i m k q m k A p i T N f 3 E g w y o p F T w a a l X m p Y Q u i Y D F n X U k U k M 0 E 2 v 3 r q n l l l 4 E a x t q X Q n a u / J z I i j Z n I 0 H Z K g i O z 7 M 3 E / 7 x u i l E t y L h K U m S K L h Z F q X A x d m c R u A O u G U U x s Y R Q z e 2 t L h 0 R T S j a o E o 2 B H / 5 5 V X S u q j 6 X t W / v a z U a 3 k c R T i B U z g H H 6 6 g D j f Q g C Z Q 0 P A M r / D m P D o v z r v z s W g t O P n M M f y B 8 / k D Y 1 q T C w = = < / l a", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Formalization", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our model has two main modules: (1) an extended pointer-generator network for node prediction; and (2) a deep biaffine classifier for edge prediction. The two modules correspond to the two-stage process for AMR parsing, and they are jointly learned during training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
"text": "Inspired by the self-copy mechanism in Zhang et al. (2018), we extend the pointer-generator network (See et al., 2017) for node prediction. The pointer-generator network was proposed for text summarization, which can copy words from the source text via pointing, while retaining the ability to produce novel words through the generator. The major difference of our extension is that it can copy nodes, not only from the source text, but also from the previously generated nodes on the target side. This target-side pointing is well-suited to our task as nodes we will predict can be copies of other nodes. While there are other pointer/copy networks (Merity et al., 2016; Miao and Blunsom, 2016) , we found the pointer-generator network very effective at reducing data sparsity in AMR parsing, which will be shown in Section 7.2.",
"cite_spans": [
{
"start": 100,
"end": 118,
"text": "(See et al., 2017)",
"ref_id": "BIBREF56"
},
{
"start": 650,
"end": 671,
"text": "(Merity et al., 2016;",
"ref_id": "BIBREF41"
},
{
"start": 672,
"end": 695,
"text": "Miao and Blunsom, 2016)",
"ref_id": "BIBREF42"
}
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "As depicted in Figure 3 , the extended pointergenerator network consists of four major components: an encoder embedding layer, an encoder, a decoder embedding layer, and a decoder. Encoder Embedding Layer This layer converts words in input sentences into vector representations. Each vector is the concatenation of embeddings of GloVe (Pennington et al., 2014) , BERT (Devlin et al., 2018) , POS (part-of-speech) tags and anonymization indicators, and features learned by a character-level convolutional neural network (CharCNN, Kim et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 335, |
| "end": 360, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 368, |
| "end": 389, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 529, |
| "end": 546, |
| "text": "Kim et al., 2016)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 15, |
| "end": 23, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Anonymization indicators are binary indicators that tell the encoder whether the word is an anonymized word. In preprocessing, text spans of named entities in input sentences will be replaced by anonymized tokens (e.g. person, country) to reduce sparsity (see the Appendix for details).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
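The anonymization step described above can be sketched as follows; this is a minimal, illustrative sketch, not the authors' preprocessing code, and the placeholder naming scheme and span format are assumptions:

```python
def anonymize(tokens, entity_spans):
    """Replace named-entity spans with anonymized placeholder tokens and
    return the new token list plus binary anonymization indicators.
    entity_spans: list of (start, end, type) with end exclusive; spans are
    assumed non-overlapping and sorted. Placeholder naming is illustrative."""
    out, indicators = [], []
    counts = {}
    spans = {s: (e, t) for s, e, t in entity_spans}
    i = 0
    while i < len(tokens):
        if i in spans:
            end, etype = spans[i]
            k = counts.get(etype, 0)
            counts[etype] = k + 1
            out.append(f"{etype}_{k}")   # e.g. person_0, country_0
            indicators.append(1)
            i = end
        else:
            out.append(tokens[i])
            indicators.append(0)
            i += 1
    return out, indicators
```

The indicators are exactly the binary features fed to the encoder embedding layer; the collapsed spans reduce vocabulary sparsity.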
| { |
| "text": "Except for BERT, all other embeddings are fetched from their corresponding learned embedding lookup tables. BERT takes subword units as input, which means that one word may correspond to multiple hidden states of BERT. In order to accurately use these hidden states to represent each word, we apply an average pooling function to the outputs of BERT. Figure 4 illustrates the process of generating word-level embeddings from BERT. Encoder The encoder is a multi-layer bidirectional RNN (Schuster and Paliwal, 1997) :", |
| "cite_spans": [ |
| { |
| "start": 482, |
| "end": 510, |
| "text": "(Schuster and Paliwal, 1997)", |
| "ref_id": "BIBREF55" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 347, |
| "end": 355, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
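The subword-to-word average pooling described above can be sketched as follows, assuming a WordPiece-style tokenizer that reports a word index per subword; plain float lists stand in for BERT hidden states:

```python
def word_embeddings_from_subwords(subword_vecs, word_ids):
    """Average-pool subword hidden states into word-level embeddings.
    word_ids[i] gives the word index of subword i; every word is assumed
    to own at least one subword."""
    n_words = max(word_ids) + 1
    dim = len(subword_vecs[0])
    sums = [[0.0] * dim for _ in range(n_words)]
    counts = [0] * n_words
    for vec, w in zip(subword_vecs, word_ids):
        counts[w] += 1
        for d in range(dim):
            sums[w][d] += vec[d]
    # divide each word's accumulated vector by its subword count
    return [[s / c for s in row] for row, c in zip(sums, counts)]
```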
| { |
| "text": "h^l_i = [\\overrightarrow{f}^l(h^{l-1}_i, h^l_{i-1}); \\overleftarrow{f}^l(h^{l-1}_i, h^l_{i+1})],", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where \\overrightarrow{f}^l and \\overleftarrow{f}^l are two LSTM cells (Hochreiter and Schmidhuber, 1997); h^l_i is the l-th layer encoder hidden state at time step i; h^0_i is the encoder embedding layer output for word w_i. Decoder Embedding Layer Similar to the encoder embedding layer, this layer outputs vector representations for AMR nodes. The difference is that each vector is the concatenation of embeddings of GloVe, POS tags and indices, and feature vectors from CharCNN. POS tags of nodes are inferred at runtime: if a node is a copy from the input sentence, the POS tag of the corresponding word is used; if a node is a copy from the preceding nodes, the POS tag of its antecedent is used; if a node is a new node emitted from the vocabulary, an UNK tag is used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
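The three runtime POS-tag rules above can be sketched as follows; the node dictionary keys `src_index` and `antecedent` are hypothetical bookkeeping for illustration, not the authors' data structure:

```python
def node_pos_tag(node, src_pos, prev_pos):
    """Infer a POS tag for a predicted AMR node at runtime, following the
    three rules in Section 4.1. A source-side copy carries the copied
    word's index, a target-side copy carries its antecedent's position in
    the preceding node list, and a vocabulary node carries neither."""
    if "src_index" in node:        # rule 1: copied from the input sentence
        return src_pos[node["src_index"]]
    if "antecedent" in node:       # rule 2: copied from a preceding node
        return prev_pos[node["antecedent"]]
    return "UNK"                   # rule 3: emitted from the vocabulary
```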
| { |
| "text": "We do not include BERT embeddings in this layer because AMR nodes, especially their order, are significantly different from natural language text (on which BERT was pre-trained). We tried using \"fixed\" BERT in this layer, which did not lead to improvement. 2 Decoder At each step t, the decoder (an l-layer unidirectional LSTM) receives hidden state s^{l-1}_t from the layer below and hidden state s^l_{t-1} from the previous time step, and generates hidden state s^l_t:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "s^l_t = f^l(s^{l-1}_t, s^l_{t-1}),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where s^0_t is the concatenation (i.e., the input-feeding approach, Luong et al., 2015) of two vectors: the decoder embedding layer output for the previous node u_{t-1} (while training, u_{t-1} is the previous node of the reference node list; at test time it is the previous node emitted by the decoder), and the attentional vector s_{t-1} from the previous step (explained later in this section). s^l_0 is the concatenation of the last encoder hidden states from \\overrightarrow{f}^l and \\overleftarrow{f}^l respectively. The source attention distribution a^t_{src} is calculated by additive attention (Bahdanau et al., 2014) :", |
| "cite_spans": [ |
| { |
| "start": 67, |
| "end": 86, |
| "text": "Luong et al., 2015)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 561, |
| "end": 584, |
| "text": "(Bahdanau et al., 2014)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "e^t_{src} = v_{src}^{\\top} \\tanh(W_{src} h^l_{1:n} + U_{src} s^l_t + b_{src}), \\quad a^t_{src} = \\mathrm{softmax}(e^t_{src}),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "and it is then used to produce a weighted sum of encoder hidden states, i.e., the context vector c_t. The attentional vector s_t combines both source- and target-side information, and it is calculated by an MLP (shown in Figure 3 ):", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 217, |
| "end": 225, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "s_t = \\tanh(W_c [c_t; s^l_t] + b_c)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The attentional vector s_t has three usages:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(1) it is fed through a linear layer and softmax to produce the vocabulary distribution:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "P_{vocab} = \\mathrm{softmax}(W_{vocab} s_t + b_{vocab})", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(2) it is used to calculate the target attention distribution a^t_{tgt}:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "e^t_{tgt} = v_{tgt}^{\\top} \\tanh(W_{tgt} s_{1:t-1} + U_{tgt} s_t + b_{tgt}), \\quad a^t_{tgt} = \\mathrm{softmax}(e^t_{tgt}),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(3) it is used to calculate the source-side copy probability p_{src}, the target-side copy probability p_{tgt}, and the generation probability p_{gen} via a switch layer:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "[p_{src}, p_{tgt}, p_{gen}] = \\mathrm{softmax}(W_{switch} s_t + b_{switch})", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Note that p_{src} + p_{tgt} + p_{gen} = 1. They act as a soft switch to choose between copying an existing node from the preceding nodes by sampling from the target attention distribution a^t_{tgt}, or emitting a new node in two ways: (1) generating a new node from the fixed vocabulary by sampling from P_{vocab}, or (2) copying a word (as a new node) from the input sentence by sampling from the source attention distribution a^t_{src}. The final probability distribution P^{(node)}(u_t) for node u_t is defined as follows. If u_t is a copy of existing nodes, then:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "P^{(node)}(u_t) = p_{tgt} \\sum^{t-1}_{i: u_i = u_t} a^t_{tgt}[i],", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "otherwise:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "P^{(node)}(u_t) = p_{gen} P_{vocab}(u_t) + p_{src} \\sum^{n}_{i: w_i = u_t} a^t_{src}[i],", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
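The two-case node distribution above can be sketched with toy probabilities; this is a list-based sketch whose argument layout is illustrative, and it omits the index bookkeeping that distinguishes a new node from a copy with the same surface form:

```python
def p_node(u_t, p_src, p_tgt, p_gen, p_vocab, a_src, a_tgt,
           vocab, src_words, prev_nodes, is_copy):
    """Final node probability P^(node)(u_t) from Section 4.1: a target-side
    copy is scored by the target attention alone; a new node mixes
    vocabulary generation with source-side copying."""
    if is_copy:
        # u_t copies one of the previously generated nodes
        return p_tgt * sum(a for u, a in zip(prev_nodes, a_tgt) if u == u_t)
    # new node: generate from the vocabulary ...
    gen = p_gen * (p_vocab[vocab.index(u_t)] if u_t in vocab else 0.0)
    # ... or copy a word from the input sentence
    cpy = p_src * sum(a for w, a in zip(src_words, a_src) if w == u_t)
    return gen + cpy
```

The sums over attention weights mirror the sums over positions i with u_i = u_t (target side) and w_i = u_t (source side) in the equations above.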
| { |
| "text": "where a^t[i] indexes the i-th element of a^t. Note that a new node may have the same surface form as an existing node. We track their difference using indices. The index d_t for node u_t is assigned deterministically as below:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "d_t = t, if u_t is a new node;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "d_t = d_j, if u_t is a copy of its antecedent u_j.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extended Pointer-Generator Network", |
| "sec_num": "4.1" |
| }, |
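The deterministic index assignment can be sketched as below; representing each decoding step by its antecedent position (or None for a new node) is an assumption made for illustration:

```python
def assign_indices(antecedents):
    """Index assignment d_t from Section 4.1: a new node gets its own time
    step as its index; a copy inherits its antecedent's index.
    antecedents[t] is None for a new node, else the position j of u_j."""
    d = []
    for t, ant in enumerate(antecedents):
        d.append(d[ant] if ant is not None else t)
    return d
```

Nodes sharing an index are later merged into a single node when the graph is recovered.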
| { |
| "text": "For the second stage (i.e., edge prediction), we employ a deep biaffine classifier, which was originally proposed for graph-based dependency parsing (Dozat and Manning, 2016) , and has recently been applied to semantic parsing (Peng et al., 2017a; Dozat and Manning, 2018) . As depicted in Figure 5 , the major difference in our usage is that instead of re-encoding AMR nodes, we directly use decoder hidden states from the extended pointer-generator network as the input to the deep biaffine classifier. We find two advantages of using decoder hidden states as input:", |
| "cite_spans": [ |
| { |
| "start": 149, |
| "end": 174, |
| "text": "(Dozat and Manning, 2016)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 227, |
| "end": 247, |
| "text": "(Peng et al., 2017a;", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 248, |
| "end": 272, |
| "text": "Dozat and Manning, 2018)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 290, |
| "end": 298, |
| "text": "Figure 5", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Deep Biaffine Classifier", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "(1) through the input-feeding approach, decoder hidden states contain contextualized information from both the input sentence and the predicted nodes; (2) because decoder hidden states are used for both node prediction and edge prediction, we can jointly train the two modules in our model. Given decoder hidden states s_1, ..., s_m and a learned vector representation s_0 of a dummy root, we follow Dozat and Manning (2016) , factorizing edge prediction into two components: one that predicts whether or not a directed edge (u_k, u_t) exists between two nodes u_k and u_t, and another that predicts the best label for each potential edge.", |
| "cite_spans": [ |
| { |
| "start": 400, |
| "end": 424, |
| "text": "Dozat and Manning (2016)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep Biaffine Classifier", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Edge and label scores are calculated as below:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep Biaffine Classifier", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "s^{(edge-head)}_t = \\mathrm{MLP}^{(edge-head)}(s_t), \\; s^{(edge-dep)}_t = \\mathrm{MLP}^{(edge-dep)}(s_t), \\; s^{(label-head)}_t = \\mathrm{MLP}^{(label-head)}(s_t), \\; s^{(label-dep)}_t = \\mathrm{MLP}^{(label-dep)}(s_t), \\; \\mathrm{score}^{(edge)}_{k,t} = \\mathrm{Biaffine}(s^{(edge-head)}_k, s^{(edge-dep)}_t)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep Biaffine Classifier", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\\mathrm{score}^{(label)}_{k,t} = \\mathrm{Bilinear}(s^{(label-head)}_k, s^{(label-dep)}_t), where MLP, Biaffine and Bilinear are defined as below:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep Biaffine Classifier", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\\mathrm{MLP}(x) = \\mathrm{ELU}(Wx + b), \\; \\mathrm{Biaffine}(x_1, x_2) = x_1^{\\top} U x_2 + W[x_1; x_2] + b, \\; \\mathrm{Bilinear}(x_1, x_2) = x_1^{\\top} U x_2 + b", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep Biaffine Classifier", |
| "sec_num": "4.2" |
| }, |
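The Biaffine scoring function defined above can be sketched with plain-list linear algebra; this is an illustrative sketch under the stated shapes, not the authors' implementation:

```python
def biaffine(x1, x2, U, W, b):
    """Biaffine(x1, x2) = x1^T U x2 + W [x1; x2] + b.
    U is len(x1) x len(x2), W has length len(x1) + len(x2), b is a scalar."""
    # bilinear term x1^T U x2
    bilinear = sum(x1[i] * sum(U[i][j] * x2[j] for j in range(len(x2)))
                   for i in range(len(x1)))
    # linear term W [x1; x2] on the concatenated vectors
    linear = sum(w, ) if False else sum(w * v for w, v in zip(W, x1 + x2))
    return bilinear + linear + b
```

Scoring every head/dependent pair with this function yields the m x m edge score matrix that the softmax over heads operates on.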
| { |
| "text": "Given a node u_t, the probability of u_k being the edge head of u_t is defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep Biaffine Classifier", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "P^{(head)}_t(u_k) = \\frac{\\exp(\\mathrm{score}^{(edge)}_{k,t})}{\\sum_{j=1}^{m} \\exp(\\mathrm{score}^{(edge)}_{j,t})}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep Biaffine Classifier", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The edge label probability for edge (u_k, u_t) is defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep Biaffine Classifier", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "P^{(label)}_{k,t}(l) = \\frac{\\exp(\\mathrm{score}^{(label)}_{k,t}[l])}{\\sum_{l'} \\exp(\\mathrm{score}^{(label)}_{k,t}[l'])}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Deep Biaffine Classifier", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The training objective is to jointly minimize the loss of reference nodes and edges, which can be decomposed into the sum of the negative log likelihood at each time step t for (1) the reference node u_t, (2) the reference edge head u_k of node u_t, and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "(3) the reference edge label l between u k and u t :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\\mathrm{minimize} \\; -\\sum_{t=1}^{m} [\\log P^{(node)}(u_t) + \\log P^{(head)}_t(u_k) + \\log P^{(label)}_{k,t}(l) + \\lambda \\, covloss_t]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "covloss_t is a coverage loss to penalize repetitive nodes:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "covloss_t = \\sum_i \\min(a^t_{src}[i], cov_t[i])", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4.3" |
| }, |
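The coverage loss above can be sketched as follows, operating on a toy history of per-step source attention vectors:

```python
def coverage_loss(src_attn_history):
    """Coverage loss from Section 4.3: at each step t, sum over source
    positions i of min(a^t_src[i], cov_t[i]), where cov_t accumulates the
    source attention of all previous steps. Returns the per-step losses."""
    n = len(src_attn_history[0])
    cov = [0.0] * n
    losses = []
    for a in src_attn_history:
        losses.append(sum(min(ai, ci) for ai, ci in zip(a, cov)))
        cov = [ai + ci for ai, ci in zip(a, cov)]   # cov_{t+1} = cov_t + a^t
    return losses
```

Re-attending to an already-covered source position is penalized, discouraging repetitive node predictions.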
| { |
| "text": "where cov_t is the sum of source attention distributions over all previous decoding time steps (see See et al. (2017) for full details):", |
| "cite_spans": [ |
| { |
| "start": 98, |
| "end": 119, |
| "text": "See See et al. (2017)", |
| "ref_id": "BIBREF56" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "cov_t = \\sum_{t'=0}^{t-1} a^{t'}_{src}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "For node prediction, based on the final probability distribution P^{(node)}(u_t) at each decoding time step, we implement both greedy search and beam search to sequentially decode a node list u and indices d.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prediction", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "For edge prediction, given the predicted node list u, their indices d, and the edge scores S = {score^{(edge)}_{i,j} | 0 \u2264 i, j \u2264 m}, we apply the Chu-Liu-Edmonds algorithm with a simple adaptation to find the maximum spanning tree (MST). As described in Algorithm 1, before calling the Chu-Liu-Edmonds algorithm, we first include a dummy root u_0 to ensure that every node has a head, and then exclude edges whose source and destination nodes have the same indices, because these nodes will be merged into a single node to recover the standard AMR graph, where self-loops are invalid. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prediction", |
| "sec_num": "4.4" |
| }, |
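The pre-filtering step described above can be sketched as follows; for brevity this sketch falls back to greedy head selection rather than a full Chu-Liu-Edmonds maximum-spanning-tree search, so it illustrates only the dummy-root and same-index exclusions:

```python
def prune_and_pick_heads(scores, indices):
    """Node 0 is a dummy root so every node has a head; edges between nodes
    sharing an index (which will later be merged, making the edge a
    self-loop) are excluded. scores[k][t] scores head k for node t;
    indices[t] is the deterministic index d_t (indices[0] is unused)."""
    m = len(scores) - 1          # nodes 1..m; 0 is the dummy root
    heads = [None]               # the dummy root has no head
    for t in range(1, m + 1):
        cands = [k for k in range(m + 1)
                 if k != t and (k == 0 or indices[k] != indices[t])]
        heads.append(max(cands, key=lambda k: scores[k][t]))
    return heads
```

A production decoder would run Chu-Liu-Edmonds over the same pruned candidate set to guarantee a tree.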
| { |
| "text": "AMR parsing approaches can be categorized into alignment-based, transition-based, grammar-based, and attention-based approaches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Alignment-based approaches were first explored by JAMR (Flanigan et al., 2014) , a pipeline of concept and relation identification with a graph-based algorithm. Later work improved on this by jointly learning concept and relation identification with an incremental model. Both approaches rely on features based on alignments. Lyu and Titov (2018) treated alignments as latent variables in a joint probabilistic model, leading to a substantial reported improvement. Our approach requires no explicit alignments, but implicitly learns a source-side copy mechanism using attention.", |
| "cite_spans": [ |
| { |
| "start": 55, |
| "end": 78, |
| "text": "(Flanigan et al., 2014)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 311, |
| "end": 331, |
| "text": "Lyu and Titov (2018)", |
| "ref_id": "BIBREF38" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Transition-based approaches began with Wang et al. (2015, 2016) , who incrementally transform dependency parses into AMRs using transition-based models. This was followed by a line of research, such as Puzikov et al. (2016) and Groschwitz et al. (2018) . A pre-trained aligner, e.g. Pourdamghani et al. (2014) or Liu et al. (2018) , is needed by most parsers to generate training data (e.g., oracles for a transition-based parser). Our approach makes no significant use of external semantic resources, 3 and is aligner-free.", |
| "cite_spans": [ |
| { |
| "start": 39, |
| "end": 56, |
| "text": "Wang et al. (2015", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 57, |
| "end": 77, |
| "text": "Wang et al. ( , 2016", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 241, |
| "end": 265, |
| "text": "Groschwitz et al. (2018)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 296, |
| "end": 322, |
| "text": "Pourdamghani et al. (2014)", |
| "ref_id": "BIBREF52" |
| }, |
| { |
| "start": 325, |
| "end": 342, |
| "text": "Liu et al. (2018)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Grammar-based approaches are represented by Artzi et al. (2015) and Peng et al. (2015) , who leveraged external semantic resources and employed CCG-based or SHRG-based grammar induction approaches, converting logical forms into AMRs. Pust et al. (2015) recast AMR parsing as a machine translation problem, while also drawing features from external semantic resources.", |
| "cite_spans": [ |
| { |
| "start": 44, |
| "end": 63, |
| "text": "Artzi et al. (2015)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 66, |
| "end": 84, |
| "text": "Peng et al. (2015)", |
| "ref_id": "BIBREF49" |
| }, |
| { |
| "start": 230, |
| "end": 248, |
| "text": "Pust et al. (2015)", |
| "ref_id": "BIBREF53" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Attention-based parsing with Seq2Seq-style models has been considered (Barzdins and Gosko, 2016; Peng et al., 2017b) , but is limited by the relatively small amount of labeled AMR data. Konstas et al. (2017) overcame this by making use of millions of unlabeled examples through self-training, while van Noord and Bos (2017b) showed significant gains via a character-level Seq2Seq model and a large amount of silver-standard AMR training data. In contrast, our approach, supported by the extended pointer-generator network, can be effectively trained on the limited amount of labeled AMR data, with no data augmentation.", |
| "cite_spans": [ |
| { |
| "start": 71, |
| "end": 97, |
| "text": "(Barzdins and Gosko, 2016;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 98, |
| "end": 117, |
| "text": "Peng et al., 2017b)", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 188, |
| "end": 209, |
| "text": "Konstas et al. (2017)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Anonymization is often used in AMR preprocessing to reduce sparsity (Werling et al., 2015; Peng et al., 2017b; Guo and Lu, 2018, inter alia) . Similar to Konstas et al. (2017) , we anonymize sub-graphs of named entities and other entities. Like Lyu and Titov (2018) , we remove senses, and use Stanford CoreNLP to lemmatize input sentences and add POS tags.", |
| "cite_spans": [ |
| { |
| "start": 68, |
| "end": 90, |
| "text": "(Werling et al., 2015;", |
| "ref_id": "BIBREF62" |
| }, |
| { |
| "start": 91, |
| "end": 110, |
| "text": "Peng et al., 2017b;", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 111, |
| "end": 140, |
| "text": "Guo and Lu, 2018, inter alia)", |
| "ref_id": null |
| }, |
| { |
| "start": 154, |
| "end": 175, |
| "text": "Konstas et al. (2017)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 245, |
| "end": 265, |
| "text": "Lyu and Titov (2018)", |
| "ref_id": "BIBREF38" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "AMR Pre-and Post-processing", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In post-processing, we assign the most frequent sense for nodes (-01, if unseen) like Lyu and Titov (2018) , and restore wiki links using the DBpedia Spotlight API (Daiber et al., 2013) following Bjerva et al. (2016) ; van Noord and Bos (2017b). We add polarity attributes based on the rules observed from the training data. More details of pre- and post-processing are provided in the Appendix. We conduct experiments on two AMR general releases (available to all LDC subscribers): AMR 2.0 (LDC2017T10) and AMR 1.0 (LDC2014T12). Our model is trained using ADAM (Kingma and Ba, 2014) for up to 120 epochs, with early stopping based on the development set. Full model training takes about 19 hours on AMR 2.0 and 7 hours on AMR 1.0, using two GeForce GTX TITAN X GPUs. During training, we fix BERT parameters due to the limited GPU memory; we leave fine-tuning BERT for future work. Table 1 lists the hyper-parameters used in our full model. Both the encoder and decoder embedding layers have GloVe and POS tag embeddings as well as CharCNN, but their parameters are not tied. We apply dropout (dropout rate = 0.33) to the outputs of each module.", |
| "cite_spans": [ |
| { |
| "start": 86, |
| "end": 106, |
| "text": "Lyu and Titov (2018)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 164, |
| "end": 185, |
| "text": "(Daiber et al., 2013)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 196, |
| "end": 216, |
| "text": "Bjerva et al. (2016)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 885, |
| "end": 892, |
| "text": "Table 1", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "AMR Pre-and Post-processing", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Table 2 (SMATCH F1 on test sets). AMR 2.0: Buys and Blunsom (2017) 61.9; van Noord and Bos (2017b) 71.0*; Groschwitz et al. (2018) 71.0\u00b10.5; Lyu and Titov (2018) 74.4\u00b10.2; Naseem et al. (2019) 75.5;", |
| "cite_spans": [ |
| { |
| "start": 28, |
| "end": 51, |
| "text": "Buys and Blunsom (2017)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 88, |
| "end": 114, |
| "text": "* Groschwitz et al. (2018)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 124, |
| "end": 144, |
| "text": "Lyu and Titov (2018)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 154, |
| "end": 174, |
| "text": "Naseem et al. (2019)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "Ours 76.3\u00b10.1. AMR 1.0: Flanigan et al. (2016) 66.0; Pust et al. (2015) 67.1; Wang and Xue (2017) 68.1; Guo and Lu (2018) 68.3\u00b10.4;", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 44, |
| "text": "Flanigan et al. (2016)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 50, |
| "end": 68, |
| "text": "Pust et al. (2015)", |
| "ref_id": "BIBREF53" |
| }, |
| { |
| "start": 74, |
| "end": 93, |
| "text": "Wang and Xue (2017)", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 99, |
| "end": 116, |
| "text": "Guo and Lu (2018)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "Ours 70.2\u00b10.1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "We compare our approach against the previous best approaches and several recent competitors. Table 2 summarizes their SMATCH scores on the test sets of two AMR general releases. On AMR 2.0, we outperform the latest push from Naseem et al. (2019) by 0.8% F1, and significantly improve on Lyu and Titov (2018) 's results by 1.9% F1. Compared to the previous best attention-based approach (van Noord and Bos, 2017b) , our approach shows a substantial gain of 5.3% F1, with no usage of any silver-standard training data. On AMR 1.0, where there are only around 10k training instances, we improve the best reported results by 1.9% F1. Fine-grained Results In Table 3 , we assess the quality of each subtask using the AMR-evaluation tools (Damonte et al., 2017) . We see a notable increase on reentrancies, which we attribute to target-side copy (based on our ablation studies in the next section). Significant increases are also shown on wikification and negation, indicating the benefits of using the DBpedia Spotlight API and negation detection rules in post-processing. On all other subtasks except named entities, our approach achieves results competitive with the previous best approaches (Lyu and Titov, 2018; Naseem et al., 2019) , and outperforms the previous best attention-based approach (van Noord and Bos, 2017b) . The difference in scores on named entities is mainly caused by the anonymization methods used in preprocessing, which suggests a potential improvement by adapting the anonymization method presented in Lyu and Titov (2018) . Our model without BERT embeddings remains competitive with the previous best approaches (Lyu and Titov, 2018; Guo and Lu, 2018) that do not use BERT. Beam search, commonly used in machine translation, is also helpful in our model. We provide side-by-side examples in the Appendix to further illustrate the contribution from each component, which are largely intuitive, with the exception of BERT embeddings. There, the exact contribution of the component stands out less in qualitative before/after ablation comparisons: future work might consider a probing analysis with manually constructed examples, in the spirit of Linzen et al. (2016) ; Conneau et al. (2018) ; Tenney et al. (2019) . In the last row, we only evaluate model performance at the edge prediction stage by forcing our model to decode the reference nodes at the node prediction stage. These results suggest that if our model made perfect predictions at the node prediction stage, the final SMATCH score would be substantially higher, which identifies node prediction as the key to future improvement of our model. We compute the frequency, precision, and recall of nodes from each source z as below:", |
| "cite_spans": [ |
| { |
| "start": 225, |
| "end": 245, |
| "text": "Naseem et al. (2019)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 285, |
| "end": 305, |
| "text": "Lyu and Titov (2018)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 384, |
| "end": 410, |
| "text": "(van Noord and Bos, 2017b)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 729, |
| "end": 751, |
| "text": "(Damonte et al., 2017)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 1179, |
| "end": 1200, |
| "text": "(Lyu and Titov, 2018;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 1201, |
| "end": 1221, |
| "text": "Naseem et al., 2019)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 1283, |
| "end": 1309, |
| "text": "(van Noord and Bos, 2017b)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 1509, |
| "end": 1529, |
| "text": "Lyu and Titov (2018)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 1530, |
| "end": 1551, |
| "text": "(Lyu and Titov, 2018;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 1552, |
| "end": 1569, |
| "text": "Guo and Lu, 2018)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 2052, |
| "end": 2072, |
| "text": "Linzen et al. (2016)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 2075, |
| "end": 2096, |
| "text": "Conneau et al. (2018)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 2099, |
| "end": 2119, |
| "text": "Tenney et al. (2019)", |
| "ref_id": "BIBREF57" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 93, |
| "end": 100, |
| "text": "Table 2", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 650, |
| "end": 657, |
| "text": "Table 3", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Main Results", |
| "sec_num": null |
| }, |
| { |
| "text": "frequency^{(z)} = \\frac{|N^{(z)}_{ref}|}{\\sum_z |N^{(z)}_{ref}|}, \\quad precision^{(z)} = \\frac{|N^{(z)}_{ref} \\cap N^{(z)}_{sys}|}{|N^{(z)}_{sys}|}, \\quad recall^{(z)} = \\frac{|N^{(z)}_{ref} \\cap N^{(z)}_{sys}|}{|N^{(z)}_{ref}|}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Main Results", |
| "sec_num": null |
| }, |
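Under the assumption that the reference and system nodes are given as per-source multisets (lists keyed by source name), the three statistics above can be computed as:

```python
from collections import Counter

def source_stats(ref_nodes, sys_nodes):
    """Frequency, precision, and recall per node source z (vocabulary
    generation, source-side copy, target-side copy), following the formulas
    above. Multiset intersection via Counter's & operator."""
    total_ref = sum(len(v) for v in ref_nodes.values())
    stats = {}
    for z in ref_nodes:
        ref = Counter(ref_nodes[z])
        sys = Counter(sys_nodes.get(z, []))
        overlap = sum((ref & sys).values())   # |N_ref ∩ N_sys| as multisets
        stats[z] = {
            "frequency": len(ref_nodes[z]) / total_ref,
            "precision": overlap / max(sum(sys.values()), 1),
            "recall": overlap / max(len(ref_nodes[z]), 1),
        }
    return stats
```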
| { |
| "text": "Figure 6 shows the frequency of nodes from different sources, and their corresponding precision and recall based on our model prediction. Among all reference nodes, 43.8% are from vocabulary generation, 47.6% from source-side copy, and only 8.6% from target-side copy. On one hand, the highest frequency of source-side copy helps address sparsity and results in the highest precision and recall. On the other hand, we see space for improvement, especially on the relatively low recall of target-side copy, which is probably due to its low frequency. Node Linearization As described in Section 3, we create the reference node list by a pre-order traversal over the gold AMR tree. As for the children of each node, we sort them in alphanumerical order. This linearization strategy has two advantages: (1) pre-order traversal guarantees that a head node (predicate) always comes in front of its children (arguments); (2) alphanumerical sorting orders children according to role ID (i.e., ARG0>ARG1>...>ARGn), following intuition from research on Thematic Hierarchies (Fillmore, 1968; Levin and Hovav, 2005) . In Table 5 , we report SMATCH scores of full models trained and tested on data generated via our linearization strategy (Pre-order + Alphanum), as compared to two obvious alternates: the first alternate still runs a pre-order traversal, but it sorts the children of each node based on their alignments to input words; the second one linearizes nodes purely based on alignments. Alignments are created using the tool by Pourdamghani et al. (2014) . Clearly, our linearization strategy leads to much better results than the two alternates. We also tried other traversal strategies such as combining in-order traversal with alphanumerical sorting or alignment-based sorting, but did not get scores even comparable to the two alternates. 5 Average Pooling vs. Max Pooling In Figure 4 , we apply average pooling to the outputs (last-layer hidden states) of BERT in order to generate word-level embeddings for the input sentence. Table 6 shows scores of models using different pooling functions. Average pooling performs slightly better than max pooling. ", |
| "cite_spans": [ |
| { |
| "start": 1057, |
| "end": 1073, |
| "text": "(Fillmore, 1968;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1074, |
| "end": 1095, |
| "text": "Levin and Hovav, 2005", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 1516, |
| "end": 1542, |
| "text": "Pourdamghani et al. (2014)", |
| "ref_id": "BIBREF52" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 6, |
| "end": 14, |
| "text": "Figure 6", |
| "ref_id": "FIGREF9" |
| }, |
| { |
| "start": 1099, |
| "end": 1106, |
| "text": "Table 5", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 1868, |
| "end": 1876, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 2020, |
| "end": 2027, |
| "text": "Table 6", |
| "ref_id": "TABREF11" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Main Results", |
| "sec_num": null |
| }, |
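The Pre-order + Alphanum linearization described above can be sketched as a short recursion. This is an illustrative sketch, not the released code; `children` is an assumed mapping from a node to its (relation, child) pairs in a gold AMR tree, and sorting the pairs lexicographically by relation label yields ARG0 before ARG1, and so on.

```python
def linearize(node, children):
    """Reference node list via pre-order traversal, with the children of
    each node sorted alphanumerically by relation label (so the head
    predicate precedes its arguments, ordered ARG0 < ARG1 < ... < ARGn)."""
    order = [node]
    for _, child in sorted(children.get(node, [])):
        order.extend(linearize(child, children))
    return order
```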
| { |
| "text": "We proposed an attention-based model for AMR parsing where we introduced a series of novel components into a transductive setting that extend beyond what a typical NMT system would do on this task. Our model achieves the best performance on two AMR corpora. For future work, we would like to extend our model to other semantic parsing tasks (Oepen et al., 2014; Abend and Rappoport, 2013) . We are also interested in semantic parsing in cross-lingual settings (Zhang et al., 2018; Damonte and Cohen, 2018) . Figure 7 : An example AMR and the corresponding sentence before and after preprocessing. Senses are removed. The first named entity is replaced by \"HIGHWAY 0\"; the second named entity is replaced by \"COUNTRY REGION 0\"; the first date entity is replaced by \"DATE 0\".", |
| "cite_spans": [ |
| { |
| "start": 341, |
| "end": 361, |
| "text": "(Oepen et al., 2014;", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 362, |
| "end": 388, |
| "text": "Abend and Rappoport, 2013)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 460, |
| "end": 480, |
| "text": "(Zhang et al., 2018;", |
| "ref_id": "BIBREF63" |
| }, |
| { |
| "start": 481, |
| "end": 505, |
| "text": "Damonte and Cohen, 2018)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 508, |
| "end": 516, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "In the next page, we provide examples from the test set, with side-by-side comparisons between the full model prediction and the model prediction after ablation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A.2 Side-by-Side Examples", |
| "sec_num": null |
| }, |
| { |
| "text": "Limited by GPU memory, we do not fine-tune BERT on this task, and leave it for future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We use POS tags only in the core parsing task. In post-processing, we use an entity linker for wikification, as is common practice, following van Noord and Bos (2017b).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "All other hyper-parameter settings remain the same.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "van Noord and Bos (2017b) also investigated linearization order, and found that alignment-based ordering yielded the best results under their setup where AMR parsing is treated as a sequence-to-sequence learning problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/goodmami/penman/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the anonymous reviewers for their valuable feedback. This work was supported in part by the JHU Human Language Technology Center of Excellence (HLTCOE), and DARPA LORELEI and AIDA. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| }, |
| { |
| "text": "A.1 AMR Pre- and Post-processing First, we run Stanford CoreNLP, as in Lyu and Titov (2018), lemmatizing input sentences and adding POS tags to each token. Second, we remove senses, wiki links, and polarity attributes in AMR. Third, we anonymize sub-graphs of named entities and *-entity in a way similar to Konstas et al. (2017). Figure 7 shows an example before and after preprocessing. Sub-graphs of named entities are headed by one of AMR's fine-grained entity types (e.g., highway, country region in Figure 7) that contain a :name role. Sub-graphs of other entities are headed by their corresponding entity type name (e.g., date-entity in Figure 7). We replace these sub-graphs with a token of a special pattern \"TYPE i\" (e.g., HIGHWAY 0, DATE 0 in Figure 7), where \"TYPE\" indicates the AMR entity type of the corresponding sub-graph, and \"i\" indicates that it is the i-th occurrence of that type. On the training set, we use simple rules to find mappings between anonymized sub-graphs and spans of text, and then replace mapped text with the anonymized token we inserted into the AMR graph. Additionally, we build a mapping from Stanford CoreNLP NER tags to AMR's fine-grained types based on the training set, which is used at prediction time. At test time, we normalize sentences to match our anonymized training data. For any entity span identified by Stanford CoreNLP, we replace it with an AMR entity type based on the mapping built during training. If no entry is found in the mapping, we replace entity spans with the coarse-grained NER tags from Stanford CoreNLP, which are also entity types in AMR. In post-processing, we deterministically generate AMR sub-graphs for anonymizations using the corresponding text span. We assign the most frequent sense to nodes (-01, if unseen), following Lyu and Titov (2018). We add wiki links to named entities using the DBpedia Spotlight API (Daiber et al., 2013), following Bjerva et al. (2016); van Noord and Bos (2017b), with the confidence threshold set to 0.5. We add polarity attributes based on Algorithm 2, where the four functions isNegation, modifiedWord, mappedNode, and addPolarity consist of simple rules observed from the training set. We use the PENMANCodec to encode and decode both intermediate and final AMRs. Algorithm 2: Adding polarity attributes to AMR. Input: Sent. w = w1, ..., wn, Predicted AMR A. Output: AMR with polarity attributes. for wi \u2208 w do if isNegation(wi) then wj \u2190 modifiedWord(wi, w); u_k \u2190 mappedNode(wj, A); A \u2190 addPolarity(u_k, A); end end return A;", |
| "cite_spans": [ |
| { |
| "start": 74, |
| "end": 94, |
| "text": "Lyu and Titov (2018)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 316, |
| "end": 337, |
| "text": "Konstas et al. (2017)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 1805, |
| "end": 1825, |
| "text": "Lyu and Titov (2018)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 1896, |
| "end": 1917, |
| "text": "(Daiber et al., 2013)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1928, |
| "end": 1948, |
| "text": "Bjerva et al. (2016)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 340, |
| "end": 348, |
| "text": "Figure 7", |
| "ref_id": null |
| }, |
| { |
| "start": 513, |
| "end": 521, |
| "text": "Figure 7", |
| "ref_id": null |
| }, |
| { |
| "start": 653, |
| "end": 661, |
| "text": "Figure 7", |
| "ref_id": null |
| }, |
| { |
| "start": 763, |
| "end": 772, |
| "text": "Figure 7)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Appendices", |
| "sec_num": null |
| } |
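The structure of Algorithm 2 can be sketched as below. This is only a skeleton under stated assumptions: the four rule-based helpers (isNegation, modifiedWord, mappedNode, addPolarity in the paper) are caller-supplied here, and the AMR is represented as a plain dict for illustration.

```python
def attach_polarity(words, amr, is_negation, modified_word, mapped_node, add_polarity):
    """Skeleton of Algorithm 2: for each negation word, find the word it
    modifies, map that word to an AMR node, and attach :polarity - to it.
    All four helpers are assumed simple rule-based functions."""
    for i, w in enumerate(words):
        if is_negation(w):
            j = modified_word(i, words)        # index of the word w negates
            node = mapped_node(words[j], amr)  # aligned AMR node, or None
            if node is not None:
                amr = add_polarity(node, amr)  # add (node, :polarity, -)
    return amr
```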
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Universal conceptual cognitive annotation (ucca)", |
| "authors": [ |
| { |
| "first": "Omri", |
| "middle": [], |
| "last": "Abend", |
| "suffix": "" |
| }, |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Rappoport", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "228--238", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omri Abend and Ari Rappoport. 2013. Universal con- ceptual cognitive annotation (ucca). In Proceed- ings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 228-238. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Broad-coverage ccg semantic parsing with amr", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Artzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1699--1710", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D15-1198" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage ccg semantic parsing with amr. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1699-1710. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Neural machine translation by jointly learning to align and translate", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1409.0473" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "AMR parsing using stack-LSTMs", |
| "authors": [ |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Yaser", |
| "middle": [], |
| "last": "Al-Onaizan", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1269--1275", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D17-1130" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miguel Ballesteros and Yaser Al-Onaizan. 2017. AMR parsing using stack-LSTMs. In Proceedings of the 2017 Conference on Empirical Methods in Natu- ral Language Processing, pages 1269-1275, Copen- hagen, Denmark. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Abstract meaning representation for sembanking", |
| "authors": [ |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Banarescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Bonial", |
| "suffix": "" |
| }, |
| { |
| "first": "Shu", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| }, |
| { |
| "first": "Madalina", |
| "middle": [], |
| "last": "Georgescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kira", |
| "middle": [], |
| "last": "Griffitt", |
| "suffix": "" |
| }, |
| { |
| "first": "Ulf", |
| "middle": [], |
| "last": "Hermjakob", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", |
| "volume": "", |
| "issue": "", |
| "pages": "178--186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguis- tic Annotation Workshop and Interoperability with Discourse, pages 178-186. Association for Compu- tational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Riga at semeval-2016 task 8: Impact of smatch extensions and character-level neural translation on amr parsing accuracy", |
| "authors": [ |
| { |
| "first": "Guntis", |
| "middle": [], |
| "last": "Barzdins", |
| "suffix": "" |
| }, |
| { |
| "first": "Didzis", |
| "middle": [], |
| "last": "Gosko", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "1143--1147", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/S16-1176" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guntis Barzdins and Didzis Gosko. 2016. Riga at semeval-2016 task 8: Impact of smatch extensions and character-level neural translation on amr pars- ing accuracy. In Proceedings of the 10th Interna- tional Workshop on Semantic Evaluation (SemEval- 2016), pages 1143-1147. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The meaning factory at semeval-2016 task 8: Producing amrs with boxer", |
| "authors": [ |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Bjerva", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Bos", |
| "suffix": "" |
| }, |
| { |
| "first": "Hessel", |
| "middle": [], |
| "last": "Haagsma", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "1179--1184", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/S16-1182" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johannes Bjerva, Johan Bos, and Hessel Haagsma. 2016. The meaning factory at semeval-2016 task 8: Producing amrs with boxer. In Proceedings of the 10th International Workshop on Semantic Eval- uation (SemEval-2016), pages 1179-1184. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Icl-hd at semeval-2016 task 8: Meaning representation parsing -augmenting amr parsing with a preposition semantic role labeling neural network", |
| "authors": [ |
| { |
| "first": "Lauritz", |
| "middle": [], |
| "last": "Brandt", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Grimm", |
| "suffix": "" |
| }, |
| { |
| "first": "Mengfei", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Yannick", |
| "middle": [], |
| "last": "Versley", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "1160--1166", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/S16-1179" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lauritz Brandt, David Grimm, Mengfei Zhou, and Yan- nick Versley. 2016. Icl-hd at semeval-2016 task 8: Meaning representation parsing -augmenting amr parsing with a preposition semantic role labeling neural network. In Proceedings of the 10th Interna- tional Workshop on Semantic Evaluation (SemEval- 2016), pages 1160-1166. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Oxford at semeval-2017 task 9: Neural amr parsing with pointeraugmented attention", |
| "authors": [ |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Buys", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)", |
| "volume": "", |
| "issue": "", |
| "pages": "914--919", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/S17-2157" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jan Buys and Phil Blunsom. 2017. Oxford at semeval- 2017 task 9: Neural amr parsing with pointer- augmented attention. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 914-919. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Smatch: an evaluation metric for semantic feature structures", |
| "authors": [ |
| { |
| "first": "Shu", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "748--752", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shu Cai and Kevin Knight. 2013. Smatch: an evalua- tion metric for semantic feature structures. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 748-752. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Parsing graphs with hyperedge replacement grammars", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Andreas", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Karl", |
| "middle": [ |
| "Moritz" |
| ], |
| "last": "Hermann", |
| "suffix": "" |
| }, |
| { |
| "first": "Bevan", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "924--932", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Chiang, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, Bevan Jones, and Kevin Knight. 2013. Parsing graphs with hyperedge replacement grammars. In Proceedings of the 51st Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 924-932. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "On the shortest arborescence of a directed graph", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [ |
| "J" |
| ], |
| "last": "Chu", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [ |
| "H" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 1965, |
| "venue": "Science Sinica", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. J. Chu and T. H. Liu. 1965. On the shortest arbores- cence of a directed graph. Science Sinica, 14.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Germ\u00e1n", |
| "middle": [], |
| "last": "Kruszewski", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Lample", |
| "suffix": "" |
| }, |
| { |
| "first": "Lo\u00efc", |
| "middle": [], |
| "last": "Barrault", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "2126--2136", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Conneau, Germ\u00e1n Kruszewski, Guillaume Lample, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic proper- ties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2126-2136. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Improving efficiency and accuracy in multilingual entity extraction", |
| "authors": [ |
| { |
| "first": "Joachim", |
| "middle": [], |
| "last": "Daiber", |
| "suffix": "" |
| }, |
| { |
| "first": "Max", |
| "middle": [], |
| "last": "Jakob", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Hokamp", |
| "suffix": "" |
| }, |
| { |
| "first": "Pablo", |
| "middle": [ |
| "N" |
| ], |
| "last": "Mendes", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 9th International Conference on Semantic Systems (I-Semantics)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joachim Daiber, Max Jakob, Chris Hokamp, and Pablo N. Mendes. 2013. Improving efficiency and accuracy in multilingual entity extraction. In Pro- ceedings of the 9th International Conference on Se- mantic Systems (I-Semantics).", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Crosslingual abstract meaning representation parsing", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Damonte", |
| "suffix": "" |
| }, |
| { |
| "first": "Shay", |
| "middle": [ |
| "B" |
| ], |
| "last": "Cohen", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "1146--1155", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N18-1104" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Damonte and Shay B. Cohen. 2018. Cross- lingual abstract meaning representation parsing. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1146-1155, New Orleans, Louisiana. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "An incremental parser for abstract meaning representation", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Damonte", |
| "suffix": "" |
| }, |
| { |
| "first": "Shay", |
| "middle": [ |
| "B" |
| ], |
| "last": "Cohen", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 15th Conference of the European Chapter", |
| "volume": "1", |
| "issue": "", |
| "pages": "536--546", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for abstract meaning representation. In Proceedings of the 15th Confer- ence of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 536-546. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1810.04805" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Deep biaffine attention for neural dependency parsing", |
| "authors": [ |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Dozat", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1611.01734" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency pars- ing. arXiv preprint arXiv:1611.01734.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Simpler but more accurate semantic dependency parsing", |
| "authors": [ |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Dozat", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "484--490", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 484-490. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Optimum branchings. Mathematics and the Decision Sciences, Part", |
| "authors": [], |
| "year": 1968, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jack Edmonds. 1968. Optimum branchings. Math- ematics and the Decision Sciences, Part, 1(335- 345):26.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "The case for case", |
| "authors": [ |
| { |
| "first": "Charles", |
| "middle": [ |
| "J" |
| ], |
| "last": "Fillmore", |
| "suffix": "" |
| } |
| ], |
| "year": 1968, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Charles J. Fillmore. 1968. The case for case. Holt, Rinehart & Winston, New York.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Cmu at semeval-2016 task 8: Graph-based amr parsing with infinite ramp loss", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Flanigan", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaime", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "1202--1206", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/S16-1186" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016. Cmu at semeval-2016 task 8: Graph-based amr parsing with infinite ramp loss. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1202-1206. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "A discriminative graph-based parser for the abstract meaning representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Flanigan", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Thomson", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaime", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1426--1436", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/P14-1134" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discrim- inative graph-based parser for the abstract mean- ing representation. In Proceedings of the 52nd An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426- 1436. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Abstract meaning representation parsing using lstm recurrent neural networks", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Foland", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "H" |
| ], |
| "last": "Martin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "463--472", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P17-1043" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "William Foland and James H. Martin. 2017. Abstract meaning representation parsing using lstm recurrent neural networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 463-472. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "UCL+Sheffield at SemEval-2016 task 8: Imitation learning for AMR parsing with an alpha-bound", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| }, |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Vlachos", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Naradowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "1167--1172", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/S16-1180" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Goodman, Andreas Vlachos, and Jason Narad- owsky. 2016. Ucl+sheffield at semeval-2016 task 8: Imitation learning for amr parsing with an alpha- bound. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1167-1172. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "AMR dependency parsing with a typed semantic algebra", |
| "authors": [ |
| { |
| "first": "Jonas", |
| "middle": [], |
| "last": "Groschwitz", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthias", |
| "middle": [], |
| "last": "Lindemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Meaghan", |
| "middle": [], |
| "last": "Fowlie", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Koller", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1831--1841", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonas Groschwitz, Matthias Lindemann, Meaghan Fowlie, Mark Johnson, and Alexander Koller. 2018. Amr dependency parsing with a typed semantic al- gebra. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1831-1841. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Incorporating copying mechanism in sequence-to-sequence learning", |
| "authors": [ |
| { |
| "first": "Jiatao", |
| "middle": [], |
| "last": "Gu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhengdong", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [ |
| "O", |
| "K" |
| ], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1631--1640", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P16-1154" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1631-1640. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Pointing the unknown words", |
| "authors": [ |
| { |
| "first": "Caglar", |
| "middle": [], |
| "last": "Gulcehre", |
| "suffix": "" |
| }, |
| { |
| "first": "Sungjin", |
| "middle": [], |
| "last": "Ahn", |
| "suffix": "" |
| }, |
| { |
| "first": "Ramesh", |
| "middle": [], |
| "last": "Nallapati", |
| "suffix": "" |
| }, |
| { |
| "first": "Bowen", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "140--149", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P16-1014" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 140- 149. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Better transition-based AMR parsing with a refined search space", |
| "authors": [ |
| { |
| "first": "Zhijiang", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1712--1722", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhijiang Guo and Wei Lu. 2018. Better transition- based amr parsing with a refined search space. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1712-1722. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Character-aware neural language models", |
| "authors": [ |
| { |
| "first": "Yoon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Yacine", |
| "middle": [], |
| "last": "Jernite", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Sontag", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "M" |
| ], |
| "last": "Rush", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16", |
| "volume": "", |
| "issue": "", |
| "pages": "2741--2749", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M. Rush. 2016. Character-aware neural lan- guage models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 2741-2749. AAAI Press.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [ |
| "P" |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1412.6980" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Simple and accurate dependency parsing using bidirectional lstm feature representations", |
| "authors": [ |
| { |
| "first": "Eliyahu", |
| "middle": [], |
| "last": "Kiperwasser", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "4", |
| "issue": "", |
| "pages": "313--327", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Sim- ple and accurate dependency parsing using bidirec- tional lstm feature representations. Transactions of the Association for Computational Linguistics, 4:313-327.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Neural AMR: Sequence-to-sequence models for parsing and generation", |
| "authors": [ |
| { |
| "first": "Ioannis", |
| "middle": [], |
| "last": "Konstas", |
| "suffix": "" |
| }, |
| { |
| "first": "Srinivasan", |
| "middle": [], |
| "last": "Iyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Yatskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Yejin", |
| "middle": [], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "146--157", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P17-1014" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural amr: Sequence-to-sequence models for parsing and gen- eration. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 146-157. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Argument realization", |
| "authors": [ |
| { |
| "first": "Beth", |
| "middle": [], |
| "last": "Levin", |
| "suffix": "" |
| }, |
| { |
| "first": "Malka", |
| "middle": [ |
| "Rappaport" |
| ], |
| "last": "Hovav", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Beth Levin and Malka Rappaport Hovav. 2005. Argu- ment realization. Cambridge University Press.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies", |
| "authors": [ |
| { |
| "first": "Tal", |
| "middle": [], |
| "last": "Linzen", |
| "suffix": "" |
| }, |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "4", |
| "issue": "", |
| "pages": "521--535", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521- 535.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "An AMR aligner tuned by transition-based parser", |
| "authors": [ |
| { |
| "first": "Yijia", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Wanxiang", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "Bo", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Qin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ting", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2422--2430", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yijia Liu, Wanxiang Che, Bo Zheng, Bing Qin, and Ting Liu. 2018. An AMR aligner tuned by transition-based parser. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2422-2430, Brussels, Bel- gium. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Effective approaches to attention-based neural machine translation", |
| "authors": [ |
| { |
| "first": "Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1412--1421", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D15-1166" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "AMR parsing as graph prediction with latent alignment", |
| "authors": [ |
| { |
| "first": "Chunchuan", |
| "middle": [], |
| "last": "Lyu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "397--407", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chunchuan Lyu and Ivan Titov. 2018. Amr parsing as graph prediction with latent alignment. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 397-407. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "The Stanford CoreNLP natural language processing toolkit", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jenny", |
| "middle": [], |
| "last": "Finkel", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [ |
| "J" |
| ], |
| "last": "Bethard", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mc-Closky", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Association for Computational Linguistics (ACL) System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "55--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55-60.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Online large-margin training of dependency parsers", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Koby", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", |
| "volume": "", |
| "issue": "", |
| "pages": "91--98", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of de- pendency parsers. In Proceedings of the 43rd An- nual Meeting of the Association for Computational Linguistics (ACL'05), pages 91-98. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Pointer sentinel mixture models", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Merity", |
| "suffix": "" |
| }, |
| { |
| "first": "Caiming", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Bradbury", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1609.07843" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Language as a latent variable: Discrete generative models for sentence compression", |
| "authors": [ |
| { |
| "first": "Yishu", |
| "middle": [], |
| "last": "Miao", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "319--328", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D16-1031" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sen- tence compression. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 319-328. Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Abstractive text summarization using sequence-to-sequence RNNs and beyond", |
| "authors": [ |
| { |
| "first": "Ramesh", |
| "middle": [], |
| "last": "Nallapati", |
| "suffix": "" |
| }, |
| { |
| "first": "Bowen", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Cicero", |
| "middle": [], |
| "last": "Dos Santos", |
| "suffix": "" |
| }, |
| { |
| "first": "Caglar", |
| "middle": [], |
| "last": "Gulcehre", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Xiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "280--290", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/K16-1028" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Ab- stractive text summarization using sequence-to- sequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280-290. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Rewarding Smatch: Transition-based AMR parsing with reinforcement learning", |
| "authors": [ |
| { |
| "first": "Tahira", |
| "middle": [], |
| "last": "Naseem", |
| "suffix": "" |
| }, |
| { |
| "first": "Abhishek", |
| "middle": [], |
| "last": "Shah", |
| "suffix": "" |
| }, |
| { |
| "first": "Hui", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "" |
| }, |
| { |
| "first": "Radu", |
| "middle": [], |
| "last": "Florian", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1905.13370" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tahira Naseem, Abhishek Shah, Hui Wan, Radu Flo- rian, Salim Roukos, and Miguel Ballesteros. 2019. Rewarding smatch: Transition-based amr pars- ing with reinforcement learning. arXiv preprint arXiv:1905.13370.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Dealing with co-reference in neural semantic parsing", |
| "authors": [ |
| { |
| "first": "Rik", |
| "middle": [], |
| "last": "Van Noord", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Bos", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2nd Workshop on Semantic Deep Learning (SemDeep-2)", |
| "volume": "", |
| "issue": "", |
| "pages": "41--49", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rik van Noord and Johan Bos. 2017a. Dealing with co-reference in neural semantic parsing. In Proceed- ings of the 2nd Workshop on Semantic Deep Learn- ing (SemDeep-2), pages 41-49, Montpellier, France. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Neural semantic parsing by character-based translation: Experiments with abstract meaning representations", |
| "authors": [ |
| { |
| "first": "Rik", |
| "middle": [], |
| "last": "Van Noord", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Bos", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Computational Linguistics in the Netherlands Journal", |
| "volume": "7", |
| "issue": "", |
| "pages": "93--108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rik van Noord and Johan Bos. 2017b. Neural semantic parsing by character-based translation: Experiments with abstract meaning representations. Computa- tional Linguistics in the Netherlands Journal, 7:93- 108.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "SemEval 2014 task 8: Broad-coverage semantic dependency parsing", |
| "authors": [ |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Oepen", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Kuhlmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Flickinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Hajic", |
| "suffix": "" |
| }, |
| { |
| "first": "Angelina", |
| "middle": [], |
| "last": "Ivanova", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "63--72", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/S14-2008" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajic, Angelina Ivanova, and Yi Zhang. 2014. SemEval 2014 task 8: Broad-coverage semantic dependency parsing. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 63-72, Dublin, Ireland. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Deep multitask learning for semantic dependency parsing", |
| "authors": [ |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Thomson", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "2037--2048", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P17-1186" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hao Peng, Sam Thomson, and Noah A. Smith. 2017a. Deep multitask learning for semantic dependency parsing. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2037-2048. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "A synchronous hyperedge replacement grammar based approach for AMR parsing", |
| "authors": [ |
| { |
| "first": "Xiaochang", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "Linfeng", |
| "middle": [], |
| "last": "Song", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "32--41", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/K15-1004" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement gram- mar based approach for AMR parsing. In Proceed- ings of the Nineteenth Conference on Computational Natural Language Learning, pages 32-41, Beijing, China. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Addressing the data sparsity issue in neural AMR parsing", |
| "authors": [ |
| { |
| "first": "Xiaochang", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "Chuan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "366--375", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaochang Peng, Chuan Wang, Daniel Gildea, and Ni- anwen Xue. 2017b. Addressing the data sparsity issue in neural amr parsing. In Proceedings of the 15th Conference of the European Chapter of the As- sociation for Computational Linguistics: Volume 1, Long Papers, pages 366-375. Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "GloVe: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/D14-1162" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "Aligning English strings with abstract meaning representation graphs", |
| "authors": [ |
| { |
| "first": "Nima", |
| "middle": [], |
| "last": "Pourdamghani", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Ulf", |
| "middle": [], |
| "last": "Hermjakob", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "425--429", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/D14-1048" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nima Pourdamghani, Yang Gao, Ulf Hermjakob, and Kevin Knight. 2014. Aligning english strings with abstract meaning representation graphs. In Proceed- ings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 425-429. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "Parsing English into abstract meaning representation using syntax-based machine translation", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Pust", |
| "suffix": "" |
| }, |
| { |
| "first": "Ulf", |
| "middle": [], |
| "last": "Hermjakob", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "May", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1143--1154", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D15-1136" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, and Jonathan May. 2015. Parsing english into abstract meaning representation using syntax- based machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1143-1154. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "M2L at SemEval-2016 task 8: AMR parsing with neural networks", |
| "authors": [ |
| { |
| "first": "Yevgeniy", |
| "middle": [], |
| "last": "Puzikov", |
| "suffix": "" |
| }, |
| { |
| "first": "Daisuke", |
| "middle": [], |
| "last": "Kawahara", |
| "suffix": "" |
| }, |
| { |
| "first": "Sadao", |
| "middle": [], |
| "last": "Kurohashi", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "1154--1159", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/S16-1178" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yevgeniy Puzikov, Daisuke Kawahara, and Sadao Kurohashi. 2016. M2l at semeval-2016 task 8: Amr parsing with neural networks. In Proceedings of the 10th International Workshop on Semantic Eval- uation (SemEval-2016), pages 1154-1159. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "Bidirectional recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuldip", |
| "middle": [ |
| "K" |
| ], |
| "last": "Paliwal", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "IEEE Transactions on Signal Processing", |
| "volume": "45", |
| "issue": "11", |
| "pages": "2673--2681", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "Get to the point: Summarization with pointergenerator networks", |
| "authors": [ |
| { |
| "first": "Abigail", |
| "middle": [], |
| "last": "See", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [ |
| "J" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1073--1083", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P17-1099" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "What do you learn from context? probing for sentence structure in contextualized word representations", |
| "authors": [ |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Tenney", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "Berlin", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Poliak", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Mccoy", |
| "suffix": "" |
| }, |
| { |
| "first": "Najoung", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Bowman", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellie", |
| "middle": [], |
| "last": "Pavlick", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in contextu- alized word representations. In International Con- ference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "Grammar as a foreign language", |
| "authors": [ |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "2773--2781", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oriol Vinyals, \u0141ukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Gram- mar as a foreign language. In Advances in Neural Information Processing Systems, pages 2773-2781.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "Camr at semeval-2016 task 8: An extended transition-based amr parser", |
| "authors": [ |
| { |
| "first": "Chuan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaoman", |
| "middle": [], |
| "last": "Pan", |
| "suffix": "" |
| }, |
| { |
| "first": "Ji", |
| "middle": [], |
| "last": "Heng", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "1173--1178", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/S16-1181" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chuan Wang, Sameer Pradhan, Xiaoman Pan, Heng Ji, and Nianwen Xue. 2016. Camr at semeval-2016 task 8: An extended transition-based amr parser. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1173- 1178. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "Getting the most out of amr parsing", |
| "authors": [ |
| { |
| "first": "Chuan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1257--1268", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D17-1129" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chuan Wang and Nianwen Xue. 2017. Getting the most out of amr parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1257-1268. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "A transition-based algorithm for amr parsing", |
| "authors": [ |
| { |
| "first": "Chuan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "366--375", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/N15-1040" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015. A transition-based algorithm for amr parsing. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 366-375. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF62": { |
| "ref_id": "b62", |
| "title": "Robust subgraph generation improves abstract meaning representation parsing", |
| "authors": [ |
| { |
| "first": "Keenon", |
| "middle": [], |
| "last": "Werling", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabor", |
| "middle": [], |
| "last": "Angeli", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "982--991", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/P15-1095" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Keenon Werling, Gabor Angeli, and Christopher D. Manning. 2015. Robust subgraph generation im- proves abstract meaning representation parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 982-991. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF63": { |
| "ref_id": "b63", |
| "title": "Cross-lingual decompositional semantic parsing", |
| "authors": [ |
| { |
| "first": "Sheng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xutai", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Rudinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Duh", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1664--1675", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sheng Zhang, Xutai Ma, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2018. Cross-lingual de- compositional semantic parsing. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 1664-1675. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF64": { |
| "ref_id": "b64", |
| "title": "Association for Computational Linguistics. Sentence: Route 288 , the circumferential highway running around the south -western quadrant of the Richmond New Urban Region", |
| "authors": [ |
| { |
| "first": "Junsheng", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Feiyu", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hans", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [ |
| "U" |
| ], |
| "last": "Weiguang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ran", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Yanhui", |
| "middle": [], |
| "last": "Gu", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "680--689", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D16-1065" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Junsheng Zhou, Feiyu Xu, Hans Uszkoreit, Weiguang QU, Ran Li, and Yanhui Gu. 2016. Amr parsing with an incremental joint model. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, pages 680-689. Associ- ation for Computational Linguistics. Sentence: Route 288 , the circumferential highway running around the south -western quadrant of the Richmond New Urban Region , opened in late 2004 .", |
| "links": null |
| }, |
| "BIBREF65": { |
| "ref_id": "b65", |
| "title": "Region\")) :mod (s / southwest)))) :mod (c2 / circumference)) :time", |
| "authors": [], |
| "year": null, |
| "venue": "Anonymized Sentence: HIGHWAY_0 , the circumferential highway running around the south -western quadrant of the COUNTRY_REGION_0", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anonymized Sentence: HIGHWAY_0 , the circumferential highway running around the south -western quadrant of the COUNTRY_REGION_0 , opened in late DATE_0 . Before preprocessing (o / open-01 :ARG1 (h / highway :wiki \"Virginia_State_Route_288\" :name (r / name :op1 \"Route\" :op2 288) :ARG1-of (r3 / run-04 :direction (a / around :op1 (q / quadrant :part-of (c / country-region :wiki - :name (r2 / name :op1 \"Richmond\" :op2 \"New\" :op3 \"Urban\" :op4 \"Region\")) :mod (s / southwest)))) :mod (c2 / circumference)) :time (l / late :op1 (d / date-entity :year 2004)))", |
| "links": null |
| }, |
| "BIBREF66": { |
| "ref_id": "b66", |
| "title": "-of (r3 / run :direction (a / around :op1 (q / quadrant :part-of (c / COUNTRY_REGION_0) :mod (s / southwest)))) :mod (c2 / circumference)) :time", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "After preprocessing (o / open :ARG1 (h / HIGHWAY_0 :ARG1-of (r3 / run :direction (a / around :op1 (q / quadrant :part-of (c / COUNTRY_REGION_0) :mod (s / southwest)))) :mod (c2 / circumference)) :time (l / late :op1 (d / DATE_0)))", |
| "links": null |
| }, |
| "BIBREF67": { |
| "ref_id": "b67", |
| "title": "Sentence: Smoke and clouds chase the flying waves Lemmas", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sentence: Smoke and clouds chase the flying waves Lemmas: [\"smoke\", \"and\", \"cloud\", \"chase\", \"the\", \"fly\", \"wave\"]", |
| "links": null |
| }, |
| "BIBREF68": { |
| "ref_id": "b68", |
| "title": "Full Model", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Full Model (vv1 / chase-01 :ARG0 (vv2 / and :op1 (vv3 / smoke) :op2 (vv4 / cloud-01))", |
| "links": null |
| }, |
| "BIBREF69": { |
| "ref_id": "b69", |
| "title": "Without source-side copy, the prediction becomes totally different and inaccurate in this example. Sentence: Now we already have no cohesion! China needs to start a war! Full Model", |
| "authors": [], |
| "year": null, |
| "venue": "Figure", |
| "volume": "8", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Figure 8: Full model prediction vs. no source-side copy prediction. Tokens in blue are copied from the source side. Without source-side copy, the prediction becomes totally different and inaccurate in this example. Sentence: Now we already have no cohesion! China needs to start a war! Full Model (vv1 / multi-sentence :snt1 (vv2 / have-03 :ARG0 (vv3 / we) :ARG1 (vv4 / cohere-01) :polarity - :time (vv5 / already)) :snt2 (vv6 / need-01 :ARG0 (vv7 / country :name (vv8 / name :op1 \"China\") :wiki \"China\") :ARG1 (vv9 / start-01 :ARG0 vv7 :ARG1 (vv11 / war))", |
| "links": null |
| }, |
| "BIBREF70": { |
| "ref_id": "b70", |
| "title": "China\"). The full model correctly copies the first node (\"vv7 / country\") as ARG0 of \"start-01\". Without target-side copy, the model has to generate a new node with a different index, i.e., \"vv10 / country\". Sentence: The solemn and magnificent posture represents a sacred expectation for peace", |
| "authors": [], |
| "year": null, |
| "venue": "Full model prediction vs. no target-side copy prediction. Nodes in blue denote the same concept (i.e., the country", |
| "volume": "9", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Figure 9: Full model prediction vs. no target-side copy prediction. Nodes in blue denote the same concept (i.e., the country \"China\"). The full model correctly copies the first node (\"vv7 / country\") as ARG0 of \"start-01\". Without target-side copy, the model has to generate a new node with a different index, i.e., \"vv10 / country\". Sentence: The solemn and magnificent posture represents a sacred expectation for peace. Full Model (vv1 / represent-01 :ARG0 (vv2 / posture-01 :mod (vv3 / magnificent) :mod (vv4 / solemn)) :ARG1 (vv5 / expect-01 :ARG1 (vv6 / peace) :mod (vv7 / sacred)))", |
| "links": null |
| }, |
| "BIBREF71": { |
| "ref_id": "b71", |
| "title": "No Coverage Loss", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "No Coverage Loss (vv1 / represent-01 :ARG0 (vv2 / posture-01 :mod (vv3 / magnificent) :mod (vv4 / magnificent))", |
| "links": null |
| }, |
| "BIBREF72": { |
| "ref_id": "b72", |
| "title": "Without coverage loss, the model generates a repetitive modifier \"magnificent\". Sentence: Do it gradually if it's not something you're particularly comfortable with. Full Model (vv1 / have", |
| "authors": [], |
| "year": null, |
| "venue": "Figure", |
| "volume": "10", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Figure 10: Full model prediction vs. no coverage loss prediction. The full model correctly predicts the second modifier \"solemn\". Without coverage loss, the model generates a repetitive modifier \"magnificent\". Sentence: Do it gradually if it's not something you're particularly comfortable with. Full Model (vv1 / have-condition-91 :ARG1 (vv2 / do-02 :ARG0 (vv3 / you) :ARG1 (vv4 / it) :manner (vv5 / gradual))", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Two views of reentrancy in AMR for an example sentence \"The victim could help himself.\" (a) A standard AMR graph. (b) An AMR tree with node indices as an extra layer of annotation, where the corresponding graph can be recovered by merging nodes of the same index and unioning their incoming edges." |
| }, |
| "FIGREF2": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "t e x i t s h a 1 _ b a s e 6 4 = \" R q f 2 5 5 u a S R 5 j 8 l n" |
| }, |
| "FIGREF4": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Word-level embeddings from BERT." |
| }, |
| "FIGREF5": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Deep biaffine classifier for edge prediction. Edge label prediction is not depicted in the figure." |
| }, |
| "FIGREF6": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Chu-Liu-Edmonds algo. w/ Adaption Input : Nodes u = u1, ..., um , Indices d = d1, ...dm , Edge scores S = {score (edge) i,j | 0 \u2264 i, j \u2264 m} Output: A maximum spanning tree. // Include the dummy root u0. V \u2190 {u0} \u222a u; d0 \u2190 0; // Exclude invalid edges. // di is the node index for node ui. E \u2190 {(ui, uj) | 0 \u2264 i, j \u2264 m; di = dj}; // Chu-Liu-Edmonds algorithm return MST(V, E, S, u0);" |
| }, |
| "FIGREF7": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "; Brandt et al. (2016); Goodman et al. (2016); Damonte et al. (2017); Ballesteros and Al-Onaizan" |
| }, |
| "FIGREF9": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Frequency, precision and recall of nodes from different sources, based on the AMR 2.0 test set. There are three sources for node prediction: vocabulary generation, source-side copy, or targetside copy. Let all reference nodes from source z be N (z) ref , and all system predicted nodes from z be N (z)" |
| }, |
| "TABREF0": { |
| "html": null, |
| "text": "The victim could help himself.", |
| "content": "<table><tr><td/><td/><td/><td/><td colspan=\"3\">Node Prediction</td><td/></tr><tr><td>possible</td><td>1</td><td>help</td><td>2</td><td>victim</td><td>3</td><td>victim</td><td>3</td></tr><tr><td/><td/><td/><td/><td colspan=\"3\">Edge Prediction</td><td/></tr><tr><td colspan=\"2\">ARG1</td><td/><td/><td>ARG1 ARG0</td><td/><td/><td/></tr><tr><td>possible</td><td>1</td><td>help</td><td>2</td><td>victim</td><td>3</td><td>victim</td><td>3</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "html": null, |
| "text": "Hyper-parameter settings", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td>: SMATCH scores on the test sets of AMR 2.0</td></tr><tr><td>and 1.0. Standard deviation is computed over 3 runs</td></tr><tr><td>with different random seeds.</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td>: Fine-grained F1 scores on the AMR 2.0 test</td></tr><tr><td>set. vN'17 is van Noord and Bos (2017b); L'18 is Lyu</td></tr><tr><td>and Titov (2018); N'19 is Naseem et al. (2019).</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF8": { |
| "html": null, |
| "text": "Ablation studies on components of our model. (Scores are sorted by the delta from the full model.)Ablation Study We consider the contributions of several model components inTable 4. The largest performance drop is from removing source-side copy, 4 showing its efficiency at reducing sparsity from open-class vocabulary entries. Removing target-side copy also leads to a large drop. Specifically, the subtask score of reentrancies drops down to 38.4% when target-side copy is disabled. Coverage loss is useful with regard to discouraging unnecessary repetitive nodes.", |
| "content": "<table><tr><td>In addition, our</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF10": { |
| "html": null, |
| "text": "SMATCH scores of full models trained and tested based on different node linearization strategies.", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF11": { |
| "html": null, |
| "text": "SMATCH scores based different pooling functions. Standard deviation is over 3 runs on the test data.", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |