{
"paper_id": "S19-2002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:45:35.059651Z"
},
"title": "HLT@SUDA at SemEval-2019 Task 1: UCCA Graph Parsing as Constituent Tree Parsing",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soochow University",
"location": {
"country": "China"
}
},
"email": ""
},
{
"first": "Zhenghua",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soochow University",
"location": {
"country": "China"
}
},
"email": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soochow University",
"location": {
"country": "China"
}
},
"email": "yzhang25@stu.suda.edu"
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soochow University",
"location": {
"country": "China"
}
},
"email": "minzhang@suda.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a simple UCCA semantic graph parsing approach. The key idea is to convert a UCCA semantic graph into a constituent tree, in which extra labels are deliberately designed to mark remote edges and discontinuous nodes for future recovery. In this way, we can make use of existing syntactic parsing techniques. Based on the data statistics, we recover discontinuous nodes directly according to the output labels of the constituent parser and use a biaffine classification model to recover the more complex remote edges. The classification model and the constituent parser are simultaneously trained under the multi-task learning framework. We use multilingual BERT as extra features in the open tracks. Our system ranks first in the six English/German closed/open tracks among seven participating systems. For the seventh cross-lingual track, where there is little training data for French, we propose a language embedding approach to utilize English and German training data, and our result ranks second.",
"pdf_parse": {
"paper_id": "S19-2002",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a simple UCCA semantic graph parsing approach. The key idea is to convert a UCCA semantic graph into a constituent tree, in which extra labels are deliberately designed to mark remote edges and discontinuous nodes for future recovery. In this way, we can make use of existing syntactic parsing techniques. Based on the data statistics, we recover discontinuous nodes directly according to the output labels of the constituent parser and use a biaffine classification model to recover the more complex remote edges. The classification model and the constituent parser are simultaneously trained under the multi-task learning framework. We use multilingual BERT as extra features in the open tracks. Our system ranks first in the six English/German closed/open tracks among seven participating systems. For the seventh cross-lingual track, where there is little training data for French, we propose a language embedding approach to utilize English and German training data, and our result ranks second.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Universal Conceptual Cognitive Annotation (UCCA) is a multi-layer linguistic framework for semantic annotation proposed by Abend and Rappoport (2013) . Figure 1 shows an example sentence and its UCCA graph.",
"cite_spans": [
{
"start": 123,
"end": 149,
"text": "Abend and Rappoport (2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 152,
"end": 160,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Words are represented as terminal nodes. Circles denote non-terminal nodes, and the semantic relation Figure 1: A UCCA graph example from the German data. The English translation is \"I went around and groped.\" We assign a number to each non-terminal node to facilitate illustration.",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 159,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "between two non-terminal nodes is represented by the label on the edge. One node may have multiple parents, among which one is annotated as the primary parent, marked by solid line edges, and others as remote parents, marked by dashed line edges. The primary edges form a tree structure, whereas the remote edges enable reentrancy, forming directed acyclic graphs (DAGs). 1 The second feature of UCCA is the existence of nodes with discontinuous leaves, known as discontinuity. For example, node 3 in Figure 1 is discontinuous because some terminal nodes it spans are not its descendants. Hershcovich et al. (2017) first propose a transition-based UCCA parser, which is used as the baseline in the closed tracks of this shared task. Based on the recent progress on transition-based parsing techniques, they propose a novel set of transition actions to handle both discontinuous and remote nodes and design useful features based on bidirectional LSTMs. Hershcovich et al. (2018) then extend their previous approach and propose to utilize the annotated data with other semantic formalisms such as abstract meaning representation (AMR), universal dependencies (UD), and bilexical semantic dependencies (SDP), via multi-task learning, which is used as the baseline in the open tracks.",
"cite_spans": [
{
"start": 372,
"end": 373,
"text": "1",
"ref_id": null
},
{
"start": 589,
"end": 614,
"text": "Hershcovich et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 951,
"end": 976,
"text": "Hershcovich et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 501,
"end": 509,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a simple UCCA semantic graph parsing approach by treating UCCA semantic graph parsing as constituent parsing. We first convert a UCCA semantic graph into a constituent tree by removing discontinuous and remote phenomena. Extra label encodings are deliberately designed to annotate the conversion process and to recover discontinuous and remote structures. We heuristically recover discontinuous nodes according to the output labels of the constituent parser, since most discontinuous nodes share the same pattern according to the data statistics. As for the more complex remote edges, we use a biaffine classification model for their recovery. We directly employ the graph-based constituent parser of Stern et al. (2017) and jointly train the parser and the biaffine classification model via multi-task learning (MTL). For the open tracks, we use the publicly available multilingual BERT as extra features. Our system ranks first in the six English/German closed/open tracks among seven participating systems. For the seventh cross-lingual track, where there is little training data for French, we propose a language embedding approach to utilize English and German training data, and our result ranks second.",
"cite_spans": [
{
"start": 728,
"end": 747,
"text": "Stern et al. (2017)",
"ref_id": "BIBREF9"
},
{
"start": 839,
"end": 844,
"text": "(MTL)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our key idea is to convert UCCA graphs into constituent trees by removing discontinuous and remote edges and using extra labels for their future recovery. Our idea is inspired by the pseudo-projective dependency parsing approach proposed by Nivre and Nilsson (2005).",
"cite_spans": [
{
"start": 243,
"end": 267,
"text": "Nivre and Nilsson (2005)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Main Approach",
"sec_num": "2"
},
{
"text": "Given a UCCA graph as depicted in Figure 1 , we produce a constituent tree shown in Figure 2 based on our algorithm described as follows.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 42,
"text": "Figure 1",
"ref_id": null
},
{
"start": 84,
"end": 92,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graph-to-Tree Conversion",
"sec_num": "2.1"
},
{
"text": "1) Removal of remote edges. For nodes that have multiple parent nodes, we remove all remote edges and only keep the primary edge. To facilitate future recovery, we concatenate an extra \"remote\" to the label of the primary edge, indicating that the corresponding node has other remote relations. We can see that the label of the child node 5 becomes \"A-remote\" after conversion in Figure 1 and Figure 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 382,
"end": 390,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graph-to-Tree Conversion",
"sec_num": "2.1"
},
{
"text": "2) Handling discontinuous nodes. We call node 3 in Figure 1 a discontinuous node because the terminal nodes (also words or leaves) it spans are not continuous (\"Ich ging umher und\" are not its descendants). Since mainstream constituent parsers cannot handle discontinuity, we try to remove discontinuous structures by moving specific edges in the following procedure.",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 59,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graph-to-Tree Conversion",
"sec_num": "2.1"
},
{
"text": "Given a discontinuous node A = 3, we first process the leftmost non-descendant node B = \"Ich\". We go upwards along the edges until we find a node C = 2, whose father is either the lowest common ancestor (LCA) of A = 3 and B = \"Ich\" or another discontinuous node. We denote the father of C = 2 as D = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-to-Tree Conversion",
"sec_num": "2.1"
},
{
"text": "Then we move C = 2 to be the child of A = 3, and concatenate the original edge label with an extra string (among \"ancestor 1/2/3/...\" and \"discontinuous\") for future recovery, where the number represents the number of edges between",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-to-Tree Conversion",
"sec_num": "2.1"
},
{
"text": "Figure 3: The framework of MTL.",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 44,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "MLPs and Biaffines MLPs",
"sec_num": null
},
{
"text": "the ancestor D = 1 and A = 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLPs and Biaffines MLPs",
"sec_num": null
},
{
"text": "After reorganizing the graph, we then restart and perform the same operations again until there is no discontinuity. Table 1 shows the statistics of the discontinuous structures in the English-Wiki data. We can see that D is most likely the LCA of A and B, and there is only one edge between D and A in more than 90% of cases.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "MLPs and Biaffines MLPs",
"sec_num": null
},
{
"text": "Considering the skewed distribution, we only keep \"ancestor 1\" after graph-to-tree conversion, and treat others as continuous structures for simplicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLPs and Biaffines MLPs",
"sec_num": null
},
{
"text": "3) Pushing labels from edges into nodes. Since the labels are usually annotated in the nodes instead of edges in constituent trees, we push all labels from edges to the child nodes. We label the top node as \"ROOT\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLPs and Biaffines MLPs",
"sec_num": null
},
{
"text": "We directly adopt the minimal span-based parser of Stern et al. (2017). Given an input sentence s = w_1...w_n, each word w_i is mapped into a dense vector x_i via lookup operations.",
"cite_spans": [
{
"start": 51,
"end": 70,
"text": "Stern et al. (2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Parsing",
"sec_num": "2.2"
},
{
"text": "x_i = e_{w_i} \u2295 e_{t_i} \u2295 ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Parsing",
"sec_num": "2.2"
},
{
"text": "where e w i is the word embedding and e t i is the part-of-speech tag embedding. To make use of other auto-generated linguistic features, provided with the datasets, we also include the embeddings of the named entity tags and the dependency labels, but find limited performance gains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Parsing",
"sec_num": "2.2"
},
{
"text": "Then, the parser employs two cascaded bidirectional LSTM layers as the encoder, and uses the top-layer outputs as the word representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Parsing",
"sec_num": "2.2"
},
{
"text": "Afterwards, the parser represents each span w_i...w_j as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Parsing",
"sec_num": "2.2"
},
{
"text": "r_{i,j} = (f_j \u2212 f_i) \u2295 (b_i \u2212 b_j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Parsing",
"sec_num": "2.2"
},
{
"text": "where f_i and b_i are the output vectors of the top-layer forward and backward LSTMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Parsing",
"sec_num": "2.2"
},
{
"text": "The span representations are then fed into MLPs to compute the scores of span splitting and labeling. For inference, the parser performs greedy top-down search to build a parse tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Parsing",
"sec_num": "2.2"
},
{
"text": "We borrow the idea of the state-of-the-art biaffine dependency parsing (Dozat and Manning, 2017) and build our remote edge recovery model. The model shares the same inputs and LSTM encoder as the constituent parser under the MTL framework (Collobert and Weston, 2008). For each remote node, marked by \"-remote\" in the constituent tree, we consider all other non-terminal nodes as its candidate remote parents. Given a remote node A and another non-terminal node B, we first represent them with the span representations r_{i,j} and r_{i',j'}, where i, i', j, j' are the start and end word indices governed by the two nodes. Note that B may be a discontinuous node.",
"cite_spans": [
{
"start": 71,
"end": 96,
"text": "(Dozat and Manning, 2017)",
"ref_id": "BIBREF3"
},
{
"start": 239,
"end": 267,
"text": "(Collobert and Weston, 2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Remote Edge Recovery",
"sec_num": "2.3"
},
{
"text": "Following Dozat and Manning (2017) , we apply two separate MLPs to the remote and candidate parent nodes respectively, producing",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "Dozat and Manning (2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Remote Edge Recovery",
"sec_num": "2.3"
},
{
"text": "r^{child}_{i,j} and r^{parent}_{i',j'}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Remote Edge Recovery",
"sec_num": "2.3"
},
{
"text": "Finally, we compute a labeling score vector via a biaffine operation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Remote Edge Recovery",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s(A \u2190 B) = (r^{child}_{i,j} \u2295 1)^T W r^{parent}_{i',j'}",
"eq_num": "(1)"
}
],
"section": "Remote Edge Recovery",
"sec_num": "2.3"
},
{
"text": "where the dimension of the labeling score vector is the size of the label set, including a \"NOT-PARENT\" label. Training loss. We accumulate the standard cross-entropy losses of all remote and non-terminal node pairs. The parsing loss and the remote edge classification loss are added together under the MTL framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Remote Edge Recovery",
"sec_num": "2.3"
},
{
"text": "For the open tracks, we use the contextualized word representations produced by BERT (Devlin et al., 2018) as extra input features. 2 Following previous work, we use the weighted summation of the last four transformer layers and then multiply it by a task-specific weight parameter (Peters et al., 2018).",
"cite_spans": [
{
"start": 85,
"end": 106,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 287,
"end": 308,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Use of BERT",
"sec_num": "2.4"
},
{
"text": "Because there is little training data for French, we borrow the treebank embedding approach of Stymne et al. (2018) for exploiting multiple heterogeneous treebanks for the same language, and propose a language embedding approach to utilize English and German training data. The training datasets of the three languages are merged to train a single UCCA parsing model. The only modification is to concatenate each word position with an extra language embedding (of dimension 50), i.e., x_i \u2295 e_{lang=en/de/fr}, to indicate which language the training sentence comes from. In this way, we expect that the model can fully utilize all training data, since most parameters are shared except the three language embedding vectors, and learn the language differences as well.",
"cite_spans": [
{
"start": 89,
"end": 109,
"text": "Stymne et al. (2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Parsing",
"sec_num": "3"
},
{
"text": "Except for BERT, all the data we use, including the linguistic features and word embeddings, are provided by the shared task organizers (Hershcovich et al., 2019). We adopt the averaged F1 score returned by the official evaluation scripts as the main evaluation metric (Hershcovich et al., 2019). We train each model for at most 100 iterations, and stop training early if the peak performance does not increase in 10 consecutive iterations. Table 2 shows the results on the dev data. We have experimented with different settings to gain insights into the contributions of different components. For the single-language models, it is clear that using pre-trained word embeddings outperforms using randomly initialized word embeddings by more than 1% F1 score on both English and German. Fine-tuning the pre-trained word embeddings leads to consistent yet slight performance improvement. In the open tracks, replacing word embeddings with the BERT representation is also useful on English (2.8% increase) and German (1.2% increase). Concatenating pre-trained word embeddings with BERT outputs is also beneficial.",
"cite_spans": [
{
"start": 131,
"end": 157,
"text": "(Hershcovich et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 271,
"end": 297,
"text": "(Hershcovich et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 444,
"end": 451,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For the multilingual models, using randomly initialized word embeddings is better than using pre-trained word embeddings, which contradicts the single-language results. We suspect this is because the pre-trained word embeddings are independently trained for different languages and thus lie in different semantic spaces without proper alignment. Using the BERT outputs is tremendously helpful, boosting the F1 score by more than 10%. We do not report the results on English and German for brevity, since little improvement is observed for them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In this paper, we describe our system submitted to SemEval-2019 Task 1. We design a simple UCCA semantic graph parsing approach by making full use of recent advances in the syntactic parsing community. The key idea is to convert UCCA graphs into constituent trees. The graph recovery problem is modeled as another classification task under the MTL framework. For the cross-lingual parsing track, we design a language embedding approach to utilize the training data of resource-rich languages. (Table 3: Final results on the test data in each track. Please refer to the official webpage for more detailed results due to limited space.)",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 290,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "The full UCCA scheme also has implicit and linkage relations, which have largely been overlooked by the community so far.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the multilingual cased BERT from https://github.com/google-research/bert.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank the anonymous reviewers for the helpful comments. We also thank Chen Gong for her help on speeding up the minimal span parser. This work was supported by National Natural Science Foundation of China (Grant No. 61525205, 61876116).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Universal Conceptual Cognitive Annotation (UCCA)",
"authors": [
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "228--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omri Abend and Ari Rappoport. 2013. Universal Conceptual Cognitive Annotation (UCCA). In Proc. of ACL, pages 228-238.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proc. of ICML.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Deep biaffine attention for neural dependency parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proceedings of ICLR.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A transition-based directed acyclic graph parser for UCCA",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "1127--1138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2017. A transition-based directed acyclic graph parser for ucca. In Proc. of ACL, pages 1127-1138.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multitask parsing across semantic representations",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "373--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2018. Multitask parsing across semantic representa- tions. In Proc. of ACL, pages 373-385.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "SemEval-2019 task 1: Cross-lingual semantic parsing with UCCA",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Zohar",
"middle": [],
"last": "Aizenbud",
"suffix": ""
},
{
"first": "Leshem",
"middle": [],
"last": "Choshen",
"suffix": ""
},
{
"first": "Elior",
"middle": [],
"last": "Sulem",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.02953"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Hershcovich, Zohar Aizenbud, Leshem Choshen, Elior Sulem, Ari Rappoport, and Omri Abend. 2019. Semeval 2019 task 1: Cross-lingual semantic parsing with ucca. arXiv:1903.02953.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Pseudo-projective dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "99--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and Jens Nilsson. 2005. Pseudo- projective dependency parsing. In Proc. of ACL, pages 99-106.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A minimal span-based neural constituency parser",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "818--827",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. In Proc. of ACL, pages 818-827.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Parser training with heterogeneous treebanks",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Stymne",
"suffix": ""
},
{
"first": "Miryam",
"middle": [],
"last": "de Lhoneux",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "619--625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Stymne, Miryam de Lhoneux, Aaron Smith, and Joakim Nivre. 2018. Parser training with heterogeneous treebanks. In Proc. of ACL, pages 619-625.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Figure 2: Constituent tree converted from the UCCA graph."
},
"TABREF0": {
"text": "",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF2": {
"text": "Results on the dev data.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"text": "lists our final results on the test data. Our system ranks first in six tracks (English/German closed/open) and second in the French open track. Note that we submitted a wrong result for the French open track during the evaluation phase by setting the wrong language index, which leads to a drop of about 2% in averaged F1 score (0.752). Please refer to (Hershcovich et al., 2019) for the complete results and comparisons.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}