{
"paper_id": "P16-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:58:05.025058Z"
},
"title": "Active Learning for Dependency Parsing with Partial Annotation",
"authors": [
{
"first": "Zhenghua",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soochow University",
"location": {
"settlement": "Suzhou",
"country": "China"
}
},
"email": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soochow University",
"location": {
"settlement": "Suzhou",
"country": "China"
}
},
"email": "minzhang@suda.edu"
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": "zhangyue1107@qq.com"
},
{
"first": "Zhanyi",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soochow University",
"location": {
"settlement": "Suzhou",
"country": "China"
}
},
"email": "liuzhanyi@baidu.com"
},
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soochow University",
"location": {
"settlement": "Suzhou",
"country": "China"
}
},
"email": "wlchen@suda.edu"
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {},
"email": "wuhua@baidu.com"
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "wanghaifeng@baidu.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Different from traditional active learning based on sentence-wise full annotation (FA), this paper proposes active learning with dependency-wise partial annotation (PA) as a finer-grained unit for dependency parsing. At each iteration, we select a few most uncertain words from an unlabeled data pool, manually annotate their syntactic heads, and add the partial trees into labeled data for parser retraining. Compared with sentence-wise FA, dependency-wise PA gives us more flexibility in task selection and avoids wasting time on annotating trivial tasks in a sentence. Our work makes the following contributions. First, we are the first to apply a probabilistic model to active learning for dependency parsing, which can 1) provide tree probabilities and dependency marginal probabilities as principled uncertainty metrics, and 2) directly learn parameters from PA based on a forest-based training objective. Second, we propose and compare several uncertainty metrics through simulation experiments on both Chinese and English. Finally, we conduct human annotation experiments to compare FA and PA on real annotation time and quality.",
"pdf_parse": {
"paper_id": "P16-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "Different from traditional active learning based on sentence-wise full annotation (FA), this paper proposes active learning with dependency-wise partial annotation (PA) as a finer-grained unit for dependency parsing. At each iteration, we select a few most uncertain words from an unlabeled data pool, manually annotate their syntactic heads, and add the partial trees into labeled data for parser retraining. Compared with sentence-wise FA, dependency-wise PA gives us more flexibility in task selection and avoids wasting time on annotating trivial tasks in a sentence. Our work makes the following contributions. First, we are the first to apply a probabilistic model to active learning for dependency parsing, which can 1) provide tree probabilities and dependency marginal probabilities as principled uncertainty metrics, and 2) directly learn parameters from PA based on a forest-based training objective. Second, we propose and compare several uncertainty metrics through simulation experiments on both Chinese and English. Finally, we conduct human annotation experiments to compare FA and PA on real annotation time and quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "During the past decade, supervised dependency parsing has gained extensive progress in boosting parsing performance on canonical texts, especially on texts from domains or genres similar to existing manually labeled treebanks (Koo and Collins, 2010; Zhang and Nivre, 2011) . However, the $ 0 I 1 saw 2 Sarah 3 with 4 a 5 telescope 6 Figure 1 : A partially annotated sentence, where only the heads of \"saw\" and \"with\" are decided.",
"cite_spans": [
{
"start": 226,
"end": 249,
"text": "(Koo and Collins, 2010;",
"ref_id": "BIBREF13"
},
{
"start": 250,
"end": 272,
"text": "Zhang and Nivre, 2011)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 333,
"end": 341,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "upsurge of web data (e.g., tweets, blogs, and product comments) imposes great challenges to existing parsing techniques. Meanwhile, previous research on out-of-domain dependency parsing gains little success (Dredze et al., 2007; Petrov and McDonald, 2012) . A more feasible way for open-domain parsing is to manually annotate a certain amount of texts from the target domain or genre. Recently, several small-scale treebanks on web texts have been built for study and evaluation (Foster et al., 2011; Petrov and McDonald, 2012; Wang et al., 2014) .",
"cite_spans": [
{
"start": 207,
"end": 228,
"text": "(Dredze et al., 2007;",
"ref_id": "BIBREF4"
},
{
"start": 229,
"end": 255,
"text": "Petrov and McDonald, 2012)",
"ref_id": "BIBREF33"
},
{
"start": 479,
"end": 500,
"text": "(Foster et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 501,
"end": 527,
"text": "Petrov and McDonald, 2012;",
"ref_id": "BIBREF33"
},
{
"start": 528,
"end": 546,
"text": "Wang et al., 2014)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Meanwhile, active learning (AL) aims to reduce annotation effort by choosing and manually annotating unlabeled instances that are most valuable for training statistical models (Olsson, 2009) . Traditionally, AL utilizes full annotation (FA) for parsing (Tang et al., 2002; Hwa, 2004; Lynn et al., 2012) , where a whole syntactic tree is annotated for a given sentence at a time. However, as commented by Mejer and Crammer (2012) , the annotation process is complex, slow, and prone to mistakes when FA is required. Particularly, annotators waste a lot of effort on labeling trivial dependencies which can be well handled by current statistical models (Flannery and Mori, 2015) .",
"cite_spans": [
{
"start": 176,
"end": 190,
"text": "(Olsson, 2009)",
"ref_id": "BIBREF31"
},
{
"start": 253,
"end": 272,
"text": "(Tang et al., 2002;",
"ref_id": "BIBREF40"
},
{
"start": 273,
"end": 283,
"text": "Hwa, 2004;",
"ref_id": "BIBREF11"
},
{
"start": 284,
"end": 302,
"text": "Lynn et al., 2012)",
"ref_id": "BIBREF18"
},
{
"start": 404,
"end": 428,
"text": "Mejer and Crammer (2012)",
"ref_id": "BIBREF28"
},
{
"start": 651,
"end": 676,
"text": "(Flannery and Mori, 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, researchers report promising results with AL based on partial annotation (PA) for dependency parsing (Sassano and Kurohashi, 2010; Mirroshandel and Nasr, 2011; Majidi and Crane, 2013; Flannery and Mori, 2015) . They find that smaller units rather than sentences provide more flexibility in choosing potentially informative structures to annotate.",
"cite_spans": [
{
"start": 111,
"end": 140,
"text": "(Sassano and Kurohashi, 2010;",
"ref_id": "BIBREF36"
},
{
"start": 141,
"end": 169,
"text": "Mirroshandel and Nasr, 2011;",
"ref_id": "BIBREF30"
},
{
"start": 170,
"end": 193,
"text": "Majidi and Crane, 2013;",
"ref_id": "BIBREF21"
},
{
"start": 194,
"end": 218,
"text": "Flannery and Mori, 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Beyond previous work, this paper endeavors to more thoroughly study this issue, and has made substantial progress from the following perspectives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) This is the first work that applies a stateof-the-art probabilistic parsing model to AL for dependency parsing. The CRF-based dependency parser on the one hand allows us to use probabilities of trees or marginal probabilities of single dependencies for uncertainty measurement, and on the other hand can directly learn parameters from partially annotated trees. Using probabilistic models may be ubiquitous in AL for relatively simpler tasks like classification and sequence labeling, but is definitely novel for dependency parsing which is dominated by linear models with perceptron-like training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) Based on the CRF-based parser, we make systematic comparison among several uncertainty metrics for both FA and PA. Simulation experiments show that compared with using FA, AL with PA can greatly reduce annotation effort in terms of dependency number by 62.2% on Chinese and by 74.2% on English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(3) We build a visualized annotation platform and conduct human annotation experiments to compare FA and PA on real annotation time and quality, where we obtain several interesting observations and conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "All codes, along with the data from human annotation experiments, are released at http: //hlt.suda.edu.cn/\u02dczhli for future research study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given an input sentence x = w 1 ...w n , the goal of dependency parsing is to build a directed dependency tree d = {h \u21b7 m : 0 \u2264 h \u2264 n, 1 \u2264 m \u2264 n}, where |d| = n and h \u21b7 m represents a dependency from a head word h to a modifier word m. Figure 1 depicts a partial tree containing two dependencies. 1 In this work, we for the first time apply a probabilistic CRF-based parsing model to AL for dependency parsing. We adopt the second-order graphbased model of McDonald and Pereira (2006) , which casts the problem as finding an optimal tree from a fully-connect directed graph and factors the score of a dependency tree into scores of pairs of sibling dependencies.",
"cite_spans": [
{
"start": 297,
"end": 298,
"text": "1",
"ref_id": null
},
{
"start": 457,
"end": 484,
"text": "McDonald and Pereira (2006)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 236,
"end": 244,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Probabilistic Dependency Parsing",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d * = arg max d\u2208Y(x) Score(x, d; w) Score(x, d; w) = \u2211 (h,s,m):h\u21b7s\u2208d, h\u21b7m\u2208d w \u2022 f (x, h, s, m)",
"eq_num": "(1)"
}
],
"section": "Probabilistic Dependency Parsing",
"sec_num": "2"
},
{
"text": "where s and m are adjacent siblings both modifying h; f (x, h, s, m) are the corresponding feature vector; w is the feature weight vector; Y(x) is the set of all legal trees for x according to the dependency grammar in hand; d * is the 1-best parse tree which can be gained efficiently via a dynamic programming algorithm (Eisner, 2000) . We use the state-of-the-art feature set listed in Bohnet (2010) . Under the log-linear CRF-based model, the probability of a dependency tree is:",
"cite_spans": [
{
"start": 322,
"end": 336,
"text": "(Eisner, 2000)",
"ref_id": "BIBREF5"
},
{
"start": 389,
"end": 402,
"text": "Bohnet (2010)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Dependency Parsing",
"sec_num": "2"
},
{
"text": "p(d|x; w) = e Score(x,d;w) \u2211 d \u2032 \u2208Y(x) e Score(x,d \u2032 ;w) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Dependency Parsing",
"sec_num": "2"
},
{
"text": "Ma and Zhao (2015) give a very detailed and thorough introduction to CRFs for dependency parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Dependency Parsing",
"sec_num": "2"
},
{
"text": "Under the supervised learning scenario, a labeled training data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from FA",
"sec_num": "2.1"
},
{
"text": "D = {(x i , d i )} N i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from FA",
"sec_num": "2.1"
},
{
"text": "is provided to learn w. The objective is to maximize the log likelihood of D:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from FA",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(D; w) = \u2211 N i=1 log p(d i |x i ; w)",
"eq_num": "(3)"
}
],
"section": "Learning from FA",
"sec_num": "2.1"
},
{
"text": "which can be solved by standard gradient descent algorithms. In this work, we adopt stochastic gradient descent (SGD) with L2-norm regularization for all CRF-based parsing models. 2 explored in this paper can be easily extended to the case of labeled dependency parsing. 2 We borrow the implementation of SGD in CRFsuite (http://www.chokkan.org/software/ crfsuite/), and use 100 sentences for a batch. Marcheggiani and Arti\u00e8res (2014) shows that marginal probabilities of local labels can be used as an effective uncertain metric for AL for sequence labeling problems. In the case of dependency parsing, the marginal probability of a dependency is the sum of probabilities of all legal trees that contain the dependency.",
"cite_spans": [
{
"start": 271,
"end": 272,
"text": "2",
"ref_id": null
},
{
"start": 402,
"end": 434,
"text": "Marcheggiani and Arti\u00e8res (2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from FA",
"sec_num": "2.1"
},
{
"text": "p(h \u21b7 m|x; w) = \u2211 d\u2208Y(x):h\u21b7m\u2208d p(d|x; w) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marginal Probability of Dependencies",
"sec_num": "2.2"
},
{
"text": "Intuitively, marginal probability is a more principled metric for measuring reliability of a dependency since it considers all legal parses in the search space, compared to previous methods based on scores of local classifiers (Sassano and Kurohashi, 2010; Flannery and Mori, 2015) or votes of n-best parses (Mirroshandel and Nasr, 2011). Moreover, Li et al. (2014) find strong correlation between marginal probability and correctness of a dependency in cross-lingual syntax projection.",
"cite_spans": [
{
"start": 227,
"end": 256,
"text": "(Sassano and Kurohashi, 2010;",
"ref_id": "BIBREF36"
},
{
"start": 257,
"end": 281,
"text": "Flannery and Mori, 2015)",
"ref_id": "BIBREF6"
},
{
"start": 349,
"end": 365,
"text": "Li et al. (2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Marginal Probability of Dependencies",
"sec_num": "2.2"
},
{
"text": "This work adopts the standard pool-based AL framework (Lewis and Gale, 1994; McCallum and Nigam, 1998) . Initially, we have a small set of labeled seed data L, and a large-scale unlabeled data pool U. Then the procedure works as follows.",
"cite_spans": [
{
"start": 54,
"end": 76,
"text": "(Lewis and Gale, 1994;",
"ref_id": "BIBREF15"
},
{
"start": 77,
"end": 102,
"text": "McCallum and Nigam, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning for Dependency Parsing",
"sec_num": "3"
},
{
"text": "(1) Train a new parser on the current L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning for Dependency Parsing",
"sec_num": "3"
},
{
"text": "(2) Parse all sentences in U, and select a set of the most informative tasks U \u2032",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning for Dependency Parsing",
"sec_num": "3"
},
{
"text": "(3) Manually annotate:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning for Dependency Parsing",
"sec_num": "3"
},
{
"text": "U \u2032 \u2192 L \u2032 (4) Expand labeled data: L \u222a L \u2032 \u2192 L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning for Dependency Parsing",
"sec_num": "3"
},
{
"text": "The above steps loop for many iterations until a predefined stopping criterion is met. The key challenge for AL is how to measure the informativeness of structures in concern. Following previous work on AL for dependency parsing, we make a simplifying assumption that if the current model is most uncertain about an output (sub)structure, the structure is most informative in terms of boosting model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning for Dependency Parsing",
"sec_num": "3"
},
{
"text": "Sentence-wise FA selects K most uncertain sentences in Step (2), and annotates their whole tree structures in Step (3). In the following, we describe several uncertainty metrics and investigate their practical effects through experiments. Given an unlabeled sentence x = w 1 ...w n , we use d * to denote the 1-best parse tree produced by the current model as in Eq. (1). For brevity, we omit the feature weight vector w in the equations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-wise FA",
"sec_num": "3.1"
},
{
"text": "Normalized tree score. Following previous works that use scores of local classifiers for uncertainty measurement (Sassano and Kurohashi, 2010; Flannery and Mori, 2015) , we use Score(x, d * ) to measure the uncertainty of x, assuming that the model is more uncertain about x if d * gets a smaller score. However, we find that directly using Score(x, d * ) always selects very short sentences due to the definition in Eq. (1). Thus we normalize the score with the sentence length n as follows. 3 Conf i(x) =",
"cite_spans": [
{
"start": 113,
"end": 142,
"text": "(Sassano and Kurohashi, 2010;",
"ref_id": "BIBREF36"
},
{
"start": 143,
"end": 167,
"text": "Flannery and Mori, 2015)",
"ref_id": "BIBREF6"
},
{
"start": 493,
"end": 494,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-wise FA",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Score(x, d * ) n 1.5",
"eq_num": "(5)"
}
],
"section": "Sentence-wise FA",
"sec_num": "3.1"
},
{
"text": "Normalized tree probability. The CRF-based parser allows us, for the first time in AL for dependency parsing, to directly use tree probabilities for uncertainty measurement. Unlike previous approximate methods based on k-best parses (Mirroshandel and Nasr, 2011), tree probabilities globally consider all parse trees in the search space, and thus are intuitively more consistent and proper for measuring the reliability of a tree. Our initial assumption is that the model is more uncertain about x if d * gets a smaller probability. However, we find that directly using p(d * |x) would select very long sentences because the solution space grows exponentially with sentence length. We find that the normalization strategy below works well. 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-wise FA",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x) = n \u221a p(d * |x)",
"eq_num": "(6)"
}
],
"section": "Conf i(",
"sec_num": null
},
{
"text": "Averaged marginal probability. As discussed in Section 2.2, the marginal probability of a dependency directly reflects its reliability, and thus can be regarded as another global measurement besides tree probabilities.In fact, we find that the effect of sentence length is naturally handled with the following metric. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x) = \u2211 h\u21b7m\u2208d * p(h \u21b7 m|x) n",
"eq_num": "(7)"
}
],
"section": "Conf i(",
"sec_num": null
},
{
"text": "3.2 Single Dependency-wise PA AL with single dependency-wise PA selects M most uncertain words from U in Step (2), and annotates the heads of the selected words in Step (3). After annotation, the newly annotated sentences with partial trees L \u2032 are added into L. Different from the case of sentence-wise FA, L \u2032 are also put back to U, so that new tasks can be further chosen from them. Marcheggiani and Arti\u00e8res (2014) make systematic comparison among a dozen uncertainty metrics for AL with PA for several sequence labeling tasks. We borrow three effective metrics according to their results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "Marginal probability max. Suppose h 0 = arg max h p(h \u21b7 i|x) is the most likely head for i. The intuition is that the lower p(h 0 \u21b7 i) is, the more uncertain the model is on deciding the head of the token i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Conf i(x, i) = p(h 0 \u21b7 i|x)",
"eq_num": "(8)"
}
],
"section": "Conf i(",
"sec_num": null
},
{
"text": "Marginal probability gap. Suppose h 1 = arg max h\u0338 =h 0 p(h \u21b7 i|x) is the second most likely head for i. The intuition is that the smaller the probability gap is, the more uncertain the model is about i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "Conf i(x, i) = p(h 0 \u21b7 i|x) \u2212 p(h 1 \u21b7 i|x) (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "Marginal probability entropy. This metric considers the entropy of all possible heads for i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "The assumption is that the smaller the negative entropy is, the more uncertain the model is about i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Conf i(x, i) = \u2211 h p(h \u21b7 i|x) log p(h \u21b7 i|x)",
"eq_num": "(10)"
}
],
"section": "Conf i(",
"sec_num": null
},
{
"text": "3.3 Batch Dependency-wise PA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "In the framework of single dependency-wise PA, we assume that the selection and annotation of dependencies in the same sentence are strictly independent. In other words, annotators may be asked to annotate the head of one selected word after reading and understanding a whole (sometimes partial) sentence, and may be asked to annotate another selected word in the same sentence in next AL iteration. Obviously, frequently switching sentences incurs great waste of cognitive effort, $ 0 I 1 saw 2 Sarah 3 with 4 a 5 telescope 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "Figure 2: An example parse forest converted from the partial tree in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 77,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "and annotating one dependency can certainly help decide another dependency in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "Inspired by the work of Flannery and Mori (2015), we propose AL with batch dependencywise PA, which is a compromise between sentence-wise FA and single dependency-wise PA. In Step 2, AL with batch dependency-wise PA selects K most uncertain sentences from U, and also determines r% most uncertain words from each sentence at the same time. In Step 3, annotators are asked to label the heads of the selected words in the selected sentences. We propose and experiment with the following three strategies based on experimental results of sentence-wise FA and single dependency-wise PA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "Averaged marginal probability & gap. First, select K sentences from U using averaged marginal probability. Second, select r% words using marginal probability gap for each selected sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "Marginal probability gap. First, for each sentence in U, select r% most uncertain words according to marginal probability gap. Second, select K sentences from U using the averaged marginal probability gap of the selected r% words in a sentence as the uncertainty metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "Averaged marginal probability. This strategy is the same with the above strategy, except it measures the uncertainty of a word i according to the marginal probability of the dependency",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "pointing to i in d * , i.e., p(j \u21b7 i|x), where j \u21b7 i \u2208 d * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conf i(",
"sec_num": null
},
{
"text": "A major challenge for AL with PA is how to learn from partially labeled sentences, as depicted in Figure 1 . Li et al. (2014) show that a probabilistic CRF-based parser can naturally and effectively learn from PA. The basic idea is converting a partial tree into a forest as shown in Figure 2 , and using the forest as the gold-standard reference during training, also known as ambiguous labeling (Riezler et al., 2002; T\u00e4ckstr\u00f6m et al., 2013) .",
"cite_spans": [
{
"start": 109,
"end": 125,
"text": "Li et al. (2014)",
"ref_id": "BIBREF17"
},
{
"start": 397,
"end": 419,
"text": "(Riezler et al., 2002;",
"ref_id": "BIBREF35"
},
{
"start": 420,
"end": 443,
"text": "T\u00e4ckstr\u00f6m et al., 2013)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 98,
"end": 106,
"text": "Figure 1",
"ref_id": null
},
{
"start": 284,
"end": 292,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning from PA",
"sec_num": "3.4"
},
{
"text": "For each remaining word without head, we add all dependencies linking to it as long as the new dependency does not violate the existing dependencies. We denote the resulting forest as Fj, whose probability is naturally the sum of probabilities of each tree d in F.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from PA",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(F|x; w) = \u2211 d\u2208F p(d|x; w) = \u2211 d\u2208F e Score(x,d;w) \u2211 d \u2032 \u2208Y(x) e Score(x,d \u2032 ;w)",
"eq_num": "(11)"
}
],
"section": "Learning from PA",
"sec_num": "3.4"
},
{
"text": "Suppose the partially labeled training data is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from PA",
"sec_num": "3.4"
},
{
"text": "D = {(x i , F i )} N i=1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from PA",
"sec_num": "3.4"
},
{
"text": "Then its log likelihood is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from PA",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(D; w) = \u2211 N i=1 log p(F i |x i ; w)",
"eq_num": "(12)"
}
],
"section": "Learning from PA",
"sec_num": "3.4"
},
{
"text": "T\u00e4ckstr\u00f6m et al. 2013show that the partial derivative of the L(D; w) with regard to w (a.k.a the gradient) in both Equation 3and 12can be efficiently solved with the classic Inside-Outside algorithm. 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from PA",
"sec_num": "3.4"
},
{
"text": "We use Chinese Penn Treebank 5.1 (CTB ) for Chinese and Penn Treebank (PTB ) for English. For both datasets, we follow the standard data split, and convert original bracketed structures into dependency structures using Penn2Malt with its default head-finding rules. To be more realistic, we use automatic part-of-speech (POS) tags produced by a state-of-the-art CRF-based tagger (94.1% on CTB -test, and 97.2% on PTB -test, nfold jackknifing on training data), since POS tags encode much syntactic annotation. Because AL experiments need to train many parsing models, we throw out all training sentences longer than 50 to speed up our experiments. Table 1 shows the data statistics.",
"cite_spans": [],
"ref_spans": [
{
"start": 648,
"end": 655,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Simulation Experiments",
"sec_num": "4"
},
{
"text": "Following previous practice on AL with PA (Sassano and Kurohashi, 2010; Flannery and Mori, 2015), we adopt the following AL settings for both Chinese and English . The first 500 training sentences are used as the seed labeled data L. In the case of FA, K = 500 new sentences are selected and annotated at each iteration. In the case of single dependency-wise PA, we select and annotate M = 10, 000 dependencies, which roughly correspond to 500 sentences considering that the averaged sentence length is about 22.3 in CTB -train and 23.2 in PTB -train. In the case of batch dependency-wise PA, we set K = 500, and r = 20% for Chinese and r = 10% for English, considering that the parser trained on all data achieves about 80% and 90% accuracies. We measure parsing performance using the standard unlabeled attachment score (UAS) including punctuation marks. Please note that we always treat punctuation marks as ordinary words when selecting annotation tasks and calculating UAS, in order to make fair comparison between FA and PA. 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation Experiments",
"sec_num": "4"
},
{
"text": "4.1 FA vs. Single Dependency-wise PA First, we make comparison on the performance of AL with FA and with single dependency-wise PA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation Experiments",
"sec_num": "4"
},
{
"text": "Results on Chinese are shown in Figure 3 . Following previous work, we use the number of annotated dependencies (x-axis) as the annotation cost in order to fairly compare FA and PA. We use FA with random selection as a baseline. We also draw the accuracy of the CRF-based parser trained on all training data, which can be regarded as the upper bound.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Simulation Experiments",
"sec_num": "4"
},
{
"text": "For FA, the curve of the normalized tree score intertwines with that of random selection. Meanwhile, the performance of normalized tree probability is very close to that of averaged marginal probability, and both are clearly superior to the baseline with random selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation Experiments",
"sec_num": "4"
},
{
"text": "For PA, the difference among the three uncertainty metrics is small. The marginal probability gap clearly outperforms the other two metrics before 50, 000 annotated dependencies, and remains Parser trained on all data PA (single): marginal probability gap PA (single): marginal probability max PA (single): marginal probability entropy FA: averaged marginal probability FA: normalized tree probability FA: normalized tree score FA: random selection Parser trained on all data PA (single): marginal probability gap PA (single): marginal probability max PA (single): marginal probability entropy FA: averaged marginal probability FA: normalized tree probability FA: normalized tree score FA: random selection Figure 4 : FA vs. PA on PTB -dev.",
"cite_spans": [],
"ref_spans": [
{
"start": 707,
"end": 715,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Simulation Experiments",
"sec_num": "4"
},
{
"text": "very competitive at all other points. The marginal probability max achieves best peak UAS, and even outperforms the parser trained on all data, which can be explained by small disturbance during complex model training. The marginal probability entropy, although being the most complex metric among the three, seems inferior all the time. It is clear that using PA can greatly reduce annotation effort compared with using FA in terms of annotated dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation Experiments",
"sec_num": "4"
},
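The three PA uncertainty metrics compared in these curves operate on a word's marginal head distribution. A minimal sketch (function names are ours; the toy distributions stand in for parser marginals):

```python
import math

# Word-level uncertainty from head marginal probabilities p(h -> m | x).
def gap(p):      # top-1 minus top-2 marginal; a smaller gap = more uncertain
    a, b = sorted(p, reverse=True)[:2]
    return a - b

def maxp(p):     # top-1 marginal; a smaller max = more uncertain
    return max(p)

def entropy(p):  # entropy of the head distribution; larger = more uncertain
    return -sum(q * math.log(q) for q in p if q > 0)
```

For a word whose candidate heads have marginals [0.5, 0.3, 0.2], the gap is 0.2 and the max is 0.5; a uniform distribution maximizes the entropy.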
{
"text": "Results on English are shown in Figure 4 . The overall findings are similar to those in Figure 3 , except that the distinction among different methods is more clear. For FA, normalized tree score is consistently better than the random baseline. Normalized tree probability always outperforms normalized tree score. Averaged marginal probability performs best, except being slightly inferior to normalized tree probability in earlier stages.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Figure 4",
"ref_id": null
},
{
"start": 88,
"end": 96,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Simulation Experiments",
"sec_num": "4"
},
{
"text": "For PA, it is consistent that marginal probability gap is better than marginal probability max, and marginal probability entropy is the worst.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation Experiments",
"sec_num": "4"
},
{
"text": "In summary, based on the results on the de- velopment data in Figure 3 and 4, the best AL method with PA only needs about 80,000 318,408 = 25% annotated dependencies on Chinese, and about 90,000 908,154 = 10% on English, to reach the same performance with parsers trained on all data. Moreover, the PA methods converges much faster than the FA ones, since for the same x-axis number, much more sentences (with partial trees) are used as training data for AL with PA than FA.",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 70,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Simulation Experiments",
"sec_num": "4"
},
{
"text": "Then we make comparison on AL with single dependency-wise PA and with the more practical batch dependency-wise PA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single vs. Batch Dependency-wise PA",
"sec_num": "4.2"
},
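Batch dependency-wise selection, as configured in the settings above (K sentences, the r% most uncertain words per sentence), can be sketched as follows. This is an illustrative reimplementation, not the authors' code, and it uses the marginal probability gap as the uncertainty metric:

```python
import math

def select_batch(sentences, marginals, r):
    """Pick, for each selected sentence, the ceil(r * len) most uncertain
    words by the marginal probability gap (top-1 minus top-2 head marginal;
    a smaller gap means more uncertainty). marginals[i][j] is the head
    distribution of word j in sentence i. Returns word indices per sentence."""
    tasks = []
    for i, sent in enumerate(sentences):
        gaps = []
        for j in range(len(sent)):
            top = sorted(marginals[i][j], reverse=True)
            gaps.append((top[0] - top[1], j))
        k = math.ceil(r * len(sent))
        tasks.append(sorted(j for _, j in sorted(gaps)[:k]))
    return tasks
```

With r = 0.34 and a 3-word sentence, the two words with the smallest gaps are selected for annotation.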
{
"text": "Results on Chinese are shown in Figure 5 . We can see that the three strategies achieve very similar performance and are also very close to single dependency-wise PA. AL with batch dependencywise PA even achieves higher accuracy before 20, 000 annotated dependencies, which should be caused by the smaller active learning steps (about 2, 000 dependencies at each iteration, contrasting 10, 000 for single dependency-wise PA). When the training data runs out at about 7, 300 dependencies, AL with batch dependency-wise PA only lags behind with single dependency-wise PA by about 0.3%, which we suppose can be reduced if larger training data is available.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Single vs. Batch Dependency-wise PA",
"sec_num": "4.2"
},
{
"text": "Results on English are shown in Figure 6 , and are very similar to those on Chinese. One tiny difference is that the marginal probability gap is slightly worse that the other two metrics. The three uncertainty metrics have very similar accuracy curves, which are also very close to the curve of single dependency-wise PA. In addition, we also try r = 20% and find that results are inferior to r = 10%, indicating that the extra 10% annotation tasks are less valuable and contributive. Table 2 shows the results on test data. We compare our CRF-based parser with ZPar v6.0 8 , a state-ofthe-art transition-based dependency parser (Zhang and Nivre, 2011) . We train ZPar with default parameter settings for 50 iterations, and choose the model that performs best on dev data. We can see that when trained on all data, our CRFbased parser outperforms ZPar on both Chinese and English.",
"cite_spans": [
{
"start": 629,
"end": 652,
"text": "(Zhang and Nivre, 2011)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Figure 6",
"ref_id": "FIGREF4"
},
{
"start": 485,
"end": 492,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Single vs. Batch Dependency-wise PA",
"sec_num": "4.2"
},
{
"text": "To compare FA and PA, we report the number of annotated dependencies needed under each AL strategy to achieve an accuracy lower by about 1% than the parser trained on all data. 9 FA (best) refers to FA with averaged marginal probability, and it needs 187, 051 187,123 = 20.3% less annotated dependencies than FA with random selection on Chinese, and 395,199\u2212197,907 395,199 = 50.0% less on English.",
"cite_spans": [
{
"start": 251,
"end": 255,
"text": "187,",
"ref_id": null
},
{
"start": 256,
"end": 259,
"text": "051",
"ref_id": null
},
{
"start": 350,
"end": 373,
"text": "395,199\u2212197,907 395,199",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results on Test Data",
"sec_num": "4.3"
},
{
"text": "PA (single) with marginal probability gap needs 149, 958 149,051 = 65.8% less annotated dependencies than FA (best) on Chinese, and 197, 448 197, 907 = 69.0% less on English. PA (batch) with marginal probability gap needs slightly more annotation than PA (single) on Chinese but slightly less annotation on English, and can reduce the amount of annotated dependencies by 149, [51] [52] [53] [54] [55] [56] 389 149,051 = 62.2% over FA (best) on Chi-8 http://people.sutd.edu.sg/\u02dcyue_zhang/doc/ 9 The gap 1% is chosen based on the curves on development data (Figure 3 and 4) with the following two considerations: 1) larger gap may lead to wrong impression that AL is weak; 2) smaller gap (e.g., 0.5%) cannot be reached for the worst AL method (FA: random). 197,907\u221251,016 197,907 = 74.2% on English.",
"cite_spans": [
{
"start": 48,
"end": 52,
"text": "149,",
"ref_id": null
},
{
"start": 53,
"end": 56,
"text": "958",
"ref_id": null
},
{
"start": 119,
"end": 127,
"text": "Chinese,",
"ref_id": null
},
{
"start": 128,
"end": 136,
"text": "and 197,",
"ref_id": null
},
{
"start": 137,
"end": 145,
"text": "448 197,",
"ref_id": null
},
{
"start": 146,
"end": 149,
"text": "907",
"ref_id": null
},
{
"start": 371,
"end": 375,
"text": "149,",
"ref_id": null
},
{
"start": 376,
"end": 380,
"text": "[51]",
"ref_id": null
},
{
"start": 381,
"end": 385,
"text": "[52]",
"ref_id": null
},
{
"start": 386,
"end": 390,
"text": "[53]",
"ref_id": null
},
{
"start": 391,
"end": 395,
"text": "[54]",
"ref_id": null
},
{
"start": 396,
"end": 400,
"text": "[55]",
"ref_id": null
},
{
"start": 401,
"end": 405,
"text": "[56]",
"ref_id": null
},
{
"start": 406,
"end": 409,
"text": "389",
"ref_id": null
},
{
"start": 755,
"end": 777,
"text": "197,907\u221251,016 197,907",
"ref_id": null
}
],
"ref_spans": [
{
"start": 555,
"end": 571,
"text": "(Figure 3 and 4)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Main Results on Test Data",
"sec_num": "4.3"
},
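The reduction percentages reported in this subsection follow directly from the dependency counts given in the text; a quick arithmetic check (counts taken from the text):

```python
def saving(baseline, method):
    """Relative reduction in annotated dependencies."""
    return (baseline - method) / baseline

# counts of annotated dependencies reported in the text
fa_random_en, fa_best_en = 395_199, 197_907
pa_batch_en = 51_016
fa_best_zh, pa_batch_zh = 149_051, 56_389

print(f"FA best vs. FA random (En): {saving(fa_random_en, fa_best_en):.1%}")
print(f"PA batch vs. FA best  (En): {saving(fa_best_en, pa_batch_en):.1%}")
print(f"PA batch vs. FA best  (Zh): {saving(fa_best_zh, pa_batch_zh):.1%}")
```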
{
"text": "So far, we measure annotation effort in terms of the number of annotated dependencies and assume that it takes the same amount of time to annotate different words, which is obviously unrealistic. To understand whether active learning based on PA can really reduce annotation time over based on FA in practice, we build a web browser based annotation system, 10 and conduct human annotation experiments on Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Annotation Experiments",
"sec_num": "5"
},
{
"text": "In this part, we use CTB 7.0 which is a newer and larger version and covers more genres, and adopt the newly proposed Stanford dependencies (de Marneffe and Manning, 2008; Chang et al., 2009) which are more understandable for annotators. 11 Since manual syntactic annotation is very difficult and time-consuming, we only keep sentences with length [10, 20] in order to better measure annotation time by focusing on sentences of reasonable length, which leave us 12, 912 training sentences under the official data split. Then, we use a random half of training sentences to train a CRF-based parser, and select 20% most uncertain words with marginal probability gap for each sentence of the left half.",
"cite_spans": [
{
"start": 140,
"end": 171,
"text": "(de Marneffe and Manning, 2008;",
"ref_id": "BIBREF3"
},
{
"start": 172,
"end": 191,
"text": "Chang et al., 2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Annotation Experiments",
"sec_num": "5"
},
{
"text": "We employ 6 postgraduate students as our annotators who are at different levels of familiarity in syntactic annotation. Before annotation, the annotators are trained for about two hours by introducing the basic concepts, guidelines, and illustrating examples. Then, they are asked to practice on the annotation system for about another two hours. Finally, all annotators are required to formally annotate the same 100 sentences. The system is programed that each sentence has 3 FA submissions and 3 PA submissions. During formal annotation, the annotators are not allowed to discuss with each other or look up any guideline or documents, which may incur unnecessary inaccuracy in timing. Instead, the annotators can only decide the syntactic structures based on the basic knowledge of dependency grammar and one's understanding of the sentence structure. The annotation process lasts for about 5 hours. On average, each annotator completes 50 sentences with FA (763 dependencies) and 50 sentences with PA (178 dependencies). Table 3 lists the results in descending order of an annotator's experience in syntactic annotation. The first two columns compare the time needed for annotating a dependency in seconds. On average, annotating a dependency in PA takes about twice as much time as in FA, which is reasonable considering the words to be annotated in PA may be more difficult for annotators while the annotation of some tasks in FA may be very trivial and easy. Combined with the results in Table 2 , we may infer that to achieve 77.3% accuracy on CTB -test, AL with FA requires 149, 051 \u00d7 6.7 = 998, 641.7 seconds of annotation, whereas AL with batch dependency-wise PA needs 56, 389 \u00d7 13.6 = 766, 890.4 seconds. Thus, we may roughly say that AL with PA can reduce annotation time over FA by 998,641.7\u2212766,890.4 998,641.7 = 23.2%. We also report annotation accuracy according to the gold-standard Stanford dependencies converted from bracketed structures. 
12 Overall, the accuracy of FA is 70.36 \u2212 59.06 = 11.30% higher 12 An anonymous reviewer commented that the direct comparison between an annotator's performance on PA and FA based on accuracy may be misleading since the FA and PA sentences for one annotator are mutually exclusive. than that of PA, which should be due to the trivial tasks in FA. To be more fair, we compare the accuracies of FA and PA on the same 20% selected difficult words, and find that annotators exhibit different responses to the switch. Annotator #4 achieve 12.58% higher accuracy when under PA than under FA. The reason may be that under PA, annotators can be more focused and therefore perform better on the few selected tasks. In contrast, some annotators may perform better under FA. For example, annotation accuracy of annotator #2 increases by 10.04% when switching from PA to FA, which may be due to that FA allows annotators to spend more time on the same sentence and gain help from annotating easier tasks. Overall, we find that the accuracy of PA is 59.06 \u2212 57.28 = 1.78% higher than that of FA, indicating that PA actually can improve annotation quality.",
"cite_spans": [
{
"start": 1797,
"end": 1816,
"text": "998,641.7\u2212766,890.4",
"ref_id": null
},
{
"start": 2025,
"end": 2027,
"text": "12",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1025,
"end": 1032,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 1495,
"end": 1502,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Human Annotation Experiments",
"sec_num": "5"
},
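The time estimate above is a straightforward product of dependency counts and measured per-dependency annotation times; restated as a check:

```python
# dependency counts and measured seconds per dependency from this section
fa_deps, fa_sec = 149_051, 6.7    # full annotation
pa_deps, pa_sec = 56_389, 13.6    # batch dependency-wise partial annotation

fa_time = fa_deps * fa_sec
pa_time = pa_deps * pa_sec
reduction = (fa_time - pa_time) / fa_time
print(f"FA: {fa_time:.1f}s, PA: {pa_time:.1f}s, reduction: {reduction:.1%}")
```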
{
"text": "Recently, AL with PA attracts much attention in sentence-wise natural language processing such as sequence labeling and parsing. For sequence labeling, Marcheggiani and Arti\u00e8res (2014) systematically compare a dozen uncertainty metrics in token-wise AL with PA (without comparison with FA), whereas Settles and Craven (2008) investigate different uncertainty metrics in AL with FA. Li et al. (2012) propose to only annotate the most uncertain word boundaries in a sentence for Chinese word segmentation and show promising results on both simulation and human annotation experiments. All above works are based on CRFs and make extensive use of sequence probabilities and token marginal probability.",
"cite_spans": [
{
"start": 299,
"end": 324,
"text": "Settles and Craven (2008)",
"ref_id": "BIBREF37"
},
{
"start": 382,
"end": 398,
"text": "Li et al. (2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In parsing community, Sassano and Kurohashi (2010) select bunsetsu (similar to phrases) pairs with smallest scores from a local classifier, and let annotators decide whether the pair composes a dependency. They convert partially annotated instances into local dependency/non-dependency classification instances to help a simple shiftreduce parser. Mirroshandel and Nasr (2011) select most uncertain words based on votes of nbest parsers, and convert partial trees into full trees by letting a baseline parser perform constrained decoding in order to preserve partial annotation. Under a different query-by-committee AL framework, Majidi and Crane (2013) select most uncertain words using a committee of diverse parsers, and convert partial trees into full trees by letting the parsers of committee to decide the heads of remaining tokens. Based on a first-order (pointwise) Japanese parser, Flannery and Mori (2015) use scores of a local classifier for task selection, and treat PA as dependency/non-dependency instances (Flannery et al., 2011) . Different from above works, this work adopts a state-of-the-art probabilistic dependency parser, uses more principled tree probabilities and dependency marginal probabilities for uncertainty measurement, and learns from PA based on a forest-based training objective which is more theoretically sound.",
"cite_spans": [
{
"start": 891,
"end": 915,
"text": "Flannery and Mori (2015)",
"ref_id": "BIBREF6"
},
{
"start": 1021,
"end": 1044,
"text": "(Flannery et al., 2011)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
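The forest-based training objective mentioned here treats a partial annotation as the set of all trees consistent with it, and maximizes the log of their total probability. A brute-force toy illustration (a real parser would compute both sums with Inside-Outside over the packed forest; the edge scores here are made up, and projectivity/single-root constraints are ignored for brevity):

```python
import itertools
import math

n = 3  # words 1..n; index 0 is the artificial root
# made-up log-linear edge scores s(h, m); any numbers work for the illustration
score = {(h, m): 1.0 + 0.1 * h + 0.2 * m
         for h in range(n + 1) for m in range(1, n + 1) if h != m}

def is_tree(heads):
    """heads[m-1] is the head of word m; valid iff every word reaches root 0."""
    for m in range(1, n + 1):
        seen, x = set(), m
        while x != 0 and x not in seen:
            seen.add(x)
            x = heads[x - 1]
        if x != 0:  # hit a cycle instead of the root
            return False
    return True

def weight(heads):
    return math.exp(sum(score[(heads[m - 1], m)] for m in range(1, n + 1)))

trees = [h for h in itertools.product(range(n + 1), repeat=n)
         if all(h[m - 1] != m for m in range(1, n + 1)) and is_tree(h)]
Z = sum(weight(h) for h in trees)  # partition over all trees

# partial annotation: word 2's head is word 1; sum over all consistent trees
Z_constrained = sum(weight(h) for h in trees if h[1] == 1)
log_likelihood = math.log(Z_constrained) - math.log(Z)
```

The gradient of this log-likelihood reduces to a difference of edge marginals under the constrained and unconstrained forests, which is what makes CRF-style training on PA tractable.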
{
"text": "Most previous works on AL with PA only conduct simulation experiments. Flannery and Mori (2015) perform human annotation to measure true annotation time. A single annotator is employed to annotate for two hours alternating FA and PA (33% batch) every fifteen minutes. Beyond their initial expectation, they find that the annotation time per dependency is nearly the same for FA and PA (different from our findings) and gives a few interesting explanations.",
"cite_spans": [
{
"start": 71,
"end": 95,
"text": "Flannery and Mori (2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Under a non-AL framework, Mejer and Crammer (2012) propose an interesting light feedback scheme for dependency parsing by letting annotators decide the better one from top-2 parse trees produced by the current parsing model. Hwa (1999) pioneers the idea of using PA to reduce manual labeling effort for constituent grammar induction. She uses a variant Inside-Outside re-estimation algorithm (Pereira and Schabes, 1992) to induce a grammar from PA. Clark and Curran (2006) propose to train a Combinatorial Categorial Grammar parser using partially labeled data only containing predicate-argument dependencies. Tsuboi et al. (2008) extend CRFbased sequence labeling models to learn from incomplete annotations, which is the same with Marcheggiani and Arti\u00e8res (2014) . Li et al. (2014) propose a CRF-based dependency parser that can learn from partial tree projected from sourcelanguage structures in the cross-lingual parsing scenario. Mielens et al. (2015) propose to impute missing dependencies based on Gibbs sampling in order to enable traditional parsers to learn from partial trees.",
"cite_spans": [
{
"start": 26,
"end": 50,
"text": "Mejer and Crammer (2012)",
"ref_id": "BIBREF28"
},
{
"start": 225,
"end": 235,
"text": "Hwa (1999)",
"ref_id": "BIBREF10"
},
{
"start": 392,
"end": 419,
"text": "(Pereira and Schabes, 1992)",
"ref_id": "BIBREF32"
},
{
"start": 449,
"end": 472,
"text": "Clark and Curran (2006)",
"ref_id": "BIBREF2"
},
{
"start": 610,
"end": 630,
"text": "Tsuboi et al. (2008)",
"ref_id": "BIBREF41"
},
{
"start": 733,
"end": 765,
"text": "Marcheggiani and Arti\u00e8res (2014)",
"ref_id": "BIBREF22"
},
{
"start": 768,
"end": 784,
"text": "Li et al. (2014)",
"ref_id": "BIBREF17"
},
{
"start": 936,
"end": 957,
"text": "Mielens et al. (2015)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "This paper for the first time applies a state-ofthe-art probabilistic model to AL with PA for dependency parsing. It is shown that the CRF-based parser can on the one hand provide tree probabilities and dependency marginal probabilities as principled uncertainty metrics and on the other hand elegantly learn from partially annotated data. We have proposed and compared several uncertainty metrics through simulation experiments, and show that AL with PA can greatly reduce the amount of annotated dependencies by 62.2% on Chinese 74.2% on English. Finally, we conduct human annotation experiments on Chinese to compare PA and FA on real annotation time and quality. We find that annotating a dependency in PA takes about 2 times long as in FA. This suggests that AL with PA can reduce annotation time by 23.2% over with FA on Chinese. Moreover, the results also indicate that annotators tend to perform better under PA than FA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "For future work, we would like to advance this study in the following directions. The first idea is to combine uncertainty and representativeness for measuring informativeness of annotation targets in concern. Intuitively, it would be more profitable to annotate instances that are both difficult for the current model and representative in capturing common language phenomena. Second, we so far assume that the selected tasks are equally difficult and take the same amount of effort for human annotators. However, it is more reasonable that human are good at resolving some ambiguities but bad at others. Our plan is to study which syntactic structures are more suitable for human annotation, and balance informativeness of a candidate task and its suitability for human annotation. Finally, one anonymous reviewer comments that we may use automatically projected trees (Rasooli and Collins, 2015; Guo et al., 2015; Ma and Xia, 2014) as the initial seed labeled data, which is cheap and interesting.",
"cite_spans": [
{
"start": 871,
"end": 898,
"text": "(Rasooli and Collins, 2015;",
"ref_id": "BIBREF34"
},
{
"start": 899,
"end": 916,
"text": "Guo et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 917,
"end": 934,
"text": "Ma and Xia, 2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "In this work, we follow many previous works to focus on unlabeled dependency parsing (constructing the skeleton dependency structure). However, the proposed techniques",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We have also tried replacing n 1.5 with n (still prefer short sentences) and n 2 (bias to long sentences).4 We have also tried p(d * |x) \u00d7 f (n), where f (n) = log n or f (n) = \u221a n, but both work badly.5 We have also tried n \u221a \u220f h\u21b7m\u2208d * p(h \u21b7 m|x), leading to slightly inferior results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This work focuses on projective dependency parsing. Please refer toKoo et al. (2007),McDonald and Satta (2007), andSmith and Smith (2007) for building a probabilistic nonprojective parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Alternatively, we can exclude punctuation marks for task selection in AL with PA. Then, to be fair, we have to discard all dependencies pointing to punctuation marks in the case of FA. This makes the experiment setting more complicated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://hlt-service.suda.edu.cn/ syn-dep-batch. Please try.11 We use Stanford Parser 3.4 (2014-06-16) for constituentto-dependency structure conversion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank the anonymous reviewers for the helpful comments. We also thank Junhui Li and Chunyu Kit for reading our paper and giving many good suggestions. Particularly, Zhenghua is very grateful to many of his students: Fangli Lu, Qiuyi Yan, and Yue Zhang build the annotation system; Jiayuan Chao, Wei Chen, Ziwei Fan, Die Hu, Qingrong Xia, and Yue Zhang participate in data annotation. This work was supported by National Natural Science Foundation of China (Grant No. 61502325, 61525205, 61572338).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Top accuracy and fast dependency parsing is not a contradiction",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "89--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of COLING, pages 89-97.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Discriminative reordering with Chinese grammatical relations features",
"authors": [
{
"first": "Pi-Chuan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Huihsin",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation (SSST-3) at NAACL HLT 2009",
"volume": "",
"issue": "",
"pages": "51--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pi-Chuan Chang, Huihsin Tseng, Dan Jurafsky, and Christopher D. Manning. 2009. Discriminative reordering with Chinese grammatical relations features. In Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation (SSST-3) at NAACL HLT 2009, pages 51-59.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Partial training for a lexicalized-grammar parser",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Curran",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL",
"volume": "",
"issue": "",
"pages": "144--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and James Curran. 2006. Partial training for a lexicalized-grammar parser. In Proceedings of the Human Language Technology Conference of the NAACL, pages 144-151.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Stanford typed dependencies representation",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Coling 2008: Proceedings of the workshop on Cross-Framework and Cross-Domain Parser Evaluation",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe and Christopher D. Manning. 2008. The Stanford typed dependencies representation. In Coling 2008: Proceedings of the workshop on Cross-Framework and Cross-Domain Parser Evaluation, pages 1-8.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Frustratingly hard domain adaptation for dependency parsing",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Partha",
"middle": [
"Pratim"
],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Graca",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Dredze, John Blitzer, Partha Pratim Talukdar, Kuzman Ganchev, Jo\u00e3o Graca, and Fernando Pereira. 2007. Frustratingly hard domain adaptation for dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bilexical grammars and their cubic-time parsing algorithms",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2000,
"venue": "Advances in Probabilistic and Other Parsing Technologies",
"volume": "",
"issue": "",
"pages": "29--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 2000. Bilexical grammars and their cubic-time parsing algorithms. In Advances in Probabilistic and Other Parsing Technologies, pages 29-62.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Combining active learning and partial annotation for domain adaptation of a japanese dependency parser",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Flannery",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 14th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "11--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Flannery and Shinsuke Mori. 2015. Combining active learning and partial annotation for domain adaptation of a japanese dependency parser. In Proceedings of the 14th International Conference on Parsing Technologies, pages 11-19.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Training dependency parsers from partially annotated corpora",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Flannery",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miayo",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of IJCNLP",
"volume": "",
"issue": "",
"pages": "776--784",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Flannery, Yusuke Miayo, Graham Neubig, and Shinsuke Mori. 2011. Training dependency parsers from partially annotated corpora. In Proceedings of IJCNLP, pages 776-784.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "From news to comment: Resources and benchmarks for parsing the language of web 2.0",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Cetinoglu",
"suffix": ""
},
{
"first": "Joachim",
"middle": [],
"last": "Wagner",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"Le"
],
"last": "Roux",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Deirdre",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of IJCNLP",
"volume": "",
"issue": "",
"pages": "893--901",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer Foster, Ozlem Cetinoglu, Joachim Wagner, Joseph Le Roux, Joakim Nivre, Deirdre Hogan, and Josef van Genabith. 2011. From news to comment: Resources and benchmarks for parsing the language of web 2.0. In Proceedings of IJCNLP, pages 893- 901.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Cross-lingual dependency parsing based on distributed representations",
"authors": [
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1234--1244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual depen- dency parsing based on distributed representations. In Proceedings of ACL, pages 1234-1244.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Supervised grammar induction using training data with limited constituent information",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "73--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Hwa. 1999. Supervised grammar induction using training data with limited constituent informa- tion. In Proceedings of ACL, pages 73-79.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Sample selection for statistical parsing",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
}
],
"year": 2004,
"venue": "Computional Linguistics",
"volume": "30",
"issue": "3",
"pages": "253--276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Hwa. 2004. Sample selection for statistical parsing. Computional Linguistics, 30(3):253-276.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A dependency parser for tweets",
"authors": [
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Archna",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1001--1012",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lingpeng Kong, Nathan Schneider, Swabha Swayamdipta, Archna Bhatia, Chris Dyer, and Noah A. Smith. 2014. A dependency parser for tweets. In Proceedings of EMNLP, pages 1001-1012.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Efficient thirdorder dependency parsers",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo and Michael Collins. 2010. Efficient third-order dependency parsers. In Proceedings of ACL, pages 1-11.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Structured prediction models via the matrix-tree theorem",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "141--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured prediction models via the matrix-tree theorem. In Proceedings of EMNLP-CoNLL, pages 141-150.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A sequential algorithm for training text classifiers",
"authors": [
{
"first": "David",
"middle": [
"D"
],
"last": "Lewis",
"suffix": ""
},
{
"first": "William",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "3--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 3-12.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Active learning for Chinese word segmentation",
"authors": [
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING 2012: Posters",
"volume": "",
"issue": "",
"pages": "683--692",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shoushan Li, Guodong Zhou, and Chu-Ren Huang. 2012. Active learning for Chinese word segmentation. In Proceedings of COLING 2012: Posters, pages 683-692.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Soft cross-lingual syntax projection for dependency parsing",
"authors": [
{
"first": "Zhenghua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "783--793",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenghua Li, Min Zhang, and Wenliang Chen. 2014. Soft cross-lingual syntax projection for dependency parsing. In COLING, pages 783-793.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Active learning and the Irish treebank",
"authors": [
{
"first": "Teresa",
"middle": [],
"last": "Lynn",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
},
{
"first": "Elaine",
"middle": [],
"last": "U\u00ed Dhonnchadha",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ALTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Teresa Lynn, Jennifer Foster, Mark Dras, and Elaine U\u00ed Dhonnchadha. 2012. Active learning and the Irish treebank. In Proceedings of ALTA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1337--1348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Fei Xia. 2014. Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization. In Proceedings of ACL, pages 1337-1348.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Probabilistic models for high-order projective dependency parsing",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "arXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Hai Zhao. 2015. Probabilistic models for high-order projective dependency parsing. arXiv, abs/1502.04174.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Active learning for dependency parsing by a committee of parsers",
"authors": [
{
"first": "Saeed",
"middle": [],
"last": "Majidi",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Crane",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of IWPT",
"volume": "",
"issue": "",
"pages": "98--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saeed Majidi and Gregory Crane. 2013. Active learning for dependency parsing by a committee of parsers. In Proceedings of IWPT, pages 98-105.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "An experimental comparison of active learning strategies for partially labeled sequences",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Marcheggiani",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Arti\u00e8res",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "898--906",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego Marcheggiani and Thierry Arti\u00e8res. 2014. An experimental comparison of active learning strategies for partially labeled sequences. In Proceedings of EMNLP, pages 898-906.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Employing EM and pool-based active learning for text classification",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Kamal",
"middle": [],
"last": "Nigam",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "350--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew McCallum and Kamal Nigam. 1998. Employing EM and pool-based active learning for text classification. In Proceedings of ICML, pages 350-358.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Online learning of approximate dependency parsing algorithms",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL, pages 81-88.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "On the complexity of non-projective data-driven dependency parsing",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Tenth International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "121--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald and Giorgio Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In Proceedings of the Tenth International Conference on Parsing Technologies, pages 121-132.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Training dependency parser using light feedback",
"authors": [
{
"first": "Avihai",
"middle": [],
"last": "Mejer",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avihai Mejer and Koby Crammer. 2012. Training dependency parser using light feedback. In Proceedings of NAACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Parse imputation for dependency annotations",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Mielens",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "1385--1394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Mielens, Liang Sun, and Jason Baldridge. 2015. Parse imputation for dependency annotations. In Proceedings of ACL-IJCNLP, pages 1385-1394.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Active learning for dependency parsing using partially annotated sentences",
"authors": [
{
"first": "Seyed",
"middle": [
"Abolghasem"
],
"last": "Mirroshandel",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Nasr",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 12th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "140--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seyed Abolghasem Mirroshandel and Alexis Nasr. 2011. Active learning for dependency parsing using partially annotated sentences. In Proceedings of the 12th International Conference on Parsing Technologies, pages 140-149.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A literature survey of active machine learning in the context of natural language processing",
"authors": [
{
"first": "Fredrik",
"middle": [],
"last": "Olsson",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fredrik Olsson. 2009. A literature survey of active machine learning in the context of natural language processing. Technical report, Swedish Institute of Computer Science.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Inside-outside reestimation from partially bracketed corpora",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Schabes",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Workshop on Speech and Natural Language (HLT)",
"volume": "",
"issue": "",
"pages": "122--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Pereira and Yves Schabes. 1992. Inside-outside reestimation from partially bracketed corpora. In Proceedings of the Workshop on Speech and Natural Language (HLT), pages 122-127.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Overview of the 2012 shared task on parsing the web",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2012,
"venue": "Notes of the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. In Notes of the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL).",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Density-driven cross-lingual transfer of dependency parsers",
"authors": [
{
"first": "Mohammad",
"middle": [
"Sadegh"
],
"last": "Rasooli",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "328--338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Sadegh Rasooli and Michael Collins. 2015. Density-driven cross-lingual transfer of dependency parsers. In Proceedings of EMNLP, pages 328-338.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Parsing the Wall Street Journal using a lexical-functional grammar and discriminative estimation techniques",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "Tracy",
"middle": [
"H"
],
"last": "King",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"M"
],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Crouch",
"suffix": ""
},
{
"first": "John",
"middle": [
"T"
],
"last": "Maxwell",
"suffix": "III"
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "271--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell III, and Mark Johnson. 2002. Parsing the Wall Street Journal using a lexical-functional grammar and discriminative estimation techniques. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 271-278.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Using smaller constituents rather than sentences in active learning for Japanese dependency parsing",
"authors": [
{
"first": "Manabu",
"middle": [],
"last": "Sassano",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "356--365",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manabu Sassano and Sadao Kurohashi. 2010. Using smaller constituents rather than sentences in active learning for Japanese dependency parsing. In Proceedings of ACL, pages 356-365.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "An analysis of active learning strategies for sequence labeling tasks",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Craven",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1070--1079",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles and Mark Craven. 2008. An analysis of active learning strategies for sequence labeling tasks. In Proceedings of EMNLP, pages 1070-1079.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Probabilistic models of nonprojective dependency trees",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "132--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A. Smith and Noah A. Smith. 2007. Probabilistic models of nonprojective dependency trees. In Proceedings of EMNLP-CoNLL, pages 132-140.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Target language adaptation of discriminative transfer parsers",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "1061--1071",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. In Proceedings of NAACL, pages 1061-1071.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Active learning for statistical natural language parsing",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "120--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min Tang, Xiaoqiang Luo, and Salim Roukos. 2002. Active learning for statistical natural language parsing. In Proceedings of ACL, pages 120-127.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Training conditional random fields using incomplete annotations",
"authors": [
{
"first": "Yuta",
"middle": [],
"last": "Tsuboi",
"suffix": ""
},
{
"first": "Hisashi",
"middle": [],
"last": "Kashima",
"suffix": ""
},
{
"first": "Hiroki",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "897--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuta Tsuboi, Hisashi Kashima, Hiroki Oda, Shinsuke Mori, and Yuji Matsumoto. 2008. Training conditional random fields using incomplete annotations. In Proceedings of COLING, pages 897-904.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Dependency parsing for Weibo: An efficient probabilistic logic programming approach",
"authors": [
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Kathryn",
"middle": [],
"last": "Mazaitis",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1152--1158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Yang Wang, Lingpeng Kong, Kathryn Mazaitis, and William W. Cohen. 2014. Dependency parsing for Weibo: An efficient probabilistic logic programming approach. In Proceedings of EMNLP, pages 1152-1158.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Transition-based dependency parsing with rich non-local features",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "188--193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of ACL, pages 188-193.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"num": null,
"text": "FA vs. PA on CTB-dev.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "all data PA (single): marginal probability gap PA (batch 20%): marginal probability gap PA (batch 20%): averaged marginal probability PA (batch 20%): averaged marginal probability & gap",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "Single vs. batch dependency-wise PA on CTB-dev.",
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"num": null,
"text": "Single vs. batch dependency-wise PA on PTB-dev.",
"type_str": "figure"
},
"TABREF1": {
"content": "<table/>",
"html": null,
"num": null,
"text": "Data statistics.",
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"html": null,
"num": null,
"text": "Results on test data.",
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"html": null,
"num": null,
"text": "Statistics of human annotation.",
"type_str": "table"
}
}
}
}