{
"paper_id": "Q16-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:06:34.688429Z"
},
"title": "The Galactic Dependencies Treebanks: Getting More Data by Synthesizing New Languages",
"authors": [
{
"first": "Dingquan",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": "eisner@jhu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We release Galactic Dependencies 1.0-a large set of synthetic languages not found on Earth, but annotated in Universal Dependencies format. This new resource aims to provide training and development data for NLP 491",
"pdf_parse": {
"paper_id": "Q16-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "We release Galactic Dependencies 1.0-a large set of synthetic languages not found on Earth, but annotated in Universal Dependencies format. This new resource aims to provide training and development data for NLP 491",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "methods that aim to adapt to unfamiliar languages. Each synthetic treebank is produced from a real treebank by stochastically permuting the dependents of nouns and/or verbs to match the word order of other real languages. We discuss the usefulness, realism, parsability, perplexity, and diversity of the synthetic languages. As a simple demonstration of the use of Galactic Dependencies, we consider single-source transfer, which attempts to parse a real target language using a parser trained on a \"nearby\" source language. We find that including synthetic source languages somewhat increases the diversity of the source pool, which significantly improves results for most target languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Some potential NLP tasks have very sparse data by machine learning standards, as each of the IID training examples is an entire language. For instance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "\u2022 typological classification of a language on various dimensions; \u2022 adaptation of any existing NLP system to new, low-resource languages; \u2022 induction of a syntactic grammar from text;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "\u2022 discovery of a morphological lexicon from text;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "\u2022 other types of unsupervised discovery of linguistic structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "Given a corpus or other data about a language, we might aim to predict whether it is an SVO language, or to learn to pick out its noun phrases. For such problems, a single training or test example corresponds to an entire human language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "Unfortunately, we usually have only from 1 to 40 languages to work with. In contrast, machine learning methods thrive on data, and recent AI successes have mainly been on tasks where one can train richly parameterized predictors on a huge set of IID (input, output) examples. Even 7,000 training examplesone for each language or dialect on Earth-would be a small dataset by contemporary standards.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "As a result, it is challenging to develop systems that will discover structure in new languages in the same way that an image segmentation method, for example, will discover structure in new images. The limited resources even make it challenging to develop methods that handle new languages by unsupervised, semi-supervised, or transfer learning. Some such projects evaluate their methods on new sentences of the same languages that were used to develop the methods in the first place-which leaves one worried that the methods may be inadvertently tuned to the development languages and may not be able to discover correct structure in other languages. Other projects take care to hold out languages for evaluation Cotterell et al., 2015) , but then are left with only a few development languages on which to experiment with different unsupervised methods and their hyperparameters.",
"cite_spans": [
{
"start": 715,
"end": 738,
"text": "Cotterell et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "If we had many languages, then we could develop better unsupervised language learners. Even better, we could treat linguistic structure discovery as a supervised learning problem. That is, we could train a system to extract features from the surface of a language that are predictive of its deeper structure. Principles & Parameters theory (Chomsky, 1981) conjectures that such features exist and that the juvenile human brain is adapted to extract them.",
"cite_spans": [
{
"start": 340,
"end": 355,
"text": "(Chomsky, 1981)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "Our goal in this paper is to release a set of about 50,000 high-resource languages that could be used to train supervised learners, or to evaluate lesssupervised learners during development. These \"unearthly\" languages are intended to be at least sim-ilar to possible human languages. As such, they provide useful additional training and development data that is slightly out of domain (reducing the variance of a system's learned parameters at the cost of introducing some bias). The initial release as described in this paper (version 1.0) is available at https://github.com/gdtreebank/ gdtreebank. We plan to augment this dataset in future work ( \u00a78).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "In addition to releasing thousands of treebanks, we provide scripts that can be used to \"translate\" other annotated resources into these synthetic languages. E.g., given a corpus of English sentences labeled with sentiment, a researcher could reorder the words in each English sentence according to one of our English-based synthetic languages, thereby obtaining labeled sentences in the synthetic language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "Synthetic data generation is a well-known trick for effectively training a large model on a small dataset. Abu-Mostafa (1995) reviews early work that provided \"hints\" to a learning system in the form of virtual training examples. While datasets have grown in recent years, so have models: e.g., neural networks have many parameters to train. Thus, it is still common to create synthetic training examples-often by adding noise to real inputs or otherwise transforming them in ways that are expected to preserve their labels. Domains where it is easy to exploit these invariances include image recognition (Simard et al., 2003; Krizhevsky et al., 2012) , speech recognition (Jaitly and Hinton, 2013; Cui et al., 2015) , information retrieval (Vilares et al., 2011) , and grammatical error correction (Rozovskaya and Roth, 2010).",
"cite_spans": [
{
"start": 605,
"end": 626,
"text": "(Simard et al., 2003;",
"ref_id": "BIBREF34"
},
{
"start": 627,
"end": 651,
"text": "Krizhevsky et al., 2012)",
"ref_id": "BIBREF18"
},
{
"start": 673,
"end": 698,
"text": "(Jaitly and Hinton, 2013;",
"ref_id": "BIBREF16"
},
{
"start": 699,
"end": 716,
"text": "Cui et al., 2015)",
"ref_id": null
},
{
"start": 741,
"end": 763,
"text": "(Vilares et al., 2011)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Synthetic datasets have also arisen recently for semantic tasks in natural language processing. bAbI is a dataset of facts, questions, and answers, generated by random simulation, for training machines to do simple logic (Weston et al., 2016) . Hermann et al. (2015) generate reading comprehension questions and their answers, based on a large set of newssummarization pairs, for training machine readers. Serban et al. (2016) used RNNs to generate 30 million factoid questions about Freebase, with answers, for training question-answering systems. obtain data to train semantic parsers in a new domain by first generating synthetic (utterance, logical form) pairs and then asking human annotators to paraphrase the synthetic utterances into more natural human language.",
"cite_spans": [
{
"start": 221,
"end": 242,
"text": "(Weston et al., 2016)",
"ref_id": "BIBREF42"
},
{
"start": 406,
"end": 426,
"text": "Serban et al. (2016)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In speech recognition, morphology-based \"vocabulary expansion\" creates synthetic word forms (Rasooli et al., 2014; Varjokallio and Klakow, 2016) .",
"cite_spans": [
{
"start": 92,
"end": 114,
"text": "(Rasooli et al., 2014;",
"ref_id": "BIBREF30"
},
{
"start": 115,
"end": 144,
"text": "Varjokallio and Klakow, 2016)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Machine translation researchers have often tried to automatically preprocess parse trees of a source language to more closely resemble those of the target language, using either hand-crafted or automatically extracted rules (Dorr et al., 2002; Collins et al., 2005, etc. ; see review by Howlett and Dras, 2011) .",
"cite_spans": [
{
"start": 224,
"end": 243,
"text": "(Dorr et al., 2002;",
"ref_id": "BIBREF7"
},
{
"start": 244,
"end": 270,
"text": "Collins et al., 2005, etc.",
"ref_id": null
},
{
"start": 287,
"end": 310,
"text": "Howlett and Dras, 2011)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A treebank is a corpus of parsed sentences of some language. We propose to derive each synthetic treebank from some real treebank. By manipulating the existing parse trees, we obtain a useful corpus for our synthetic language-a corpus that is already tagged, parsed, and partitioned into training/development/test sets. Additional data in the synthetic language can be obtained, if desired, by automatically parsing additional real-language sentences and manipulating these trees in the same way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Language Generation",
"sec_num": "3"
},
{
"text": "We begin with the Universal Dependencies collection version 1.2 (Nivre et al., 2015 (Nivre et al., , 2016 , 1 or UD. This provides manually edge-labeled dependency treebanks in 37 real languages, in a consistent style and format-the Universal Dependencies format. An example appears in Figure 1 .",
"cite_spans": [
{
"start": 64,
"end": 83,
"text": "(Nivre et al., 2015",
"ref_id": null
},
{
"start": 84,
"end": 105,
"text": "(Nivre et al., , 2016",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 286,
"end": 294,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "In this paper, we select a substrate language S represented in the UD treebanks, and systematically reorder the dependents of some nodes in the S trees, to obtain trees of a synthetic language S .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "Specifically, we choose a superstrate language R V , and write S = S[R V /V] to denote a (projective) synthetic language obtained from S by permuting the dependents of verbs (V) to match the ordering statistics of the R V treebanks. We can similarly permute the dependents of nouns (N about 93% of S's nodes (Table 2) , as UD treats adpositions and conjunctions as childless dependents.",
"cite_spans": [],
"ref_spans": [
{
"start": 308,
"end": 317,
"text": "(Table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
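The permutation step described above can be sketched as follows. This is an illustrative stand-in, not the released code: `sample_ordering` is a hypothetical placeholder for the trained log-linear sampler of Section 4, replaced here by a uniformly random permutation.

```python
import random

def reorder_subtree(head, dependents, sample_ordering):
    """Permute a head together with its dependents according to a sampled
    ordering over the n = 1 + len(dependents) positions.  The head itself
    gets a position in the new ordering, just as in the paper."""
    nodes = [head] + dependents           # n nodes: the head plus its dependents
    order = sample_ordering(len(nodes))   # a permutation of range(n)
    return [nodes[i] for i in order]

# Toy usage: reorder an English NP, e.g. toward noun-adjective order.
random.seed(0)
permuted = reorder_subtree("future", ["this", "particular"],
                           lambda n: random.sample(range(n), n))
```

Applying this independently at every N and/or V node of a substrate tree yields one synthetic tree.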
{
"text": "For example, English[French/N, Hindi/V] is a synthetic language based on an English substrate, but which adopts subject-object-verb (SOV) word order from the Hindi superstrate and noun-adjective word order from the French superstrate ( Figure 1 ). Note that it still uses English lexical items.",
"cite_spans": [],
"ref_spans": [
{
"start": 236,
"end": 244,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "Our terms \"substrate\" and \"superstrate\" are borrowed from the terminology of creoles, although our synthetic languages are unlike naturally occurring creoles. Our substitution notation S\u2032 = S[R N /N, R V /V] is borrowed from the logic and programming languages communities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "There may be more adventurous ways to manufacture synthetic languages (see \u00a78 for some options). However, we emphasize that our current method is designed to produce fairly realistic languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.2"
},
{
"text": "First, we retain the immediate dominance structure and lexical items of the substrate trees, altering only their linear precedence relations. Thus each sentence remains topically coherent; nouns continue to be distinguished by case according to their role in the clause structure; wh-words continue to ccommand gaps; different verbs (e.g., transitive vs. intransitive) continue to be associated with different subcategorization frames; and so on. These im-portant properties would not be captured by a simple context-free model of dependency trees, which is why we modify real sentences rather than generating new sentences from such a model. In addition, our method obviously preserves the basic context-free properties, such as the fact that verbs typically subcategorize for one or two nominal arguments (Naseem et al., 2010) .",
"cite_spans": [
{
"start": 807,
"end": 828,
"text": "(Naseem et al., 2010)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.2"
},
{
"text": "Second, by drawing on real superstrate languages, we ensure that our synthetic languages use plausible word orders. For example, if R V is a V2 language that favors SVO word order but also allows OVS, then S will match these proportions. Similarly, S will place adverbs in reasonable positions with respect to the verb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.2"
},
{
"text": "We note, however, that our synthetic languages might violate some typological universals or typological tendencies. For example, R V might prescribe head-initial verb orderings while R N prescribes head-final noun orderings, yielding an unusual language. Worse, we could synthesize a language that uses free word order (from R V ) even though nouns (from S) are not marked for case. Such languages are rare, presumably for the functionalist reason that sentences would be too ambiguous. One could automatically filter out such an implausible language S , or downweight it, upon discovering that a parser for S was much less accurate on held-out data than a comparable parser for S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.2"
},
{
"text": "We also note that our reordering method ( \u00a74) does ignore some linguistic structure. For example, we do not currently condition the order of the dependent subtrees on their heaviness or on the length of resulting dependencies, and thus we will not faithfully model phenomena like heavy-shift (Hawkins, 1994; Eisner and Smith, 2010) . Nor will we model the relative order of adjectives. We also treat all verbs interchangeably, and thus use the same word orders-drawn from R V -for both main clauses and embedded clauses. This means that we will never produce a language like German (which uses V2 order in main clauses and SOV order in embedded clauses), even if R V = German. All of these problems could be addressed by enriching the features that are described in the next section.",
"cite_spans": [
{
"start": 292,
"end": 307,
"text": "(Hawkins, 1994;",
"ref_id": "BIBREF13"
},
{
"start": 308,
"end": 331,
"text": "Eisner and Smith, 2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.2"
},
{
"text": "Let X be a part-of-speech tag, such as Verb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Dependent Order",
"sec_num": "4"
},
{
"text": "To produce a dependency tree in language S = S[R X /X], we start with a projective dependency tree in language S. 3 For each node x in the tree that is tagged with X, we stochastically select a new ordering for its dependent nodes, including a position in this ordering for the head x itself. Thus, if node x has n \u2212 1 dependents, then we must sample from a probability distribution over n! orderings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Dependent Order",
"sec_num": "4"
},
{
"text": "Our job in this section is to define this probability distribution. Using \u03c0 = (\u03c0 1 , . . . , \u03c0 n ) to denote an ordering of these n nodes, we define a log-linear model over the possible values of \u03c0:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Dependent Order",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p \u03b8 (\u03c0 | x) = 1 Z(x) exp 1\u2264i<j\u2264n \u03b8 \u2022 f (\u03c0, i, j)",
"eq_num": "(1)"
}
],
"section": "Modeling Dependent Order",
"sec_num": "4"
},
{
"text": "Here Z(x) is the normalizing constant for node x. \u03b8 is the parameter vector of the model. f extracts a sparse feature vector that describes the ordered pair of nodes \u03c0 i , \u03c0 j , where the ordering \u03c0 would place \u03c0 i to the left of \u03c0 j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Dependent Order",
"sec_num": "4"
},
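A minimal sketch of this log-linear model, with brute-force normalization over all n! orderings. The names `theta` and `feats` are toy stand-ins (assumptions, not the release): `feats(order, i, j)` returns the names of features firing on the pair with `order[i]` left of `order[j]`, and `theta` maps feature names to weights.

```python
from itertools import permutations
import math

def score(order, theta, feats):
    """Unnormalized log-probability: sum of theta . f over all pairs i < j."""
    n = len(order)
    return sum(theta.get(name, 0.0)
               for i in range(n) for j in range(i + 1, n)
               for name in feats(order, i, j))

def prob(order, nodes, theta, feats):
    """p_theta(order | x): exponentiate and normalize over all n! orderings."""
    Z = sum(math.exp(score(p, theta, feats)) for p in permutations(nodes))
    return math.exp(score(order, theta, feats)) / Z

# Toy check: one weighted feature, L.det, rewarding orderings that place
# the det node to the left of the head.
nodes = ("head", "det", "amod")
theta = {"L.det": 1.5}
feats = lambda order, i, j: ["L." + order[i]] if order[j] == "head" else []
total = sum(prob(p, nodes, theta, feats) for p in permutations(nodes))
```

Here the positive weight makes head-final orderings (det before head) more probable than head-initial ones, while the probabilities still sum to 1 over all 3! orderings.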
{
"text": "To sample exactly from the distribution p \u03b8 , 4 we must explicitly compute all n! unnormalized prob-abilities and their sum Z(x). Fortunately, we can compute each unnormalized probability in just O(1) amortized time, if we enumerate the n! orderings \u03c0 using the Steinhaus-Johnson-Trotter algorithm (Sedgewick, 1977) . This enumeration sequence has the property that any two consecutive permutations \u03c0, \u03c0 differ by only a single swap of some pair of adjacent nodes. Thus their probabilities are closely related: the sum in equation (1) can be updated in O(1) time by subtracting \u03b8 \u2022 f (\u03c0, i, i + 1) and adding \u03b8 \u2022 f (\u03c0 , i, i + 1) for some i. The other O(n 2 ) summands are unchanged.",
"cite_spans": [
{
"start": 298,
"end": 315,
"text": "(Sedgewick, 1977)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient sampling",
"sec_num": "4.1"
},
{
"text": "In addition, if n \u2265 8, we avoid this computation by omitting the entire tree from our treebank; so we have at most 7! = 5040 summands.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient sampling",
"sec_num": "4.1"
},
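The enumeration trick can be sketched as follows: a plain Steinhaus-Johnson-Trotter generator, in which consecutive permutations differ by exactly one swap of adjacent positions, plus a brute-force exact sampler. This is an illustrative reconstruction under stated assumptions, not the released code; for clarity the sampler rescores each ordering from scratch rather than applying the O(1) incremental update described above.

```python
import math
import random

def sjt_permutations(n):
    """Yield all permutations of range(n) in Steinhaus-Johnson-Trotter order:
    each permutation differs from the previous by one adjacent swap."""
    perm = list(range(n))
    direction = [-1] * n            # every element initially "points" left
    yield tuple(perm)
    while True:
        # Find the largest mobile element: one whose neighbor in its own
        # direction is smaller than it.
        mobile, mi = -1, -1
        for i, v in enumerate(perm):
            j = i + direction[v]
            if 0 <= j < n and perm[j] < v and v > mobile:
                mobile, mi = v, i
        if mobile == -1:
            return                  # no mobile element: all n! permutations done
        j = mi + direction[mobile]
        perm[mi], perm[j] = perm[j], perm[mi]   # the single adjacent swap
        for v in range(mobile + 1, n):          # flip directions of larger elements
            direction[v] = -direction[v]
        yield tuple(perm)

def sample_exact(nodes, logscore, rng):
    """Enumerate all n! orderings of `nodes`, weight each by exp(logscore),
    and draw one exactly from the normalized distribution."""
    orders = [tuple(nodes[i] for i in p) for p in sjt_permutations(len(nodes))]
    weights = [math.exp(logscore(o)) for o in orders]
    return rng.choices(orders, weights=weights, k=1)[0]
```

Because each step of `sjt_permutations` changes only one adjacent pair, a pairwise log-linear score could be maintained incrementally along the sequence, which is what makes the O(1) amortized update possible.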
{
"text": "Our feature functions ( \u00a74.4) are fixed over all languages. They refer to the 17 node labels (POS tags) and 40 edge labels (dependency relations) that are used consistently throughout the UD treebanks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training parameters on a real language",
"sec_num": "4.2"
},
{
"text": "For each UD language L and each POS tag X, we find parameters \u03b8 L X that globally maximize the unregularized log-likelihood:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training parameters on a real language",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 L X = argmax \u03b8 x log p \u03b8 (\u03c0 x | x)",
"eq_num": "(2)"
}
],
"section": "Training parameters on a real language",
"sec_num": "4.2"
},
{
"text": "Here x ranges over all nodes tagged with X in the projective training trees of the L treebank, omitting nodes with n \u2265 7 for speed. The expensive part of this computation is the gradient of log Z(x), which is an expected feature vector. To compute this expectation efficiently, we again take care to loop over the permutations in Steinhaus-Johnson-Trotter order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training parameters on a real language",
"sec_num": "4.2"
},
{
"text": "A given language L may not use all of the tags and relations. Universal features that mention unused tags or relations do not affect (2), and their weights remain at 0 during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training parameters on a real language",
"sec_num": "4.2"
},
{
"text": "We use (1) to permute the X nodes of substrate language S into an order resembling superstrate language R X . In essence, this applies the R X ordering model to out-of-domain data, since the X nodes may have rather different sets of dependents in the S treebank than in the R X treebank. We mitigate this issue in two ways.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting parameters of a synthetic language",
"sec_num": "4.3"
},
{
"text": "First, our ordering model (1) is designed to be more robust to transfer than, say, a Markov model. The position of each node is influenced by all n \u2212 1 other nodes, not just by the two adjacent nodes. As a result, the burden of explaining the ordering is distributed over more features, and we hope some of these features will transfer to S. For example, suppose R X lacks adverbs and yet we wish to use \u03b8 R X X to permute a sequence of S that contains adverbs. Even though the resulting order must disrupt some familiar non-adverb bigrams by inserting adverbs, other features-which consider non-adjacent tagswill still favor an R X -like order for the non-adverbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting parameters of a synthetic language",
"sec_num": "4.3"
},
{
"text": "Second, we actually sample the reordering from a distribution p \u03b8 with an interpolated parameter vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting parameters of a synthetic language",
"sec_num": "4.3"
},
{
"text": "\u03b8 = \u03b8 S X = (1 \u2212 \u03bb)\u03b8 R X X + \u03bb\u03b8 S X ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting parameters of a synthetic language",
"sec_num": "4.3"
},
{
"text": "where \u03bb = 0.05. This gives a weighted product of experts, in which ties are weakly broken in favor of the substrate ordering. (Ties arise when R X is unfamiliar with some tags that appear in S, e.g., adverb.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting parameters of a synthetic language",
"sec_num": "4.3"
},
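The interpolation itself is just a componentwise weighted average of the two parameter vectors. A minimal sketch, representing each vector as a hypothetical feature-name-to-weight dict; a feature absent from one dict implicitly has weight 0 there, which is how the substrate weights end up breaking ties for tags the superstrate never saw.

```python
def interpolate(theta_superstrate, theta_substrate, lam=0.05):
    """Return (1 - lam) * theta_superstrate + lam * theta_substrate,
    componentwise over the union of feature names.  In log-linear form
    this is a weighted product of experts."""
    keys = set(theta_superstrate) | set(theta_substrate)
    return {k: (1 - lam) * theta_superstrate.get(k, 0.0)
               + lam * theta_substrate.get(k, 0.0)
            for k in keys}
```

With the paper's lambda = 0.05, a feature known only to the substrate model keeps 5% of its substrate weight, enough to order otherwise-tied nodes.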
{
"text": "We write t i for the POS tag of node \u03c0 i , and r i for the dependency relation of \u03c0 i to the head node. If \u03c0 i is itself the head, then necessarily t i = X, 5 and we specially define r i = head. In our feature vector f (\u03c0, i, j), the features with the following names have value 1, while all others have value 0:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "4.4"
},
{
"text": "\u2022 L.t i .r i and L.t i and L.r i , provided that r j = head.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "4.4"
},
{
"text": "For example, L.ADJ will fire on each ADJ node to the left of the head.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "4.4"
},
{
"text": "\u2022 L.t i .r i .t j .r j and L.t i .t j and L.r i .r j , pro- vided that r i = head, r j = head.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "4.4"
},
{
"text": "These features detect the relative order of two siblings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "4.4"
},
{
"text": "\u2022 d.t i .r i .t j .r j , d.t i .t j , and d.r i .r j , where d is l (left)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "4.4"
},
{
"text": ", m (middle), or r (right) according to whether the head position h satisfies i < j < h, i < h < j, or h < i < j. For example, l.nsubj.dobj will fire on SOV clauses. This is a specialization of the previous feature, and is skipped if i = h or j = h.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "4.4"
},
{
"text": "\u2022 A.t i .r i .t j .r j and A.t i .t j and A.r i .r j , provided that j = i + 1. These \"bigram features\" detect two adjacent nodes. For this feature and the next one, we extend the summation in (1) to allow 0 \u2264 i < j \u2264 n + 1, taking t 0 = r 0 = BOS (\"beginning of sequence\") and t n+1 = r n+1 = EOS (\"end of sequence\"). Thus, a bigram feature such as A.DET.EOS would fire on DET when it falls at the end of the sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "4.4"
},
{
"text": "\u2022 H.t i .r i .t i+1 .r i+1 .....t j .r j , provided that i+2 \u2264 j \u2264 i+4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "4.4"
},
{
"text": "Among features of this form, we keep only the 10% that fire most frequently in the training data. These \"higher-order kgram\" features memorize sequences of lengths 3 to 5 that are common in the language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "4.4"
},
{
"text": "Notice that for each non-H feature that mentions both tags t and relations r, we also defined two backoff features, omitting the t fields or r fields respectively. Using the example from Figure 1 , for subtree DET ADJ NOUN this particular future det amod the features that fire are ",
"cite_spans": [],
"ref_spans": [
{
"start": 187,
"end": 195,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "4.4"
},
{
"text": "Template Features L.t i .r i L.DET.det, L.ADJ.amod L.t i .r i .t j .r j L.DET.det.ADJ.amod d.t i .r i .t j .r j l.DET.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "4.4"
},
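A sketch of how the first three templates above might be instantiated for one ordered pair (i, j). The function name and encodings are illustrative assumptions, not the released code; the bigram (A) and k-gram (H) templates are omitted for brevity.

```python
def pair_features(tags, rels, i, j, head_index):
    """Features firing on the ordered pair (i, j), with position i left of
    position j.  `tags`/`rels` give the POS tag and relation at each
    position in the chosen ordering; rels[head_index] == "head"."""
    feats = []
    if rels[j] == "head":                        # i is a dependent left of the head
        feats += [f"L.{tags[i]}.{rels[i]}", f"L.{tags[i]}", f"L.{rels[i]}"]
    if rels[i] != "head" and rels[j] != "head":  # i and j are two siblings
        feats += [f"L.{tags[i]}.{rels[i]}.{tags[j]}.{rels[j]}",
                  f"L.{tags[i]}.{tags[j]}", f"L.{rels[i]}.{rels[j]}"]
        # Positional refinement: l / m / r by where the head falls.
        d = "l" if j < head_index else ("m" if i < head_index else "r")
        feats.append(f"{d}.{tags[i]}.{rels[i]}.{tags[j]}.{rels[j]}")
    return feats

# Figure 1's subtree DET ADJ NOUN ("this particular future"): the head NOUN
# is at position 2, so the pair (DET, ADJ) is two siblings left of the head.
example = pair_features(["DET", "ADJ", "NOUN"], ["det", "amod", "head"], 0, 1, 2)
```

Summing such feature vectors over all pairs i < j gives the exponent of equation (1).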
{
"text": "In Galactic Dependencies v1.0, or GD, we release real and synthetic treebanks based on UD v1.2. Each synthetic treebank is a modified work that is freely licensed under the same CC or GPL license as its substrate treebank. We provide all languages of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Resource",
"sec_num": "5"
},
{
"text": "S, S[R V /N], S[R N /V], and S[R N /N, R V /V],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Resource",
"sec_num": "5"
},
{
"text": "where the substrate S and the superstrates R N and Train Dev Test cs, es, fr, hi, de, it, la itt, no, ar, pt en, nl, da, fi, got, grc, et, la proiel, grc proiel, bg la, hr, ga, he, hu, fa, ta, cu, el, ro, sl, ja ktc, sv, fi ftb, id, eu, pl Table 1 : The 37 real UD languages. Following the usual setting of rich-to-poor transfer, we take the 10 largest non-English languages (left column) as our pool of real source languages, which we can combine to synthesize new languages. The remaining languages are our lowresource target languages. We randomly hold out 17 non-English languages (right column) as the test languages for our final result table. During development, we studied and graphed performance on the remaining 10 languages (middle column)-including English for interpretability.",
"cite_spans": [],
"ref_spans": [
{
"start": 240,
"end": 247,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Resource",
"sec_num": "5"
},
{
"text": "R V each range over the 37 available languages. (R N = S or R V = S gives \"self-permutation\"). This yields 37 \u00d7 38 \u00d7 38 = 53, 428 languages in total.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Resource",
"sec_num": "5"
},
{
"text": "Each language is provided as a directory of 3 files: training, development, and test treebanks. The directories are systematically named: for example, English[French/N, Hindi/V] can be found in directory en\u223cfr@N\u223chi@V.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Resource",
"sec_num": "5"
},
{
"text": "Our treebanks provide alignment information, to facilitate error analysis as well as work on machine translation. Each word in a synthetic sentence is annotated with its original position in the substrate sentence. Thus, all synthetic treebanks derived from the same substrate treebank are node-to-node aligned to the substrate treebank and hence to one another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Resource",
"sec_num": "5"
},
{
"text": "In addition to the generated data, we also provide the parameters \u03b8 L X of our ordering models; code for training new ordering models; and code for producing new synthetic trees and synthetic languages. Our code should produce reproducible results across platforms, thanks to Java's portability and our standard random number seed of 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Resource",
"sec_num": "5"
},
{
"text": "How do the synthetic languages compare to the real ones? For analysis and experimentation, we partition the real UD languages into train/dev/test (Table 1). (This is orthogonal to the train/dev/test split of each language's treebank.) Table 2 shows some properties of the real training languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 235,
"end": 242,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Exploratory Data Analysis",
"sec_num": "6"
},
{
"text": "In this section and the next, we use the Yara Table 2 : Some statistics on the 10 real training languages. When two numbers are separated by \"/\", the second represents the full UD treebank, and the first comes from our GD version, which discards non-projective trees and high-fanout trees (n \u2265 8). UAS is the language's parsability: the unlabeled attachment score on its dev sentences after training on its train sentences. T is the percentage of GD tokens that are touched by reordering (namely N, V, and their dependents). R \u2208 [0, 1] measures the freeness of the language's word order, as the conditional cross-entropy of our trained ordering model p \u03b8 relative to that of a uniform distribution:",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Exploratory Data Analysis",
"sec_num": "6"
},
{
"text": "R = H(p,p \u03b8 ) H(p,punif) = meanx[\u2212 log 2 p \u03b8 (\u03c0 * (x)|x)] meanx[\u2212 log 2 1/n(x)!]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploratory Data Analysis",
"sec_num": "6"
},
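As a concrete reading of the R formula above, here is a minimal Python sketch (not the authors' released Java code; the (log-probability, fanout) input format is a hypothetical stand-in for the trained ordering model's per-token scores):

```python
import math

def order_freeness(tokens):
    """Estimate R = H(p, p_theta) / H(p, p_unif) from per-token statistics.

    `tokens` is a list of (model_logprob2, fanout) pairs, where
    model_logprob2 = log2 p_theta(pi*(x) | x) for the observed ordering
    at head x, and fanout = n(x) = 1 + the number of dependents of x.
    (This input format is our assumption, for illustration only.)
    """
    # Numerator: mean cross-entropy of the trained ordering model, in bits.
    model_xent = -sum(lp for lp, _ in tokens) / len(tokens)
    # Denominator: mean entropy of a uniform distribution over the
    # n(x)! possible orderings of each head's constituent.
    unif_xent = sum(math.log2(math.factorial(n)) for _, n in tokens) / len(tokens)
    return model_xent / unif_xent

# A head with n(x) = 3 orderable items has 3! = 6 possible orders; a model
# assigning 1/6 to each is maximally free (R = 1), and a model assigning
# probability 1 to the observed order is maximally rigid (R = 0).
assert abs(order_freeness([(math.log2(1 / 6), 3)]) - 1.0) < 1e-9
```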
{
"text": ", where x ranges over all N and V tokens in the dev sentences, n(x) is 1 + the number of dependents of x, and \u03c0 * (x) is the observed ordering at x. parser (Rasooli and Tetreault, 2015), a fast arc-eager transition-based projective dependency parser, with beam size of 8. We train only delexicalized parsers, whose input is the sequence of POS tags. Parsing accuracy is evaluated by the unlabeled attachment score (UAS), that is, the fraction of word tokens in held-out (dev) data that are assigned their correct parent. For language modeling, we train simple trigram backoff language models with add-1 smoothing, and we measure predictive accuracy as the perplexity of held-out (dev) data. Figures 2-3 show how the parsability and perplexity of a real training language usually get worse when we permute it. We could have discarded lowparsability synthetic languages, on the functionalist grounds that they would be unlikely to survive as natural languages anywhere in the galaxy. However, the curves in these figures show that most synthetic languages have parsability and perplexity within the plausible range of natural languages, so we elected to simply keep all of them in our collection.",
"cite_spans": [],
"ref_spans": [
{
"start": 691,
"end": 702,
"text": "Figures 2-3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Exploratory Data Analysis",
"sec_num": "6"
},
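The language-modeling setup can be sketched as follows. This is a simplification: it uses pure add-1 smoothing over tag trigrams, whereas the paper's models also back off, and the function names are ours:

```python
import math
from collections import Counter

BOS, EOS = "<s>", "</s>"

def train_trigram(corpus):
    """Collect trigram/context counts over tag sequences for an
    add-1-smoothed trigram model (a simplification of the paper's
    backoff models). `corpus` is a list of tag-sequence lists."""
    tri, bi, vocab = Counter(), Counter(), set()
    for sent in corpus:
        toks = [BOS, BOS] + sent + [EOS]
        vocab.update(toks)
        for i in range(2, len(toks)):
            tri[tuple(toks[i - 2:i + 1])] += 1
            bi[tuple(toks[i - 2:i])] += 1
    return tri, bi, len(vocab)

def perplexity(model, corpus):
    """Perplexity of held-out tag sequences, with add-1 smoothing:
    p(w | u, v) = (count(u,v,w) + 1) / (count(u,v) + V)."""
    tri, bi, V = model
    logp, n = 0.0, 0
    for sent in corpus:
        toks = [BOS, BOS] + sent + [EOS]
        for i in range(2, len(toks)):
            p = (tri[tuple(toks[i - 2:i + 1])] + 1) / (bi[tuple(toks[i - 2:i])] + V)
            logp += math.log2(p)
            n += 1
    return 2 ** (-logp / n)
```

On a one-sentence toy corpus, every smoothed trigram probability is (1+1)/(1+5) = 1/3, so the perplexity of the training sentence itself is exactly 3.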
{
"text": "An interesting exception in Figure 2 is Latin Figure 2 : Parsability of real versus synthetic languages (defined as in Table 2 real pos synthetic pos real word synthetic word Figure 3 : Perplexity of the POS tag sequence, as well as the word sequence, of real versus synthetic languages. Words with count < 10 are mapped to an OOV symbol.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 36,
"text": "Figure 2",
"ref_id": null
},
{
"start": 46,
"end": 54,
"text": "Figure 2",
"ref_id": null
},
{
"start": 119,
"end": 126,
"text": "Table 2",
"ref_id": null
},
{
"start": 175,
"end": 183,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Exploratory Data Analysis",
"sec_num": "6"
},
{
"text": "(la itt), whose poor parsability-at least by a delexicalized parser that does not look at word endingsmay be due to its especially free word order (Table 2). When we impose another language's more consistent word order on Latin, it becomes more parsable. Elsewhere, permutation generally hurts, perhaps because a real language's word order is globally optimized to enhance parsability. It even hurts slightly when we randomly \"self-permute\" S trees to use other word orders that are common in S itself! Presumably this is because the authors of the original S sentences chose, or were required, to order each constituent in a way that would enhance its parsability in context: see the last paragraph of \u00a73.2. Synthesizing languages is a balancing act. The synthetic languages are not useful if all of them are too conservatively close to their real sources to add Figure 4 : Each point represents a language. The color of a synthetic language is the same as its substrate language. Dev languages are shown in black. This 2dimensional embedding was constructed using metric multidimensional scaling (Borg and Groenen, 2005 ) on a symmetrized version of our dissimilarity matrix (which is not itself a metric). The embedded distances are reasonably faithful to the symmetrized dissimilarities: metric MDS achieves a low value of 0.20 on its \"stress\" objective, and we find that Kendall's tau = 0.76, meaning that if one pair of languages is displayed as farther apart than another, then in over 7/8 of cases, that pair is in fact more dissimilar. Among the real languages, note the clustering of Italic languages (pt, es, fr, it), Germanic languages (de, no, en, nl, da), Slavic languages (cs, bg), and Uralic languages (et, fi). Outliers are Arabic (ar), the only Afroasiatic language here, and Hindi (hi), the only SOV language, whose permutations are less outr\u00e9 than it is. diversity-or too radically different to belong in the galaxy of natural languages. 
Fortunately, we are at neither extreme. Figure 4 visualizes a small sample of 110 languages from our collection. 6 For each ordered pair of languages (S, T ), we defined the dissimilarity d(S, T ) as the decrease in UAS when we parse the dev data of T using a parser trained on S instead of one trained on T . Small dissimilarity (i.e., good parsing transfer) translates to small distance in the figure. The figure shows that the permutations of a substrate language (which share its color) can be radically different from it, as we already saw above. Some may be unnatural, but others are similar to other real languages, including held-out dev languages. Thus Dutch (nl) and Estonian (et) have close synthetic neighbors within this small sample, although they have no close real neighbors.",
"cite_spans": [
{
"start": 1098,
"end": 1121,
"text": "(Borg and Groenen, 2005",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 864,
"end": 872,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1998,
"end": 2006,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Exploratory Data Analysis",
"sec_num": "6"
},
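The dissimilarity matrix behind Figure 4 can be sketched as below. The `uas` table is a hypothetical stand-in for the precomputed transfer scores; the MDS step itself, done with an off-the-shelf implementation of the kind Borg and Groenen describe, is omitted:

```python
def dissimilarity_matrix(uas):
    """uas[S][T] = unlabeled attachment score on T's dev data of a parser
    trained on S (a hypothetical precomputed table). Returns the symmetrized
    dissimilarity fed to metric MDS:
    d(S, T) = UAS(T -> T) - UAS(S -> T), then d_sym = (d(S,T) + d(T,S)) / 2."""
    langs = sorted(uas)
    # Asymmetric dissimilarity: how much UAS drops when transferring from S.
    d = {(s, t): uas[t][t] - uas[s][t] for s in langs for t in langs}
    # Symmetrize, since the raw matrix is not itself a metric.
    return {(s, t): (d[s, t] + d[t, s]) / 2 for s in langs for t in langs}
```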
{
"text": "We now illustrate the use of GD by studying how expanding the set of available treebanks can improve a simple NLP method, related to Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 141,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "An Experiment",
"sec_num": "7"
},
{
"text": "Dependency parsing of low-resource languages has been intensively studied for years. A simple method is called \"single-source transfer\": parsing a target language T with a parser that was trained on a source language S, where the two languages are syntactically similar. Such single-source transfer parsers (Ganchev et al., 2010; McDonald et al., 2011; Ma and Xia, 2014; Guo et al., 2015; Duong et al., 2015; Rasooli and Collins, 2015) are not state-ofthe-art, but they have shown substantial improvements over fully unsupervised grammar induction systems (Klein and Manning, 2004; Smith and Eisner, 2006; .",
"cite_spans": [
{
"start": 307,
"end": 329,
"text": "(Ganchev et al., 2010;",
"ref_id": "BIBREF11"
},
{
"start": 330,
"end": 352,
"text": "McDonald et al., 2011;",
"ref_id": "BIBREF21"
},
{
"start": 353,
"end": 370,
"text": "Ma and Xia, 2014;",
"ref_id": "BIBREF19"
},
{
"start": 371,
"end": 388,
"text": "Guo et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 389,
"end": 408,
"text": "Duong et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 409,
"end": 435,
"text": "Rasooli and Collins, 2015)",
"ref_id": "BIBREF28"
},
{
"start": 556,
"end": 581,
"text": "(Klein and Manning, 2004;",
"ref_id": "BIBREF17"
},
{
"start": 582,
"end": 605,
"text": "Smith and Eisner, 2006;",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Single-source transfer",
"sec_num": "7.1"
},
{
"text": "It is permitted for S and T to have different vocabularies. The S parser can nonetheless parse T (as in Figure 4 )-provided that it is a \"delexicalized\" parser that only cares about the POS tags of the input words. In this case, we require only that the target sentences have already been POS tagged using the same tagset as S: in our case, the UD tagset.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 112,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Single-source transfer",
"sec_num": "7.1"
},
{
"text": "We evaluate single-source transfer when the pool of m source languages consists of n real UD languages, plus m \u2212 n synthetic GD languages derived by \"remixing\" just these real languages. 7 We try various values of n and m, where n can be as large as 10 (training languages from Table 1 ) and m can be as large as n \u00d7 (n + 1) \u00d7 (n + 1) \u2264 1210 (see \u00a75).",
"cite_spans": [
{
"start": 187,
"end": 188,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 278,
"end": 285,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.2"
},
{
"text": "Given a real target language T from outside the pool, we select a single source language S from the pool, and try to parse UD sentences of T with a parser trained on S. We evaluate the results on T by measuring the unlabeled attachment score (UAS), that is, the fraction of word tokens that were assigned their correct parent. In these experiments (unlike those of \u00a76), we always evaluate fairly on T 's full dev or test set from UD-not just the sentences we kept for its GD version (cf. Table 2) . 8 The hope is that a large pool will contain at least one language-real or synthetic-that is \"close\" to T . We have two ways of trying to select a source S with this property:",
"cite_spans": [
{
"start": 499,
"end": 500,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 488,
"end": 496,
"text": "Table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.2"
},
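A minimal sketch of the UAS metric as defined here, where each sentence is represented as a list of head indices per token with 0 denoting the root (this representation is our assumption, not the Yara parser's actual output format):

```python
def uas(gold_heads, pred_heads):
    """Unlabeled attachment score: the fraction of word tokens assigned
    their correct parent. Each sentence is a list of head indices,
    with 0 standing for the root."""
    correct = total = 0
    for gold, pred in zip(gold_heads, pred_heads):
        correct += sum(1 for g, p in zip(gold, pred) if g == p)
        total += len(gold)
    return correct / total
```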
{
"text": "Supervised selection selects the S whose parser achieves the highest UAS on 100 training sentences of language T . This requires 100 good trees for T , which could be obtained with a modest investment-a single annotator attempting to follow the UD annotation standards in a consistent way on 100 sentences of T , without writing out formal Tspecific guidelines. (There is no guarantee that selecting a parser on training data will choose well for the test sentences of T . We are using a small amount of data to select among many dubious parsers, many of which achieve similar results on the training sentences of T . Furthermore, in the UD treebanks, the test sentences of T are sometimes drawn from a different distribution than the training sentences.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.2"
},
{
"text": "Unsupervised selection selects the S whose training sentences had the best \"coverage\" of the POS tag sequences in the actual data from T that we aim to parse. More precisely, we choose the S that maximizes p S (tag sequences from T )-in other words, the maximum-likelihood S-where p S is our trigram language model for the tag sequences of S. This approach is loosely inspired by S\u00f8gaard (2011).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.2"
},
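Unsupervised selection reduces to an argmax over source language models; a sketch, where `source_models` is a hypothetical interface standing in for the trigram tag models:

```python
def select_source(target_tag_seqs, source_models):
    """Unsupervised selection: pick the source S whose tag language model
    assigns the highest log-likelihood to the target's POS tag sequences.
    `source_models` maps each S to a function returning log p_S(sequence)
    (a hypothetical interface for the trained trigram models)."""
    def loglik(model):
        # Sentences are independent, so log-likelihoods sum.
        return sum(model(seq) for seq in target_tag_seqs)
    return max(source_models, key=lambda s: loglik(source_models[s]))
```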
{
"text": "Our most complete visualization is Figure 5 , which we like to call the \"kite graph\" for its appearance. We plot the UAS on the development treebank of T as a function of n, m, and the selection method. As Appendix A details, each point on this graph is actually an average over 10,000 experiments that make random choices of T (from the UD development languages), the n real languages (from the UD training languages), and the m \u2212 n synthetic languages (from the GD languages derived from the n real lan-2 0 2 1 2 2 2 3 2 4 2 5 2 6 2 7 2 8 2 9 2 10 2 11 m = number of source languages Each point is the mean dev UAS over 10,000 experiments. We use paler lines in the same color and style to show the considerable variance of these UAS scores. These essentially delimit the interdecile range from the 10th to the 90th percentile of UAS score. However, if the plot shows a mean of 57, an interdecile range from 53 to 61 actually means that the middle 80% of experiments were within \u00b14 percentage points of the mean UAS for their target language. (In other words, before computing this range, we adjust each UAS score for target T by subtracting the mean UAS from the experiments with target T , and adding back the mean UAS from all 10,000 experiments (e.g., 57).)",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 43,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7.3"
},
{
"text": "Notice that on the n = 10 curve, there is no variation among experiments either at the minimum m (where the pool always consists of all 10 real languages) or at the maximum m (where the pool always consists of all 1210 galactic languages). guages). We see from the black lines that increasing the number of real languages n is most beneficial. But crucially, when n is fixed in practice, gradually increasing m by remixing the real languages does lead to meaningful improvements. This is true for both selection methods. Supervised selection is markedly better than unsupervised. Figure 6 : Chance that selecting a source from m languages achieves strictly better dev UAS than just selecting from the n real languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 580,
"end": 588,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7.3"
},
{
"text": "The \"selection graph\" in Figure 6 visualizes the same experiments in a different way. Here we ask about the fraction of experiments in which using the full pool of m source languages was strictly better than using only the n real languages. We find that when m has increased to its maximum, the full pool nearly always contains a synthetic source language that gets better results than anything in the real pool. After all, our generation of \"random\" languages is a scattershot attempt to hit the target: the more languages we generate, the higher our chances of coming close. However, our selection methods only manage to pick a better language in about 60% of those experiments. Figure 7 offers a fine-grained look at which real and synthetic source languages S succeeded best when T = English. Each curve shows a different superstrate, with the x-axis ranging over substrates. (The figure omits the hundreds of synthetic source languages that use two distinct superstrates, R V = R N .) Real languages are shown as solid black dots, and are often beaten by synthetic languages. For comparison, this graph also plots results that \"cheat\" by using English supervision.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 33,
"text": "Figure 6",
"ref_id": null
},
{
"start": 681,
"end": 689,
"text": "Figure 7",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7.3"
},
{
"text": "The above graphs are evaluated on development sentences in development languages. For our final results, Table 3 , we finally allow ourselves to try transferring to the UD test languages, and we eval- The points where R = S are specially colored in black; these are instances of self-permutation ( \u00a75). We also add \"cheating results\" where English itself is used as the substrate (left column) and/or the superstrate (solid black line at top). Thus, the large black dot at the upper left is a supervised English parser.",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 112,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7.3"
},
{
"text": "uate on test sentences. The comparison is similar to the comparison in the selection graph: do the synthetic treebanks add value? We use our largest source pools, n = 10 and m = 1210. With supervised selection, selecting the source language from the full pool of m options (not just the n real languages) tends to achieve significantly better UAS on the target language, often dramatically so. On average, the UAS on the test languages increases by 2.3 percentage points, and this increase is statistically significant across these 17 data points. Even with unsupervised selection, UAS still increases by 1.2 points on average, but this difference could be a chance effect. The results above use gold POS tag sequences for T . These may not be available if T is a low-resource language; see Appendix B for a further experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7.3"
},
{
"text": "Many of the curves in Figures 5-6 still seem to be increasing steadily at maximum m, which suggests that we would benefit from finding ways to generate even more synthetic languages. Diversity of languages seems to be crucial, since adding new real languages improves performance much faster than remixing existing languages. This suggests that we should explore making more extensive changes to the UD treebanks (see \u00a78) . Surprisingly, Figures 5-6 show improvements even when n = 1. Evidently, self-permutation of a single language introduces some useful variety, perhaps by transporting specialized word orders (e.g., English still allows some limited V2 constructions) into contexts where the source language would not ordinarily allow them but the target language does. Figure 5 shows why unsupervised selection is considerably worse on average than supervised selection. Its 90th percentile is comparable, but at the 10th percentile-presumably representing experiments where no good sources are available-the unsupervised heuristic has more trouble at choosing among the mediocre options. The supervised method can actually test these options using the true loss function. Figure 7 is interesting to inspect. English is essentially a Germanic language with French influence due to the Norman conquest, so it is reassuring that German and French substrates can each be improved by using the other as a superstrate. We also see that Arabic and Hindi are the worst source languages for English, but that Hindi[Arabic/V] is considerably better. This is because Hindi is reasonably similar to English once we correct its SOV word order to SVO (via almost any superstrate).",
"cite_spans": [],
"ref_spans": [
{
"start": 418,
"end": 421,
"text": "\u00a78)",
"ref_id": null
},
{
"start": 438,
"end": 449,
"text": "Figures 5-6",
"ref_id": "FIGREF3"
},
{
"start": 775,
"end": 783,
"text": "Figure 5",
"ref_id": "FIGREF3"
},
{
"start": 1179,
"end": 1187,
"text": "Figure 7",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7.4"
},
{
"text": "This paper is the first release of a novel resource, the Galactic Dependencies treebank collection, that may unlock a wide variety of research opportunities (discussed in \u00a71). Our empirical studies show that the synthetic languages in this collection remain somewhat natural while improving the diversity of the collection. As a simplistic but illustrative use of the resource, we carefully evaluated its impact on the naive technique of single-source transfer parsing. We found that performance could consistently be improved by adding synthetic languages to the pool of sources, assuming gold POS tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "There are several non-trivial opportunities for improving and extending our treebank collection in future releases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "1. Our current method is fairly conservative, only synthesizing languages with word orders already attested in our small collection of real languages. This does not increase the diversity of the pool as much as when we add new real languages. Thus, we are particularly interested in generating a wider range of synthetic languages. We could condition reorderings on the surrounding tree structure, as noted in \u00a73.2. We could choose reordering parameters \u03b8 X more adventurously than by drawing them from a single known superstrate language. We could go beyond reordering, to systematically choose what function words (determiners, prepositions, particles), function morphemes, or punctuation symbols 9 should appear in the synthetic tree, or to otherwise alter the structure of the tree (Dorr, 1993) . These options may produce implausible languages. To mitigate this, we could filter or reweight our sample of synthetic languages-via rejection sampling or importance sampling-so that they are distributed more like real languages, as measured by their parsabilities, dependency lengths, and estimated WALS features (Dryer and Haspelmath, 2013) .",
"cite_spans": [
{
"start": 786,
"end": 798,
"text": "(Dorr, 1993)",
"ref_id": "BIBREF6"
},
{
"start": 1115,
"end": 1143,
"text": "(Dryer and Haspelmath, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "2. Currently, our reordering method only generates projective dependency trees. We should extend it to allow non-projective trees as well-for example, by pseudo-projectivizing the substrate treebank (Nivre and Nilsson, 2005) and then deprojectivizing it after reordering.",
"cite_spans": [
{
"start": 199,
"end": 224,
"text": "(Nivre and Nilsson, 2005)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "3. The treebanks of real languages can typically be augmented with larger unannotated corpora in those languages (Majli\u0161, 2011) , which can be used to train word embeddings and language models, and can also be used for self-training and bootstrapping methods. We plan to release comparable unannotated corpora for our synthetic languages, by au- 9 Our current handling of punctuation produces unnatural results, and not merely because we treat all tokens with tag PUNCT as interchangeable. Proper handling of punctuation and capitalization would require more than just reordering. For example, \"Jane loves her dog, Lexie.\" should reorder into \"Her dog, Lexie, Jane loves.\", which has an extra comma and an extra capital. Accomplishing this would require first recovering a richer tree for the original sentence, in which the appositive Lexie is bracketed by a pair of commas and the name Jane is doubly capitalized. These extra tokens were not apparent in the original sentence's surface form because the final comma was absorbed into the adjacent period, and the startof-sentence capitalization was absorbed into the intrinsic capitalization of Jane (Nunberg, 1990) . The tokenization provided by the UD treebanks unfortunately does not attempt to undo these orthographic processes, even though it undoes some morphological processes such as contraction. tomatically parsing and permuting the unnanotated corpora of their substrate languages.",
"cite_spans": [
{
"start": 113,
"end": 127,
"text": "(Majli\u0161, 2011)",
"ref_id": "BIBREF20"
},
{
"start": 346,
"end": 347,
"text": "9",
"ref_id": null
},
{
"start": 1151,
"end": 1166,
"text": "(Nunberg, 1990)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "4. At present, all languages derived from an English substrate use the English vocabulary. In the future, we plan to encipher that vocabulary separately for each synthetic language, perhaps choosing a cipher so that the result loosely conforms to the realistic phonotactics and/or orthography of some superstrate language. This would let multilingual methods exploit lexical features without danger of overfitting to specific lexical items that appear in many synthetic training languages. Alphabetic ciphers can preserve features of words that are potentially informative for linguistic structure discovery: their cooccurrence statistics, their length and phonological shape, and the sharing of substrings among morphologically related words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "5. Finally, we note that this paper has focused on generating a broadly reusable collection of synthetic treebanks. For some applications (including singlesource transfer), one might wish to tailor a synthetic language on demand, e.g., starting with one of our treebanks but modifying it further to more closely match the surface statistics of a given target language (Dorr et al., 2002) . In our setup, this would involve actively searching the space of reordering parameters, using algorithms such as gradient ascent or simulated annealing.",
"cite_spans": [
{
"start": 368,
"end": 387,
"text": "(Dorr et al., 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "We conclude by revisiting our opening point. Unsupervised discovery of linguistic structure is difficult. We often do not know quite what function to maximize, or how to globally maximize it. If we could make labeled languages as plentiful as labeled images, then we could treat linguistic structure discovery as a problem of supervised prediction-one that need not succeed on all formal languages, but which should generalize at least to the domain of possible human languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "The mean lines in the \"kite graph\" (Figure 5 ) are actually obtained by averaging 10,000 graphs. Each of these graphs is \"smooth\" because it incrementally adds new languages as n or m increases. Pseudocode to generate one such graph is given as Algorithm 1; all random choices are made uniformly.",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 44,
"text": "(Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "A Constructing the Kite Graph",
"sec_num": null
},
{
"text": "Algorithm 1 Data collection for one graph for n = 1 to |L| do 7:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Constructing the Kite Graph",
"sec_num": null
},
{
"text": "L \u2190 a filtered version of L that excludes languages with substrates or superstrates outside {L 1 , . . . , L n } 8:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Constructing the Kite Graph",
"sec_num": null
},
{
"text": "for n = 1 to |L | do 9: B Experiment with Noisy Tags Table 4 repeats the single-source transfer experiment using noisy automatic POS tags for T for both parser input and unsupervised selection. We obtained the tags using RDRPOSTagger (Nguyen et al., 2014) trained on just 100 gold-tagged sentences (the same set used for supervised selection). The low tagging accuracy does considerably degrade UAS and muddies the usefulness of the synthetic sources. Table 4 : Tagging accuracy on the 10 dev languages, and UAS of the selected source parser with these noisy targetlanguage tag sequences. The results are formatted as in Table 3 , but here all results are on dev sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 62,
"text": "Tags Table 4",
"ref_id": null
},
{
"start": 454,
"end": 461,
"text": "Table 4",
"ref_id": null
},
{
"start": 623,
"end": 630,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "A Constructing the Kite Graph",
"sec_num": null
},
{
"text": "P \u2190 {L 1 , . . . , L n , L 1 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Constructing the Kite Graph",
"sec_num": null
},
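One run of the kite-graph data collection (Algorithm 1) can be sketched from the prose as follows. This is a reconstruction under assumptions, not the authors' exact pseudocode: the function and argument names are ours, and `uas_of_pool` is a hypothetical stand-in for selecting a source parser from the pool and measuring its dev UAS on the target:

```python
import random

def kite_graph_run(real_langs, synth_langs, uas_of_pool, seed=0):
    """One smooth run of the kite-graph data collection. Real languages
    are added one at a time; for each n, the synthetic languages whose
    substrate and superstrates all lie among the first n real languages
    are then added one at a time, and the dev UAS of the pool is recorded
    at each (n, m). `synth_langs` maps a synthetic-language name to the
    list of real languages it is derived from (substrate + superstrates)."""
    rng = random.Random(seed)
    real = real_langs[:]
    rng.shuffle(real)
    results = {}
    for n in range(1, len(real) + 1):
        chosen = set(real[:n])
        # Filter to synthetic languages buildable from the chosen reals.
        eligible = [g for g in synth_langs if set(synth_langs[g]) <= chosen]
        rng.shuffle(eligible)
        pool = list(chosen)
        results[(n, len(pool))] = uas_of_pool(pool)  # m = n: reals only
        for g in eligible:
            pool.append(g)
            results[(n, len(pool))] = uas_of_pool(pool)  # m grows by 1
    return results
```

Averaging many such runs, each with a different random seed and target language, yields the smooth mean curves of Figure 5.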
{
"text": "http://universaldependencies.org 2 In practice, this means applying a single permutation model to permute the dependents of every word tagged as NOUN (common noun), PROPN (proper noun), or PRON (pronoun).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our method can only produce projective trees. This is because it recursively generates a node's dependent subtrees, one at a time, in some chosen order. Thus, to be safe, we only apply our method to trees that were originally projective. See \u00a78.4 We could alternatively have used MCMC sampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Recall that for each head POS X of language L, we learn a separate ordering model with parameter vector \u03b8 L X .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For each of the 10 real training languages, we sampled 9 synthetic languages: 3 N-permuted, 3 V-permuted and 3 {N, V}permuted. We also included all 10 training + 10 dev languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The m \u2212 n GD treebanks are comparatively impoverished because-in the current GD release-they include only projective sentences(Table 2). The n UD treebanks are unfiltered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Yara parser can only produce projective parses. It attempts to parse all test sentences of T projectively, but sadly ignores non-projective training sentences of S (as can occur for real S).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgements This work was funded by the U. S. National Science Foundation under Grant No. 1423276. Our data release is derived from the Universal Dependencies project, whose many selfless contributors have our gratitude. We would also like to thank Matt Gormley and Sharon Li for early discussions and code prototypes, Mohammad Sadegh Rasooli for guidance on working with the Yara parser, and Jiang Guo, Tim Vieira, Adam Teichert, and Nathaniel Filardo for additional useful discussion. Finally, we thank TACL editors Joakim Nivre and Lillian Lee and the anonymous reviewers for several suggestions that improved the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Modern Multidimensional Scaling: Theory and Applications",
"authors": [
{
"first": "Ingwer",
"middle": [],
"last": "Borg",
"suffix": ""
},
{
"first": "J",
"middle": [
"F"
],
"last": "Patrick",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Groenen",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ingwer Borg and Patrick J.F. Groenen. Modern Multidi- mensional Scaling: Theory and Applications. 2005.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Lectures on Government and Binding: The Pisa Lectures. Holland: Foris Publications",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky. Lectures on Government and Binding: The Pisa Lectures. Holland: Foris Publications, 1981.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Clause restructuring for statistical machine translation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Ivona",
"middle": [],
"last": "Kucerova",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "531--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins, Philipp Koehn, and Ivona Kucerova. Clause restructuring for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the As- sociation for Computational Linguistics, pages 531- 540, 2005.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Modeling word forms using latent underlying morphs and phonology",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "433--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Nanyun Peng, and Jason Eisner. Mod- eling word forms using latent underlying morphs and phonology. Transactions of the Association for Com- putational Linguistics, 3:433-447, 2015.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Data augmentation for deep neural network acoustic modeling",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Vaibhava",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": null,
"venue": "September 2015",
"volume": "23",
"issue": "",
"pages": "1469--1477",
"other_ids": {
"DOI": [
"10.1109/TASLP.2015.2438544"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaodong Cui, Vaibhava Goel, and Brian Kingsbury. Data augmentation for deep neural network acoustic modeling. IEEE/ACM Transactions on Audio, Speech and Language Processing, 23(9):1469-1477, Septem- ber 2015. ISSN 2329-9290.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Machine Translation: A View from the Lexicon",
"authors": [
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonnie J. Dorr. Machine Translation: A View from the Lexicon. MIT Press, Cambridge, MA, 1993.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "DUSTer: A method for unraveling crosslanguage divergences for statistical word-level alignment",
"authors": [
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Pearl",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 5th Conference of the Association for Machine Translation in the Americas on Machine Translation: From Research to Real Users, AMTA '02",
"volume": "",
"issue": "",
"pages": "31--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonnie J. Dorr, Lisa Pearl, Rebecca Hwa, and Nizar Habash. DUSTer: A method for unraveling cross- language divergences for statistical word-level align- ment. In Proceedings of the 5th Conference of the As- sociation for Machine Translation in the Americas on Machine Translation: From Research to Real Users, AMTA '02, pages 31-43, 2002.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The World Atlas of Language Structures Online",
"authors": [
{
"first": "Matthew",
"middle": [
"S"
],
"last": "Dryer",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Haspelmath",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew S. Dryer and Martin Haspelmath, editors. The World Atlas of Language Structures Online.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Cross-lingual transfer for unsupervised dependency parsing without parallel data",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 19th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "113--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Duong, Trevor Cohn, Steven Bird, and Paul Cook. Cross-lingual transfer for unsupervised dependency parsing without parallel data. In Proceedings of the 19th Conference on Computational Natural Language Learning, pages 113-122, 2015.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Favor short dependencies: Parsing with soft and hard constraints on dependency length",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Trends in Parsing Technology: Dependency Parsing, Domain Adaptation, and Deep Parsing",
"volume": "",
"issue": "",
"pages": "121--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner and Noah A. Smith. Favor short dependen- cies: Parsing with soft and hard constraints on depen- dency length. In Harry Bunt, Paola Merlo, and Joakim Nivre, editors, Trends in Parsing Technology: Depen- dency Parsing, Domain Adaptation, and Deep Parsing, chapter 8, pages 121-150. 2010.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Posterior regularization for structured latent variable models",
"authors": [
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Joao",
"middle": [],
"last": "Gra\u00e7a",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Gillenwater",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Machine Learning Research",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuzman Ganchev, Joao Gra\u00e7a, Jennifer Gillenwater, and Ben Taskar. Posterior regularization for structured la- tent variable models. Journal of Machine Learning Research, 11:2001-2049, 2010.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Cross-lingual dependency parsing based on distributed representations",
"authors": [
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1234--1244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. Cross-lingual dependency pars- ing based on distributed representations. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1234-1244, 2015.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Performance Theory of Order and Constituency",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hawkins",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Hawkins. A Performance Theory of Order and Con- stituency. Cambridge University Press, 1994.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Kocisky",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1684--1692",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Pro- cessing Systems, pages 1684-1692, 2015.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Clause restructuring for SMT not absolutely helpful",
"authors": [
{
"first": "Susan",
"middle": [],
"last": "Howlett",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "384--388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan Howlett and Mark Dras. Clause restructuring for SMT not absolutely helpful. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 384-388, 2011. Erratum at https://www.aclweb.org/anthology/P/ P11/P11-2067e1.pdf.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Vocal tract length perturbation (VTLP) improves speech recognition",
"authors": [
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 30th International Conference on Machine Learning Workshop on Deep Learning for Audio, Speech and Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Navdeep Jaitly and Geoffrey E. Hinton. Vocal tract length perturbation (VTLP) improves speech recognition. In Proceedings of the 30th International Conference on Machine Learning Workshop on Deep Learning for Audio, Speech and Language, 2013.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Corpus-based induction of syntactic structure: Models of dependency and constituency",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "478--485",
"other_ids": {
"DOI": [
"10.3115/1218955.1219016"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher Manning. Corpus-based in- duction of syntactic structure: Models of dependency and constituency. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguis- tics, pages 478-485, 2004.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "ImageNet classification with deep convolutional neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in Neural Information Processing Systems",
"volume": "25",
"issue": "",
"pages": "1097--1105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neu- ral networks. In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural In- formation Processing Systems 25, pages 1097-1105. 2012.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1337--1348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Fei Xia. Unsupervised dependency pars- ing with transferring distribution via parallel guidance and entropy regularization. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1337-1348, 2014.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Majli\u0161",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Majli\u0161. W2C-web to corpus-corpora, 2011. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University in Prague.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multisource transfer of delexicalized dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "62--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. Multi- source transfer of delexicalized dependency parsers. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 62- 72, 2011.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Using universal linguistic knowledge to guide grammar induction",
"authors": [
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Harr",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1234--1244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. Using universal linguistic knowledge to guide grammar induction. In Proceedings of the 2010 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1234-1244, 2010.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "RDRPOSTagger: A ripple down rules-based part-of-speech tagger",
"authors": [],
"year": 2014,
"venue": "Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "17--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen, Dai Quoc Nguyen, Dang Duc Pham, and Son Bao Pham. RDRPOSTagger: A ripple down rules-based part-of-speech tagger. In Proceedings of the Demonstrations at the 14th Conference of the Eu- ropean Chapter of the Association for Computational Linguistics, pages 17-20, 2014.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Pseudo-projective dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "99--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and Jens Nilsson. Pseudo-projective de- pendency parsing. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguis- tics, pages 99-106, 2005.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Universal dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "1659--1666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Yoav Goldberg, Jan Haji\u010d, Christopher Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the 10th International Conference on Language Resources and Evaluation, pages 1659- 1666, 2016.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The Linguistics of Punctuation. Number 18 in CSLI Lecture Notes. Center for the Study of Language and Information",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Nunberg",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Nunberg. The Linguistics of Punctuation. Num- ber 18 in CSLI Lecture Notes. Center for the Study of Language and Information, 1990.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Density-driven cross-lingual transfer of dependency parsers",
"authors": [
{
"first": "Mohammad",
"middle": [
"Sadegh"
],
"last": "Rasooli",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "328--338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Sadegh Rasooli and Michael Collins. Density-driven cross-lingual transfer of dependency parsers. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 328-338, 2015.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Yara parser: A fast and accurate dependency parser",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Sadegh Rasooli",
"suffix": ""
},
{
"first": "Joel",
"middle": [
"R"
],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2015,
"venue": "Computing Research Repository",
"volume": "",
"issue": "2",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.06733"
]
},
"num": null,
"urls": [],
"raw_text": "Mohammad Sadegh Rasooli and Joel R. Tetreault. Yara parser: A fast and accurate dependency parser. Computing Research Repository, arXiv:1503.06733 (version 2), 2015.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Unsupervised morphology-based vocabulary expansion",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Sadegh Rasooli",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Lippincott",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1349--1359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Sadegh Rasooli, Thomas Lippincott, Nizar Habash, and Owen Rambow. Unsupervised morphology-based vocabulary expansion. In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1349-1359, June 2014.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Training paradigms for correcting errors in grammar and usage",
"authors": [
{
"first": "Alla",
"middle": [],
"last": "Rozovskaya",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "154--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alla Rozovskaya and Dan Roth. Training paradigms for correcting errors in grammar and usage. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 154-162, 2010.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Permutation generation methods",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Sedgewick",
"suffix": ""
}
],
"year": 1977,
"venue": "ACM Computing Surveys",
"volume": "9",
"issue": "2",
"pages": "137--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Sedgewick. Permutation generation methods. ACM Computing Surveys, 9(2):137-164, 1977.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Generating factoid questions with recurrent neural networks: The 30m factoid questionanswer corpus",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garc\u00eda-Dur\u00e1n",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Sungjin",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Sarath",
"middle": [],
"last": "Chandar",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "588--598",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1056"
]
},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Alberto Garc\u00eda-Dur\u00e1n, Caglar Gul- cehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, and Yoshua Bengio. Generating factoid questions with recurrent neural networks: The 30m factoid question- answer corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 588-598. Associ- ation for Computational Linguistics, 2016.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Best practices for convolutional neural networks applied to visual document analysis",
"authors": [
{
"first": "Patrice",
"middle": [
"Y"
],
"last": "Simard",
"suffix": ""
},
{
"first": "Dave",
"middle": [],
"last": "Steinkraus",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Platt",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 7th International Conference on Document Analysis and Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrice Y. Simard, Dave Steinkraus, and John C. Platt. Best practices for convolutional neural networks ap- plied to visual document analysis. In Proceedings of the 7th International Conference on Document Analy- sis and Recognition, pages 958-, 2003.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Annealing structural bias in multilingual weighted grammar induction",
"authors": [
{
"first": "Noah",
"middle": [
"A."
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "569--576",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith and Jason Eisner. Annealing struc- tural bias in multilingual weighted grammar induction. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meet- ing of the Association for Computational Linguistics, pages 569-576, 2006.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Data point selection for cross-language adaptation of dependency parsers",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "682--686",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard. Data point selection for cross-language adaptation of dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 682-686, 2011.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Grammar Induction and Parsing with Dependency-and-Boundary Models",
"authors": [
{
"first": "Valentin",
"middle": [
"I."
],
"last": "Spitkovsky",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin I. Spitkovsky. Grammar Induction and Parsing with Dependency-and-Boundary Models. PhD thesis, Computer Science Department, Stanford University, Stanford, CA, 2013.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Breaking out of local optima with count transforms and model recombination: A study in grammar induction",
"authors": [
{
"first": "Valentin",
"middle": [
"I."
],
"last": "Spitkovsky",
"suffix": ""
},
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1983--1995",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Juraf- sky. Breaking out of local optima with count trans- forms and model recombination: A study in grammar induction. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1983-1995, 2013.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Unsupervised morph segmentation and statistical language models for vocabulary expansion",
"authors": [
{
"first": "Matti",
"middle": [],
"last": "Varjokallio",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "175--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matti Varjokallio and Dietrich Klakow. Unsupervised morph segmentation and statistical language models for vocabulary expansion. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 175-180, 2016.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Managing misspelled queries in IR applications",
"authors": [
{
"first": "Jes\u00fas",
"middle": [],
"last": "Vilares",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Vilares",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Otero",
"suffix": ""
}
],
"year": 2011,
"venue": "Information Processing & Management",
"volume": "47",
"issue": "2",
"pages": "263--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jes\u00fas Vilares, Manuel Vilares, and Juan Otero. Manag- ing misspelled queries in IR applications. Information Processing & Management, 47(2):263-286, 2011.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Building a semantic parser overnight",
"authors": [
{
"first": "Yushi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1332--1342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yushi Wang, Jonathan Berant, and Percy Liang. Build- ing a semantic parser overnight. In Proceedings of the 53rd Annual Meeting of the Association for Computa- tional Linguistics and the 7th International Joint Con- ference on Natural Language Processing (Volume 1: Long Papers), pages 1332-1342, 2015.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Towards AI-complete question answering: A set of prerequisite toy tasks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards AI-complete question an- swering: A set of prerequisite toy tasks. In Proceed- ings of the International Conference on Learning Rep- resentations, 2016.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The original UD tree for a short English sentence, and its \"translations\" into three synthetic languages, which are obtained by manipulating the tree. (Moved constituents are underlined.) Each language has a different distribution over surface part-of-speech sequences."
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Comprehensive results for single-source transfer from a pool of m languages (the horizontal axis) synthesized from n real languages. For each color 1, 2, . . . , n, the upper dashed line shows the UAS achieved by supervised selection; the lower solid line shows unsupervised selection; and the shaded area highlights the difference. The black dashed and solid lines connect the points where m = n, showing how rapidly UAS increases with n when only real languages are used."
},
"FIGREF6": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "UAS performance of different source parsers when applied to English development sentences. The x axis shows the 10 real training languages S, in decreasing order of their UAS performance (plotted as large black dots). For each superstrate R, we plot a curve showing-for each substrate S-the best UAS of the languages S[R/N], S[R/V] and S[R/N, R/V]."
},
"FIGREF7": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Sets T (target languages), S (real source languages), S (synthetic source languages) Output: Sets of data points D sup , D unsup 1"
},
"TABREF1": {
"num": null,
"text": "det.ADJ.amod A.t 1 .r 1 .t 2 .r 2 A.BOS.BOS.DET.det, A.DET.det.ADJ.amod, A.ADJ.amod.NOUN.head, A.NOUN.head.EOS.EOS plus backoff features and H features (not shown).",
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF5": {
"num": null,
"text": "Our final comparison on the 17 test languages appears in the upper part of this table.",
"content": "<table><tr><td>We ask whether single-source transfer to these 17 real target languages is improved by augmenting the source pool of 10 real lan-guages with 1200 synthetic languages. When different languages are selected in these two settings, we boldface the setting with higher test UAS, or both settings if they are not significantly different (paired permutation test by sentence, p &lt; 0.05). For completeness, we extend the ta-ble with the 10 development languages. The \"Avg.\" lines report the average of 17 test or 27 test+dev languages. The two supervised-selection averages are significantly different (paired permutation test by language, p &lt; 0.05).</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF6": {
"num": null,
"text": ". . . , L n }",
"content": "<table><tr><td>10: 11: 12: 13:</td><td>m \u2190 |P| D sup \u2190 D sup \u222a {(n, m, UAS sup (P, T ))} D unsup \u2190 D unsup \u222a {(n, m, UAS unsup (P, T ))}</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF7": {
"num": null,
"text": "53.24 55.08 53.24 53.24 nl 71.70 39.40 38.99 42.42 42.75 et 72.88 45.19 54.81 56.07 55.09 la proiel 71.83 37.25 38.26 37.25 38.10 da 78.04 47.98 43.40 47.98 45.89 en 77.33 48.29 44.40 48.29 48.15 grc 68.80 32.15 32.15 33.52 34.36 grc proiel 72.93 42.46 41.39 43.49 44.19 fi 65.65 29.59 28.81 36.85 36.90 got 76.66 44.77 44.05 44.77 46.83 Avg. 73.42 42.03 42.13 44.39 44.55",
"content": "<table><tr><td/><td>tag unsupervised (weakly) superv.</td></tr><tr><td>target</td><td>real +synth real +synth</td></tr><tr><td>bg</td><td>78.33</td></tr></table>",
"html": null,
"type_str": "table"
}
}
}
}