| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:34:48.128057Z" |
| }, |
| "title": "Syntax-driven Iterative Expansion Language Models for Controllable Text Generation", |
| "authors": [ |
| { |
| "first": "Noe", |
| "middle": [], |
| "last": "Casas", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Lucy Software, United Language Group * TALP Research Center", |
| "institution": "Universitat Polit\u00e8cnica de Catalunya", |
| "location": {} |
| }, |
| "email": "noe.casas@upc.edu" |
| }, |
| { |
| "first": "Jose", |
| "middle": [ |
| "A R" |
| ], |
| "last": "Fonollosa", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Lucy Software, United Language Group * TALP Research Center", |
| "institution": "Universitat Polit\u00e8cnica de Catalunya", |
| "location": {} |
| }, |
| "email": "jose.fonollosa@upc.edu" |
| }, |
| { |
| "first": "Marta", |
| "middle": [ |
| "R" |
| ], |
| "last": "Costa-Juss\u00e0", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Lucy Software, United Language Group * TALP Research Center", |
| "institution": "Universitat Polit\u00e8cnica de Catalunya", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "The dominant language modeling paradigm handles text as a sequence of discrete tokens. While that approach can capture the latent structure of the text, it is inherently constrained to sequential dynamics for text generation. We propose a new paradigm for introducing a syntactic inductive bias into neural text generation, where the dependency parse tree is used to drive the Transformer model to generate sentences iteratively. Our experiments show that this paradigm is effective at text generation, with quality between LSTMs and Transformers, and comparable diversity, requiring less than half their decoding steps, and its generation process allows direct control over the syntactic constructions of the generated text, enabling the induction of stylistic variations.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "The dominant language modeling paradigm handles text as a sequence of discrete tokens. While that approach can capture the latent structure of the text, it is inherently constrained to sequential dynamics for text generation. We propose a new paradigm for introducing a syntactic inductive bias into neural text generation, where the dependency parse tree is used to drive the Transformer model to generate sentences iteratively. Our experiments show that this paradigm is effective at text generation, with quality between LSTMs and Transformers, and comparable diversity, requiring less than half their decoding steps, and its generation process allows direct control over the syntactic constructions of the generated text, enabling the induction of stylistic variations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The currently dominant text generation paradigm is based on generating a sequence of discrete tokens in a left-to-right autoregressive way. Most neural language models (LMs) fall into this autoregressive generation category. Some neural architectures are sequential in nature, such as those based on recurrent neural networks (RNNs), lending themselves naturally to the autoregressive approach when used together with teacher forcing (Williams and Zipser, 1989) . Other architectures, such as Transformer (Vaswani et al., 2017) , while not intrinsically sequential, have also been targeted for sequential generation. On the other hand, some recent lines of research have focused on nonsequential generation. In this work, we propose a new paradigm for text generation and language modeling called Iterative Expansion Language Model, which generates the final sequence following a token ordering defined by the sentence dependency parse by iteratively expanding each level of the tree.", |
| "cite_spans": [ |
| { |
| "start": 434, |
| "end": 461, |
| "text": "(Williams and Zipser, 1989)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 505, |
| "end": 527, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section, we provide an overview of works related to ours, including dependency treedriven LMs ( \u00a72.1), syntax-driven generation ( \u00a72.2), insertion-based approaches ( \u00a72.3) and iterative refinement approaches ( \u00a72.4).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The use of dependency parse trees to drive a language model was first proposed by Chelba et al. (1997) , with a similar structure to an n-gram LM, but where the context of a word is its preceding bigram plus a list of preceding words whose parent does not precede it. Shen et al. (2008) make use of the dependency tree in a probabilistic LM, computing the probability of each word conditioned on its parent and the sibling words between both. Mirowski and Vlachos (2015) propose a dependency LM based on RNNs, where the dependency tree is decomposed into a collection of unrolls, that is, paths from the root to one of the leaves, and where the probability of a word can be predicted from these unrolls. Buys and Blunsom (2018) propose a shift-reduce transition-based LSTM (Hochreiter and Schmidhuber, 1997) dependency LM that can be used for language modeling and generation by means of dynamic programming.", |
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 102, |
| "text": "Chelba et al. (1997)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 268, |
| "end": 286, |
| "text": "Shen et al. (2008)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 443, |
| "end": 470, |
| "text": "Mirowski and Vlachos (2015)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 704, |
| "end": 727, |
| "text": "Buys and Blunsom (2018)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 773, |
| "end": 807, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency LMs", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Recurrent neural network grammars (Dyer et al., 2016) are recursive models that operate with a stack of symbols that can be populated with terminals or nonterminals, or \"reduced\" to generate a syntactic constituent, obtaining as a result a sentence and its associated constituency parse tree. Shen et al. (2018) use skip-connections to integrate constituent relations with RNNs, learning the underlying dependency structures by leveraging a syntactic distance together with structured attention. Akoury et al. (2019) use a simplified constituency tree as latent variables, modeling it autoregressively to later use it as input for a nonautoregressive transformer that generates the output sentence.", |
| "cite_spans": [ |
| { |
| "start": 34, |
| "end": 53, |
| "text": "(Dyer et al., 2016)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 293, |
| "end": 311, |
| "text": "Shen et al. (2018)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 496, |
| "end": 516, |
| "text": "Akoury et al. (2019)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntax-driven Generation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Ordered neurons (Shen et al., 2019) are modified LSTMs where the latent sentence tree structure is used to control the dependencies between recurrent units with a special \"master\" input and forget gates. propose a conditional generative model that iteratively generates tokens plus the position at which they should be inserted within the sequence. Emelianenko et al. (2019) further propose to optimize the generation order by sampling from the ordering permutations. Instead, optimize a lower bound of the marginalized probability over every possible ordering. Gu et al. (2019a) handle the generation order as a latent variable that is captured as the relative position through self-attention, optimizing the ELBO to train the model. Levenshtein Transformer (Gu et al., 2019b ) is a non-autoregressive approach trained with reinforcement learning (RL) to generate token insertion and deletion actions. While it benefits from the same generation speed-ups over autoregressive models as our model, it has the added difficulty of learning an insertion/deletion policy using RL without any linguistically or empirically motivated priors, which can be slow or difficult to obtain convergence in practice. By comparison, our approachmakes uses a linguistically motivated prior for word insertion in a fully supervised way, avoiding the optimization difficulties of RL. Welleck et al. (2019) use cost minimization imitation learning to learn a policy to generate a binary tree that is used to drive the token generation. Lee et al. (2018) propose a latent variable nonautoregressive machine translation model where first the target length is predicted by the model, and then, the decoder is iteratively applied to its own output to refine it.", |
| "cite_spans": [ |
| { |
| "start": 16, |
| "end": 35, |
| "text": "(Shen et al., 2019)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 349, |
| "end": 374, |
| "text": "Emelianenko et al. (2019)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 562, |
| "end": 579, |
| "text": "Gu et al. (2019a)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 759, |
| "end": 776, |
| "text": "(Gu et al., 2019b", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1364, |
| "end": 1385, |
| "text": "Welleck et al. (2019)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 1515, |
| "end": 1532, |
| "text": "Lee et al. (2018)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntax-driven Generation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Mask-predict (Ghazvininejad et al., 2019 ) also predicts the target sentence length and then nonautoregressively predicts the sentence itself, iteratively refining it a fixed number of times, masking out and regenerating the tokens it is least confident about. Lawrence et al. (2019) follow a similar approach and start with a sequence of placeholder tokens (all the same) of a specified length, and they iteratively replace them with normal tokens via masked LM-style inference. As the masking strategy for the training data, the authors propose different stochastic processes to randomly select which placeholders are to be uncovered.", |
| "cite_spans": [ |
| { |
| "start": 13, |
| "end": 40, |
| "text": "(Ghazvininejad et al., 2019", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 261, |
| "end": 283, |
| "text": "Lawrence et al. (2019)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Iterative Refinement", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Our proposal is to train a new kind of language model where the token generation order is driven by the dependency parse tree of the sentence and where the generation process is iterative. The input vocabulary contains terminal tokens as well as non-terminal special tokens called dependency placeholders, each of which is associated with one of the possible dependency relations to the heads. For the dependency tree in Figure 1 The input of the first iteration is the sequence with the [ROOT] element. At each iteration, the model receives as input a sequence I tok with tokens from the input vocabulary and non-autoregressively generates two new sequences, each with the same length as the input.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 421, |
| "end": 429, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Iterative Expansion LMs", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The first output sequence, O tok , contains tokens from a vocabulary with all possible textual tokens (terminal tokens). The second output, O exp , is a sequence of tokens called expansion placeholders, which are taken from a separate vocabulary. Each expansion placeholder is associated with a pattern describing the left and right dependencies of the token at that position in the O tok sequence. An example of dependency expansion could be [nsubj-advmod-HEAD-xcomp] for the word \"likes\" in the dependency parse tree from Figure 1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 524, |
| "end": 532, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Iterative Expansion LMs", |
| "sec_num": "3" |
| }, |
| { |
| "text": "After each iteration, the output of the model is expanded. 1 This consists of creating a new sequence by combining the tokens from I tok , O tok and O exp . This process is illustrated in Figure 2 , making use of the dependency tree from Figure 1 .", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 60, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 188, |
| "end": 196, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 238, |
| "end": 246, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Iterative Expansion LMs", |
| "sec_num": "3" |
| }, |
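The expansion step described above can be sketched as follows. This is a hypothetical helper (`expand`, with simplified token conventions of our own choosing: placeholders are bracketed strings, patterns use `-` separators) rather than the paper's actual implementation; it shows how a new input level is built by copying terminals from I tok and replacing each dependency placeholder with the expansion pattern of the token generated at that position.

```python
def expand(i_tok, o_tok, o_exp):
    """Build the next-level input sequence from I_tok, O_tok and O_exp.

    Positions already holding a terminal token are copied unchanged;
    placeholder positions are replaced by the generated token's expansion
    pattern, e.g. [nsubj-advmod-HEAD-xcomp] becomes
    [nsubj], [advmod], <token>, [xcomp].
    """
    out = []
    for inp, tok, exp in zip(i_tok, o_tok, o_exp):
        if not (inp.startswith("[") and inp.endswith("]")):
            out.append(inp)  # terminal generated in a previous iteration
            continue
        for rel in exp.strip("[]").split("-"):
            out.append(tok if rel == "HEAD" else "[%s]" % rel)
    return out
```

For instance, the first iteration over `["[ROOT]"]` with generated token `"likes"` and expansion `"[nsubj-advmod-HEAD-xcomp]"` yields `["[nsubj]", "[advmod]", "likes", "[xcomp]"]`.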
| { |
| "text": "When there is a padding token [pad] in the output (either O tok or O exp ), this means that the output at that position is ignored when computing the loss function. This occurs when the terminal token has already been computed in previous iterations and has therefore been received as part of I tok , and the model does not need to compute it again.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Iterative Expansion LMs", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Note also that an empty dependencies token [HEAD] marks the end of a branch and that there is no need for an end of sequence token <eos>. As shown in the example from Figure 1 , the generation of different branches occurs in parallel, needing only 3 iterations to generate a 6-token sentence. The strategy for composing tree expansion tokens (e.g., [nsubj-advmod-HEAD-xcomp]) may not scale well when single words have many direct dependencies. To alleviate this, we introduce a preprocessing step to modify the dependency tree so that every word has at most one dependency to the left and one to the right. For each word with more than one dependency on any of its sides, we rearrange the tree to force left-to-right dependencies. Although this tree binarization reduces the degree of parallelism, it reduces data sparsity and allows handling constructions with a number of dependencies may otherwise be too large for the model to properly capture, such as enumerations (e.g., \"I bought a pair of shoes, an umbrella, a beautiful jacket and a bracelet\").", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 167, |
| "end": 175, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Iterative Expansion LMs", |
| "sec_num": "3" |
| }, |
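One possible realization of the binarization step is sketched below. The paper does not spell out the exact rearrangement, so this is an assumption-laden sketch (our own `binarize`/`rightmost` helpers on a toy dict-based tree): extra same-side dependents are re-attached as a left-to-right chain so every node keeps at most one left and one right child.

```python
def rightmost(node):
    # Descend to the deepest right descendant (the free right slot).
    while node["right"]:
        node = node["right"][0]
    return node

def binarize(node):
    """Rearrange `node` so it has at most one left and one right child.

    `node` is a dict: {"word": str, "left": [children], "right": [children]}.
    Extra same-side dependents are chained left-to-right under the first one.
    """
    for side in ("left", "right"):
        kids = [binarize(k) for k in node[side]]
        if len(kids) > 1:
            for prev, nxt in zip(kids, kids[1:]):
                rightmost(prev)["right"].append(nxt)
            kids = kids[:1]
        node[side] = kids
    return node
```

After binarization, an enumeration with three right dependents of one head becomes a chain of single right dependencies, at the cost of needing one iteration per chain link.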
| { |
| "text": "Iterative expansion LMs can be naturally extended to subword vocabularies, like byte-pair encoding (BPE; Sennrich et al., 2016): for each word, we decompose its node in the tree into as many nodes as subwords in the word, rearranging the tree so that the head of the old word is now the head of the first subword, and each subsequent subword depends on the previous one, while every dependency of the old word node now depends on the last subword.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Iterative Expansion LMs", |
| "sec_num": "3" |
| }, |
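The subword rewrite just described can be sketched on a toy tree representation (hypothetical `split_node` helper; node layout is our own simplification, ignoring left/right sides): the first subword takes the old word's place, each following subword depends on the previous one, and the old word's dependents move to the last subword.

```python
def split_node(node, subwords):
    """Replace a word node by a chain of subword nodes (BPE-style).

    Nodes are dicts: {"word": str, "deps": [children]}. The returned head
    is the first subword; the old node's dependents hang off the last one.
    """
    assert subwords, "need at least one subword"
    head = {"word": subwords[0], "deps": []}
    last = head
    for sw in subwords[1:]:
        nxt = {"word": sw, "deps": []}
        last["deps"].append(nxt)  # each subword depends on the previous one
        last = nxt
    last["deps"].extend(node["deps"])  # old dependents move to last subword
    return head
```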
| { |
| "text": "The neural architecture proposed is based on a Transformer decoder (Vaswani et al., 2017) . To generate the dual output (terminal tokens and expansion placeholders) we condition the generation of terminals on the expansions: the probability distribution over the expansion token space is generated first by projecting from one of the intermediate layers' hidden states. We sample from it and use the resulting expansion IDs as an index to a trainable expansion embedding layer; the embedded vectors are added to the hidden state used to generate them for use as input to subsequent layers.", |
| "cite_spans": [ |
| { |
| "start": 67, |
| "end": 89, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural Architecture", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "As described in Section 3, the input and output token vocabularies are different: the latter only contains terminal tokens (plus some special tokens such as [PAD]); the former also contains dependency placeholders. However, for practical purposes, at the model level, we define both vocabularies to be the same, both with terminal tokens and dependency placeholders, and we mask the entries of dependency placeholders in the final softmax.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural Architecture", |
| "sec_num": "3.1" |
| }, |
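The masking of dependency-placeholder entries in the shared output softmax can be illustrated with a minimal sketch (our own `masked_softmax`, operating on plain lists rather than tensors): masked entries receive probability exactly zero and the rest renormalize among themselves.

```python
import math

def masked_softmax(logits, keep):
    """Softmax over `logits` with positions where keep[i] is False
    (e.g. dependency-placeholder vocabulary entries) forced to zero."""
    neg_inf = float("-inf")
    masked = [x if k else neg_inf for x, k in zip(logits, keep)]
    m = max(masked)
    exps = [math.exp(x - m) if x != neg_inf else 0.0 for x in masked]
    z = sum(exps)
    return [e / z for e in exps]
```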
| { |
| "text": "To inject the syntactic dependency information as input into the model, we add a layer of learned positional embeddings containing the position of the head of each token, and we refer to this embedding layer as head position embedding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural Architecture", |
| "sec_num": "3.1" |
| }, |
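The head position embedding can be pictured as one extra additive lookup table. The sketch below is hypothetical (our own `embed` helper with toy list-based embedding tables; the real model uses learned tensors): each input vector is the sum of a token embedding, a positional embedding, and a head-position embedding indexed by the position of the token's syntactic head.

```python
def embed(tok_ids, positions, head_positions, tok_emb, pos_emb, head_emb):
    """Sum token, position and head-position embeddings per input token.

    All *_emb arguments are lookup tables: lists of equal-length vectors.
    """
    def add(*vecs):
        return [sum(xs) for xs in zip(*vecs)]
    return [add(tok_emb[t], pos_emb[p], head_emb[h])
            for t, p, h in zip(tok_ids, positions, head_positions)]
```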
| { |
| "text": "The self-attention mask used in Transformer to force causality is not used in our proposal. The input is therefore not masked at all, and the token predictions have access to the full input sequence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural Architecture", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For training iterative expansion LMs, the main input of the model is the tokens at one of the levels of the dependency parse tree (I tok ), while the output is the following level tokens (O tok ) and expansion placeholders (O exp ). A secondary input to the model are the dependency indexes, which are used in the head position embedding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The model is trained with the categorical crossentropy for both tokens and expansion placeholders, then adding both sublosses into the final loss (with equal weights). Tokens generated in previous iterations appear as [PAD] tokens in the expected output and are ignored when computing the loss.", |
| "cite_spans": [ |
| { |
| "start": 218, |
| "end": 223, |
| "text": "[PAD]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.2" |
| }, |
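The equal-weight, PAD-ignoring loss can be sketched as follows (a hypothetical `level_loss` on plain log-probability lists; the PAD sentinel value and naming are our own, not the paper's):

```python
import math

PAD = -1  # marks positions already generated in earlier iterations

def level_loss(tok_logprobs, tok_targets, exp_logprobs, exp_targets):
    """Equal-weight sum of token and expansion cross-entropies,
    skipping PAD-marked targets."""
    def xent(logprobs, targets):
        # Mean negative log-likelihood over non-PAD positions only.
        terms = [-lp[t] for lp, t in zip(logprobs, targets) if t != PAD]
        return sum(terms) / max(len(terms), 1)
    return xent(tok_logprobs, tok_targets) + xent(exp_logprobs, exp_targets)
```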
| { |
| "text": "Training takes place in batches; as the trainable unit is a level transition, a training batch is composed of level transitions from different sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In iterative expansion LMs, inference takes place iteratively. The initial state is a batch of [ROOT] tokens, together with the head positions initialized to the special value representing the root node and, in constrained attention variants, a mask with the self-dependency of the single node in each sentence in the batch. At each iteration, the model generates the probability distributions for terminal tokens and expansion tokens. We use nucleus sampling (Holtzman et al., 2020) to sample from them. The terminal token sequences are expanded according to the expansion tokens (see \u00a73), and these are the inputs for the following iteration if there are still unfinished branches. Before sampling from the token and expansion probability distributions, we mask the <unk> token and the dependency placeholders to avoid generating them.", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 101, |
| "text": "[ROOT]", |
| "ref_id": null |
| }, |
| { |
| "start": 460, |
| "end": 483, |
| "text": "(Holtzman et al., 2020)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference and Text Generation", |
| "sec_num": "3.3" |
| }, |
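Nucleus sampling, used at each decoding iteration, can be implemented in a few lines (a minimal list-based sketch of the Holtzman et al. (2020) procedure; function name and signature are ours): keep the smallest head of the sorted distribution whose cumulative mass reaches p, renormalize, and sample from it.

```python
import random

def nucleus_sample(probs, p=0.9, rng=random):
    """Top-p (nucleus) sampling over a list of probabilities indexed by
    token id: discard the distribution tail, renormalize, then sample."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, total = [], 0.0
    for i in order:
        kept.append(i)
        total += probs[i]
        if total >= p:
            break
    # Sample proportionally within the kept nucleus.
    r = rng.random() * total
    acc = 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]
```

Masking the <unk> token and the dependency placeholders amounts to zeroing their entries (as in the softmax masking above) before calling this sampler.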
| { |
| "text": "Although iterative expansion LMs could be subject to beam search across iterations, we have not covered such a possibility as part of this work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference and Text Generation", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We conducted experiments on unconditional text generation following the methodology used by Caccia et al. (2020). The goal is to assess both the quality and diversity of the text generated by the model and the baselines. For the quality evaluation, we use the BLEU score (Papineni et al., 2002) over the test set, where each generated sentence is evaluated against the whole test set as a reference. For diversity, we used the self-BLEU score (Zhu et al., 2018) , computed using as references the rest of the generated sentences. For each model, the temperature of the final softmax \u03c4 is tuned to generate text in the closest quality/diversity regime to the training data.", |
| "cite_spans": [ |
| { |
| "start": 271, |
| "end": 294, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 443, |
| "end": 461, |
| "text": "(Zhu et al., 2018)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unconditional Text Generation", |
| "sec_num": "4.1" |
| }, |
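The softmax temperature \u03c4 tuned per model in these experiments acts as a simple rescaling of the logits before normalization; a minimal sketch (our own `apply_temperature` helper): \u03c4 < 1 sharpens the distribution toward high-probability tokens, \u03c4 > 1 flattens it toward more diverse samples.

```python
import math

def apply_temperature(logits, tau):
    """Softmax over logits divided by temperature tau."""
    scaled = [x / tau for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    z = sum(exps)
    return [e / z for e in exps]
```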
| { |
| "text": "Iterative expansion LMs are compared against a standard LM baselines, namely, AWD-LSTM 2 (Merity et al., 2018 ) and a Transformer LM (Vaswani et al., 2017) , both with word (w) and BPE subword (sw) vocabularies. The models were trained on the EMNLP2017 News dataset, which contains news in English, enriched with dependency annotations by corenlp, an automatic annotation tool that provides pre-trained models. Syntax-driven generation baseline models were not included because the only model with an available implementation that is able to do unsupervised text generation are RNNGs, but they proved not to scale even to medium-sized datasets like EMNLP2017 News. When sampling from models, we use nucleus sampling (Holtzman et al., 2020) , a form of ancestral sampling that constrains the candidate pool by discarding the distribution tail. Samples from the training and validation data are included for reference. Full hyperparameters and data processing details are described in Appendices D and B.", |
| "cite_spans": [ |
| { |
| "start": 89, |
| "end": 109, |
| "text": "(Merity et al., 2018", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 133, |
| "end": 155, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 716, |
| "end": 739, |
| "text": "(Holtzman et al., 2020)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unconditional Text Generation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Iterative expansion LMs drive the generation of text with the dependency parse tree. It is possible to influence the generated trees by altering artificially the probability of the different expansion tokens. To demonstrate this, we modified the decoding process of iterative expansion LMs to force the probability of generating adjectival constructions to be higher than normal, aiming at generating a more descriptive style: during decoding, we multiply the probabilities of the expansion placeholders that express adjectival dependencies (i.e. those containing adjectival modifier \"amod\" relations), and renormalize the probabilities by dividing by the sum.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Style Variation", |
| "sec_num": "4.2" |
| }, |
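The style-control trick just described reduces to a boost-and-renormalize over the expansion distribution; a sketch under our own naming (`boost_amod`; the actual boost factor used in the paper is not ours to state):

```python
def boost_amod(probs, vocab, factor=3.0):
    """Multiply the probability of every expansion placeholder containing
    an "amod" relation by `factor`, then renormalize to sum to 1."""
    boosted = [p * factor if "amod" in tok else p
               for p, tok in zip(probs, vocab)]
    z = sum(boosted)
    return [b / z for b in boosted]
```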
| { |
| "text": "We conducted this experiment with the wordlevel models trained on EMNLP2017 News data. We compute the ratio of adjectives per sentence to verify the increased presence of adjectives, while controlling quality and diversity measures over the generated text for potential degradation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Style Variation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We assess the ability of iterative expansion LMs to unconditionally generate text in terms quality (BLEU-5) vs. diversity (self BLEU-5), comparing against sequential baselines, each with a softmax temperature \u03c4 tuned separately.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In order to tune the output softmax termperature \u03c4 , we generated text with each model at different temperatures and chose the value of \u03c4 that was the most similar to a sample from the training data in terms of BLEU-5 against a sample from the validation set (proxy for quality) and self BLEU-5 = 0.7 = 0.8 = 0.9 = 1.0 = 1.1 = 1.2 = 0.7 = 0.8 = 0.9 = 1.0 = 1.1 = 1.2 Figure 3 : Quality vs. diversity on EMNLP2017 News (BLEU-5). Models with word-level vocabulary on the left and subword-level on the right. The point marker is color-filled for the chosen value of \u03c4 . Each point represents the average over 20 generated text samples, and is surrounded by a small colored ellipse representing the standard deviation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 367, |
| "end": 375, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Validation BLEU-5 (\u2191) / self BLEU-5 (\u2193) for the word-level models. \u03c4 = 0.70: ITEXP (w) 30.1 \u00b1 0.8 / 22.3 \u00b1 1.0, AWD-LSTM (w) 39.2 \u00b1 0.9 / 33.4 \u00b1 1.1, Transformer (w) 40.5 \u00b1 0.6 / 35.0 \u00b1 1.1. \u03c4 = 0.80: ITEXP (w) 26.8 \u00b1 0.8 / 16.0 \u00b1 1.0, AWD-LSTM (w) 33.0 \u00b1 0.7 / 23.2 \u00b1 1.0, Transformer (w) 35.8 \u00b1 0.7 / 26.3 \u00b1 0.8. \u03c4 = 0.90: ITEXP (w) 23.5 \u00b1 0.7 / 12.4 \u00b1 0.7, AWD-LSTM (w) 26.0 \u00b1 0.6 / 14.7 \u00b1 0.8, Transformer (w) 30.4 \u00b1 0.7 / 19.0 \u00b1 0.8. \u03c4 = 1.00: ITEXP (w) 20.0 \u00b1 0.6 / 9.4 \u00b1 0.5, AWD-LSTM (w) 19.4 \u00b1 0.6 / 9.0 \u00b1 0.6, Transformer (w) 25.2 \u00b1 0.5 / 13.3 \u00b1 0.5. \u03c4 = 1.10: ITEXP (w) 16.4 \u00b1 0.5 / 6.8 \u00b1 0.5, AWD-LSTM (w) 13.4 \u00b1 0.4 / 5.0 \u00b1 0.4, Transformer (w) 19.9 \u00b1 0.6 / 9.0 \u00b1 0.6. \u03c4 = 1.20: ITEXP (w) 13.4 \u00b1 0.6 / 5.1 \u00b1 0.4, AWD-LSTM (w) 9.0 \u00b1 0.5 / 2.9 \u00b1 0.3, Transformer (w) 15.8 \u00b1 0.5 / 6.2 \u00b1 0.5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Validation BLEU-5 (\u2191) / self BLEU-5 (\u2193) for the subword-level models. \u03c4 = 0.70: ITEXP (sw) 28.6 \u00b1 0.9 / 20.3 \u00b1 1.1, AWD-LSTM (sw) 39.0 \u00b1 0.8 / 33.5 \u00b1 1.1, Transformer (sw) 36.9 \u00b1 0.7 / 30.6 \u00b1 1.2. \u03c4 = 0.80: ITEXP (sw) 25.5 \u00b1 0.5 / 15.1 \u00b1 0.7, AWD-LSTM (sw) 32.3 \u00b1 0.7 / 22.4 \u00b1 0.7, Transformer (sw) 32.5 \u00b1 0.7 / 22.4 \u00b1 1.0. \u03c4 = 0.90: ITEXP (sw) 22.7 \u00b1 0.6 / 11.5 \u00b1 0.7, AWD-LSTM (sw) 25.6 \u00b1 0.6 / 14.3 \u00b1 0.6, Transformer (sw) 27.8 \u00b1 0.7 / 16.0 \u00b1 0.8. \u03c4 = 1.00: ITEXP (sw) 19.9 \u00b1 0.6 / 9.2 \u00b1 0.5, AWD-LSTM (sw) 19.2 \u00b1 0.5 / 8.9 \u00b1 0.5, Transformer (sw) 22.9 \u00b1 0.8 / 11.0 \u00b1 0.7. \u03c4 = 1.10: ITEXP (sw) 16.9 \u00b1 0.8 / 7.0 \u00b1 0.6, AWD-LSTM (sw) 13.9 \u00b1 0.5 / 5.5 \u00b1 0.4, Transformer (sw) 18.4 \u00b1 0.7 / 7.6 \u00b1 0.6. \u03c4 = 1.20: ITEXP (sw) 14.1 \u00b1 0.6 / 5.4 \u00b1 0.5, AWD-LSTM (sw) 9.7 \u00b1 0.4 / 3.3 \u00b1 0.3, Transformer (sw) 14.5 \u00b1 0.5 / 5.2 \u00b1 0.5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Table 1 : Validation and self BLEU-5 scores of the text generated by the word-level (top) and subword-level (bottom) models under study at different temperatures \u03c4 , showing the average and standard deviation over 20 different generated text samples. The selected generation regime is highlighted for each model, being the closest to the training sample, which has a validation BLEU-5 of 17.8 and a self BLEU-5 of 6.6.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 0, |
| "end": 7, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "5" |
| }, |
| { |
| "text": "(proxy for diversity). Each model was used to generate 20 samples of 400 sentences, and self-BLEU5 and validation-BLEU5 were computed over each of them, taking the average and the standard deviation. Figure 3 and Table 1 show these BLEU values, highlighting the chosen \u03c4 for each model. Given the low values for the standard deviation, we decided not to include it in subsequent tables to avoid unnecessary clutter. Note that in all BLEU vs. self-BLEU figures, each model is shown as a different line (each with its own color and/or dashed pattern) and that the data points computed for each temperature value are plotted with a specific marker shape (square, diamond, triangle, or flipped triangle).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 200, |
| "end": 208, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 213, |
| "end": 220, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Apart from BLEU scores, we also include extra quality measures, namely the perplexity obtained by other language models: an AWD-LSTM wordlevel LM and a Transformer word-level LM, both trained on EMNLP2017 News, plus OpenAI GPT-2 (1.5 B parameters) (Radford et al., 2019) . The results are shown in Table 2 . These results show how the generated text improves over AWD-LSTM in terms of quality by all measures, with a comparable level of diversity. In comparison to the Transformer, while the quality measured with BLEU-5 is better for ITEXP, the rest of the quality measures indicate that the text generated by the Transformer is of better quality. Table 3 : ITEXP (w, \u03c4 = 1.0) with increased adjectives.", |
| "cite_spans": [ |
| { |
| "start": 248, |
| "end": 270, |
| "text": "(Radford et al., 2019)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 298, |
| "end": 305, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 649, |
| "end": 656, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The results of the styled text generation experiments, shown in Table 3 , confirm that the style of the resulting text can be successfully modulated to the desired degree and that the quality and diversity are only slightly degraded at moderate increases of the probability of adjectival clause generation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 64, |
| "end": 71, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In order to better assess the quality of the generated text, we also include a human evaluation. For this, we took a sample of 60 sentences of each model under study, including also a sample of the same size from the validation data, to serve as reference. The sentences were evaluated by a pool of annotators, who were requested to rate the sentence in an integer scale from 1 to 5, taking into account its fluency and correctness.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Human Evaluation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The pack of sentences rated by each annotator contained 10 sentences from each of the models under evaluation. Each sentence under evaluation was part of the packs of 3 evaluators; this redundancy was used to measure the discrepancies in the rating of each sentence among annotators, which was quantified by means of the average per-sentence standard deviation. Table 4 shows the statistics of the obtained ratings, were we can see the average rating of the sentences generated by each model, together with the average per-sentence standard deviation, to understand how different the ratings for each sentence were among the different evaluator ratings. We can see that the highest human ratings were obtained by the Transformer, both with word and subwordlevel vocabularies, followed by ITEXP and then AWD-LSTM. Table 5 shows the human evaluation for the models from the style variation experiments presented in Table 3 . As we can see, there is a small degradation in quality as we force high levels of adjectival presence. Table 5 : Human evaluation for ITEXP (w) models with increased adjectival construction probability.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 362, |
| "end": 369, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 813, |
| "end": 820, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 913, |
| "end": 920, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 1026, |
| "end": 1033, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Human Evaluation", |
| "sec_num": "5.1" |
| }, |
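The two statistics reported for the human evaluation can be sketched as follows, assuming ratings arrive as a mapping from sentence id to the three annotator scores. The paper does not state whether a population or sample standard deviation was used; this sketch assumes the population version, and the example ratings are invented:

```python
# Sketch of the Table 4 statistics: overall mean rating and the average
# per-sentence standard deviation across the three annotators.
import statistics

def rating_stats(ratings):
    """ratings: dict sentence_id -> list of annotator scores (1-5).
    Returns (mean rating over all judgments, average per-sentence stdev)."""
    all_scores = [s for scores in ratings.values() for s in scores]
    per_sentence_sd = [statistics.pstdev(scores) for scores in ratings.values()]
    return statistics.mean(all_scores), statistics.mean(per_sentence_sd)

ratings = {"s1": [4, 5, 4], "s2": [2, 3, 2], "s3": [5, 5, 5]}
mean_rating, avg_sd = rating_stats(ratings)
```

A low `avg_sd` indicates that annotators largely agreed on each individual sentence, independently of whether the overall mean is high or low.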
| { |
| "text": "Given that the generation process in iterative expansion LMs is not sequential, we studied the distribution of the sentence lengths it generates. This is shown in Figure 4 for the text generated by a word-level iterative expansion LM trained on EMNLP2017 News, along with the lengths of a sample from the training data. Iterative expansion LMs generate the dependency parse tree as they generate text. We studied the depths of the dependency trees of generated text in relation to those parsed from the training data, as shown in Figure 5 . We also measured the degree to which the generated trees adhere to the trees obtained by parsing their lexicalized representation. Specifically, we computed the labeled and unlabeled attachment scores between both for the text generated at different softmax temperatures \u03c4 . Attachment scores are the standard performance measure in dependency parsing and are computed as the percentage of words that have been assigned the same head as the reference tree, over a test set. The attachment score is \"labeled\" if the dependency label is taken into account or \"unlabeled\" otherwise. As shown in Table 6 , the obtained labeled attachment scores (LAS) and unlabeled attachment scores (UAS) are very high across the different values of the generation temperature \u03c4 . \u03c4 0.7 0.8 0.9 1.0 1.2 LAS 96.4 95.3 94.2 92.3 86.2 UAS 98.0 97.3 96.5 95.2 90.7 Table 6 : Attachment scores of the generated trees.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 163, |
| "end": 171, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 530, |
| "end": 538, |
| "text": "Figure 5", |
| "ref_id": "FIGREF6" |
| }, |
| { |
| "start": 1133, |
| "end": 1140, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 1382, |
| "end": 1389, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Further Comparison with Real Text", |
| "sec_num": "6" |
| }, |
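The attachment scores in Table 6 are straightforward to compute once each token is represented as a (head index, dependency label) pair. A minimal sketch (the example parse is hypothetical, not from the paper's data):

```python
# Labeled/unlabeled attachment scores (LAS/UAS): the percentage of tokens
# whose predicted head (and, for LAS, also label) matches the reference parse.
def attachment_scores(predicted, gold):
    """predicted/gold: lists of sentences; each sentence is a list of
    (head, label) pairs aligned token by token. Returns (LAS, UAS) in %."""
    total = uas_hits = las_hits = 0
    for pred_sent, gold_sent in zip(predicted, gold):
        for (ph, pl), (gh, gl) in zip(pred_sent, gold_sent):
            total += 1
            if ph == gh:          # correct head -> counts toward UAS
                uas_hits += 1
                if pl == gl:      # correct head AND label -> counts toward LAS
                    las_hits += 1
    return 100.0 * las_hits / total, 100.0 * uas_hits / total

gold = [[(2, "det"), (0, "root"), (2, "obj")]]
pred = [[(2, "det"), (0, "root"), (2, "nmod")]]  # right head, wrong label
las, uas = attachment_scores(pred, gold)         # LAS ~66.7, UAS = 100.0
```

In the paper's setting, `gold` would come from re-parsing the lexicalized output, so high scores mean the trees the model generates agree with what a parser recovers from its own text.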
| { |
| "text": "Text generation with autoregressive models like LSTM or Transformer models offers a linear computational complexity with respect to the length of the generated sequence. In comparison, the dependency tree-driven decoding used by iterative expansion LMs generates text in parallel for each branch in the tree. If the tree was a perfectly balanced binary tree, then the computational complexity would be logarithmic. However, dependency trees in general are not balanced and, given the tree binarization postprocessing that we introduce, the parallelization is slightly reduced. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Ratio of tree-based decoding steps with respect to sequential decoding Binarized tree Non-binarized tree Ideal binary tree Figure 6 : Histogram of the ratio of the decoding steps needed to generate a sentence with tree-based decoding with respect to sequential generation. Figure 6 shows the speedup of the needed decoding steps of tree-based decoding with respect of auto-regressive decoding, taking a sample of the training data and computing the needed steps to decode them should the sentences have an idealized binary dependency parse tree, a normal parse tree, and a binarized parse tree. On average, the binarized parse tree, which is the decoding used by iterative expansion LMS, needs only 45% of the decoding steps needed by autoregressive decoding.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 728, |
| "end": 736, |
| "text": "Figure 6", |
| "ref_id": null |
| }, |
| { |
| "start": 878, |
| "end": 886, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Quantification of the Generation Speedup", |
| "sec_num": "6.1" |
| }, |
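The step counts behind Figure 6 follow from the observation that tree-driven decoding needs one step per level of the dependency tree (all nodes at a level expand in parallel), while autoregressive decoding needs one step per token. A sketch of the non-binarized case, assuming the usual CoNLL-style head array (1-indexed heads, 0 marking the root, well-formed tree with no cycles); the binarization postprocessing the paper introduces would increase the depth somewhat:

```python
# Ratio of tree-based to sequential decoding steps for one sentence.
def tree_depth(heads):
    """Depth of a dependency tree given a head array (heads[i-1] is the
    1-indexed head of token i; 0 means the token is the root)."""
    def depth(i):
        d = 1
        while heads[i - 1] != 0:   # walk up to the root, counting levels
            i = heads[i - 1]
            d += 1
        return d
    return max(depth(i) for i in range(1, len(heads) + 1))

def step_ratio(heads):
    """Fraction of sequential decoding steps that tree decoding needs."""
    return tree_depth(heads) / len(heads)

heads = [2, 3, 0, 3, 6, 4]   # hypothetical parse of a 6-token sentence
ratio = step_ratio(heads)    # depth 4 over 6 tokens
```

A degenerate left-to-right chain (`heads = [0, 1, 2, 3, ...]`) gives ratio 1.0, i.e. no speedup, which is why unbalanced trees reduce the parallelization.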
| { |
| "text": "American students were 62 percent more likely to die in a heart attack during the first week of 2004, according to the study. For 150 days, Hillary Clinton will do more to improve access to affordable quality care, support and education funding for millions of Americans, she says. For those on this list, it's likely that I would rather be able to train them up, she said. He made it clear the SNP repeated on Friday as a response, saying they discussed a contract getting the extra cost here. He'll pay $25, 000 for rent and more buses and bring his collection to The Academy on Channel 31. Six years later, at least eight people died as a result of the shooting. The health prime minister told CNN Thursday that he was willing to back up against the US and remove all of the relevant items at the end of the transition. Then, another man told police that was a friend's friend, and as a child, he made the decision to call his mother. They are 40 -60 among the top 50, 000 women in the last year in that group since 2014 -15. They've worked hard on Twitter and they think they've tried to focus on our sport, she said. We like to think that if you try to get this game done, we can get a lower success rate out of 15. Table 7 : Samples of text generated by iterative expansion LMs with word vocabulary. I feel that they're going to Syria because we had this explanation, that they have an indication of their advance. The girl's mother told the group of three she needed treatment and the family said her daughter would still be alive with another child. But she added: \"The data is important to the EU that the UK can attract more businesses. Though he also spoke to Mr Wilson on Saturday morning at the Netherlands Police trial, Johnson referred it to the No. 1 commission. It's a collective belief and it's a statement to us, he said. It's just the first thing we're feeling now and I don't like it. 
So if you want to be sitting in a garden, you have to wait for something to make sure that this does not end. So, for example, we need to argue about what the president did, but I'm just interested in having any talk. The British defence ministry confirmed action had been taken at the hospital but could not confirm the details until now. We'll ask for a fair share of Russia to stop border security, particularly for people of color, he added. Table 8 : Samples of text generated by iterative expansion LMs with subword vocabulary. Table 7 shows a selection of text samples generated by iterative expansion LMs with a word-level vocabulary, while Table 8 shows samples generated with a subword-level vocabulary. We can see that, despite being generated non-sequentially and each branch of the dependency parse tree being generated in parallel, the resulting sentences maintain coherence and syntactic agreement, confirming that conditioning on the token dependencies in the parse tree provides enough information to generate it while speeding up the decoding process.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1221, |
| "end": 1228, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 2352, |
| "end": 2359, |
| "text": "Table 8", |
| "ref_id": null |
| }, |
| { |
| "start": 2440, |
| "end": 2447, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 2555, |
| "end": 2562, |
| "text": "Table 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Quantification of the Generation Speedup", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "In this work, we presented iterative expansion LMs, which are iterative non-autoregressive text generation models that rely on syntactic dependency trees to generate sentence tokens in parallel. As opposed to other syntax-driven generation mechanisms, the training of iterative expansion LMs can be naturally computed in batches and they are amenable to subword-level vocabularies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We showed that our proposed method generates text with quality between LSTMs and Transformers, with comparable diversity, both regarding automatic measurements and human judgement, while generating text in half of the decoding steps needed by sequential LMs, and also allowing direct control over the generation process at the syntactic level, enabling the induction of stylistic variations in the generated text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Our code is available as open source at https:// github.com/noe/iterative_expansion_lms .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "The expansion of the output to be fed as input in the next iteration occurs in the CPU outside of the neural model itself.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Abbreviation of ASGD weight-dropped LSTM, where ASGD stands for averaged stochastic gradient descent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work is partially supported by Lucy Software / United Language Group (ULG) and the Catalan Agency for Management of University and Research Grants (AGAUR) through an Industrial Ph.D. Grant. This work also is supported in part by the Spanish Ministerio de Econom\u00eda y Competitividad, the European Regional Development Fund through the postdoctoral senior grant Ram\u00f3n y Cajal and by the Agencia Estatal de Investigaci\u00f3n through the projects EUR2019-103819, PCIN-2017-079 and PID2019-107579RB-I00 / AEI / 10.13039/501100011033", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Syntactically supervised transformers for faster neural machine translation", |
| "authors": [ |
| { |
| "first": "Nader", |
| "middle": [], |
| "last": "Akoury", |
| "suffix": "" |
| }, |
| { |
| "first": "Kalpesh", |
| "middle": [], |
| "last": "Krishna", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Iyyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1269--1281", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P19-1122" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nader Akoury, Kalpesh Krishna, and Mohit Iyyer. 2019. Syntactically supervised transformers for faster neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1269-1281, Florence, Italy. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Neural syntactic generative models with exact marginalization", |
| "authors": [ |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Buys", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "942--952", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N18-1086" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jan Buys and Phil Blunsom. 2018. Neural syntactic generative models with exact marginalization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long Papers), pages 942-952, New Orleans, Louisiana. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Language gans falling short", |
| "authors": [ |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Caccia", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucas", |
| "middle": [], |
| "last": "Caccia", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Fedus", |
| "suffix": "" |
| }, |
| { |
| "first": "Hugo", |
| "middle": [], |
| "last": "Larochelle", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| }, |
| { |
| "first": "Laurent", |
| "middle": [], |
| "last": "Charlin", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. 2020. Language gans falling short. In International Conference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "KERMIT: Generative insertion-based modeling for sequences", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Chan", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikita", |
| "middle": [], |
| "last": "Kitaev", |
| "suffix": "" |
| }, |
| { |
| "first": "Kelvin", |
| "middle": [], |
| "last": "Guu", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitchell", |
| "middle": [], |
| "last": "Stern", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1906.01604" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "William Chan, Nikita Kitaev, Kelvin Guu, Mitchell Stern, and Jakob Uszkoreit. 2019. KERMIT: Gener- ative insertion-based modeling for sequences. arXiv preprint arXiv:1906.01604.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Structure and performance of a dependency language model", |
| "authors": [ |
| { |
| "first": "Ciprian", |
| "middle": [], |
| "last": "Chelba", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Engle", |
| "suffix": "" |
| }, |
| { |
| "first": "Frederick", |
| "middle": [], |
| "last": "Jelinek", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Jimenez", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanjeev", |
| "middle": [], |
| "last": "Khudanpur", |
| "suffix": "" |
| }, |
| { |
| "first": "Lidia", |
| "middle": [], |
| "last": "Mangu", |
| "suffix": "" |
| }, |
| { |
| "first": "Harry", |
| "middle": [], |
| "last": "Printz", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Ristad", |
| "suffix": "" |
| }, |
| { |
| "first": "Ronald", |
| "middle": [], |
| "last": "Rosenfeld", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of Eurospeech", |
| "volume": "", |
| "issue": "", |
| "pages": "2775--2778", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ciprian Chelba, David Engle, Frederick Jelinek, Victor Jimenez, Sanjeev Khudanpur, Lidia Mangu, Harry Printz, Eric Ristad, Ronald Rosenfeld, Andreas Stol- cke, and Dekai Wu. 1997. Structure and perfor- mance of a dependency language model. In In Pro- ceedings of Eurospeech, pages 2775-2778.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Recurrent neural network grammars", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Adhiguna", |
| "middle": [], |
| "last": "Kuncoro", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "199--209", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N16-1024" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199-209, San Diego, California. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Sequence modeling with unconstrained generation order", |
| "authors": [ |
| { |
| "first": "Dmitrii", |
| "middle": [], |
| "last": "Emelianenko", |
| "suffix": "" |
| }, |
| { |
| "first": "Elena", |
| "middle": [], |
| "last": "Voita", |
| "suffix": "" |
| }, |
| { |
| "first": "Pavel", |
| "middle": [], |
| "last": "Serdyukov", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Advances in", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dmitrii Emelianenko, Elena Voita, and Pavel Serdyukov. 2019. Sequence modeling with un- constrained generation order. In Advances in", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Neural Information Processing Systems", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "32", |
| "issue": "", |
| "pages": "7698--7709", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Neural Information Processing Systems 32, pages 7698-7709. Curran Associates, Inc.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Mask-predict: Parallel decoding of conditional masked language models", |
| "authors": [ |
| { |
| "first": "Marjan", |
| "middle": [], |
| "last": "Ghazvininejad", |
| "suffix": "" |
| }, |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Yinhan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "6114--6123", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel de- coding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 6114- 6123, Hong Kong, China. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Insertion-based decoding with automatically inferred generation order", |
| "authors": [ |
| { |
| "first": "Jiatao", |
| "middle": [], |
| "last": "Gu", |
| "suffix": "" |
| }, |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "7", |
| "issue": "", |
| "pages": "661--676", |
| "other_ids": { |
| "DOI": [ |
| "10.1162/tacl_a_00292" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiatao Gu, Qi Liu, and Kyunghyun Cho. 2019a. Insertion-based decoding with automatically in- ferred generation order. Transactions of the Asso- ciation for Computational Linguistics, 7:661-676.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Levenshtein transformer", |
| "authors": [ |
| { |
| "first": "Jiatao", |
| "middle": [], |
| "last": "Gu", |
| "suffix": "" |
| }, |
| { |
| "first": "Changhan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Junbo", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "32", |
| "issue": "", |
| "pages": "11179--11189", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019b. Levenshtein transformer. In Advances in Neural Information Processing Systems 32, pages 11179- 11189. Curran Associates, Inc.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "The curious case of neural text degeneration", |
| "authors": [ |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Holtzman", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Buys", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Du", |
| "suffix": "" |
| }, |
| { |
| "first": "Maxwell", |
| "middle": [], |
| "last": "Forbes", |
| "suffix": "" |
| }, |
| { |
| "first": "Yejin", |
| "middle": [], |
| "last": "Choi", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text de- generation. In International Conference on Learn- ing Representations.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Attending to future tokens for bidirectional sequence generation", |
| "authors": [ |
| { |
| "first": "Carolin", |
| "middle": [], |
| "last": "Lawrence", |
| "suffix": "" |
| }, |
| { |
| "first": "Bhushan", |
| "middle": [], |
| "last": "Kotnis", |
| "suffix": "" |
| }, |
| { |
| "first": "Mathias", |
| "middle": [], |
| "last": "Niepert", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carolin Lawrence, Bhushan Kotnis, and Mathias Niepert. 2019. Attending to future tokens for bidi- rectional sequence generation. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1-10, Hong Kong, China. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement", |
| "authors": [ |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Elman", |
| "middle": [], |
| "last": "Mansimov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1173--1182", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D18-1149" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural se- quence modeling by iterative refinement. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 1173- 1182, Brussels, Belgium. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Nltk: The natural language toolkit", |
| "authors": [ |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Loper", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Bird", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The natu- ral language toolkit. In In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Compu- tational Linguistics. Philadelphia: Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Regularizing and optimizing LSTM language models", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Merity", |
| "suffix": "" |
| }, |
| { |
| "first": "Nitish", |
| "middle": [], |
| "last": "Shirish Keskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM language models. In International Conference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Dependency recurrent neural language models for sentence completion", |
| "authors": [ |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Mirowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Vlachos", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "2", |
| "issue": "", |
| "pages": "511--517", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/P15-2084" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Piotr Mirowski and Andreas Vlachos. 2015. Depen- dency recurrent neural language models for sentence completion. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 511-517, Beijing, China. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Bleu: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/1073083.1073135" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Language models are unsupervised multitask learners", |
| "authors": [ |
| { |
| "first": "Alec", |
| "middle": [], |
| "last": "Radford", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Rewon", |
| "middle": [], |
| "last": "Child", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Luan", |
| "suffix": "" |
| }, |
| { |
| "first": "Dario", |
| "middle": [], |
| "last": "Amodei", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Neural machine translation of rare words with subword units", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1715--1725", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P16-1162" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "A new string-to-dependency machine translation algorithm with a target dependency language model", |
| "authors": [ |
| { |
| "first": "Libin", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinxi", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL-08: HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "577--585", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of ACL-08: HLT, pages 577-585.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Neural language modeling by jointly learning syntax and lexicon", |
| "authors": [ |
| { |
| "first": "Yikang", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhouhan", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Chin-Wei", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. 2018. Neural language modeling by jointly learning syntax and lexicon. In International Conference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Ordered neurons: Integrating tree structures into recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Yikang", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Shawn", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Sordoni", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered neurons: Integrating tree structures into recurrent neural networks. In International Conference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Insertion transformer: Flexible sequence generation via insertion operations", |
| "authors": [ |
| { |
| "first": "Mitchell", |
| "middle": [], |
| "last": "Stern", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Chan", |
| "suffix": "" |
| }, |
| { |
| "first": "Jamie", |
| "middle": [], |
| "last": "Kiros", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 36th International Conference on Machine Learning, ICML 2019", |
| "volume": "", |
| "issue": "", |
| "pages": "5976--5985", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible sequence generation via insertion operations. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 5976-5985.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "5998--6008", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Non-monotonic sequential text generation", |
| "authors": [ |
| { |
| "first": "Sean", |
| "middle": [], |
| "last": "Welleck", |
| "suffix": "" |
| }, |
| { |
| "first": "Kiant\u00e9", |
| "middle": [], |
| "last": "Brantley", |
| "suffix": "" |
| }, |
| { |
| "first": "Hal", |
| "middle": [], |
| "last": "Daum\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Iii", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 36th International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "6716--6726", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sean Welleck, Kiant\u00e9 Brantley, Hal Daum\u00e9 III, and Kyunghyun Cho. 2019. Non-monotonic sequential text generation. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 6716-6726, Long Beach, California, USA. PMLR.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "A learning algorithm for continually running fully recurrent neural networks", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Ronald", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Williams", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zipser", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Neural computation", |
| "volume": "1", |
| "issue": "2", |
| "pages": "270--280", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270-280.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Texygen: A benchmarking platform for text generation models", |
| "authors": [ |
| { |
| "first": "Yaoming", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Sidi", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiaxian", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Weinan", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yong", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "1097--1100", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1097-1100. ACM.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "text": "Example of dependency parse tree.", |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "uris": null, |
| "text": ", the dependency placeholders are [poss], [nsubj], [advmod], [xcomp], [dobj] and [ROOT].", |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "uris": null, |
| "text": "Example of iterative text generation.", |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "num": null, |
| "uris": null, |
| "text": "Distribution of generated text length.", |
| "type_str": "figure" |
| }, |
| "FIGREF6": { |
| "num": null, |
| "uris": null, |
| "text": "Histogram of generated text tree depth.", |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "text": "Quality and diversity on EMNLP2017, with \u03c4 generating the closest text to the validation data.", |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td>\u03c4</td><td colspan=\"4\">Test BLEU-5 Self BLEU-5 AWD-LSTM Transformer (quality \u2191) (diversity \u2193) perplex. \u2193 perplex. \u2193</td><td>GPT-2 perplex. \u2193</td></tr><tr><td colspan=\"2\">AWD-LSTM (w) 1.0</td><td>22.9</td><td>8.9</td><td>37.0</td><td>47.9</td><td>99.5</td></tr><tr><td colspan=\"2\">Transformer (w) 1.1</td><td>23.8</td><td>9.0</td><td>33.6</td><td>18.6</td><td>66.5</td></tr><tr><td colspan=\"2\">ITEXP (w) 1.0</td><td>23.7</td><td>9.4</td><td>40.8</td><td>40.7</td><td>85.2</td></tr><tr><td colspan=\"2\">AWD-LSTM (sw) 1.0</td><td>22.7</td><td>8.9</td><td>43.5</td><td>56.9</td><td>113.5</td></tr><tr><td colspan=\"2\">Transformer (sw) 1.1</td><td>22.1</td><td>7.6</td><td>37.5</td><td>31.6</td><td>77.1</td></tr><tr><td colspan=\"2\">ITEXP (sw) 1.0</td><td>23.6</td><td>9.2</td><td>45.2</td><td>49.2</td><td>97.1</td></tr><tr><td>Train sample</td><td>-</td><td>21.5</td><td>6.6</td><td>49.5</td><td>29.1</td><td>37.7</td></tr><tr><td>Valid sample</td><td>-</td><td>21.2</td><td>7.2</td><td>53.3</td><td>44.7</td><td>36.7</td></tr><tr><td>Table 2:</td><td/><td/><td/><td/><td/><td/></tr></table>", |
| "num": null |
| }, |
| "TABREF4": { |
| "text": "Human evaluation for the different models.", |
| "html": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "num": null |
| } |
| } |
| } |
| } |